Did artificial intelligence write this editor’s note?

As someone who’s been involved in journalism since middle school, I’d like to think that the research, analysis and language skills needed to craft a news article are unique to humans. ChatGPT, a new artificial intelligence tool, is challenging my assumptions.

Since the company OpenAI debuted ChatGPT late last year, people around the world have been testing its eerie ability to create text that reads like it was human-made, unlike the stilted customer-service chatbots that work off a limited script. OpenAI is already upgrading ChatGPT, and other tech companies have been racing to release their own chatbots with similar capabilities. This cavalcade of chatbots is poised to transform any endeavor that requires producing original text.

In this issue, we report on how ChatGPT is already affecting education and explore whether the potential benefits, such as serving as a writing coach or helping students who are working in a second language, outweigh the risk that students will sit back and let the bot do the work. As freelance contributor Kathryn Hulick reports, teachers say the chatbot’s writing is often good enough that it’s hard to tell it isn’t coming from a student. And since the chatbot churns out original text, it’s not technically plagiarism.

We decided to test ChatGPT’s skills by pitting it against real, live students from across the United States. We reached out to teachers in our Science News Learning program, which delivers the magazine to more than 5,600 middle and high schools, to ask their students to respond to questions such as “What effect do greenhouse gases have on the Earth?” We posed the same questions to ChatGPT. More than 100 students provided answers. When I tried to guess which answers were written by the bot, I got only 20 percent right, worse than random guessing.

“It was really hard to distinguish the ChatGPT responses from student responses,” Anna Pawlow, director of Science News Learning and a former high school chemistry teacher, told me. She worries that ChatGPT and similar AI tools will make it harder for teachers to assess student learning and progress through writing, making it even more crucial that educators have the time and resources to support individual students in the classroom.

Of course, the bots are coming for journalism too. CNET, a website that covers technology, confessed in January to using AI to produce articles that were riddled with factual errors, including one that said a savings account with $10,000 paying 3 percent interest would accrue $10,300 in interest in the first year. ChatGPT and other chatbots don’t do research or fact-check the accuracy of their work, as we do at Science News. And they don’t think about what our readers want and need to know. In fact, they don’t think at all.
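For the record, the arithmetic is simple (assuming annual interest, since the article’s terms weren’t specified): $10,000 × 0.03 = $300 of interest in the first year. The $10,300 figure would be the year-end balance, principal plus interest, not the interest earned.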

Still, there’s no question that the bots are here to stay (even though I did indeed write this column myself). The big question is how humans will rely on their powers and whether we’ll be honest about it.

Nancy Shute is editor in chief of Science News Media Group. Previously, she was an editor at NPR and U.S. News & World Report, and a contributor to National Geographic and Scientific American. She is a past president of the National Association of Science Writers.