Are chatbots dumbing us down? Edtech to the rescue
When it comes to looking up information online, Googling still seems to beat chatbot summaries for developing deep knowledge, a new study reports.
Stanislav Kyrylash/Getty Images
By Susanna Camp
Never underestimate the human capacity for shortcuts. Ask any student today, and they’re likely to tell you they’re using AI to help them meet looming academic deadlines and streamline school workloads. But lo and behold, in the long run, these timesavers may be setting us back. New research from experts at the University of Pennsylvania suggests that the superficial nature of LLM summaries may be hurting people who rely on them to learn something new. Payal Dhar reports for SN on the shortcomings of AI in learning.
🤷‍♀️ The cognitive cost of convenience
A research team from the University of Pennsylvania conducted a series of experiments involving over 10,000 participants to compare learning outcomes when researching new topics (like gardening or healthy living) using either a traditional web search engine (Google) or an AI chatbot (ChatGPT). The participants were then asked to write about their newfound knowledge to teach it to someone else. They found that participants who analyzed results from their own web searches developed “deeper knowledge,” as measured by natural language processing and human judges, and were more invested in the advice they gave. The researchers concluded that the ease of receiving synthesized information from LLMs came at the cost of developing genuine understanding.
🎓 Education means providing evidence of understanding
In response to students using chatbots to generate plausible-but-shallow assignments, educators are moving away from grading the final product (which AI can fake) and toward evaluating the process and the evidence of thinking (which AI cannot easily replicate). That means annotated drafts, detailed research logs showing query evolution, in-class discussions and unique, highly contextualized assignments that an LLM cannot generalize from the public web. Furthermore, educators are prioritizing tasks that require synthesis across disparate modalities — for example, analyzing a specific local dataset or responding to a real-time, non-digital event — forcing students to demonstrate original, human-driven critical thinking rather than just output generation.
✅ The verification economy: investing in the ‘how’
Enter edtech. Since the rise of AI in education, new tools have emerged and existing tech has evolved to serve researchers who need to verify the authenticity of information, and educators seeking to scaffold deeper reasoning. Interestingly, some of these tools also use AI. Here are a few worth learning about:
- Elicit: This platform focuses on automating scientific workflows by answering research questions using high-quality academic papers and summarizing key takeaways. While Elicit’s product uses AI, its focus on providing robust citations and filtering out retracted papers makes it a critical tool for verifying information integrity. Elicit secured a $9 million seed round in 2023 to scale its research-focused AI.
- Turnitin: While an established player in spotting plagiarism, Turnitin is rapidly expanding its core “verification suite” to include AI writing detection. The investment focus here is on integrated assessment tools in the platform that help educators redesign assignments to be “AI-resistant” while maintaining academic integrity. Turnitin was acquired by Advance Publications for $1.75 billion in 2019.
- Harmonic: This company offers an AI-driven mathematical engine with applications in theorem proving, software verification and education. Trained with reinforcement learning, a trial-and-error process in which an AI gets virtual rewards for doing something right, Harmonic’s platform is designed to produce an interpretable, step-by-step process that could be used in safety-critical systems in fields where software failure can be catastrophic, such as aerospace (flight control systems), automotive (self-driving car algorithms) and medical devices. The company has raised $295 million to date, including a $120 million Series C round last month with investment from Sequoia Capital and Kleiner Perkins.
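For readers curious what “trial and error guided by virtual rewards” looks like in practice, here is a minimal, self-contained sketch of one classic reinforcement-learning setup, the epsilon-greedy multi-armed bandit. This is a generic illustration of the technique, not Harmonic’s system; all names and parameters here are our own.

```python
import random

def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a Bernoulli multi-armed bandit.

    The agent repeatedly tries "arms" (actions), receives a reward of
    1 or 0, and nudges its value estimate for the arm it tried toward
    the observed reward -- learning by trial and error.
    """
    rng = random.Random(seed)
    n_arms = len(reward_probs)
    estimates = [0.0] * n_arms  # current value estimate per arm
    counts = [0] * n_arms       # times each arm has been tried

    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: try a random arm
            arm = rng.randrange(n_arms)
        else:
            # exploit: try the best-known arm so far
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1 if rng.random() < reward_probs[arm] else 0
        counts[arm] += 1
        # incremental mean: move the estimate toward the observed reward
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates

if __name__ == "__main__":
    # three arms paying off 20%, 50% and 80% of the time
    est = run_bandit([0.2, 0.5, 0.8])
    print("learned estimates:", [round(e, 2) for e in est])
    print("best arm found:", est.index(max(est)))
```

Given enough steps, the agent’s estimates converge toward each arm’s true payoff rate and it settles on the highest-paying arm, purely from reward feedback, with no one telling it the answer in advance.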
Harnessing AI to cultivate authentic critical thinking? We’re here for it.
Disclaimer: The Science News Investors Lab newsletter is for informational purposes only and does not constitute investment advice. Society for Science and Science News Media Group assumes no liability for any financial decisions or losses resulting from the use of the content in this newsletter. Society for Science and Science News Media Group do not receive payments from, and do not have any ownership or investment interest in, the companies mentioned in this newsletter. Please consult a qualified financial advisor before making any investment decisions.