A recent study conducted by MIT, titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing,” found that using ChatGPT reduced brain function, producing what the researchers call “cognitive debt,” particularly in young people. Not a big shocker, really. In fact, it seems incredibly naive not to have anticipated as much. Generative AI might speed up productivity, but it is far-fetched to assume it will make students or the workforce any smarter or instill critical thinking.

The study examined 54 participants tasked with writing essays. They were divided into three groups: brain-only users, search engine users, and users of OpenAI’s ChatGPT, a large language model (LLM). The researchers then used EEG scans to monitor brain activity across 32 distinct brain regions. The moment of truth:

“Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.”

Remember, these students will one day be the future managers, executives, and leaders of the world and will surely need the skill of critical thinking. The MIT study shows that overreliance on AI risks eroding the essential thinking skills of younger generations, potentially creating a future workforce of leaders who lack depth and problem-solving skills. While AI may boost short-term productivity, this “hollowing out” of cognitive skills could lead to a less capable future generation of decision-makers, trading immediate gains for long-term deficits in judgment and invention.

The MIT Media Lab has been exploring the impacts of generative AI, with prior studies linking the use of ChatGPT to increased loneliness. This study, led by researcher Nataliya Kosmyna, focused on the use of AI in schoolwork, given its growing prevalence among students, which—let’s not forget—rapidly accelerated amid the fear and tyranny of the COVID-19 pandemic. Participants wrote 20-minute SAT essays on topics like philanthropy, ethics, and decision overload.

The ChatGPT group produced similar, unoriginal essays deemed “soulless” by English teachers, with EEGs showing low executive control and attention. By the third essay, many merely fed prompts to ChatGPT, minimizing their effort. In contrast, the no-tools, brain-only group exhibited high neural connectivity in creativity- and memory-related brain waves. Even better, they reported greater satisfaction and ownership. The search engine group also showed active brain function and satisfaction, highlighting a difference between AI and traditional search tools.

In a follow-up task, the ChatGPT group struggled to rewrite essays without the aid of AI, revealing weak memory retention and reduced brain activity. The no-tools group, now allowed to use ChatGPT, performed well, suggesting AI could enhance learning if used selectively, according to the study.

Kosmyna noted that she released the not-yet peer-reviewed study early to initiate urgent dialogue, as peer review can delay findings by over eight months. She calls for education on responsible AI use and legislation to evaluate tools before widespread adoption. Psychiatrist Dr. Zishan Khan warns that excessive reliance on AI in young people may impair memory, resilience, and cognitive abilities. Indeed, and let’s not forget that addictive screen use, whether on social media, mobile phones, or video games, is also associated with suicidal behaviors and damaged mental health.

But back to the study. Social media users who attempted to summarize the paper with LLMs fell into Kosmyna’s deliberate “AI traps,” which limited the AI’s insights. She notes that these “echo chambers” represent “a significant phenomenon in both traditional search systems and LLMs, where users become essentially trapped in self-reinforcing bubbles that limit exposure to diverse perspectives.” Beyond that, LLMs also fabricated details, falsely claiming the study used GPT-4o, despite there being no such specification.

Kosmyna’s team is now exploring the effects of AI on programming, with early data suggesting even more significant cognitive risks. These risks could impact industries replacing entry-level coders with AI, potentially diminishing creativity and problem-solving in the workforce. Yet, opposing studies highlight AI’s dual nature: a Harvard study found it increases productivity but reduces motivation, while MIT distanced itself from a paper touting AI’s workplace benefits. OpenAI, which has remained silent on the study, has previously collaborated with Wharton to guide educators on integrating AI.

Without a doubt, the study, along with a healthy dose of common sense, emphasizes the urgent need to balance AI’s efficiency with safeguarding cognitive growth, particularly for young learners. After all, diminished brain activity and the loss of critical thinking skills sound awful for humanity. Still, it’s right on track for the transhuman agenda (according to Grok, the transhuman movement aims to enable people to be immortal but will likely erode values like empathy and critical thinking, leading to a posthuman condition), and who better to manipulate than our younger generations? The study concludes:

“As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with LLM integration in educational and information contexts. While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research. 

We believe that the longitudinal studies are needed in order to understand the long-term impact of the LLMs on the human brain, before LLMs are recognized as something that is net positive for the humans.” 


Tracy Beanz & Michelle Edwards

Tracy Beanz is an investigative journalist, Editor-in-Chief of UncoverDC, and host of the daily With Beanz podcast. She gained recognition for her in-depth coverage of the COVID-19 crisis, breaking major stories on the virus’s origin, timeline, and the bureaucratic corruption surrounding early treatment and the mRNA vaccine rollout. Tracy is also widely known for reporting on Murthy v. Missouri (formerly Missouri v. Biden), a landmark free speech case challenging government-imposed censorship of doctors and others who presented alternative viewpoints during the pandemic.