From brain-rotting videos to AI creep, every technological advance seems to make it harder to work, remember, think and function independently …
Illustration: Justin Metz/The Guardian
Step into the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, US, and the future feels a little closer. Glass cabinets display prototypes of weird and wonderful creations, from tiny desktop robots to a surrealist sculpture created by an AI model prompted to design a tea set made from body parts. In the lobby, an AI waste-sorting assistant named Oscar can tell you where to put your used coffee cup. Five floors up, research scientist Nataliya Kosmyna has been working on wearable brain-computer interfaces she hopes will one day enable people who cannot speak, due to neurodegenerative diseases such as amyotrophic lateral sclerosis, to communicate using their minds.
Kosmyna spends a lot of her time reading and analysing people’s brain states. Another project she is working on is a wearable device – one prototype looks like a pair of glasses – that can tell when someone is getting confused or losing focus. Around two years ago, she began receiving out-of-the-blue emails from strangers who reported that they had started using large language models such as ChatGPT and felt their brains had changed as a result. Their memories didn’t seem as good – was that even possible, they asked her? Kosmyna herself had been struck by how quickly people had already begun to rely on generative AI. She noticed colleagues using ChatGPT at work, and the applications she received from researchers hoping to join her team started to look different. Their emails were longer and more formal and, sometimes, when she interviewed candidates on Zoom, she noticed they kept pausing before responding and looking off to the side – were they getting AI to help them, she wondered, shocked. And if they were using AI, how much did they even understand of the answers they were giving?
With some MIT colleagues, Kosmyna set up an experiment that used electroencephalography to monitor people’s brain activity while they wrote essays, either with no digital assistance, with the help of an internet search engine, or with ChatGPT. She found that the more external help participants had, the lower their level of brain connectivity: those who used ChatGPT to write showed significantly less activity in the brain networks associated with cognitive processing, attention and creativity.
In other words, whatever the people using ChatGPT felt was going on inside their brains, the scans showed there wasn’t much happening up there.
The study’s participants, who were all enrolled at MIT or nearby universities, were asked, right after they had handed in their work, if they could recall what they had written. “Barely anyone in the ChatGPT group could give a quote,” Kosmyna says. “That was concerning, because you just wrote it and you do not remember anything.”
Kosmyna is 35, trendily dressed in a blue shirt dress and a big, multicoloured necklace, and she speaks faster than most people can think. As she observes, writing an essay requires skills that are important in our wider lives: the ability to synthesise information, consider competing perspectives and construct an argument. You use these skills in everyday conversations. “How are you going to deal with that? Are you going to be, like, ‘Err … can I just check my phone?’” she says.
The fundamental issue, Kosmyna says, is that as soon as a technology becomes available that makes our lives easier, we’re evolutionarily primed to use it. “Our brains love shortcuts, it’s in our nature. But your brain needs friction to learn. It needs to have a challenge.”
In the ever-expanding, frictionless online world, you are first and foremost a user: passive, dependent. In the dawning era of AI-generated misinformation and deepfakes, how will we maintain the scepticism and intellectual independence we’ll need? By the time we agree that our minds are no longer our own, that we simply cannot think clearly without tech assistance, how much of us will be left to resist?
The complication is this: if technology is truly making us cleverer – turning us into efficient, information-processing machines – why do we spend so much time feeling dumb?
Last year, “brain rot” was named Oxford University Press’s word of the year, a term that captures both the specific feeling of mindlessness that descends when we spend too much time scrolling through rubbish online and the corrosive, aggressively dumb content itself – the nonsense memes and AI garble. When we hold our phones we have, in theory, most of the world’s accumulated knowledge at our fingertips, so why do we spend so much time dragging our eyeballs over dreck?
Human intelligence is too broad and varied to be reduced to words such as “stupid”, but there are worrying signs that all this digital convenience is costing us dearly. Across the economically developed countries of the Organisation for Economic Co-operation and Development (OECD), Pisa scores, which measure 15-year-olds’ attainment in reading, maths and science, tended to peak around 2012. And while IQ scores rose globally over the 20th century, perhaps owing to improved access to education and better nutrition, in many developed countries they now appear to be declining.
Falling test and IQ scores are the subject of hot debate. What is harder to dispute is that, with every technological advance, we deepen our dependence on digital devices and find it harder to work or remember or think or, frankly, function without them. “It’s only software developers and drug dealers who call people users,” Kosmyna mutters at one point, frustrated at AI companies’ determination to push their products on to the public before we fully understand the psychological and cognitive costs.
Michael Gerlich, head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, began studying the impact of generative AI on critical thinking because he noticed the quality of classroom discussions decline. Sometimes he’d set his students a group exercise, and rather than talk to one another they continued to sit in silence, consulting their laptops. He spoke to other lecturers, who had noticed something similar. Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.)
Like many researchers, Gerlich believes that, used in the right way, AI can make us cleverer and more creative – but the way most people use it produces bland, unimaginative, factually questionable work. One concern is the so-called “anchoring effect”: if you pose a question to generative AI, the answer it gives sets your brain on a certain mental path and makes you less likely to consider alternative approaches. “I always use the example: imagine a candle. Now, AI can help you improve the candle. It will be the brightest ever, burn the longest, be very cheap and amazing looking, but it will never develop to the lightbulb,” he says. To get from the candle to a lightbulb you need a human who is good at critical thinking, someone who might take a chaotic, unstructured, unpredictable approach to problem solving. When, as has happened in many workplaces, companies roll out tools such as the chatbot Copilot without offering decent AI training, they risk producing teams of passable candle-makers in a world that demands high-efficiency lightbulbs.
Some years ago, Matt Miles, a psychology teacher at a high school in Virginia in the US, was sent on a training programme about tech in schools. The teachers were shown a video in which a schoolgirl is caught checking her phone during a lesson. She looks up and says, “You think I’m just on TikTok or playing games. I’m actually in a research room talking to a water researcher from Botswana for a project.”
“It’s laughable. You show it to the kids and they all laugh, right?” Miles says. Alarmed at the disconnect between how policymakers view tech in education and what teachers were seeing in the classroom, in 2017 Miles and his colleague Joe Clement, who teaches economics and government at the same school, published Screen Schooled, a book that argued that technology overuse is making kids dumber. In the years since, smartphones have been banned from their classrooms, but students still work from their laptops. “We had one kid tell us, and I think it was pretty insightful, ‘If you see me on my phone, there’s a 0% chance I’m doing something productive. If you see me on my laptop, there’s a 50% chance,’” Miles says.
During the pandemic, Miles says, he found his young son weeping over his school-issued tablet. His son was doing an online maths program and had been tasked with making six using the smallest number of tokens worth one, three and five. He kept suggesting two threes, and the computer kept telling him he was wrong. Miles tried a one and a five, which the computer accepted. “That’s kind of the nightmare you get with a non-human AI, right?” Miles observes: students often approach topics in unanticipated and interesting ways, but machines struggle to cope with idiosyncrasy. Listening to his story, however, I was struck by a different kind of nightmare. Maybe the new golden era of stupidity doesn’t begin when we submit to super-intelligent machines; it starts when we hand over power to dumb ones.