Throughout this semester, how many times have you used ChatGPT to refine your essay? How about asking it to write the essay from scratch and trying to “humanize” it later? I assume this has happened at least once. The class you are taking right now may seem boring, and it would indeed be faster to just use AI and forget the topic by the end of the semester, even if it means being glued to AI models every other day. However, the implications of this behavioral pattern are dire for the way we live.
Ever since the boom of ChatGPT, AI has, to a large extent, been glorified. In the age of hustle culture, this is exactly the tool humanity needed; or is it? While one may criticize AI and its data centers for their negative environmental impact, another facet is what AI does to our minds and the way we think. What many students do not realize is that having AI perform your writing tasks actually incurs a large cognitive debt.
Cognitive debt could be defined as a situation “where you forgo the thinking in order just to get the answers, but have no real idea of why the answers are what they are.”
This term was used in a study conducted by the Massachusetts Institute of Technology and published this year on how ChatGPT affects the human brain and its capabilities, specifically in essay writing. We live in an era of infinite shortcuts and never enough time, so ChatGPT seems tempting, even though it comes at a cost. The study included 54 subjects, aged 18 to 39, who were divided into three groups and asked to write SAT essays by themselves, with ChatGPT, or with the Google search engine.
Through EEG tests, MIT researchers showed that the group that used ChatGPT had the lowest levels of brain engagement and connectivity, consistently underperforming. Moreover, as Time reports, “over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.” Even though the paper’s research design invites some obvious criticism, such as its small sample size and single location (Boston, USA), which could introduce bias, it still presents worrying results, especially if we consider that younger generations will be brought up in a world where AI in learning is not an option but the standard.
Further decline in learning skills is scary, as we already face growing problems with shrinking attention spans, overstimulation, and a lack of enjoyment in learning new things. As the researchers themselves contend, LLMs in education can be very useful, for instance in fostering autonomous learning. Nevertheless, excessive reliance on solutions provided by LLMs diminishes our critical thinking and independent problem solving. We stop being active seekers of information and instead become passive receivers of it, which, in today’s world oversaturated with misinformation and advertisements rather than credible information, can have devastating consequences for how we function in the future.
One more aspect of this decline in brain function worth mentioning is the standardization of the way we speak: we literally start to sound like AI models. LLMs certainly favor some forms of expression over others, privileging American English linguistic conventions, which can ultimately lead to a form of cultural erasure. In a research paper on artificial intelligence and the standardization of English, Amin et al. show that this bias results from the datasets on which the models are trained, which undermines their character as a neutral or objective educational aid. This translates into the way we talk to each other. Through “implicit learning,” we store recurring phrases in our memory and become much more likely to use them, incorporating words like “meticulous” or “boast” into daily conversations.
This phenomenon was studied by Florida State University researchers who, after investigating over 22 million words of unscripted spoken language, showed that AI buzzwords are recurring off-screen. The principal investigator, Tom Juzek, claimed that “[w]hat stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.” This change in language, however, poses larger challenges for the future. If LLMs are biased, and it has already been shown that they are, they might start influencing human behaviors, the way we think about the world, and the way we perceive it. This “seep-in effect” of AI remains understudied, but it can have significant implications, for example the formalization of spoken language and thus a more neutral, emotionless manner of speaking that ends up sounding artificial.
So next time you excuse yourself by saying “I can do the same thing AI does, AI just does it faster,” think not only about your brain, but also about your identity. Fight the urge to depend on LLMs, and do not be afraid to be yourself and make mistakes: not knowing is what makes learning fun in the first place. At the end of the day, is the efficiency really worth your brain?
Anna Lipiec is a Staff Writer. Email them at feedback@thegazelle.org.