Read the conclusion of the recent Media Lab paper about LLMs. It’s a Non-Friction Nightmare.
No, that’s not a typo in my title.
I’ve just had my first look at the MIT Media Lab paper that is making the rounds: “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.”
This paper is disturbing, to say the least. What the authors call “friction” is what we used to call thinking, or at least an essential element of thinking: the effort of it. That effort includes the give and take of inquiry, the difficulty of dialogue, the sweat of education, the work of human language and human encounter.
The paper’s conclusion only scratches the surface of this problem when it addresses “ethical considerations.”
Consider what is probably the most alarming sentence here, which describes what happens when you reduce friction: people reach the conclusions the algorithm wants them to reach – or, rather, the algorithm reaches conclusions for them; people reach for nothing at all.
It’s surrender. Not just to machines, mind you, not just to the algorithm, but also to the interests (“the priorities”) the algorithm represents.
By surrendering to these priorities, allowing ourselves to be guided by them, we’re also throwing in the towel on shared human experience, coordination and mutual guidance, reliance on each other and shared commitment — which is the only way we can work out our own priorities.
Finally, I can’t post this on my blog (a little center of friction in its own right) without saying something about the writing here.
I know this is a draft paper, but this conclusion sure could use another going-over. It’s not just the typo in the penultimate paragraph (“theis” instead of “their”) that needs correcting; there’s also that awkward bit about “net positive for the humans” in the final paragraph (which sounds like it came straight from an LLM) and the resort to cliché (“technological crossroads”) and industry jargon (“unprecedented opportunities for enhancing learning and information access”). The findings here deserve more clarity.
Last, I’d like to see a little more about the social and political consequences that would seem to follow inevitably from the “cognitive consequences” the authors document. But maybe that’s a matter for another paper.

For reference, here are the passages from the paper’s conclusion discussed above:
As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with LLM integration in educational and informational contexts. While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research.
The LLM undeniably reduced the friction involved in answering participants’ questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or “opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM’s shareholders….
Only a few participants in the interviews mentioned that they did not follow the “thinking” [124] aspect of the LLMs and pursued their line of ideation and thinking.
Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from theis [sic] essays (Session 1, Figure 6, Figure 7).
Human teachers “closed the loop” by detecting the LLM-generated essays, as they recognized the conventional structure and homogeneity of the delivered points for each essay within the topic and group.
We believe that the longitudinal studies are needed in order to understand the long-term impact of the LLMs on the human brain, before LLMs are recognized as something that is net positive for the humans.