There is a familiar anxiety circulating among writers, academics, and professionals in the age of large language models. People worry that their work will be mistaken for machine output. AI detection tools promise to catch generated prose by its telltale smoothness. Teachers urge students to preserve their own voice. Writers are told to keep a trace of roughness, idiosyncrasy, and personal style so that they do not sound like the model.
That anxiety is real, but it is pointed in the wrong direction.
The more serious problem is not that human writing might be flagged as AI-generated. It is that human thinking is increasingly being shaped in the image of the machine.
This is the proposition. As AI-generated summaries, explanations, reports, analyses, and first drafts become a primary medium through which people encounter ideas, humans are starting to think through the cognitive architecture of large language models. We are not only consuming AI outputs. We are internalising their vocabulary, their argumentative habits, their way of smoothing complexity, and then reproducing those patterns as if they were our own thought. In that limited but important sense, humans are beginning to function as LLMs.
That phrase should not be read mechanically. Humans are not prediction engines in the same way machines are. But structurally, something important is changing. More and more reasoning is now being routed through systems that generate plausible continuations from statistically weighted patterns in a corpus. When those outputs become the raw material through which people write, study, summarise, explain, and decide, the average does not remain outside thought. It starts to organise thought from within.
This is not just a cultural story about new tools. It is a political story about what kinds of knowledge survive, what kinds of language become dominant, and what forms of intelligence are gradually pushed to the margins in a world that increasingly thinks through the average.
Humans were never fully original to begin with
Before overstating what is new, it is worth saying clearly what is not new. Human language has always been social before it is individual. No speaker invents grammar. No writer begins from nothing. Every sentence is shaped by accumulated reading, conversation, pedagogy, and habit. Long before AI, human thought already moved through inherited structures.
This is why the comparison between humans and language models has some superficial force. Human beings also learn from large corpora, except our corpora are lived: classrooms, books, gossip, institutions, everyday speech, conflict, memory. We too rely on patterns, analogy, prediction, and repetition. In that narrow sense, there has always been an element of pattern completion in human cognition.
Nor is media influence new. Print standardised language. Television narrowed idiom and emotional range. Search engines reorganised what people thought was worth asking. Every dominant medium leaves cognitive residues. It shapes not only what people know, but how they move from question to answer.
So the important question is not whether AI influences thought. All major media do. The question is whether this medium intervenes differently enough to produce a qualitatively distinct effect.
I think it does.
What is new is where AI enters the process
Older media shaped the content people consumed. AI increasingly shapes the way people produce thought in real time. That difference matters.
When someone watches television or reads a newspaper, influence enters mainly through inputs. With LLMs, influence enters at the point of production itself. A writer hesitates in the middle of an argument, asks the model to continue, and receives not just information but structure. A student with a weak intuition about a topic asks for an explanation and gets not only content but a ready-made way of ordering that content. A policy professional asks for a briefing note and receives a preformatted argument that already contains a hierarchy of relevance, a preferred framing, and a tone of reasonableness.
The model is not in the background. It is inside the generative moment.
This is what makes the problem deeper than simple assistance. The issue is not only that AI helps people write faster. It is that AI can begin to mediate the struggle by which unclear thought becomes clear thought. That struggle matters. It is often where the unexpected idea appears, where the writer realises what they actually think, where confusion becomes discovery. If that process is increasingly scaffolded by a model that specialises in producing smooth, plausible continuations, then thought itself begins to inherit the model’s preferences.
The politics of the average
A large language model does not produce truth. It produces plausible language based on weighted patterns in a training corpus. That is what gives it power, and that is what makes it dangerous.
The output of an LLM often feels persuasive because it approximates the centre of a distribution. It is grammatically stable, rhetorically balanced, and structurally familiar. It tends toward formulations that feel reasonable because they have been distilled from many prior formulations judged acceptable, useful, or likely. The result is a rhetoric of the average.
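To make the pull toward the centre concrete, here is a minimal sketch in Python of how temperature-scaled sampling concentrates a model's output on its most statistically likely continuations. The logits, the five-word "vocabulary", and the temperatures are invented for the illustration; no real model works from anything this small.

```python
import numpy as np

# Toy illustration (not any particular model): temperature-scaled softmax
# sampling over a tiny hypothetical vocabulary. All values are made up.
rng = np.random.default_rng(0)
logits = np.array([3.0, 2.5, 1.0, 0.2, -1.0])  # hypothetical next-token scores
tokens = ["reasonable", "balanced", "familiar", "awkward", "strange"]

def sample_frequencies(logits, temperature, n=10_000):
    """Sample n tokens and return how often each one appears."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    draws = rng.choice(len(logits), size=n, p=p)
    return np.bincount(draws, minlength=len(logits)) / n

for t in (1.5, 1.0, 0.5):
    freqs = sample_frequencies(logits, t)
    print(f"T={t}: " + ", ".join(f"{tok} {f:.2f}" for tok, f in zip(tokens, freqs)))
# As temperature falls, "awkward" and "strange" all but vanish from the
# output: the voice converges on the centre of its own distribution.
```

Lowering the temperature is only the crudest version of this pull. The deeper point is that every mechanism tuned for plausibility weights the centre of the distribution over its edges.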
This matters because original thought rarely appears first in the middle of the distribution. It often appears at the edges, in awkward formulations, incomplete intuitions, strange analogies, excessive specificity, or ideas that initially sound wrong. The polished plausibility of model output does not just help communication. It can also suppress the friction through which new thought is produced.
What gets underweighted in this process is not only stylistic weirdness. It is epistemic difference. The more people rely on AI-generated language to organise their own expression, the more intellectual life risks converging around arguments that are statistically likely rather than substantively challenging. The danger is not simply homogenised prose. It is homogenised reasoning.
Model collapse as a cultural process
Machine learning researchers use the term model collapse to describe what happens when generative systems are trained on their own synthetic outputs. Over time, the tails of the distribution begin to disappear. Rare forms become rarer. The output grows narrower, flatter, more repetitive. Confidence increases even as diversity degrades.
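A toy simulation makes the mechanism visible. The sketch below is a minimal Python illustration in the spirit of Shumailov et al.'s work on recursion in generated data, not a reproduction of any published experiment; the Gaussian "model", the sample size, and the generation count are all arbitrary assumptions chosen for clarity.

```python
import numpy as np

# Toy model collapse: each generation fits a simple Gaussian "model" to
# data sampled from the previous generation's model, then becomes the
# training source for the next. Parameters are arbitrary for illustration.
rng = np.random.default_rng(1)
n_samples, n_generations = 100, 500

mu, sigma = 0.0, 1.0  # generation 0: the "real" distribution
for gen in range(1, n_generations + 1):
    data = rng.normal(mu, sigma, n_samples)   # train on the previous model's output
    mu, sigma = data.mean(), data.std()       # refit: this is the new "model"
    if gen % 100 == 0:
        print(f"generation {gen}: sigma = {sigma:.3f}")
# sigma drifts steadily toward zero: rare, tail-of-the-distribution values
# stop being generated, and each generation's output is narrower than the last.
```

Nothing in the loop is malicious. Each generation does its statistics honestly, and the tails vanish anyway.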
Handled carefully, this is also a useful cultural metaphor.
If AI-assisted writing increasingly becomes part of the textual environment people read, and if that writing feeds future models, then a feedback loop emerges. People absorb AI-smoothed language. They reproduce it in their own writing, often with AI assistance. That writing becomes part of the broader textual field. Future models learn from an increasingly homogenised corpus. New users then think through that narrower distribution. The loop tightens.
What disappears first in such a loop is not random. The most vulnerable forms of knowledge are precisely those least represented in the original corpus: oral traditions, informal worker vocabularies, regional conceptual worlds, embodied skills, tactical reasoning developed in struggle, and forms of political intelligence that were never fully digitised in the first place. If the average is built from an archive already skewed toward formal, published, English-dominant knowledge, then recursive reliance on that average deepens the exclusions already embedded in the corpus.
This is why the problem is political, not merely technical.
Whose knowledge built the model?
The corpus from which LLMs learn is not the world. It is a partial archive of the world, assembled through the hierarchies of digitisation, language, capital, and institutional recognition.
What is overrepresented? English-language text. Corporate documentation. Academic publishing. News media. Software repositories. Policy language. Professional communication. The forms of writing produced by those with stable internet access, formal education, and institutional visibility.
What is underrepresented? The practical intelligence of informal workers. Oral knowledge systems. Everyday political reasoning outside publication circuits. Vernacular traditions that never entered digitised archives. Forms of thought carried through community, memory, and embodied practice rather than searchable text.
This means that when an LLM produces a neutral-looking answer, it is not neutral at all. It is reproducing the weighted average of a deeply unequal archive. The concepts that feel obvious, the framings that feel balanced, and the styles that feel authoritative are all shaped by that archive.
If people increasingly think through these outputs, then they are not just borrowing a tool. They are moving within a hierarchy of knowledge that has already been organised for them.
This is where the issue becomes especially important for the Global South. Much of the world appears inside these systems only in translated, filtered, or administratively legible form. The knowledge that survives tends to be the knowledge that institutions found worth recording. The result is that already-marginal ways of knowing are not only excluded from the corpus. They are further marginalised by the authority of systems trained on that exclusion.
Cognitive offloading and the loss of productive uncertainty
There is also a more intimate dimension to this problem. Human beings have always offloaded cognition onto tools. Writing externalised memory. Calculators externalised arithmetic. GPS externalised navigation. Every such tool changes what the human does internally, and what is not exercised often weakens.
The cognitive faculty being offloaded to LLMs is different. It is not just memory or calculation. It is the capacity to stay with uncertainty long enough for thought to take shape.
Anyone who writes seriously knows this experience. You begin with a vague intuition. The argument is not yet there. The sentence is wrong three times before it becomes right once. You circle around the point. You hit confusion. Then, in the process of trying to clarify yourself, you arrive somewhere you had not anticipated at the beginning. This is not a defect in thought. It is often the condition of thinking.
If AI systems are routinely used to bridge that gap too quickly, then the danger is not simply laziness. It is atrophy. People become less practiced at enduring conceptual ambiguity, less capable of finding structure without assistance, less able to generate unexpected connections without a machine offering plausible ones first.
That matters because productive uncertainty is where intellectual originality often begins. If that space is routinely collapsed by AI scaffolding, then what is lost is not only effort. It is a particular human capacity for discovery.
Resistance has to be epistemic as well as political
None of this means AI should simply be rejected. The point is not technophobia. The point is that AI-mediated cognition is becoming a political formation, and like any political formation it has to be contested.
One site of resistance is methodological. Fieldwork, ethnography, participatory research, and grounded enquiry matter even more under conditions of AI epistemic smoothing. These methods generate knowledge in encounter with people and places, not from the pre-filtered archive alone. They produce concepts that the model could not easily infer because the model was never built from that lived texture in the first place.
Another site is infrastructural. Data governance matters because the loop of homogenisation runs through data. Who gets scraped, who gets represented, who gets averaged, and who controls the archive are not technical afterthoughts. They are political questions. Community archives, data cooperatives, and forms of epistemic sovereignty are therefore not peripheral concerns. They are part of the struggle over whether AI will deepen existing hierarchies of knowledge or be forced to confront them.
A third site is pedagogical. If AI use weakens tolerance for productive uncertainty, then education has to actively rebuild that capacity. Students need spaces where thought is allowed to remain unfinished, where drafts are not judged by polish alone, and where the difficult work of clarification is not immediately outsourced. Intellectual life depends on preserving those spaces.
Who gets to remain outside the distribution?
At the deepest level, the question is this: who gets to be cognitively strange in the age of AI?
The strange thought, the unexpected analogy, the concept that initially appears implausible, the experience that does not fit the dominant archive: these are not marginal to knowledge production. They are often how knowledge moves forward. Every serious intellectual break begins as something the distribution could not easily predict.
A culture that increasingly thinks through LLMs risks optimising for plausibility at the expense of discovery. That is dangerous not only because it flattens style, but because it narrows what can be thought.
And yet the irony is sharp. The people most likely to remain outside the model’s average are often those least represented in the corpus to begin with: workers whose knowledge comes from labour rather than documentation, communities whose history is carried orally, organisers whose political intelligence was built in struggle rather than seminar rooms, people whose lives entered the data regime only partially or not at all.
That exclusion has long been a form of violence. But in this moment it also means something else. It means that some of the most important resources for resisting AI epistemic monoculture may lie precisely in forms of knowledge the model never fully managed to absorb.
The task, then, is not to bring everyone more fully into the average. It is to build conditions under which excluded ways of knowing can challenge the average itself.
Humans are beginning to think like LLMs. But they do not have to. And the most important forms of resistance may come from those whose intelligence was never fully captured by the corpus in the first place.
Abhinav Kumar is a researcher working on platform labour, AI and labour, and the political economy of technology, with a focus on India and the Global South. His work examines gig work, algorithmic management, worker resistance, and the changing forms of labour under digital capitalism.