Capital has always sought to reduce its dependence on labour. What is new with AI is that this project now extends deep into domains of cognitive work that had remained difficult to mechanise. For a long time, even under highly unequal conditions, capital still depended on living workers for a wide range of tasks involving judgment, interpretation, coordination, writing, coding, and analysis. That dependence mattered. It gave labour a residual strategic significance. As AI systems become increasingly capable across these domains, that balance begins to shift. Labour does not disappear, but it is increasingly reorganised as the slower and more constraining element within production.

This is the emerging significance of AI.

The argument here is not that human labour disappears overnight, nor that every benchmark result should be read as proof of immediate job destruction. The point is more structural. What is changing is the balance of dependence between capital and labour. In earlier phases of capitalism, firms could mechanise, fragment, and discipline labour, but they still depended on living workers for a wide range of cognitive, interpretive, coordinative, and communicative tasks. AI changes this terrain because it opens a path, partial and uneven but increasingly real, toward reducing that dependence in domains where human intelligence had previously been difficult to bypass.

This is where the language of labour as bottleneck becomes useful, not as a celebration of technological progress, but as a way of naming how capital begins to see labour under new conditions. Once machine intelligence becomes sufficiently capable across coding, writing, research, analysis, and routine decision support, the human worker increasingly appears as the part of the workflow that is slower, more expensive, more variable, and less scalable. That does not mean human labour is immediately removed. It means it is reorganised.

AI does not eliminate labour first. It narrows and repositions it.

The first effect of AI in many workplaces is not total substitution. It is the reorganisation of labour into more residual forms. A growing amount of work now follows a recognisable sequence: a human gives a prompt, an AI system generates output or performs a task, and the human reviews, approves, corrects, or rejects the result. This pattern is already visible in software development, writing, design, customer support, administrative work, legal drafting, and research assistance. The worker remains in the loop, but increasingly as a supervisor of machine output rather than as the primary producer of it.

That matters because it changes the content of work itself. Human labour becomes concentrated in oversight, verification, and exception handling. The worker is retained for those parts of the process the machine still handles unevenly, or for those parts that firms still want a human to be formally responsible for. The result is a new kind of narrowing. The worker does not disappear, but is repositioned into what we might call residual labour: labour that remains necessary, but on altered and often weaker terms.

This shift is already visible in the way firms and developers talk about AI systems. The problem is increasingly framed not as whether a model can produce enough output, but whether humans can review and validate that output fast enough. In other words, the bottleneck begins to move. It no longer lies only in generating text, code, or analysis. It lies in human approval, quality control, compliance, and responsibility. That is why the “human-in-the-loop” model is spreading across enterprise settings. The machine does more of the production; the human is repositioned as verifier, approver, and bearer of downstream risk.

The pace of capability growth matters politically

The significance of this shift is intensified by the pace of frontier model development. The strongest models are now updated on a cycle measured in months rather than years. Capabilities in coding, tool use, long-context reasoning, multimodal processing, and task orchestration have advanced quickly enough to change production decisions before labour institutions have had time to respond. Demanding coding evaluations such as SWE-bench suggest that frontier systems are no longer operating only at the level of toy examples or isolated function completion. They are already capable enough, in some domains, to alter workflow design and staffing expectations.

The point is not that benchmark scores directly translate into job loss. They do not. But they do matter because they indicate a threshold of practical competence. Once AI systems become good enough to perform a substantial portion of first-pass cognitive work, firms no longer need full automation to change labour demand. They only need enough machine competence to reduce their dependence on workers for routine drafting, coding, searching, organising, and analysis. That alone is sufficient to thin entry-level work, compress training pathways, and intensify output expectations for those who remain.

This is where the temporal asymmetry becomes politically important. Human beings cannot expand their knowledge base, skills, and adaptive capacity at the speed of quarterly model updates. Workers cannot re-skill every few months to keep up with rapidly shifting model capabilities. Unions, regulatory institutions, education systems, and professional norms move even more slowly. Capital, by contrast, can upgrade its productive systems at machine speed, so long as infrastructure, investment, and ownership are in place. The result is a widening mismatch: machine capability accelerates, while labour’s capacity to understand, contest, and reorganise around these changes remains socially and institutionally delayed.

Marx helps clarify what is historically new

Marx’s analysis of machinery remains useful here because it was never only about replacing muscle with metal. The deeper issue was always capital’s effort to reduce its dependence on living labour while increasing control over the labour process. Machinery mattered because it objectified capacity in forms owned by capital and confronted workers as something external to them. AI extends this logic into domains that had remained more stubbornly dependent on human cognition.

If industrial machinery mechanised physical effort, and digital platforms reorganised fragmented labour through algorithmic management, AI agents begin to mechanise portions of cognitive labour itself. What is historically significant is not simply that AI is “smart.” It is that capital now possesses an increasingly capable, capital-owned, non-agentic form of intelligence that can perform many tasks for which it previously depended on workers. Labour has agency. It can resist, refuse, organise, bargain, slow down, or withdraw. AI systems do none of these things. They do not have interests of their own. They are productive systems owned and directed by capital.

That distinction is central. The issue is not intelligence in the abstract. It is intelligence without agency. From the standpoint of capital, that is precisely what makes AI so attractive. A worker can become inconvenient because they demand wages, resist control, need rest, learn slowly, and can challenge authority. A model does not. It can be updated centrally, replicated cheaply, and deployed continuously. This does not mean capital can fully dispense with workers. It means that in more and more areas, capital can begin to reduce the degree to which it relies on them.

That changes labour’s leverage.

The first effects may be quiet, but they are real

This is why the labour-market evidence matters, even where it remains partial. The most important signal so far is not necessarily mass unemployment. It is the quieter deterioration of entry pathways. Recent research, including Anthropic’s labour-market analysis, suggests that younger workers in more AI-exposed occupations are seeing slower hiring, even where aggregate unemployment effects remain muted. That matters because labour markets do not only deteriorate through visible layoffs. They also deteriorate when fewer new workers are brought in, when skill ladders narrow, and when professions remain intact formally while becoming harder to enter materially.

That pattern is especially important for cognitive and computer-mediated work. If AI absorbs growing portions of junior coding, drafting, research support, administrative, or analytical labour, then the first casualties may be the training grounds through which workers once became skilled. This is one reason the bottleneck problem is not just about productivity. It is about the shrinking of social pathways into work.

The global political economy of AI makes this even sharper. ILO–World Bank work suggests that developing economies may face disruption before productivity gains, especially where digital divides and task structures limit the benefits of AI adoption. In such contexts, jobs vulnerable to AI may include precisely those relatively better white-collar or administrative roles that have served as pathways into more stable employment. The result could be a kind of white-collar bypass: disruption arrives early, while productivity gains remain uneven and concentrated.

The human remains, but on more subordinate terms

For all these reasons, the most plausible near-term outcome is not a world without labour. It is a world in which human labour increasingly survives in narrower roles: prompt-giving, monitoring, verification, exception handling, and responsibility-bearing. This can make work feel less substantial, even where workers remain formally employed. It can also intensify control. If output is increasingly machine-generated, workers may be judged less by what they produce directly and more by how quickly they validate, edit, or manage machine output. The result is a labour process that is both accelerated and thinned out.

This is why the bottleneck language should be taken seriously, but not literally. The human is not a natural bottleneck. Labour becomes a bottleneck relative to a production system reorganised around machine-paced cognition, centralised updating, and capital-owned intelligence. The comparison is not neutral. It reflects a labour regime in which speed, scalability, and cost minimisation are privileged above all else.

That is the political point. AI does not simply introduce a new technology into the workplace. It changes the structure of dependence between capital and labour. It weakens labour’s strategic position by reducing the range of tasks for which capital must rely on living workers, while still retaining humans for responsibility, compliance, and the management of failure. In that arrangement, labour remains present, but increasingly as residual labour.

The danger, then, is not just that jobs disappear. It is that workers are retained on more subordinate terms: less autonomous, less indispensable, and less able to contest the process that organises their work. If that is where AI-mediated production is heading, then the real question is not whether labour survives the age of intelligent machines. It is what kind of labour survives—and with how much power left.


Abhinav Kumar is a researcher working on platform labour, AI and labour, and the political economy of technology, with a focus on India and the Global South. His PhD at CISLS, JNU examined digital labour platforms and Zomato workers in Delhi.