The new ILO–World Bank warning on AI and jobs gets one major thing right: the labour shock will be uneven, and developing economies are likely to absorb disruption before they receive productivity gains. That is an important correction to the optimism that usually dominates technology policy. But the policy conversation is still lagging behind the technology. The report is framed around generative AI exposure, while firms are already moving into agentic AI deployment. That difference is no longer a technical detail; it is now the centre of the labour question.
Generative AI changed how quickly firms could produce drafts, summaries, code snippets, and communication outputs. Agentic AI changes something more fundamental: it can increasingly execute multi-step tasks, coordinate tools, and complete workflow segments with limited supervision. Once systems move from output generation to process execution, the unit of substitution is no longer an isolated task. It is the workflow itself. This is where disruption accelerates. In practical terms, the question is no longer whether AI can write. The question is whether AI can run the process.
That is why the current moment is more disruptive than most "GenAI and jobs" framings admit. Entry-level and mid-layer white-collar roles are vulnerable not because they are low-skill, but because they are process-intensive. A large part of their value historically came from linking steps across documents, systems, and decisions. Agentic systems now target exactly these chains. Even where full replacement is not immediate, firms can compress teams, reduce new hiring, and shift surviving workers into higher-intensity oversight roles. The likely outcome is not a simple replacement story; it is a restructuring story with fewer entry pathways, tighter managerial control, more residualized labour, and weaker bargaining leverage for workers.
This is where the Global South problem becomes sharper than standard exposure metrics suggest. The ILO–World Bank concern about digital divides remains valid, but the deeper issue is institutional asymmetry. Firms can import new organizational models quickly. States cannot build social protection, labour inspection capacity, and bargaining institutions at the same speed. So disruption arrives fast through firm strategy, while adjustment support arrives slowly through public policy. In that gap, transition costs are privatized. Workers pay through unstable incomes, longer search periods, debt-financed survival, and movement into lower-quality employment.
India is especially exposed to this sequencing problem. We have already seen a version of it in platform labour: technology scaled rapidly, labour protections lagged, and workers were asked to absorb volatility as an individual burden. Agentic AI can reproduce that architecture in clerical, service, and analytical work, only faster. If policy treats this as a skilling problem alone, it will fail. Skills are necessary, but they do not substitute for bargaining power, wage protection, and enforceable deployment rules. Training workers for roles that are simultaneously being structurally compressed is not a transition strategy. It is a political deferral.
The policy sequence has to change. An adoption-first, protection-later sequence is the wrong order in the agentic phase. Distribution-first policy means building protections before large-scale deployment hardens firm behaviour. That includes pre-deployment labour impact assessment, worker consultation rights when AI reorganizes jobs, income protection tied to AI-linked displacement, and clear accountability where algorithmic systems affect hiring, pay, discipline, and termination. It also means protecting labour-market entry ladders. If early-career pathways collapse, the long-term social cost will exceed any near-term productivity gains captured by firms.
What is at stake now is not whether AI raises productivity in the abstract. It is who governs the transition and who bears the risk. Agentic AI increases managerial capacity to reorganize work. Without institutional counterweights, that capacity will translate into one-sided distributional outcomes. The real divide in the next phase is not between workers who can use AI and workers who cannot. It is between economies that can socialize transition risk and those that offload it onto households.
The ILO–World Bank intervention should therefore be read as a baseline warning, not a complete map of what is coming. It correctly identifies uneven impact, but the frontier has already moved beyond GenAI. The labour politics of this decade will be shaped by systems that can execute, not merely generate. If governments continue regulating for yesterday’s AI, workers will face tomorrow’s shock alone.