The start of 2025 heralds a difficult time for many Western nations, with the outlook for the public sector particularly bleak. Demographic shifts are boosting demand for public services just as tax revenues plateau and the labour force starts to contract. The result? Governments are under pressure to do more with fewer resources.
Many of the usual policy fixes no longer look viable. Tax rates are already at post-war highs. Public debt hovers near record levels. Large-scale immigration – once considered a welcome safety valve – faces growing electoral opposition. And now, as if things weren’t already bad enough, the bond markets have begun to lose confidence.
Faced with these pressures, government leaders are once again turning towards technology as their “get out of jail free” card. If social care, administration, and other civic functions require staff and funding that are no longer available, why not replace or supplement people with software that works 24/7, without overtime or demands to work from home?
The allure of technology
Given this landscape, perhaps it’s no surprise the UK government has launched a 50-step plan designed to turn the UK into a powerhouse for artificial intelligence (AI).
Yet many questions are already being asked about how it will be achieved. Appeals to the magic of technology are hardly new. Governments have been promising an imminent and radical “digital transformation” of the public sector for over 30 years (Figure 1).
But perhaps this time is different?
Opportunities and risks
Amid the sea of grim news and growing policy challenges, one potential bright spot stands out – the emergence of a new generation of AI with the potential to support and improve the work of the public sector. Its proponents contend these systems will soon replace – or radically augment – most types of knowledge work.
The pitch to government is straightforward. Artificial intelligence is the only ace left in a deck full of bad cards. It offers a potential escape from looming labour shortages and, more importantly, a way for governments to maintain – or even expand – vital services despite budgetary pressures and a shrinking workforce.
However, the correct response to this techno-solutionist optimism is a cautious “maybe”. Rolling out advanced technology is not in itself going to deliver public sector reform.
Government departments and agencies still operate on principles that developed in the age of heavy industry. The problem-solving approaches baked into the core of state planning, policymaking, decision-making, and administration don’t align well with modern technologies and practices.
While it’s possible AI could contribute to a transformation of the public sector, it’s not going to happen unless there’s also an overhaul of government culture, organisation, and design processes. Otherwise it risks becoming yet another in a long series of promising technological fixes that fail to deliver.
Linear versus circular
For over three decades, the UK government has tried to modernise its operations with digital tools and practices. But one reason these “digital transformation” initiatives have fallen short of their full potential is because governments have not modernised their structures and operating models. They continue to plod along in a traditional, linear fashion.
To gain value from technologies like AI, the state must move on from its paper-era, top-down, one-shot planning. Governments need to learn from best digital practice and embrace a more effective, iterative approach to policymaking – experimenting, learning, and adapting.
The challenge of public sector adoption of AI mirrors well-known issues with earlier technologies, such as web and mobile. Instead of genuine transformation, government has simply replicated its existing organisations, processes, and transactions online – missing opportunities to rethink how policies and public administration are conceived, designed, delivered, and continuously improved.
Governments remain structured around a model that’s barely changed since Henry Ford developed the linear assembly line in Detroit shortly before World War One. Everything happens stepwise. Governments follow a carefully choreographed routine, reminiscent of Ford-era production lines. Each team handles a predefined task before passing the work forward, leaving no room to revisit earlier assumptions as the organisation learns.
Manifestos set out broad policy promises, often influenced by a political party’s favoured “think tank” or the latest attention-grabbing tabloid headlines. These policies are turned into legislation and fleshed out by departmental policy specialists before being handed off to operational and commercial teams.
It can take months or even years before the first technologist becomes involved, and longer still before a policy is ready for the rude awakening of public testing. This rigid, “waterfall” progression stifles the iterative process of “learning by doing” that is standard in successful digital organisations.
The digital iteration model
Digital organisations are structured around a completely different model. They implement an initial solution as soon as possible, and then iterate and improve it based on users’ interactions and feedback. Instead of trying to predict every outcome at the start, they experiment and learn from real-world experience.
Unfortunately, this iterative approach clashes with the rigid, top-down nature of government policymaking. While digital organisations rely on continuous testing, feedback loops, and outcome-focused learning, most public institutions remain bound by linear, policy-first processes. Manifestos shape legislation before any real-world validation occurs, and hierarchical structures stifle the experimentation essential for genuine transformation.
With two such profoundly opposed models, it’s little wonder the past three decades of digital transformation programmes have made such slow progress.
Fundamental conflict
Why do the two models differ so fundamentally? At root, it’s about the illusion of predictability.
The outlook of a politician or policymaker rests on the belief that the outcomes of design decisions can be reliably anticipated. Manifestos rarely contain hypotheses or shades of grey. They assume a stable, mechanical reality that can be manipulated with a top-down, often ideological “solution” agreed in advance.
This mindset is partly founded on the logic of the electoral system. In theory, parties ask voters to agree to policies in the abstract before delivering their practical outcomes.
The digital world, on the other hand, operates on William Goldman’s principle: “Nobody knows anything”. Since no-one can fully anticipate what will work in practice, digital organisations rely on extensive testing and feedback. Insights into real-world users’ experiences enable them to continuously improve their products and services, and to fine-tune their own internal organisational structures, operations, and processes.
No wonder attempts to insert iterative thinking into the state’s linear approach have failed repeatedly. The two systems’ basic assumptions are fundamentally opposed.
Blockers to the adoption of AI
So, why does this long-standing mismatch matter to the adoption of AI in government? Because it further amplifies the conflict between the old-school model of predict-and-control and the newer model of experiment-and-learn.
Because AI’s outputs – and the user behaviours that shape them – are probabilistic and inherently unpredictable, developers can’t specify outcomes in a one-shot plan. They must gather real-world feedback, refine the model’s prompt or training data, and course-correct in response to how users interact.
This iterative process is the exact opposite of trying to predefine everything in a manifesto. It relies on testing and adapting in real time rather than adopting a dogmatic solution at the start.
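To make this loop concrete, here is a minimal, purely illustrative sketch in Python. Everything in it – the service name, the placeholder functions, the feedback score – is hypothetical and stands in for whatever model calls, user research, and review processes a real team would use; the point is the shape of the ship–observe–refine cycle, not any particular implementation.

```python
# Purely illustrative: a hypothetical AI-assisted public service improved through
# repeated cycles of release, real-world feedback, and refinement, rather than
# being fully specified up front. All names and values here are made up.

def draft_response(prompt_template: str, citizen_query: str) -> str:
    """Stand-in for a call to a language model."""
    return f"[{prompt_template}] answer to: {citizen_query}"

def collect_feedback(response: str) -> float:
    """Stand-in for real-world signals: user ratings, caseworker review, audits."""
    return 0.7  # placeholder satisfaction score between 0 and 1

def refine(prompt_template: str, score: float) -> str:
    """Adjust the prompt (or training data, guidance, thresholds) in light of evidence."""
    if score < 0.8:
        return prompt_template + " + clearer eligibility guidance"
    return prompt_template

prompt = "benefits-eligibility assistant v1"
for cycle in range(3):                       # each cycle: ship, observe, learn
    reply = draft_response(prompt, "Am I eligible for winter fuel support?")
    score = collect_feedback(reply)          # evidence from real users, not a manifesto
    prompt = refine(prompt, score)           # course-correct before the next release
    print(f"cycle {cycle + 1}: score={score}, next prompt: {prompt}")
```

Nothing is finished after the first pass; each cycle changes the service in response to what was learned, which is precisely what a one-shot, manifesto-first specification cannot do.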
To benefit from this technology, policymakers and delivery teams have no choice but to embrace an iterative, evidence-based approach. They must give up the discredited conceit that they can specify final outcomes at the very beginning, before the process of “learning by doing” ever gets started.
Policymaking’s missed opportunities
Governments’ linear approach to policymaking inevitably locks in questionable assumptions and constraints long before policy ever makes contact with the real world.
This approach generates huge missed opportunities – generalist politicians and officials often have little idea how technology could open up alternative ways of designing and delivering better policy outcomes.
Worse, the state’s linear mindset amplifies risk. Far too often the unintended consequences of policy decisions only come into focus much later. By that point, policies have long been fixed, making it extremely difficult to rethink the strategy to mitigate emerging harms.
Governments are understandably concerned with ensuring fairness, equality, and accountability when adopting emerging technologies. These concerns, too, are best managed through a “learning by doing” approach that embeds legal and ethical review and user feedback loops into every stage of the process.
Why it matters
Governments’ top-down, department-centric, project-based approach to procurement exacerbates these problems. Funds are allocated for a one-off, siloed effort. But technologies like AI require ongoing investment, tuning, and adaptation. Every initiative must navigate the never-ending flow of new and improved models with ever-evolving capabilities. These rapid improvement cycles mean there’s no such thing as “job done”.
In short, the machinery of democratic government – manifesto promises, top-down policymaking, one-off budgets, department- and project-centric procurement, and lengthy implementation cycles that often only engage technical expertise well downstream – is fundamentally mismatched with the technologies remaking our world.
The state’s mindset is still anchored to the age of heavy industry and linear process automation rather than transformation and reform. Meanwhile, synthetic intelligence is catapulting an unreformed state into a future that, until recently, seemed confined to the pages of science fiction.
If there is good news here, it’s that technology businesses and democratic governments share at least one thing in common: they both seek to discover what people need, and to provide it to them quickly and effectively (Figure 2).
The real value of new technologies like AI isn’t in automating yesterday’s bureaucracy, but in reimagining and democratising the policymaking process from the ground up.
How governments should respond
If governments are to harness AI’s potential to address their social and economic challenges, they need to:
Show humility in political discourse: political leaders and parties should present policy ideas as questions or hypotheses rather than promises, and stop pretending they know everything in advance. “Learning by doing” is essential to designing and delivering better policies and public administration.
Involve technologists from the beginning: move technical experts and delivery teams into the earliest stages of policy ideation, conception and design, ensuring that practical feasibility and iterative testing and learning shape and inform policy options.
Implement continuous, adaptive funding: move away from one-off lump-sum allocations. Develop budgeting models that fund initiatives on an ongoing basis, allowing for continuous iteration and improvement independent of current organisational silos.
Build cross-functional teams: break down bureaucratic silos so that policy, operations, commercial, and technology specialists work together from day one, fostering a shared culture of experimentation and learning.
Iterate policy development: treat policy proposals as hypotheses to be tested, rather than foregone conclusions. Introduce milestones for real-world feedback and incorporate that data into policy refinement to improve policy outcomes.
Develop user-centred performance metrics: shift from static targets to metrics that measure how well policies meet user needs and deliver the intended outcomes. Continuously adjust strategies based on real-world performance data.
Technologies don’t exist in a vacuum. Each new wave brings its own organisational and social implications. If Western governments are serious about delivering the reforms they’ve been promising us since the 1990s, they need to place technology at the heart of a much more fundamental structural transformation of the machinery of government.
Linear-sequential methods were transformative for manufacturing in 1913, but they are ill-suited to a digital world powered by technologies like AI. It’s time to abandon the century-old assembly-line mindset and adopt modern, iterative processes at every level.
Perhaps AI’s biggest contribution will be to trigger the long-promised overhaul of governments’ structure and operations. By forcing leaders to confront and shed their industrial-era assumptions, it could prove to be the catalyst that finally delivers the “digital transformation” of the state – helping governments meet not only the formidable challenges they face in 2025, but those that lie well beyond.