The Prime Minister's call for the “complete rewiring of the British state” has put the onus on the civil service to match the demands placed upon it by rapid technological advances – most notably the rise of generative artificial intelligence (AI).

The question is not if or when AI will change how policy is made, but how policy makers can use it to improve outcomes for citizens. The impact will be extensive but not total. There are some parts of the policy making process where the role of the policy maker is relatively unaffected – like officials using their judgement to navigate the competing interests and ideals involved in developing policy.

But in other areas, the effect will be more apparent and immediate. Tools like Redbox can dramatically reduce the time it takes for a minister to learn about a new topic – as well as commissioning an official, they can ask a large language model (LLM). This challenges the traditional ways officials manage the flow of information to ministers.

LLMs will also change the intellectual process by which policy is constructed. In particular, they are increasingly useful – and so increasingly used – to synthesise existing evidence and suggest a policy intervention to achieve a goal.

Policy work across Whitehall is already being usefully augmented by LLMs, the most common form of generative AI. The tools available include:

Redbox, which can summarise the policy recommendations in submissions and other policy documents, and has more than 1,000 users across the Cabinet Office and the Department for Science, Innovation and Technology.

Consult, which the government says summarises and groups responses to public consultations a thousand times faster than human analysts. Similar tools are used by governments abroad, for example in Singapore.
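To make the grouping task concrete: the sketch below shows one crude way responses to a consultation could be clustered by wording overlap. This is purely illustrative – the function names, thresholds and method are assumptions for this example, and tools like Consult reportedly use LLMs rather than simple word matching.

```python
# Illustrative sketch only: grouping consultation responses by word overlap.
# All names and thresholds here are hypothetical, not how Consult works.
from typing import List


def tokens(text: str) -> set:
    """Lowercase word set, ignoring very short words."""
    return {w for w in text.lower().split() if len(w) > 3}


def jaccard(a: set, b: set) -> float:
    """Overlap between two word sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0


def group_responses(responses: List[str], threshold: float = 0.2) -> List[List[str]]:
    """Greedily assign each response to the first group it resembles."""
    groups: List[List[str]] = []
    for r in responses:
        for g in groups:
            # Compare against the group's first (representative) response.
            if jaccard(tokens(r), tokens(g[0])) >= threshold:
                g.append(r)
                break
        else:
            groups.append([r])
    return groups


responses = [
    "Hospital waiting times are far too long in my area",
    "Waiting times at my local hospital are too long",
    "Bus services in rural areas need more funding",
]
grouped = group_responses(responses)
print(len(grouped))  # → 2 (hospital waiting times; rural bus funding)
```

Even this toy version hints at why automation helps at scale: a human analyst reading tens of thousands of responses is doing, in effect, this grouping by hand.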

A live demonstration of Redbox at the 2024 Civil Service Policy Festival showed it analysing a document outlining problems with the operation of the National Grid and summarising ideas for addressing them.

LLMs have limits

While LLMs are advancing quickly and some of their current shortcomings might only be temporary, there remain limits to what they can do.

They can synthesise a wide range of sophisticated information, but their subsequent output can be wrong, occasionally wildly so – known as hallucination. LLM outputs might also contain biases for which officials need to correct, including unfair assumptions about certain demographic groups.

Because LLMs are trained on available written information, their outputs can miss the nuance and context human experience can provide. Designing new policy to increase, say, the efficiency with which hospitals are run requires possessing advanced knowledge about healthcare policy, of the sort LLMs are capable of summarising.

But it also requires insight into how those affected will respond to any changes.

LLMs also tend to provide “standard” answers, struggling to capture information at the cutting edge of a field and to provide novel ideas. Unless stretched by the user, they are unlikely to suggest more radical answers, and this has consequences, particularly in fast-moving areas of policy. Ironically, AI policy is one such area.

Finally, over-credulously incorporating LLM outputs into the policy making process can be dangerous. Evidence, whether scientific, social or other, rarely points in one direction, and an LLM summarising evidence might implicitly elevate some political principles over others. If done badly, a policy maker incorporating that output into advice to a minister risks building assumptions into their recommendations which run contrary to that minister's views.

Policy makers' role will change

These are all good reasons for caution. But the potential benefits of using LLMs are large. In an AI-augmented policy making process, the policy maker's key role will be to introduce the knowledge that an LLM cannot.

Policy makers' added value will likely manifest in two main ways. The first is using their expertise to edit and shape LLM “first drafts” – including checking for and correcting hallucinations and untoward biases. This is not that dissimilar to what the best policy makers currently do – humans, too, get things wrong or express biases through their work.

The second is by layering policy makers' ideas on top of LLM outputs, sometimes being prepared to push them in a more radical direction. This could involve an iterative process, in which an LLM is asked to provide feedback on ideas produced by a policy maker. The time freed up by using LLMs to perform traditionally time-intensive tasks could give policy makers the option to gather and deploy new types of information, which can help them craft better policy.

Particularly important will be the kind of hyper-specific or real-time insight which LLMs struggle to capture, and which would be accessed in new and creative ways – like building a professional network that can give real-time reactions to new developments.

Building skills

However, integrating LLMs into government might make it harder for policy makers to acquire important skills. If domain expertise and insider insights are the things for which policy makers are increasingly valued, they must build the commensurate skills.

But this presents something of a paradox – LLM adoption might not only make domain expertise even more important to possess, but also harder to acquire. It is precisely the activities that LLMs are so efficient at performing – gathering and synthesising existing evidence, and using it as the basis for policy solutions – that give policy makers the building blocks of expertise.

This also has consequences for policy makers' ability to gather insider insights. It is all very well freeing up time for policy makers to collect information in new ways, but if they do not have a baseline level of expertise they will find it hard to know what to look for.

This leaves the civil service with two options. The first is to preserve some basic tasks for more junior officials so they can build the domain expertise needed to use LLMs intelligently.

The second is to reinvent the way policy makers acquire expertise, reducing reliance on the now AI-augmented traditional methods. For example, the skills that officials currently pick up as juniors in Whitehall might instead be developed once they are more senior.

Perhaps the best approach would be for the civil service to start by ringfencing tasks, but actively commission “test and learn” projects to explore more imaginative approaches, scaling those that work. This should take place alongside more traditional solutions. For example, the civil service has a problem with excess turnover, and officials who move between policy areas less frequently would find it easier to develop expertise.

Conclusion

Policy making is among the most important and hardest jobs the civil service does, and improving how it is done is a substantial prize. A policy making process which blends human expertise with LLMs will not just be more efficient, but more insightful and connected to citizens' concerns.

Channelling the adoption of LLMs in the most productive way, maximising the benefits while mitigating the risks, is crucial for the civil service to get right. Just letting change happen should not be an option – it must be proactively shaped.

Jordan Urban is a Senior Researcher at the Institute for Government.
