THE AWS INSTITUTE


Demystifying generative AI for government

The UK Government's Generative AI Framework aims to guide ethical and responsible AI deployment. Alan W. Brown reviews the 10 principles and suggests additional guidance that may be useful.

By Alan W. Brown, Strategy advisor, entrepreneur, and professor in digital economy at the University of Exeter

The role that artificial intelligence (AI) plays in government service transformation is driving strong expectations of improved experiences and significant cost savings, from healthcare and education through to tax and welfare systems.

As a result, interest in and use of AI in the public sector are rising rapidly. An Alan Turing Institute survey of almost a thousand public sector professionals in December 2023 reported that almost half were aware of generative AI and more than one in five were already using it.

At the same time, experience with AI, and particularly generative AI, shows the need for ethical and responsible use of a technology that can produce seemingly endless streams of text, images, or other data using large language models (LLMs) in response to user-defined prompts.

So the UK Government's Generative AI Framework, published on January 24, 2024, is a timely and crucial step in guiding the controlled use of this powerful technology. The UK Framework’s straightforward approach is organized as 10 principles that form a foundation for ethical and responsible AI deployment. While quite broad and high level in nature, each of these principles focuses attention on core concerns that must be addressed in any public sector generative AI use case.

Let's take a look at the UK Framework's key points:

Understanding and limits: Principle 1 rightly emphasizes the need for clear knowledge about generative AI's capabilities and limitations. Public sector leaders must be aware of potential biases, inaccuracies, and security needs.

Responsible use: Principles 2 to 5 delve into responsible use, encompassing legal, ethical, and security aspects. Engaging compliance professionals, mitigating bias, and ensuring data security are crucial steps towards building trust and preventing harm.

Human control and collaboration: Principles 4 and 7 highlight the importance of human oversight and collaboration. Keeping humans in the loop for quality control and decision-making, and embracing transparency, are vital for accountability and public trust.

Lifecycle management and skills: Principles 5 and 9 address the full lifecycle of generative AI solutions, from procurement and deployment to maintenance and skill development. Utilizing existing government resources like the Technology Code of Practice and investing in acquiring necessary skills will be critical for successful use.

Right tool for the job: Principle 6 reminds us to choose the right tool for the specific task. Understanding use cases and evaluating tools like LLMs wisely are critical to achieving desired outcomes.

Beyond the Framework: additional considerations

From a conceptual perspective, it is difficult to find fault with the UK Government's Generative AI Framework. Its 10 principles bring clarity to several concerns that are priorities for those considering generative AI. However, in a rapidly evolving landscape where digital leaders face considerable daily operational challenges in delivering effective public services, the UK Framework needs elaboration. In my experience, several additional considerations could strengthen its impact:

Focus on impact, not technology: While the UK Framework aptly cautions against technology-driven solutions, emphasizing the need for a clear problem statement and user-centric approach would further solidify this point. The UK government's service manual can be a valuable tool to ensure this focus on solving the right problems.

Continuous monitoring and adaptation: The UK Framework highlights the need for continuous monitoring and review to ensure generative AI solutions remain ethical, unbiased, and effective. However, the costs of providing flexibility and adaptability can be substantial, which is a significant challenge that is frequently under-resourced in the public sector. The Framework needs strengthening with the addition of clear metrics and feedback mechanisms for this ongoing evaluation and adaptation.

Public awareness and education: The public needs to be informed and engaged in the use of generative AI in the public sector. There has to be more focus on this, with maximum transparency and accessibility of information about how the technology's tools are used and their potential impact. Our society is only beginning to learn about AI's disruptive impact on our lives and livelihoods: trust and legitimacy depend on our understanding of how we may reap the rewards while managing the challenges.

With these updates in mind, I’d recommend an additional principle for the UK Framework:

Principle 11: You build public trust through ongoing dialogue. You discuss the challenges and opportunities of generative AI openly with the public, through regular town halls, public forums, and interactive platforms.

You encourage ongoing constructive dialogue and engage all stakeholders, including those from traditionally hard-to-reach parts of society.

With this additional principle, the UK Framework would be a more complete guide to the ethical, responsible adoption of this powerful technology in the public sector.

Implementing the principles: From theory to practice

The UK Framework provides a clear roadmap, but translating it into action is the true test.

Effective implementation starts with embedding the UK Framework's principles into the DNA of every project. Investment in training public service professionals must cover every stage, from understanding the principles to deployment. Guardrails such as checklists and decision-making matrices could support this in every project that generative AI touches.

Furthermore, robust governance structures are essential. Public sector agencies are already appointing dedicated AI leads and ethics committees to oversee generative AI projects. These should now be tasked with ensuring compliance with the UK Framework and fostering a culture of ethical decision-making. Regular assessments and audits should be conducted to identify potential issues and ensure ongoing adherence to the principles. Transparency becomes paramount here, with clear communication channels established to inform stakeholders about how generative AI is being used and its potential impact.

In practice, the success of generative AI in the public sector requires a proactive, multifaceted approach. The UK Government’s Generative AI Framework is a welcome advance. Embedding its principles into everyday practice, instituting strong governance structures, and prioritizing transparency are key pillars for success. By taking these steps, digital leaders can leverage the power of generative AI for good and build trust with citizens and stakeholders.

Alan Brown is a technologist, entrepreneur, and professor in the digital economy. He has a PhD in computational science and is a Fellow of both the Alan Turing Institute and the British Computer Society. He has spent 30 years focused on agile approaches to business transformation, and the relationship between technology and business innovation in today’s rapidly-evolving digital economy. He writes, teaches, and leads research teams in digital transformation topics, and consults with startups and established organizations in the public and private sector to help them figure out what’s going on, and how to ask better questions and learn quickly from their experiences.