How our AI governance framework is enabling responsible use of AI

From: Digital trade
Published: 13 August 2024


Back in March we wrote about the challenge of creating a process to ensure the safe and responsible use of AI within DBT. That post established the groundwork for the AI governance process. In July we provided an update on our work, detailing our support for the delivery of 2 tools underpinned by Large Language Models (LLMs). We also shared the new impact assessment that we're using to help decide whether to make tools like this available outside of a trial.

This third post looks at the AI and Data Governance team's first impressions from implementing our governance process.

Implementation

At the time of writing, 28 submissions have been received through the AI governance framework. The underlying tools are assessed for potential data protection and cybersecurity issues to ensure they can be deployed on our data and analysis platform. This is vital to the responsible deployment of the technology on the DBT tech estate. Both general LLMs and more specialised tools have been requested.

Submissions have come from across the department, covering both day-to-day and more specialised or analytical tasks. None of the use cases being developed involve automated decision-making about individuals. Some examples include:

  • forecasting global trade
  • audio transcription of ministerial interviews and background briefings
  • identifying topics and trends in Free Trade Agreements (FTA) texts to assist negotiators
  • reviewing job descriptions and adverts for Diversity, Equity, and Inclusion to make them attractive and accessible to a broader range of candidates
  • sentiment analysis of archived text data
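
As an illustration of the more traditional NLP end of this spectrum, the sentiment analysis example above could, in principle, be handled with an off-the-shelf library run in a closed environment. The sketch below uses NLTK's VADER analyser on invented text purely for illustration; it is not the tool or dataset from any actual submission.

    # Illustrative only: rule-based sentiment scoring with NLTK's VADER analyser.
    # The texts are invented; this is not the tool or data from any DBT submission.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-off download of the sentiment lexicon

    archived_texts = [
        "The negotiations concluded ahead of schedule and the outcome was well received.",
        "Stakeholders raised serious concerns about the lack of consultation.",
    ]

    analyser = SentimentIntensityAnalyzer()
    for text in archived_texts:
        scores = analyser.polarity_scores(text)  # returns neg, neu, pos and a compound score
        print(f"{scores['compound']:+.2f}  {text}")

Because everything runs locally on a fixed dataset and the scoring rules are transparent, this is the kind of task that tends to sit at the lower-risk end of the framework.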

Submissions have been split approximately 2:1 between various forms of generative AI tools and more traditional machine learning or Natural Language Processing (NLP) approaches. There is a more even split between AI types for those submissions which either have or are close to receiving approval. ChatGPT is overwhelmingly the most suggested generative AI tool, though a range of more specialised tools have been mentioned in submissions for more focused applications.

Our governance framework has enabled us to collaborate quickly with the Cabinet Office on Redbox, a generative AI tool which allows civil servants to summarise and search civil service documents. By having a thorough understanding of the data flows and technical aspects of using OpenAI, we were able to get the Redbox tool through our internal governance processes as a trial.

The AI enablement team is looking closely at requests arriving from both the AI governance form and an internal transformation fund. We are planning to scale our infrastructure to support self-hosted LLM solutions, allowing a wider range of data to be used. We are also assessing whether Redbox can support the requested use cases that involve document retrieval and chat.
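
As a rough sketch of what a document retrieval and chat use case involves (and not of how Redbox itself is built), the pattern is usually: find the passages most relevant to a question, then pass them to an LLM alongside that question. The example below uses a simple TF-IDF search over invented documents; the documents, question and prompt wording are all hypothetical.

    # Hypothetical sketch of the retrieval step in a document retrieval and chat flow.
    # This is not Redbox's implementation; it only illustrates the general pattern.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [  # invented examples, not real departmental documents
        "Guidance on recording decisions taken during trade negotiations.",
        "Policy on sharing briefing material with external stakeholders.",
        "Procedure for archiving ministerial correspondence.",
    ]
    question = "How should briefing material be shared outside the department?"

    vectoriser = TfidfVectorizer()
    doc_vectors = vectoriser.fit_transform(documents)
    question_vector = vectoriser.transform([question])

    # Rank documents by similarity to the question and keep the best match.
    scores = cosine_similarity(question_vector, doc_vectors).ravel()
    best_passage = documents[scores.argmax()]

    # The retrieved passage and the question would then be sent to an approved LLM.
    prompt = f"Answer using only this context: {best_passage}\n\nQuestion: {question}"
    print(prompt)

In a real deployment a more capable semantic search would likely replace the TF-IDF step, but the questions the governance form asks about data flows would be the same.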

Lessons learned

Differing risk factors between generative AI and machine learning

The approach to AI governance has varied depending on whether the submission describes generative AI or more traditional machine learning techniques. Machine learning techniques tend to have more easily interpreted outcomes and can be run on smaller datasets in a closed environment, so they can often be considered lower risk. The department has extensive experience using traditional machine learning techniques, such as statistical models used to identify patterns, so the risks are well understood and mitigated. Submissions in this category can therefore be approved safely, easily and quickly.

Submissions which include the use of generative AI undergo further scrutiny to investigate risks unique to generative AI models, such as hallucination and privacy concerns. A "hallucination" is where a generative AI model presents false or misleading information as fact. Because generative AI models are multifaceted, the AI governance procedure is used not just for new AI tools, but also for new use cases of already approved tools.

Submission refinement

Every submission receives feedback from the team, which in many cases has led to a change in approach. In a few cases the feedback has encouraged submitters to change their choice of LLM. More commonly, it has guided submitters towards considering the data flows involved in their proposed use cases more carefully, even where personal data is not involved. Data Protection Impact Assessment (DPIA) screening forms are submitted to confirm that the intended use case complies with departmental policy in this area.

Iterative process

We are still in the early stages of using this framework and are learning every day how to make the AI governance process more robust. This is an iterative process: we learn from the submissions received and adjust the AI governance framework form accordingly. Initially we planned to review the process after 50 submissions had been received. However, after receiving input from colleagues in the Technical Design Authority and Cyber teams, we are already looking into adjustments, and this will continue as we learn more about how the process works. The process will be continually adjusted to ensure we meet the standard necessary for a government department in terms of trustworthiness, fairness, accuracy and impartiality.

Integration into existing processes

The AI governance process does not act as an approvals body that prohibits the use of AI, but as an enabler of safe AI within the department. The process facilitates conversations, signposts users to relevant teams and protocols, and fosters a culture of responsible AI deployment. To achieve this, we have worked closely with experts in cybersecurity, the Technical Design Authority (TDA), and the Information Risk Assurance Process (IRAP), which covers data protection.

Some of the priorities which have been identified during this work include:

  • ensuring the AI governance form is the first port of call by establishing a clear 'hook' to the form
  • re-wording the form to ensure submitters are not directed to processes which they have already started (for example, Data Protection Impact Assessments)

Transparency

Transparency is key to the success of the AI governance framework, and we are working to ensure it both internally and externally. Internally, a department-wide Teams channel was launched in March to provide a forum for information and discussion. We also intend to publish a register on the internal data and analytics platform, Data Workspace. This will allow anyone in the department to see the types of use case being submitted for consideration, along with their approval status.

Externally, we will be publishing details of public facing algorithms and tools through the cross-government Algorithmic Transparency Recording Standard. This will help to meet the National Data Strategy's commitment to greater transparency on algorithm-assisted decision making in the public sector.

Future plans

DBT is still in the early stages of AI implementation, and as this evolves, so will the work of the AI governance team. The iterative process described above will continue to develop in a clear and robust way that enables the safe implementation of the technology within the department. As more tools are approved and deployed on DBT systems, we anticipate an increasing number of submissions. We will focus on grouping tools into categories so we can see whether existing tools can cater to new submissions and requests. Through this, we will streamline the process and minimise any obstacles that surface, so the department can responsibly use AI to its full potential.
