EXPERT COMMENT
The Bletchley Park summit - and the UK's new AI Safety Institute - will not deliver a new international regulatory framework. But they can be important first steps.
On 1-2 November, the UK will host its AI Safety Summit at Bletchley Park, bringing AI powerhouses like the US and China together with industry leaders, civil society and experts, in an attempt to lead on managing AI risks on an international level.
Today, UK Prime Minister Rishi Sunak previewed the summit by announcing a new UK AI Safety Institute, which would monitor AI development and risks and share its findings worldwide.
When the UK first announced the summit in June 2023, there was some criticism that it added another process to an already crowded landscape.
While there is a need to coordinate across these efforts, especially with the existing Global Partnership on AI, the summit will have a distinct focus on 'frontier' AI risks - that is, the concern that the most powerful AI models could either be used for dangerous purposes or act in unanticipated ways.
The UK government highlights the potential for AI to help synthesize new bioweapons; others point to the likelihood that AI could produce sophisticated disinformation at scale, or evade human supervision once deployed.
Some are sceptical of these warnings, arguing their proponents have not outlined how such scenarios would occur in practice, and that they shift the focus away from other risks - including risks to jobs and of discrimination.
Continue reading the full version of this Expert Comment on the Chatham House website.