“Frontier AI Taskforce: Safeguarding the UK’s Future with Industry, Academia, and National Security”


The UK government’s artificial intelligence (AI) taskforce advisory board is expanding its ranks with professionals from various fields to enhance its capabilities. Originally named the AI Foundation Model Taskforce, the taskforce was established in April 2023 with £100 million in government funding to conduct AI safety research and advise on the opportunities and risks associated with the technology. It has since been rebranded as the Frontier AI Taskforce.

According to the initial progress report published by the Department for Science, Innovation and Technology (DSIT) on 7 September 2023, the taskforce is described as a “startup inside government.” One of its core goals is to provide public sector researchers with resources equivalent to those available at companies such as Anthropic, DeepMind, and OpenAI to work on AI safety. The report emphasized the potential risks posed by increasingly capable AI systems, such as heightened cyber security and biosecurity threats. To manage these risks, it argues that technical evaluations should be developed by a neutral third party rather than being left to the AI companies themselves.

In addition to addressing safety concerns, the Frontier AI Taskforce will focus on assessing AI systems that pose significant risks to public safety and global security. While these AI models have the potential to drive economic growth and scientific progress, responsible development is crucial to mitigating safety risks.

To strengthen the taskforce’s advisory capabilities, seven new appointments have been made to its board. The appointees include Yoshua Bengio, a Turing Award winner; Paul Christiano, founder of the Alignment Research Centre (ARC); Anne Keast-Butler, director of GCHQ, the UK’s signals intelligence agency; and Helen Stokes-Lampard, chair of the Academy of Medical Royal Colleges. Other appointments include Alex van Someren, the UK’s chief scientific adviser for national security; Matt Collins, the UK’s deputy national security adviser for intelligence, defence, and security; and Matt Clifford, prime minister Rishi Sunak’s joint representative for the upcoming AI Safety Summit.

Furthermore, Oxford academic Yarin Gal has been appointed as the taskforce’s first research director, while Cambridge academic David Krueger will contribute to its research programme in a consultative role. Ollie Ilott, who previously led Sunak’s domestic private office and the Cabinet Office’s Covid strategy team, has been appointed as the taskforce’s director.

The report also highlights the taskforce’s rapid progress: the team has grown quickly, drawing in AI researchers with more than 50 years of collective experience at the frontier of the field. It now includes researchers with backgrounds at institutions such as DeepMind, Microsoft, Redwood Research, the Center for AI Safety, and the Center for Human-Compatible AI.

Ian Hogarth, chair of the Frontier AI Taskforce and an angel investor and tech entrepreneur, welcomed the appointment of the external advisory board, pointing to the members’ combined expertise in AI research and national security drawn from academia, industry, and government. He said the taskforce’s work aims to ensure the safe and reliable development of foundation models while strengthening the UK’s leading AI sector and delivering benefits for society as a whole.

The formation of the advisory board coincides with the establishment of the Trades Union Congress (TUC)’s AI taskforce, which focuses on advocating for new laws to protect workers’ rights and foster broad social benefits from AI technology. The TUC’s taskforce brings together specialists in law, technology, politics, HR, and the voluntary sector. They plan to publish an AI and Employment Bill early in 2024 and actively lobby for its incorporation into UK law. Notably, Labour MP Mick Whitley introduced a worker-focused AI bill to Parliament in May 2023, intending to promote non-discrimination, worker participation in decision-making, and transparency in data usage in workplaces. The bill’s second reading is scheduled for 24 November 2023.

The initiatives taken by both the Frontier AI Taskforce and the TUC’s AI taskforce demonstrate the UK’s commitment to addressing the opportunities and challenges presented by AI. By drawing on the expertise of professionals from a range of backgrounds, the government aims to ensure that AI technologies are developed and deployed safely, responsibly, and beneficially, fostering economic growth, scientific progress, and better societal outcomes while mitigating risks to public safety, cyber security, and global security.