An aerial view of the San Francisco city skyline and the Golden Gate Bridge in California, October 28, 2021.
Carlos Barria | Reuters
LONDON – The British government is expanding its facility for testing ‘groundbreaking’ artificial intelligence models to the United States, in a bid to bolster its image as a top global player tackling the technology’s risks and to cement cooperation with the U.S. as governments around the world compete for AI leadership.
The government announced Monday that it would open a U.S. counterpart to its AI Safety Institute, a state-backed body focused on testing advanced AI systems to ensure they are safe, in San Francisco this summer.
The U.S. version of the AI Safety Institute will focus on recruiting a team of technical staff led by a research director. The institute currently has a team of 30 in London. It is chaired by Ian Hogarth, a prominent British tech entrepreneur who founded the concert discovery site Songkick.
In a statement, British Technology Secretary Michelle Donelan said the U.S. rollout of the AI Safety Institute “represents British leadership in AI in action.”
“It is a pivotal moment in Britain’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the U.S. and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”
The expansion “will enable the UK to tap into the wealth of tech talent available in the Bay Area, engage with the world’s largest AI labs headquartered in both London and San Francisco, and cement relationships with the United States to advance AI safety for the public interest,” the government said.
San Francisco is home to OpenAI, the Microsoft-backed company behind viral AI chatbot ChatGPT.
The AI Safety Institute was founded in November 2023 at the AI Safety Summit, a global event in England’s Bletchley Park, home of World War II codebreakers, that aimed to boost cross-border collaboration on AI safety.
The expansion of the AI Safety Institute to the U.S. comes on the eve of the AI Seoul Summit in South Korea, which was first announced at the UK summit at Bletchley Park last year. The Seoul summit takes place on Tuesday and Wednesday.
The government said that since establishing the AI Safety Institute in November, it has made progress in evaluating groundbreaking AI models from some of the industry’s leading players.
It said on Monday that several AI models completed basic cybersecurity challenges but struggled with more advanced ones, while several models demonstrated PhD-level knowledge of chemistry and biology.
Meanwhile, all models tested by the institute remained highly vulnerable to ‘jailbreaks’, where users trick them into producing outputs that their content guidelines don’t allow, while some would produce harmful outputs even without attempts to circumvent their safeguards.
According to the government, the models tested were also unable to perform more complex, time-consuming tasks without supervision.
The government did not name the AI models that were tested. It previously secured agreements from OpenAI, DeepMind and Anthropic to open up their coveted AI models to the government to help inform research into the risks associated with their systems.
The development comes as Britain has faced criticism for not introducing formal regulations for AI, while other jurisdictions, such as the European Union, race ahead with AI-specific laws.
The EU’s landmark AI law, the first major AI legislation of its kind, is expected to become a blueprint for global AI regulation once it is adopted by all EU member states and enters into force.