A two-day international summit held in the UK has concluded with an agreement on AI safety, signed by 28 countries that represent most of the leading forces in AI development.
The Bletchley Declaration on AI was the end result of the AI Safety Summit, held at the site made famous by Allied code-breaking efforts during World War II. Signatories agreed that AI has the potential for serious and even catastrophic harm, particularly in certain fields such as biotechnology and cybersecurity, and that international cooperation is required in studying these risks and developing methods to manage them. The US and UK signed on to the agreement along with a number of EU states, Brazil, China, India, Japan, Saudi Arabia and the UAE.
Summit on AI safety to be held annually
The Bletchley Declaration represents a shared international understanding on AI safety and the potential scale of risks and problems that could emerge as the technology develops. In addition to sharing information for the purpose of greater visibility into frontier AI risks, the commitment includes a promise of increased international talks and scientific collaboration. A virtual summit hosted by South Korea is set to take place in six months, and the next in-person summit will be held in France in a year.
In terms of immediate concrete developments, the summit looks to have successfully built on the UK Prime Minister’s proposed AI Safety Institute, a pioneering body that would be backed by the G7, United Nations and Global Partnership on AI among others. The institute is intended to serve as a central clearinghouse for scientific research on frontier AI safety to be shared between international participants. The US has since announced that it will establish its own AI Safety Institute.
Prime Minister Rishi Sunak characterized the summit as a diplomatic coup, and King Charles made a pre-taped appearance to call for unity in facing AI safety challenges presented by cutting-edge and untested technology.
Poppy Gustafsson, CEO at Darktrace, sees this as an encouraging initial development but also just the first step in a long journey: “AI safety and AI innovation are not in conflict. They go hand in hand. In this journey to build these exciting new technologies that can substantially benefit society, the safer we make AI, the faster we’ll be able to realise the opportunities. AI is already a broad toolkit, with a wide range of applications. Ensuring AI is safe is not a one-size-fits-all challenge: it needs to be tailored to the use case. It is up to humans to decide when, how and where to use AI. It is not something that is being done to us. While I am mindful of the risks, I remain an AI optimist. In Darktrace’s world of cyber security, AI is already essential as it’s the only way to spot novel attacks. I’m excited to see how the conversation evolves – and what action it will drive.”
AI safety conference sees appearances from tech luminaries
King Charles was far from the only person of prominence who put in an appearance at the summit. Elon Musk, who has long been among the loudest voices calling for international attention to be paid to AI safety, also attended and closed out the second day of the summit with a personal conversation with Sunak. Musk touched on a number of AI safety topics during this talk, from the feasibility of including manual “kill switches” to override AI systems that might go off the rails, to whether humanity can expect to see AI become more intelligent than humans. Musk has previously stated that he believes AI can become an existential threat to humanity, something he reiterated just ahead of the summit, but the subject of an “AI apocalypse” did not come up during the talk (which did not include questions from the audience).
Fears of a “Terminator scenario” were not shared by all conference participants, with an unsurprising amount of overlap between the skeptics and those presently developing large “frontier” AI systems. Meta’s Nick Clegg, president of global affairs for the company, urged participants to focus on immediate and everyday AI safety risks like bias in decision-making systems. Another luminary present at the conference, Turing Award-winning researcher Yoshua Bengio, was asked to head up a body tasked with producing a report on the risks and possibilities of frontier AI systems.
Joseph Thacker, researcher at AppOmni, notes that a “doom for humanity” scenario is still a matter of great debate among AI experts: “Experts are split on the actual concerns around AI destroying humanity, but it’s clear that AI is an effective tool that can (and will) be used by forces for both good and bad … The declaration doesn’t cover adversarial attacks on current models or adversarial attacks on systems which let AI have access to tools or plugins, which may introduce significant risk collectively even when the model itself isn’t capable of anything critically dangerous. The declaration’s goals are possible to achieve, and the companies working on frontier AI are familiar with this problem set. They spend a lot of time thinking about it, and being concerned about it. The biggest challenge is that the open source ecosystem is really close to enterprises when it comes to making frontier AI. And the open source ecosystem isn’t going to adhere to these guidelines – developers in their basement aren’t going to be fully transparent with their respective governments.”
Much of the current sense of urgency about AI safety was sparked by the release of ChatGPT just about one year ago. The “large language model” prompted fears over both what it could and could not do; its text-generation capabilities immediately inspired fears of massive job losses in a variety of fields, but its propensity for “hallucinations” and confident declarations of false information also demonstrated that the young technology was still prone to serious and unanticipated errors.
United Nations Secretary General Antonio Guterres warned that there will never be a better time to put firm regulation into place and take proactive control of AI safety. The technology is expected to build upon itself exponentially, with development speed only increasing as AI becomes more capable and autonomous. Summit participants did not directly call for specific regulation as of yet, however, with Sunak characterizing the goal of this initial meeting as being to “understand” AI first. AI safety is not the only concern on the table, as both nations and private industry seek to be first in line to harness its financial and tactical advantages.
On the developer end, major players like Google and OpenAI have already promised to improve government access to the inner workings of AI models and their training processes. The summit did not establish new specifics in this area, however. Countries are not yet on the same page in terms of regulation either, as the host UK is still resisting passing a specific AI safety bill even as the EU prepares to put extensive legislation in place.
But Ted Miracco, CEO of Approov, notes that a blizzard of legislation is likely coming as nations around the world do not want to repeat the mistakes made with social media: “The Bletchley Declaration demonstrates a more proactive approach by governments, signaling a possible lesson learned from past failures to regulate social media giants. By addressing AI risks collectively, nations aim to stay ahead of tech behemoths, recognizing the potential for recklessness. This commitment to collaboration underscores some determination to safeguard the future by shaping responsible AI development and mitigating potential harms. We all certainly harbor doubts regarding the ability of governments and legal systems to match the speed and avarice of the tech industry, but the Bletchley Declaration signifies a crucial departure from the laissez-faire approach witnessed with social media companies. We should applaud the proactive effort of these governments to avoid idle passivity and assertively engage in shaping AI’s trajectory, while prioritizing public safety and responsible governance over unfettered market forces.”