OpenAI and DeepMind to Open Models to UK Government?
UK Prime Minister Rishi Sunak announced at London Tech Week on Monday that Google’s DeepMind, OpenAI, and Anthropic have agreed to open their artificial intelligence models to the UK government for research and security purposes.
Sunak said this would enable the UK government to “help build better assessments and therefore better understand the opportunities and risks of these systems.” His speech primarily touted AI’s potential to reshape fields such as education and healthcare, positioning the UK as an “island of innovation.”
“AI is definitely one of the biggest opportunities in front of us,” Sunak said. “The possibilities are extraordinary by combining AI models with the power of quantum.”
In March 2023, the UK government released an AI White Paper advocating a “pro-innovation” approach, but Sunak recently emphasized the necessity of “guardrails.”
Sunak said on Monday that the UK wants to be not only a core hub of scientific knowledge but also the geographical home of global AI safety regulation, although the government has yet to put forward specific regulatory proposals or rules.
At the heart of these plans is the global AI Safety Summit, to be held in the UK this fall. Sunak compared the summit to an “Artificial Intelligence version of the United Nations Climate Change Conference.”
In addition, a Foundation Model Taskforce has been established, with approximately £100 million in funding currently supporting research in AI safety. The UK has previously said that its flexible, balanced regulatory approach will continue to make it an attractive investment destination.
Anthropic and OpenAI recently established their European headquarters in the UK, and Palantir announced last week that it would establish an AI research center in the country.
It is worth noting that OpenAI CEO Sam Altman visited various European countries in May 2023 and met with regulatory agencies in various countries, including government leaders in Spain, Poland, France, and the UK.
Altman first met with Spanish Prime Minister Pedro Sánchez because Spain takes over the six-month rotating presidency of the Council of the EU this summer, giving it considerable leeway to influence discussions on the EU’s AI rulebook. Government leaders in France, Poland, and Spain are likely to shape the final form of the EU’s AI regulations through the European Council.
Altman also took part in an on-stage discussion at University College London, where he described his preferred approach to regulation as sitting somewhere between the traditional European and traditional American methods.
In May 2023, members of two key committees of the European Parliament proposed a series of amendments to the Commission’s initial (April 2021) risk-based AI regulatory framework, aimed at ensuring that general-purpose AI, foundation models, and generative AI are covered by the rules.
The European Parliament supports obliging providers of foundation models to apply safety checks, data-governance measures, and risk-mitigation measures before placing their models on the market, including a requirement to consider “foreseeable risks to health, safety, fundamental rights, the environment, and democracy and the rule of law.”
The amendments would also require foundation-model providers to reduce the energy and resource consumption of their systems and to register in an EU database. Providers of generative AI technologies (such as OpenAI’s ChatGPT) would face transparency obligations: they must ensure users are informed that content is machine-generated, apply “adequate safeguards” to the content their systems generate, and provide summaries of any copyrighted material used to train their AI.
Nevertheless, OpenAI’s CEO took time out of his European tour to meet with UK Prime Minister Sunak, joining Demis Hassabis of rival DeepMind and Dario Amodei of Anthropic in the meeting. The UK remains a major European economy, and there have been rumors that OpenAI is considering setting up a local headquarters in the country. For OpenAI, the key regulatory point is that the UK is no longer an EU member, and so will not be bound by the EU’s Artificial Intelligence Act.
In addition, the Sunak government does not intend to enact any new domestic legislation to regulate AI. A recent UK government white paper proposed relying on existing regulators, such as the Competition and Markets Authority and the Information Commissioner’s Office, to issue guidance on the safe development of AI, rather than legislating to regulate the technology’s use.
Britain may leverage its geographic position and its prominence in the important field of AI safety to exert influence. If, for example, the AI giants back a UK-led dialogue on AI safety research, that would go a long way toward shaping UK AI rules that suit their businesses, and the UK would benefit in turn.
However, if the UK’s AI safety summit is to achieve real, credible success, it must bring together researchers from around the world and actively invite independent researchers, civil-society groups, and technical organizations to participate, rather than limiting the agenda to cooperation between “leading AI companies” and local scholars.
A global summit on AI safety must set aside bias and division and pursue the well-being of all humanity as we head toward an AGI world.