The Regulation Tortoise and the AI Hare


By Robert Whitfield
LONDON, Jun 16 2023 – Regulation of a technology typically emerges some time after it has been deployed in a product or service, or, worse, after its risks have become apparent. This reactive approach is regrettable when real harm is already being done, as is now the case with AI. Where the risk is existential, waiting for harm to appear would mean risking the end of human existence.

In the past few months, generative artificial intelligence (AI) systems such as ChatGPT and GPT-4 have become available with no official regulatory control at all. This is in complete contrast to a new plastic duck toy, which must meet numerous regulations and safety standards. The fact is that the AI hare has been streaking ahead while the regulation tortoise, though moving, is far behind. This has to change now.

What has shocked AI experts around the world is the recent progress from GPT-3.5 to GPT-4. Within a few months, GPT's capability improved hugely on multiple tests: on the American bar exam, for example, it went from scoring around the 10th percentile with GPT-3.5 to around the 90th percentile with GPT-4.

Why does it matter, you may ask. If this progress were projected forward at the same rate for the next 3, 6 or 12 months, it would rapidly lead to a very powerful AI. If uncontrolled, such an AI might have the power not only to do much good but also to do much harm, with the fatal risk that it may no longer be possible to control once unleashed.

There is a wide range of aspects of AI that need, or will need, regulation and control. Quite apart from the new large language models (LLMs), there are many examples already today, such as attention-centred social media algorithms, deepfakes, algorithmic bias and the abusive use of AI-controlled surveillance.

These may lead to a radical change in our relationship with work and to the obsolescence of certain jobs, including office jobs hitherto largely immune from automation. Expert artificial influencers, seeking to persuade you to buy something or to think or vote in a certain way, are also anticipated soon, a process that some say has already started.


Without control, the progress towards ever more intelligent AI will lead to Artificial General Intelligence (AGI, equivalent to the capability of a human across a wide range of fields) and then to superintelligence (intelligence vastly superior to our own). The world would enter an era signalling the decline and likely demise of humanity, as we lose our position as the apex intelligence on the planet.

This very recent rate of progress has caused Yoshua Bengio and Geoffrey Hinton, the so-called "godfathers of AI and deep learning", to completely reassess their anticipated time frames for the development of AGI. Both have radically brought forward their estimates: they now assess that AGI could be reached in 5 to 50 and 5 to 20 years respectively.

Humanity must not knowingly run the risk of extinction, which means that controls need to be in place before advanced AI is developed. Solutions for controlling advanced AI have been proposed, such as Stuart Russell's "beneficial AI", in which the AI is given the goal of implementing human preferences. It would need to learn those preferences by observation and, since it would appreciate that it might not have interpreted them precisely, it would remain humble and be prepared to be switched off.
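To make that logic concrete, here is a minimal toy sketch, in Python, of the incentive Russell describes. It is an illustration under invented assumptions, not an implementation of beneficial AI: the belief distributions and utility numbers are made up, and deferring to the human is defined as worth zero. The point is simply that an agent which is uncertain enough about human preferences prefers correction, and shutdown, over acting.

```python
# Toy "off-switch" sketch in the spirit of Stuart Russell's beneficial AI.
# The agent does not know the human's true utility for its proposed action;
# it only holds a belief: a list of (utility, probability) pairs (invented here).

def expected_utility(belief):
    """Expected human utility of acting, under the agent's belief."""
    return sum(u * p for u, p in belief)

def act_or_defer(belief):
    """Act only if acting looks better than deferring to the human.

    Deferring (allowing correction or shutdown) is defined as worth 0:
    the human simply stops a possibly harmful action. An agent genuinely
    unsure whether its action helps or harms will therefore defer.
    """
    return "act" if expected_utility(belief) > 0 else "defer to human"

# Confident case: the agent is fairly sure the action is beneficial.
confident = [(+1.0, 0.9), (-1.0, 0.1)]
print(act_or_defer(confident))  # -> act

# Humble case: the action might be catastrophic, so the agent prefers
# to be corrected (or switched off) rather than act.
humble = [(+1.0, 0.5), (-10.0, 0.5)]
print(act_or_defer(humble))     # -> defer to human
```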

A real system of this kind is very challenging to realise in practice. Whether a solution would be available in time was questionable even before the latest leap forward by the hare. Whether one will be available in time is now a critical question, which is why Geoffrey Hinton has recommended that 50% of all AI research spending should go to AI safety.

Quite apart from these comprehensive but challenging solutions, several pragmatic ideas have recently been proposed to reduce the risk, ranging from a limit on the computational power used to train a large language model to the creation of an AI agency equivalent to the International Atomic Energy Agency in Vienna. In practice, what is needed is a combination of technical solutions such as beneficial AI, pragmatic controls on AI development and a suitable governance framework.
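For a sense of how a compute limit of this kind could be checked, the sketch below uses the widely cited rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens floating-point operations. The cap value and the model size are hypothetical examples chosen for illustration, not figures drawn from any actual law or proposal.

```python
# Back-of-the-envelope check of a hypothetical training-compute cap.
# Rule of thumb: training FLOPs ~= 6 * parameters * training tokens.
# The cap below is an illustrative assumption, not an actual regulation.

HYPOTHETICAL_CAP_FLOPS = 1e25  # example regulatory ceiling (assumption)

def training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_parameters * n_tokens

def exceeds_cap(n_parameters: float, n_tokens: float) -> bool:
    """Would this training run exceed the hypothetical cap?"""
    return training_flops(n_parameters, n_tokens) > HYPOTHETICAL_CAP_FLOPS

# Example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs; exceeds cap: {exceeds_cap(70e9, 2e12)}")
# -> 8.40e+23 FLOPs; exceeds cap: False (under this illustrative cap)
```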

AI systems, like many of today's cloud-based software services, can act across borders. Interoperability will therefore be a key challenge, and a global approach to governance is clearly needed. To have global legitimacy, such initiatives should be part of a coordinated plan of action administered by an appropriate global body. This should be the United Nations, through the formation of a UN Framework Convention on Artificial Intelligence (UNFCAI).

The binding agreements currently expected to emerge within the next twelve months or so are the EU AI Act from the European Union and a Framework Convention on Artificial Intelligence from the Council of Europe. The Council of Europe's work is focused on the impact of AI on human rights, democracy and the rule of law. Whilst participation in Council of Europe treaties is much wider than in the European Union, with other countries welcomed as signatories, it is not truly global in scope.

The key advantage of the UN is that it would seek to include all countries, including Russia and China, which have different value sets from the West. China has one of the two strongest AI sectors in the world. Many consider that a UN regime will ultimately be required, but the term "ultimately" has been completely turned upside down by recent events: the possibility of AGI emerging in five years' time suggests that a regime should be fully functioning by then. A more nimble institutional home could be found in the G7, but this would lack global legitimacy, inclusivity and the input of civil society.

Some people are concerned that engaging constructively with China, Russia and other authoritarian countries thereby validates their approach to human rights and democracy. There are clearly major differences in policy on such issues, but effective governance of something as serious as artificial intelligence should not be jeopardised by such concerns.

In recent years the UN has made limited progress on AI. Back in 2020, the Secretary-General called for the establishment of a multistakeholder advisory body on global artificial intelligence cooperation; three years on, he is still proposing a similar advisory board. This delay is highly regrettable and needs to be remedied urgently. It is particularly heartening, therefore, to witness the Secretary-General's robust proposals in the past few days regarding AI governance, including an Accord on the global governance of AI.

EU Commissioner Margrethe Vestager has called for a three-step process: first national regulation, then like-minded states, then the UN. The question is whether there is sufficient time for all three. The recent endorsement by the UN Secretary-General of the proposed UK initiative to hold a summit on AI safety this autumn is a positive development.

The Internet Governance Forum (IGF), established in 2005, brings people from various stakeholder groups together as equals to discuss issues relating to the Internet. AI policy-making could benefit from a similar forum: a multistakeholder AI Governance Forum (AIGF).

This would provide an initial forum within which stakeholders from around the world could exchange views on the principles to be pursued, the aspects of AI requiring urgent global governance and ways to resolve each issue. Critically, what is needed is a clear roadmap to the global governance of AI, with a firm timeline.

An AIGF could underpin the work of the new high-level advisory body for AI and both would be tasked with the development of the roadmap, leading to the establishment of a UN Framework Convention on AI.

In recent months the AI hare has shown its ability to cover a long distance in a short time. The regulation tortoise has left the starting line but has a lot of ground to make up, and the length of the race has just been shortened, so the hare's recent sprint is of serious concern. In Aesop's fable, the tortoise ultimately wins the race because the over-confident hare takes a roadside siesta. Humanity should not assume that AI is going to do likewise.

A concerted effort is needed to complete the EU AI Act and the Council of Europe's Framework Convention on AI. Meanwhile, at the UN, stakeholders need to be brought together urgently to share their views and to work with states to establish an effective, timely and global AI governance structure.

The UN Accord on the governance of AI needs to be articulated, and the prospect of effective and timely global governance ushering in an era of AI safety must be given the highest global priority. The proposed summit on AI safety in the UK this autumn should provide the first checkpoint.

Robert Whitfield is Chair of the One World Trust and Chair of the World Federalist Movement/Institute for Global Policy's Transnational Working Group on AI.

IPS UN Bureau

 

