EU’s AI regulation vote looms. We’re still not sure how unrestrained AI should be

The European Union’s long-awaited law on artificial intelligence (AI) is expected to be put to a vote at the European Parliament at the end of this month.

But Europe’s efforts to regulate AI could be nipped in the bud as lawmakers struggle to agree on critical questions: how AI should be defined, what the law should cover, and which practices should be prohibited.

Meanwhile, Microsoft’s decision this week to scrap its entire AI ethics team, despite investing $11 billion (€10.3bn) in OpenAI, raises questions about whether tech companies are genuinely committed to creating responsible safeguards for their AI products.

At the heart of the dispute around the EU’s AI Act is the need to protect fundamental rights, such as data privacy and democratic participation, without restricting innovation.

How close are we to algocracy?

The advent of sophisticated AI platforms, most notably the launch of ChatGPT in November last year, has sparked a worldwide debate on AI systems.

It has also forced governments, corporations and ordinary citizens to address some uncomfortable existential and philosophical questions. 

How close are we to becoming an _algocracy_, a society ruled by algorithms? What rights will we be forced to forego? And how do we shield society from a future in which these technologies are used to cause harm?

The sooner we can answer these and other similar questions, the better prepared we will be to reap the benefits of these disruptive technologies — but also steel ourselves against the dangers that accompany them.

Robin Li, CEO of search giant Baidu, talks about AI in Beijing in 2018. AP Photo/Ng Han Guan

The promise of technological innovation has taken a major leap forward with the arrival of new generative AI platforms, such as ChatGPT and DALL-E 2, which can create words, art and music with a set of simple instructions and provide human-like responses to complex questions.

These tools could be harnessed as a force for good, but the recent news that ChatGPT passed a US medical-licensing exam and a Wharton Business School MBA exam is a reminder of the looming operational and ethical challenges.

Academic institutions, policy-makers and society at large are still scrambling to catch up.

ChatGPT passed the Turing Test — and it’s still in its adolescence

Devised in 1950, the so-called Turing Test has long been the line in the sand for AI.

The test is meant to determine whether a computer is capable of thinking like a human being.

Mathematician and code-breaker Alan Turing was convinced that one day a human would be unable to distinguish between answers given by a real person and a machine. 

He was right — that day has come. In recent years, disruptive technologies have advanced beyond all recognition. 

AI technologies and advanced machine-learning chatbots are still in their adolescence; they need more time to mature.

But they already give us valuable glimpses of the future, even if those glimpses are sometimes a little blurred.

The optimists among us are quick to point to the enormous potential for good presented by these technologies: from improving medical research and developing new drugs and vaccines to revolutionising the fields of education, defence, law enforcement, logistics, manufacturing, and more. 

However, international organisations such as the EU Fundamental Rights Agency and the UN High Commissioner for Human Rights have been right to warn that these systems often do not work as intended.

A case in point is the Dutch SyRI system, which used an algorithm to detect suspected benefits fraud and which a court in The Hague found to be in breach of the European Convention on Human Rights.

How to regulate without slowing down innovation?

At a time when AI is fundamentally changing society, we lack a comprehensive understanding of what it means to be human. 

Looking to the future, there is also no consensus on how we will — and should — experience reality in the age of advanced artificial intelligence. 

We need to get to grips with the implications of sophisticated AI tools that have no concept of right or wrong, tools that malign actors can easily misuse. 

So how do we go about governing the use of AI so that it is aligned with human values? I believe that part of the answer lies in creating clear-cut regulations for AI developers, deployers and users. 

All parties need to be on the same page when it comes to the requirements and limits for the use of AI, and companies such as OpenAI and DeepMind have a responsibility to introduce their products to the public in a controlled and responsible way.

Even Mira Murati, Chief Technology Officer at OpenAI, the company behind ChatGPT, has called for more regulation of AI.

If managed correctly, direct dialogue between policy-makers, regulators and AI companies will provide ethical safeguards without slowing innovation.

One thing is for sure: the future of AI should not be left in the hands of programmers and software engineers alone. 

In our search for answers, we need an alliance of experts from all fields

The philosopher, neuroscientist and AI ethics expert Professor Nayef Al-Rodhan makes a convincing case for a pioneering type of transdisciplinary inquiry — Neuro-Techno-Philosophy (NTP). 

NTP calls for an alliance of neuroscientists, philosophers, social scientists, AI experts and others to help understand how disruptive technologies will impact society and the global system.

We would be wise to take note. 

Al-Rodhan, and other academics who connect the dots between (neuro)science, technology and philosophy, will be increasingly useful in helping humanity navigate the ethical and existential challenges that these game-changing innovations create, and the frontier risks they pose to humanity’s future.

In the not-too-distant future, we will see robots carry out tasks that go far beyond processing data and responding to instructions: a new generation of autonomous humanoids with unprecedented levels of sentience. 

Before this happens, we need to ensure that ethical and legal frameworks are in place to protect us from the dark sides of AI. 

Civilisational crossroads beckons

At present, we overestimate our capacity for control, and we often underestimate the risks. This is a dangerous approach, especially in an era of digital dependency. 

We find ourselves at a unique moment in time, a civilisational crossroads, where we still have the agency to shape society and our collective future. 

We have a small window of opportunity to future-proof emerging technologies, making sure that they are ultimately used in the service of humanity. 

Let’s not waste this opportunity.

Oliver Rolofs is a German security expert and the Co-Founder of the Munich Cyber Security Conference (MCSC). He was previously Head of Communications at the Munich Security Conference, where he established the Cybersecurity and Energy Security Programme.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.
