Can we regulate artificial intelligence? (2/2)

Control must be planned from the design stage of the system, but the risk of infringement of fundamental rights remains high.

Maryse Artiguelong

Representative of the League of Human Rights on the Observatory of Freedoms and Digital Technology

After the internet, social networks, smartphones, and other digital technologies, artificial intelligence (AI) has insidiously entered all areas of our daily lives. It can seriously harm fundamental rights, so regulation is necessary.

While some uses of AI represent undeniable progress for complex or repetitive tasks and bring comfort and convenience to our daily lives – voice assistants, GPS navigation, autonomous vehicles, more targeted medical care – they can also infringe on our rights and freedoms, notably through the surveillance of our movements by remote biometric identification, facial recognition, or emotion recognition, for example during job interviews.

The same goes for the “social scoring” practices of banks, insurance companies, and even some social services (see the checks triggered by the algorithms of the CAF, the French family benefits agency), as well as their use in judicial decisions.

As for “fake news” (manipulated information) and “deepfakes” (fake photos or videos that look more real than reality) designed to manipulate opinion, they pose real threats to our democracies, threats that could be amplified by the revolution represented by generative AIs such as ChatGPT and its competitors (which, moreover, shamelessly plunder the press and authors, without scruple or sanction).

It is clear that the use of AI systems needs to be regulated, but how can such complex systems be controlled? Verifying what an AI does, through algorithms that process enormous amounts of data (drawn from the internet) to produce recommendations, forecasts, or decisions based on defined objectives, is not within the reach of every citizen.

When it comes to self-learning systems, in which the AI learns on its own rather than following pre-established rules, control is almost impossible (sometimes even for their designers!). It must therefore be planned from the design stage of the system.

This is indeed what the EU’s AI regulation provides for: it plans to classify AI systems by level of risk, from “negligible” and “limited” to “high” and “unacceptable”, except that it will be the designers themselves who define these levels, so as not to impose too many constraints on the sector’s startups!

The regulation requires that individuals be informed when they are dealing with an AI rather than a person. It also prohibits facial recognition, although it may still be authorized in the search for terrorists. This regulation will have the merit of beginning to protect millions of European citizens, but it leaves many risks to fundamental rights in place.

The ethical and scientific challenge is to find a balance between the opportunities and risks of AI development.

Nicolas Vayatis

Specialist in data science, professor at ENS Paris-Saclay

Every technology has its dark sides. This is true of means of transportation, nuclear energy, drugs, and even the internet. The impacts of a technology can be measured throughout its lifecycle: in its production and material infrastructure (far from negligible in the case of the networks and digital technologies that are the physical underpinnings of AI), in its uses or misuses, and finally in its longer-term impact on society or the climate. Is AI so different from other technologies that regulating it is impossible?

The general press often mentions the power of automation AI brings (will it replace radiologists?), the biases it propagates (Microsoft’s Tay chatbot tweeting racist and homophobic messages, or ChatGPT and its gender biases), its capacity for surveillance (facial recognition coupled with the social scoring practiced in totalitarian countries using various tracking methods), and even its more fantastical ability to take control (the so-called “strong” AI of Terminator, or HAL in the more poetic version imagined by Stanley Kubrick).

In fact, a majority of AI uses fall under existing regulatory principles and frameworks. For the rest, the dozens of reports, directives, and other regulatory documents (including the recent EU AI Act) produced in recent months aim to identify and shed light on the “blind spots” in existing legislation so that it can address AI-based solutions.

In the case of autonomous vehicles, French and American legislators have chosen to hold the manufacturer criminally liable in the event of an accident. As for medical diagnostic support systems, the US Food and Drug Administration certifies only “locked” AI solutions (i.e., ones that cannot be recalibrated on new data collected over their lifecycle).

It would not be absurd for AI solutions intended for critical uses (health, education, industry, defense) to be certified before market launch, following the model of drugs or medical devices.

One of the main sources of opacity in AI solutions is the generally secret nature of the historical database used to calibrate the “intelligent” decision-support system. Beyond that, the main ethical, democratic, and regulatory challenges clearly lie in finding a balance between the opportunities and the risks of exploiting the invaluable mega-databases on which the development of artificial intelligence rests… assuming that risks and opportunities can be correctly characterized on ethical and scientific grounds, and that is undoubtedly where the main challenge lies, one that depends on the development of knowledge and education.


