California tries to impose limits on artificial intelligence (AI)

The US state of California, home to Silicon Valley, is trying to impose limits on artificial intelligence (AI), inspired by European regulatory developments. In mid-March, the European Parliament approved a text governing artificial intelligence models that imposes obligations on transparency, copyright and privacy. "We're trying to learn from the Europeans and work with them to understand how to set rules for artificial intelligence," says David Harris, an adviser to the California Initiative for Technology and Democracy.

This organization aims to protect elections and the democratic process from the abuses of emerging technologies. More than 30 proposed laws have been introduced in the California legislature, according to David Harris, who says he has advised US and European officials on the issue. The bills address various aspects of artificial intelligence. One would force tech companies to disclose what data was used to develop an AI model.

Another would ban campaign ads that use, in one way or another, generative artificial intelligence, an interface that lets users produce content (text, images, audio) on demand in everyday language. Elected officials also want to require social networks to flag any image, video or audio created with generative artificial intelligence. A UC Berkeley poll of California voters in October found that 73 percent supported laws against disinformation and fakery and limiting the use of artificial intelligence during election campaigns. It is, moreover, one of the rare issues on which Republicans and Democrats agree.

“One step ahead”

For David Harris, detecting deepfakes and fake texts created by artificial intelligence is one of the most important questions. Gail Pellerin, a Democrat elected from a constituency that includes part of Silicon Valley, supports a bill that would ban deceptive political deepfakes during the three months leading up to an election. "Bad actors using this technology are really trying to create chaos during elections," she argues. The trade association NetChoice, which represents digital companies, warns against the temptation to import EU rules into California. "They are adopting the European approach to artificial intelligence, which wants to ban this technology," says Carl Szabo, the group's legal director, who is campaigning for fewer sanctions.

"Banning artificial intelligence will not stop anything," the lawyer says. "It's a bad idea because, by definition, bad actors don't follow the laws." Software publisher Adobe's legal director, Dana Rao, is more measured. He welcomes the distinction the EU has drawn between lower-risk AI, a category that includes deepfakes and fake texts, and "high-risk" AI, used in particular in critical infrastructure or law enforcement. "The final version of the text suits us," says Dana Rao. Adobe says it already conducts impact studies to assess the risks associated with new AI-based products. "We have to worry about nuclear security, cybersecurity, and all the times artificial intelligence is making important decisions involving human rights," the lawyer says.

Together with the Coalition for Content Provenance and Authenticity, an organization whose members include Microsoft and Google, Adobe has developed "Content Credentials," a set of metadata that documents how an image was created and what it contains. California's elected officials intend to put the state at the forefront of artificial intelligence regulation, just as its companies are pushing the technology forward. "People are watching what's happening in California," says Gail Pellerin. "It's a movement that concerns us all," she adds. "We must stay one step ahead of those who want to wreak havoc in elections."
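To give a sense of what such provenance metadata looks like, here is a minimal, purely illustrative sketch in Python. It is not the actual C2PA format (the real Content Credentials specification defines a richer, cryptographically signed binary manifest); the field names and structure below are assumptions chosen only to show the idea of binding a content hash and an AI-generation flag to a piece of media.

```python
import hashlib
import json

def make_provenance_manifest(media_bytes, tool_name, ai_generated):
    """Build a simplified, illustrative provenance record.

    NOTE: this is a toy stand-in for a Content Credentials manifest,
    not the real C2PA data model, which is signed and binary-encoded.
    """
    return {
        # Tool claimed to have produced or edited the media (hypothetical field).
        "claim_generator": tool_name,
        "assertions": [
            # Hash of the media bytes, so later edits are detectable.
            {"label": "content.hash.sha256",
             "value": hashlib.sha256(media_bytes).hexdigest()},
            # Disclosure flag of the kind election bills would require.
            {"label": "content.ai_generated", "value": ai_generated},
        ],
    }

if __name__ == "__main__":
    fake_image = b"\x89PNG...example bytes..."
    manifest = make_provenance_manifest(fake_image, "ExampleTool/1.0", True)
    print(json.dumps(manifest, indent=2))
```

A verifier would recompute the hash of the media it received and compare it with the stored assertion; any mismatch means the content was altered after the credential was attached.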
