OpenAI makes images generated by DALL-E more identifiable

Meta says it will start identifying AI-generated content, both photos and videos, on Facebook and Instagram, so as not to deceive its users. On the same day, OpenAI announced it is adding C2PA certification to images generated by its DALL-E 3 tool.

“This image was generated by AI”: it will now be easier, when inspecting an image produced by OpenAI’s tools, to identify which artificial intelligence created it.

OpenAI, the company led by Sam Altman and creator of ChatGPT, announced this update in a blog post on February 7, 2024. It relies on a standard called C2PA (Coalition for Content Provenance and Authenticity).

This coalition, founded in 2021 by Adobe and other companies (BBC, Intel, Microsoft), aims to create a universal, open-source standard for identifying the origin of images published online. The goal is to combat disinformation spread through images manipulated to influence public opinion.

Identifying whether an image has been altered is one of the biggest challenges of online information sharing. In the past, the question mostly concerned tools like Photoshop; now, generative AI makes it possible to create unprecedented volumes of content with unparalleled ease and speed.

The addition of extra metadata. // Source: OpenAI

The C2PA standard relies on adding metadata: data, visible or invisible, that provides information about other data. Here, it takes the form of a small visual element added to the top left of the image, showing the date of its creation and an icon indicating that it is not a natural photograph. The embedded data also records that the image was created using OpenAI and which tool was used, ChatGPT, which is more precise than before.
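As a rough illustration of what this embedded provenance data looks like in practice: C2PA manifests are stored inside the image file itself (in JUMBF boxes whose labels contain strings like "c2pa"). The sketch below, a simple heuristic and not a real manifest validator, just scans a file's raw bytes for those labels:

```python
def has_c2pa_marker(path):
    """Rough heuristic: scan a file's raw bytes for the labels that a
    C2PA/Content Credentials manifest embeds in JPEG/PNG files.
    This only detects the presence of marker strings; it does NOT
    cryptographically validate the manifest like a real C2PA tool would.
    """
    with open(path, "rb") as f:
        data = f.read()
    # C2PA manifests live in JUMBF boxes; their labels include "c2pa",
    # and the box type itself is "jumb".
    return b"c2pa" in data or b"jumb" in data
```

Real verification (checking the signed manifest against the image's pixels) requires a dedicated C2PA implementation; this heuristic only shows where the data physically lives.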

To date, only still images are affected, not videos.

However, OpenAI specifies that these tools are not infallible: “Since metadata can be removed, its absence will not indicate that an image was not produced by our AI,” the company reminds in a tweet. “Broad adoption of methods for establishing provenance and encouraging users to look for these signals are steps towards increasing the trustworthiness of digital information.”
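To see why OpenAI hedges like this: stripping such metadata is trivial, because provenance data lives in ancillary sections of the file, separate from the pixels. The sketch below (an illustration, not a statement about any specific tool) rewrites a PNG keeping only the critical chunks, which silently discards any embedded manifest:

```python
import struct

def strip_png_ancillary_chunks(src, dst):
    """Rewrite a PNG keeping only the chunks needed to display it
    (IHDR, PLTE, IDAT, IEND). Any provenance metadata stored in other
    chunks is silently dropped -- which is why the absence of metadata
    proves nothing about how an image was made.
    """
    keep = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}
    with open(src, "rb") as f:
        data = f.read()
    out = bytearray(data[:8])  # 8-byte PNG signature
    pos = 8
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length = struct.unpack(">I", data[pos:pos + 4])[0]
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length
        if ctype in keep:
            out += data[pos:end]
        pos = end
    with open(dst, "wb") as f:
        f.write(out)
```

Simply re-saving an image through most editors or screenshotting it has the same effect, which is why C2PA works as a positive signal (presence proves provenance) rather than a negative one.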


Numerama tested the DALL-E tool via ChatGPT 3.5 and ChatGPT 4, but for now, the tool had not yet incorporated the visual watermark into our images.

OpenAI’s announcement follows Meta’s (Facebook, Instagram), made the same morning: the company said it is working on detecting and labeling AI-generated content on its social networks. “In the coming months, we will label images that users post on Facebook, Instagram, and Threads, when we can detect industry-standard indicators that they are AI-generated,” wrote Nick Clegg, Meta’s president of global affairs. Meta is also developing internal tools capable of detecting AI-generated images from competing platforms.

Users will also have the option to declare themselves whether the image they are posting was created by AI or not.

Source: Meta
