OpenAI Unveils New Tool to Identify AI-Generated Images, Highlights the Need for AI Content Authentication

OpenAI unveiled its new artificial intelligence (AI) image detection and identification tool on Tuesday. The AI company announced the tool while highlighting the need to authenticate AI-generated content and build awareness around it. The company has also formally joined the Coalition for Content Provenance and Authenticity (C2PA) committee, which has created an open standard for labelling AI-generated content. Notably, OpenAI has been using this standard in its Dall-E-generated images since February 2024 and continues to add AI-related information to the images' metadata.

In a blog post, OpenAI highlighted the new challenges that have arisen with the creation of AI-generated content. The company said: “As generated audiovisual content becomes more common, we believe it will be increasingly important for society as a whole to adopt new technologies and standards that help people understand the tools used to create the content they find online.” Additionally, the maker of ChatGPT said it was taking two different steps to help authenticate AI content.

As its first step, OpenAI formally joined the C2PA committee and described it as a widely used standard for digital content certification. The company also highlighted that the standard is followed by a wide range of software companies, camera manufacturers and online platforms. Simply put, C2PA advocates adding information to the metadata of images and other types of files to reveal how they were created. While an image taken by a camera will include the name and specifications of the camera, an image generated by AI will include the name of the AI model.

This type of authentication method is used because an image's metadata is difficult to remove or alter without detection, and it persists even if the image is shared, cropped or otherwise edited.
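The tamper resistance described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual C2PA format: the key idea is that provenance metadata is cryptographically bound to the content, so altering either the metadata or the image itself makes verification fail. The key, field names and image bytes below are all made up for the demo.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing credential; in C2PA this would be a
# certificate-backed key held by the tool or camera vendor.
SECRET_KEY = b"demo-signing-key"

def sign_manifest(image_bytes: bytes, metadata: dict) -> dict:
    """Attach a signature covering both the metadata and the image content."""
    payload = (json.dumps(metadata, sort_keys=True).encode()
               + hashlib.sha256(image_bytes).digest())
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": signature}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the signature; a mismatch means something was altered."""
    expected = sign_manifest(image_bytes, manifest["metadata"])["signature"]
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...fake image bytes for the demo"
manifest = sign_manifest(image, {"generator": "Dall-E", "created": "2024-02"})

print(verify_manifest(image, manifest))       # True: untouched
manifest["metadata"]["generator"] = "camera"  # tamper with the provenance
print(verify_manifest(image, manifest))       # False: tampering detected
```

Real C2PA manifests use certificate chains rather than a shared secret, but the verification principle is the same: edits that strip or rewrite the provenance data are detectable.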

Highlighting its second step, OpenAI said it was working on a new tool that can identify AI-generated images. Without naming the tool, the company called it “OpenAI's image detection classifier.” The tool predicts the likelihood that an image was created by Dall-E. According to the company, the tool correctly labelled 98 percent of Dall-E-generated images when comparing them to real images, even when the images had been filtered or cropped. However, the tool struggles when distinguishing Dall-E images from those produced by other AI models; in these cases, the AI firm said, the tool makes mistakes on 5-10 percent of the sample.
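Figures like the 98 percent detection rate and the 5-10 percent error rate on other models come from scoring labelled test images and thresholding the classifier's probability output. A minimal sketch of that evaluation, with entirely made-up toy scores (not OpenAI's actual tool or data):

```python
def rates(scores, labels, threshold=0.5):
    """Compute (true-positive rate, false-positive rate).

    scores: classifier's predicted probability that each image is Dall-E.
    labels: True if the image really came from Dall-E, False otherwise.
    """
    tp = sum(1 for s, l in zip(scores, labels) if l and s >= threshold)
    fp = sum(1 for s, l in zip(scores, labels) if not l and s >= threshold)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg

# Toy data: 4 Dall-E images followed by 4 images from other generators.
scores = [0.97, 0.91, 0.88, 0.42, 0.60, 0.10, 0.05, 0.30]
labels = [True, True, True, True, False, False, False, False]

tpr, fpr = rates(scores, labels)
print(tpr, fpr)  # 0.75 true-positive rate, 0.25 false-positive rate
```

The reported "5-10 percent" error on other AI models corresponds to the false-positive rate in this framing: non-Dall-E images that the classifier nevertheless flags as Dall-E.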

OpenAI has now opened up the tool for limited public testing and has invited research labs and investigative journalism nonprofits to register with the AI company for access.




