Jun 5, 2023

What will stop AI from flooding the internet with fake images?

Posted in categories: internet, robotics/AI

One novel approach, which some experts say could actually work, is to use metadata, watermarks, and other technical systems to distinguish fake images from real ones. Companies like Google, Adobe, and Microsoft are all supporting some form of labeling of AI-generated content in their products. Google, for example, said at its recent I/O conference that, in the coming months, it will attach a written disclosure, similar to a copyright notice, underneath AI-generated results on Google Images. OpenAI's popular image-generation tool DALL-E already adds a colorful stripe watermark to the bottom of every image it creates.
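
As a rough illustration of the labeling idea described above, the sketch below writes a machine-readable "AI generated" marker into an image's metadata so that it travels with the file. It uses Pillow and PNG text chunks purely as an example; the field names (ai_generated, generator) are made up for illustration, and real labeling schemes such as IPTC digital-source-type tags or C2PA manifests are far richer and cryptographically signed.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, embedding a simple provenance label as PNG text chunks.

    The chunk names used here are illustrative, not a standard.
    """
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)


def read_label(path: str) -> dict:
    """Return any text-chunk metadata found in a PNG, e.g. the label above."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}) or {})
```

The obvious limitation is that plain metadata like this is easy to strip or overwrite, which is why industry efforts pair labels with signed, tamper-evident manifests.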

“We all have a fundamental right to establish a common objective reality,” said Andy Parsons, senior director of Adobe’s content authenticity initiative group. “And that starts with knowing what something is and, in cases where it makes sense, who made it or where it came from.”

To reduce confusion between fake and real images, the Content Authenticity Initiative group developed a tool, now in use at Adobe, called Content Credentials, which tracks when images are edited by AI. The company describes it as a nutrition label: information about digital content that stays with the file wherever it's published or stored. For example, Photoshop's latest feature, Generative Fill, uses AI to quickly create new content in an existing image, and Content Credentials can keep track of those changes.
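
As a very rough sketch of what "staying with the file" means in practice, the snippet below checks whether a file appears to carry an embedded Content Credentials (C2PA) manifest by scanning for byte markers that such manifests typically contain. The marker strings are assumptions chosen for illustration; real verification requires parsing the embedded manifest and validating its signatures with a proper C2PA toolkit, not a byte scan.

```python
from pathlib import Path

# Assumed, illustrative byte markers: C2PA manifests are carried in JUMBF
# boxes, whose type/label strings ("jumb", "c2pa") tend to appear verbatim
# in files that embed Content Credentials. This is a heuristic, not proof.
ASSUMED_MARKERS = (b"jumb", b"c2pa")


def appears_to_have_content_credentials(path: str) -> bool:
    """Heuristically check whether a file seems to embed a C2PA manifest.

    A True result only means the marker bytes are present; actual
    verification means parsing and cryptographically validating the
    manifest with a full C2PA implementation.
    """
    data = Path(path).read_bytes()
    return any(marker in data for marker in ASSUMED_MARKERS)
```

Run against an image exported with Content Credentials enabled versus a plain screenshot, this gives a quick yes/no before reaching for a full verifier.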
