Google to Embed Markups in Images Created by AI to Warn Users


Google has announced that it will embed markups within images created by its AI models to warn users that the images were generated by a computer. The information will not be visible to the human eye, but software such as Google Search will be able to read it and display a warning label. Google will also provide additional context for all images in its search results to prevent deception, including when an image was first indexed by the search engine and whether it has been cited by news sites.

This is the most significant effort so far by a major technology company to classify and label output from generative AI. The technology can produce realistic images and fluent text that spammers, scammers, and propagandists could use to fool people. There is currently no reliable way to detect generated images: clues such as badly drawn hands sometimes give them away, but there is no definitive method for determining which images were made by a computer and which were drawn or photographed by a human.

Google’s approach is to label images as they come out of the AI system, rather than trying to determine later whether they are real. The search giant has created a new markup scheme, and Shutterstock and Midjourney have agreed to support it. The markup will categorize an image as trained algorithmic media (created by an AI model trained on existing media), a composite image partially made with an AI model, or algorithmic media created by a computer without being based on training data.

At the annual developers conference, Google announced additional AI features for its other products, including an image generator. The company also unveiled a folding phone that costs $1,799.

Labeling images at the point of creation, rather than attempting to detect them afterwards, could help curb the spread of fake news and keep people from being misled by AI-generated images. It marks a significant step toward reliably classifying and identifying the output of generative AI.
