The Rise of AI-Generated Images and How to Spot Fakes

In today’s digital landscape, the line between real and fake content is blurring every day. Images produced by AI platforms like Midjourney are now so realistic that they are winning photography competitions and have even been used by prominent organisations like Amnesty International, sparking considerable controversy. As European communicators, we have a crucial role in helping the public make informed, fact-based decisions, especially with so many elections taking place. The flood of AI-generated images is a challenge we must confront head-on by equipping ourselves and others with the skills to identify these fakes.

Why We Need to Recognise AI-Generated Images

The ability of AI to create highly realistic images is both impressive and a potential threat. These images can mislead the public, influence opinions, and even disrupt democratic processes. By learning to spot AI-generated images, we can mitigate their impact and promote media literacy. Here are three tips for identifying fake AI images:

1. Too Good to Be True

One of the hallmarks of AI-generated images is their almost unreal perfection. They often look too glossy and polished compared with photographs taken by humans, who typically capture spontaneous moments under less-than-ideal lighting conditions. The lighting, colours, and details may be so impeccably balanced that they look unnatural. Ask yourself: does this image seem almost too perfect? If so, it might be the work of an AI.

2. Distorted Humans

Despite recent advances, AI still struggles with certain parts of the human body. While it’s getting much better at rendering hands with the correct number of fingers, other areas, like faces and limbs, can still appear oddly distorted. Look closely at the people in the image you’re verifying: do their faces or body parts look slightly off or unnatural? These subtle inconsistencies are often tell-tale signs of AI generation.

3. General Weirdness

AI-generated images are only as good as the prompts they are given, and generative AI models sometimes misinterpret those prompts. This can lead to bizarre abnormalities, such as country flags with their colours in the wrong order or other small details that don’t quite add up. Zoom in on the image and scrutinise it for anything that seems out of place. In the case of Amnesty International’s fake images, the reversed colour order on the flags was a dead giveaway.

The AI-generated image of a woman struggling against crowd-control officers, with the flag depicted incorrectly.

Spreading Awareness and Building Media Literacy

Once we’ve honed our skills in spotting fake AI images, it’s our duty as responsible communicators to spread this knowledge. By raising awareness and educating others, we can help build a more resilient and media-literate society. Encouraging people to question the authenticity of images and report suspicious content can significantly reduce the influence of fake images.

The current deluge of AI-generated images presents both challenges and opportunities. While these images can be used creatively and constructively, there is also a risk they will be used to deceive. As European communicators, we have a key role in this AI-driven media age: empowering the public with the tools and knowledge they need to navigate this complex landscape. By learning to identify AI-generated images ourselves and then sharing these insights, we can help ensure that truth and integrity remain at the heart of the communications landscape.

Get in touch