How to Recognize AI-Generated Pictures, Videos, and Audio

We actively work with experts in various fields to add value to our articles. This time, the Social Links Center of Excellence team has invited a co-author with vast cybersecurity knowledge, CyberDetective, to collaborate on an article. For anyone unfamiliar, CyberDetective has long been active in spreading OSINT know-how and cybersecurity education, so we’re thrilled to have them working with us.

In this article, we look at the Internet’s most recent obsession: AI-generated content. Specifically, we explain how to recognize when a piece of media has been tampered with. For convenience, we have divided the article into three sections, each dealing with a specific type of content: images, videos, and audio. In each, we cover:

  • A brief overview of the capabilities of modern AI
  • Methods and tools for recognizing AI-generated fakes
  • Additional verification techniques

So, without further ado, let’s hand the floor over to CyberDetective.

Why is AI such a big deal?

In recent years, fact-checking has gone from being a challenging task to a genuinely daunting one. The main reason for this is the emergence of numerous free AI tools for generating images, video, and audio.

When Photoshop appeared in 1990, it already had massive potential for photo editing and tampering. But now, the software comes with Adobe Firefly, a generative AI that lets users complete, in a matter of seconds, tasks that used to take an experienced professional an hour. Today, all that's needed is to select an area of an image and describe the necessary changes in writing.

Select an area on the image and describe what you want to see; the generative AI handles the rest (Image Source)

A similar scenario plays out with audio and video content. While programs for manipulating someone else's words have existed for decades, AI-driven tools have accelerated the creation of fakes severalfold.

Today, OSINT specialists encounter many tampered images, videos, and audio recordings during investigations, and these fakes are often of relatively high quality. Below, you’ll find practical pointers to help you recognize such forgeries faster. Unfortunately, no method can guarantee a 100% success rate.

AI-Generated Images

What Can AI Image Tools Do?

  • Generate any picture from a text description. (For example, Stable Diffusion.)
  • Edit images (add or remove details) according to a text description. (For example, Instruct Pix2Pix.)
  • Edit a selected area of an image according to a text description. (For example, Stable Diffusion XL Inpainting.)
  • Change the facial expression, face position, and age in an original photo of a person, or generate faces based on a sketch or mask. (For example, StyleGANEX.)
  • Replace one person's face with another person's face in an image. (For example, Face Swap.)
  • Generate full-length portraits or images of people, with dozens of selectable parameters such as hair color, skin color, build, pose, and clothing. (For example, Human Generator.)
Human Generator can create full portrait images with a large selection of parameters to adjust

Recognizing AI-Generated Fake Images

There are various online tools for automatically recognizing AI-generated pictures. Solutions like AI or Not, ContentScale AI Image Detector, and AI Image Detector can do a decent job of identifying tampered images. However, these sites are not entirely reliable, especially when AI has only partially edited the picture.

Easily detectable AI-generated photos often exhibit specific characteristics that appear unnatural to the human eye. Focusing on the following aspects is crucial:

  • Excessive 'glossiness' within the photo.
  • Unnatural shadow positioning relative to light sources.
  • Artifacts in the rendering of ears, eyelids, and fingers (although the latest MidJourney and Stable Diffusion updates handle these aspects better, this issue remains relevant).
  • Distorted or pixelated text within street signs.
  • A noticeable absence of diverse objects in images where one would typically expect a variety (e.g., photos of an office or rooms in a house).

However, seasoned investigators naturally pay attention to the details listed above. The real challenge is figuring out when neural networks have only partially tampered with an image. In such cases, the same tools used for identifying standard photomontages can provide a way forward:

  • Image Zoomer. This tool allows users to enlarge uploaded images by up to 2000%.
  • Forensically. This solution offers features like magnification, clone detection, error level analysis, noise analysis, and level sweep; a minimal sketch of how error level analysis works is shown below.
Zooming in and analyzing an image can reveal unnatural features, which are a dead giveaway for AI generation
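To give a feel for what error level analysis (one of Forensically's features) does under the hood, here is a minimal Python sketch using the Pillow library. The file names are hypothetical and real forensic tools apply far more sophisticated processing, so treat this as a rough demonstration of the principle rather than a reliable detector.

```python
# Minimal error level analysis (ELA) sketch: resave the image as JPEG at a
# known quality, then compare it with the original. Regions that were pasted
# in or generated separately often recompress differently and stand out in
# the amplified difference image.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, out_path="ela_result.png"):
    original = Image.open(path).convert("RGB")

    # Resave at a fixed JPEG quality, then reload the recompressed copy
    resaved_path = "ela_resaved_tmp.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path).convert("RGB")

    # Pixel-wise difference between the original and the recompressed copy
    diff = ImageChops.difference(original, resaved)

    # Amplify the (usually faint) differences so they are visible to the eye
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    ela = ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)
    ela.save(out_path)

error_level_analysis("suspect_photo.jpg")   # hypothetical file name
```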

It's important to understand that high-quality fakes can be exceptionally challenging to detect. For instance, if AI tools, like those from Bria Labs, were employed to generate PSD files, each layer may have been meticulously refined. Consequently, it's not advisable to declare a picture authentic based on visual inspection alone.

💡
Additional Verification Methods

Reverse Image Search. This involves uploading the photo to search engines like Google Images, PimEyes, TinEye, and others. Such services help specialists discover where else the image has been published. Moreover, these engines can check both the full version of the image and parts of it.
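Reverse image search services run their own large-scale indexing pipelines, but the underlying idea of matching near-duplicate images can be illustrated locally. The sketch below uses the third-party ImageHash library, which is one common approach but not how any particular search engine works internally; the file names and distance threshold are illustrative assumptions.

```python
# Near-duplicate comparison with perceptual hashing (pip install ImageHash).
# Visually similar images produce similar hashes even after resizing or mild
# re-encoding, which is the basic idea behind reverse image search.
import imagehash
from PIL import Image

def phash_distance(path_a, path_b):
    """Hamming distance between the perceptual hashes of two images (0 = identical)."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b            # imagehash overloads '-' as Hamming distance

# Hypothetical file names; a threshold of about 8 is a common rule of thumb
distance = phash_distance("suspect_photo.jpg", "image_found_online.jpg")
print("Likely the same picture" if distance <= 8 else "Probably different pictures")
```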

File Name Search. If the file has a long and unique name (not something generic like "image.png" but more like "xxxddd44594540jjjggk.jpg"), searching for mentions of that name on Google can deliver results. Additionally, exploring IP-search engines like Netlas, Fofa, and Censys can help, as they cover a broader range of websites.

Metadata Analysis. Examining the file's metadata with a service like metadata2go.com can also yield results. Occasionally, experts might be able to discover the name of the AI tool used to generate the image.
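metadata2go.com is a web service, but the same check can be done offline. Below is a minimal sketch using Pillow; the file name is hypothetical. Keep in mind that metadata is easy to strip or forge, so an empty result proves nothing, and a generator's name in a "Software" tag is just one extra clue.

```python
# Dump image metadata locally with Pillow (a rough offline stand-in for
# metadata2go.com). Some AI tools leave a tell-tale "Software" tag; locally
# run generators sometimes also write their prompt into PNG text chunks,
# which show up in Image.open(path).info.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_metadata(path):
    image = Image.open(path)
    for tag_id, value in image.getexif().items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")   # translate numeric tag IDs to names
    print(image.info)                                   # format-specific extras (e.g., PNG text chunks)

dump_metadata("suspect_photo.jpg")   # hypothetical file name
```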
Unique names are easier to verify when searching by file name

AI-Generated Audio

What Can AI Audio Tools Do?

  • Clone a person's voice based on an audio recording and generate new audio recordings where that person speaks a specific text input. Examples include Voice Cloning and Bark Voice Cloning.
  • Convert a voice on an audio recording into another person's voice, like Speech T5: Voice Conversion.
  • Generate audio recordings in which voices of non-existent people speak a specific text, as demonstrated by Speech T5: Speech Synthesis.
It’s possible to use an audio recording and convert it to an entirely new (non-existent) person’s voice

Recognizing AI-Generated Audio Fakes

Much like images, there are specialized tools for identifying AI-generated audio, such as AI Voice Detector and Voice Classifier for Detecting AI Voices. However, these tools are still evolving and remain less effective than the human ear. Most people can readily detect unnatural aspects in AI-generated speech.

This unnaturalness becomes particularly evident when you listen to the audio at very fast or slow speeds. You can adjust the playback speed in most audio players or use an online tool like the Time Stretch Audio Player. Additionally, you can open the recording in a sound editor such as AudioMass to inspect the shape of the waveform.
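As a scripted alternative to opening the recording in an editor such as AudioMass, the waveform can also be plotted with a few lines of Python. The sketch below assumes a mono, 16-bit PCM WAV file and a hypothetical file name; compressed formats would first need to be converted or loaded with a library such as librosa.

```python
# Plot the waveform of a mono, 16-bit PCM WAV file so you can look for the
# unnaturally clean, repetitive shapes typical of generated speech.
import wave
import numpy as np
import matplotlib.pyplot as plt

def plot_waveform(path):
    with wave.open(path, "rb") as wav:
        sample_rate = wav.getframerate()
        frames = wav.readframes(wav.getnframes())

    samples = np.frombuffer(frames, dtype=np.int16)    # assumes 16-bit PCM, mono
    time_axis = np.arange(len(samples)) / sample_rate

    plt.plot(time_axis, samples, linewidth=0.5)
    plt.xlabel("Time, s")
    plt.ylabel("Amplitude")
    plt.show()

plot_waveform("hello_how_are_you.wav")   # hypothetical file name
```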

The AI-generated audio of the phrase “Hello, how are you?”

If the voice is AI-generated, the sound waves are likely to be very clean and similar to one another, because AI sound generation is essentially a process of drawing the waveform.

Human audio of the phrase “Hello, how are you?”

Audio recordings created by a live person tend to sound more natural and varied.

💡
Additional Verification Methods

Metadata Analysis. Examining the metadata of the audio files can reveal mentions of the AI generators used to create them.
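For a quick local look at audio metadata, the third-party mutagen library can be used. This is only an illustration; the file name is hypothetical, and as with images, clean or missing metadata proves nothing on its own.

```python
# Print stream info and any tags found in an audio file
# (pip install mutagen). An encoder or tool name in the tags is just one clue.
from mutagen import File

audio = File("suspect_clip.mp3")          # hypothetical file name
if audio is None:
    print("Unrecognized audio format")
else:
    print(audio.info.pprint())            # codec, duration, bitrate, etc.
    for key, value in (audio.tags or {}).items():
        print(f"{key}: {value}")
```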

File Name Search. As with image files, searching for the audio file's name on search engines like Google or IP-search engines (Netlas, Fofa, Censys) lets you cross-reference its online presence.

AI-Generated Videos

What Can AI Video Tools Do?

  • Generate videos of someone speaking a given text based on a photo of their face. Example: Vidnoz
  • Create videos based on text descriptions. Example: Zeroscope Text-to-Video
  • Generate videos based on images. Example: genmo.ai
  • Edit a video according to a text description. Example: Pix2Pix Video
  • Replace a person's face in a video. Example: deepswap.ai

Note: Online demos, like those available on Hugging Face, are configured to work with short video clips. The platform does this to conserve resources. However, individuals can run these same AI models on their personal servers or computers, allowing them to generate much longer video clips.

AI advancements happen at breakneck speed, so staying on top of things requires active attention


Recognizing AI-Generated Fake Videos

Deepfake videos, which replace one person's face with another's, can be detected using online tools like Deepfake Detector and Deepware Deepfake Video Scanner. However, these tools have limitations and may not catch other types of video fakes, such as alterations to uniforms or backgrounds. Therefore, meticulous manual examination remains the most effective way to identify fake videos. Pay close attention to unnatural or poorly executed details in the video, as discussed in the image section.

Tools like Anilyzer can help you analyze YouTube or Vimeo videos frame by frame, and you can also download videos and view individual frames using VLC player. Converting video files to image sequences with Clvideo allows for frame-by-frame exploration using the same image analysis tools mentioned earlier.
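As an alternative to Clvideo, the frames of a local video file can be exported with a short script and then examined using the image techniques described earlier. The sketch below relies on the third-party opencv-python package; the file name and frame interval are illustrative assumptions.

```python
# Export every Nth frame of a video as a PNG so the frames can be inspected
# with the image analysis tools discussed above (pip install opencv-python).
import cv2

def export_frames(video_path, every_n=30, prefix="frame"):
    capture = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:                        # end of the video or a read error
            break
        if index % every_n == 0:
            cv2.imwrite(f"{prefix}_{index:06d}.png", frame)
            saved += 1
        index += 1
    capture.release()
    return saved

print(export_frames("suspect_video.mp4"), "frames exported")   # hypothetical file name
```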

💡
Additional Verification Methods

Metadata Analysis. Just like with audio and images, examining the file's metadata and searching for the file name using different search engines can provide further clues about the video's authenticity.

Further Thoughts on AI Generation Tools

When it comes to recognizing AI-generated and edited media files, attention to small details is crucial. To become proficient at identifying AI-generated content, consider exploring more examples of what modern neural networks can accomplish. Trying out the services listed in the "What Can AI ... Tools Do?" subsections and studying their output will help you develop an intuitive sense for spotting AI-generated elements.

AI technology is rapidly evolving, and new tools are continually emerging. The features listed in this article may become outdated quickly. To stay informed about the latest advancements in AI, it's advisable to regularly check the Trending Spaces tabs on platforms like Hugging Face.

Please note that the landscape of AI tools is dynamic. Since the time of writing (which can be more than a month before publication), newer and improved services may already have replaced some of the tools mentioned in this article. Notably, the Hyper Human model sets new standards for generating lifelike images that are nearly indistinguishable from real ones. The techniques and advice provided in this article may not be effective against such advanced AI-generated content.

The Hyper Human model creates incredibly realistic depictions of people, which may soon be challenging to identify as AI-generated (Image Source)

An impressive tool for AI-generated video is Runway Gen-2. The solution recently received an update to its text-to-video functionality. The results are promising, with high image quality, the ability to adjust camera position and speed, and improved color reproduction. While some elements still need improvement, the end results are already lifelike. In fact, Ammaar Reshi noted the quality of the resulting work on X (formerly Twitter).

While results still look more artistic than realistic, Runway Gen-2 shows promise for realistic AI generation (Image source)

It's essential to remain vigilant, adapt to evolving technologies, and continuously enhance your skills in recognizing AI-generated content across various media types. Indeed, advancements in AI will undoubtedly extend to sound and video generation soon.
