
How to Spot AI Deepfakes: A Simple Guide (Regularly Updated)

As the digital world evolves, so does the sophistication of artificial intelligence (AI) technologies like deepfakes. These manipulated videos or audio recordings can make it appear as though individuals are saying or doing things they never did. While deepfakes range from harmless fun to malicious misinformation, it’s crucial to be able to distinguish authentic from synthetic content. Here’s an enhanced guide to help you spot deepfakes more effectively. Given this is a cat-and-mouse game, I’ll try to keep this updated regularly, as the tech is advancing rapidly and THERE ARE NO FOOLPROOF TECHNIQUES THAT WORK 100% OF THE TIME.

Types of AI DeepFakes

  • Video deepfakes: The most well-known type, these can make it look like someone is saying or doing something they never did, typically by superimposing one person’s face onto another person’s body in a video.
  • Audio deepfakes: These can be used to manipulate someone’s voice to make it sound like they are saying something they never said. This is done by training a machine learning model on a person’s voice, and then using the model to generate new speech that sounds like the person.
  • Image deepfakes: These can be used to create fake images of people, or to manipulate existing images. This could include anything from changing a person’s facial expressions to inserting them into compromising or embarrassing situations.
  • Text deepfakes: These are a newer type, but they can be just as dangerous. They involve using AI to generate realistic-looking pieces of writing (like social media posts, news articles, or even emails) that falsely appear to be written by a real person.

Here’s a great example by Tom Graham from MetaPhysic at the TED 2023 conference in Vancouver, where they recreated the popular deepfake of Tom Cruise, demoed the capability to lip-sync translated voices, and created deepfakes live (using Chris Anderson as an example).

Important Note: Deepfake technology is in constant evolution. New types and capabilities are continuously being developed. It’s critical to stay informed to identify the potential threats these technologies may pose.

High Profile Examples of AI DeepFakes In The News

There have been plenty of examples in the media recently, with bad actors using rapidly improving AI tech to engineer scams, put out misleading political content, or even create adult material featuring well-known celebrities.

In one example, a multinational company’s finance worker was deceived into transferring $25 million to scammers who used sophisticated deepfake technology to impersonate the company’s CFO and other colleagues during a video conference call. This incident highlights the increasing threat of deepfakes being used in elaborate fraud schemes. See CNN for more.

In another example, the video above discusses the spread of AI-generated explicit images of Taylor Swift on social media, leading to calls for stricter legislation against deepfakes in the U.S. It highlights the challenges in regulating such content and the collective effort of Swift’s fans to mitigate the impact, underscoring the need for both community action and legal reforms to address the issue.

How to Spot an AI-Generated DeepFake

Here are a few general guidelines to follow:

1. Examine Visual Cues

Deepfakes can often be betrayed by visual imperfections:

  • Uncanny Valley Effect: Look for anything that strikes you as unnaturally off. This might include awkward facial expressions or mismatched emotions that don’t fit the context.
  • Facial Inconsistencies: Pay close attention to unnatural blinking, peculiar eye movements, or distortion around the face’s edges, such as the hairline, jawline, and ears.
  • Blurry Details: Manipulated content might show blurring or pixelation, especially where the subject and background meet.
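
If you want to put a rough number on the “blurry details” cue, here’s a minimal sketch (assuming Python with OpenCV, and a hypothetical file name suspect_clip.mp4) that compares sharpness inside the detected face region against the whole frame. It’s a heuristic to flag clips for a closer look, not a real deepfake detector.

```python
# Heuristic only: deepfake blending sometimes leaves the face region noticeably
# softer (or sharper) than its surroundings. Compare Laplacian-variance sharpness
# of the detected face against the whole frame, per frame, and report the median.
import cv2

def face_vs_frame_sharpness(frame_bgr, cascade):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_sharp = cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var()
    frame_sharp = cv2.Laplacian(gray, cv2.CV_64F).var()
    return face_sharp / max(frame_sharp, 1e-6)

# Haar cascade face detector bundled with OpenCV (simple, but imperfect).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file name
ratios = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ratio = face_vs_frame_sharpness(frame, cascade)
    if ratio is not None:
        ratios.append(ratio)
cap.release()

if ratios:
    median = sorted(ratios)[len(ratios) // 2]
    # A ratio far from 1.0 is only a hint that the face was pasted in or re-rendered.
    print(f"Median face/frame sharpness ratio: {median:.2f}")
else:
    print("No faces detected - nothing to compare.")
```

A ratio consistently far from 1.0 doesn’t prove anything on its own (lighting, focus and compression all play a role), but it’s a cheap way to decide which clips deserve a frame-by-frame look.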

In the following example of the Ukrainian President, the visual cues are pretty obvious:

2. Listen Carefully

Audio inconsistencies can also indicate a deepfake:

  • Lip-Sync Discrepancies: A mismatch between lip movements and spoken words can be a clear giveaway.
  • Voice Anomalies: Be alert to any oddities in the voice, such as unnatural tone, pitch, or a robotic inflection (a rough pitch check is sketched after this list).
  • Background Noise: Inconsistent or distracting background sounds might suggest audio manipulation.
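
For the voice side, here’s a similarly rough sketch (assuming Python with librosa and NumPy, and a hypothetical file suspect_audio.wav) that estimates the speaker’s pitch contour and flags deliveries that sound suspiciously monotone or erratic. The thresholds are illustrative guesses, not calibrated values.

```python
# Heuristic only: some synthetic voices have an unusually flat or erratic pitch
# contour. Estimate the fundamental frequency with pYIN and look at how much it
# varies across the voiced parts of the clip.
import numpy as np
import librosa

y, sr = librosa.load("suspect_audio.wav", sr=16000)  # hypothetical file name

f0, voiced_flag, _ = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # ~65 Hz, low end of typical speech
    fmax=librosa.note_to_hz("C6"),  # generous upper bound
    sr=sr,
)
voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]

if voiced_f0.size == 0:
    print("No voiced speech found - nothing to analyse.")
else:
    variation = np.std(voiced_f0) / np.mean(voiced_f0)  # coefficient of variation
    print(f"Pitch variation: {variation:.2f}")
    # Thresholds below are illustrative, not calibrated.
    if variation < 0.05:
        print("Very monotone delivery - worth a careful listen.")
    elif variation > 0.6:
        print("Unusually erratic pitch - worth a careful listen.")
    else:
        print("Pitch variation looks within a normal speech range.")
```

Plenty of genuine recordings will trip a crude check like this, so treat it as a prompt to listen more carefully rather than a verdict.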

I’ve tested tools myself that translate and lip-sync videos…watch the following video to see if you can spot the issues, despite how good the AI is:

3. Consider the Context

Contextual clues are vital in identifying deepfakes:

  • Uncharacteristic Behavior: Question the content if it portrays someone acting in ways that seem out of character or too outlandish.
  • Source Verification: Evaluate the reliability of the content’s source. Content from unknown or suspicious sources deserves scrutiny.

4. Utilize Technological Tools

Several tools and platforms have been developed to detect deepfakes:

  • Deepfake Detection Software: A quick search for “deepfake detection tools” online can provide resources designed to identify manipulated content.
  • Reverse Image Search: This can help verify the authenticity of images or videos by finding the original source or similar content.
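
If you have (or can find) a copy of the suspected original image, you can do a local version of a reverse image check with a perceptual hash. The sketch below assumes Python with the Pillow and imagehash packages and hypothetical file names; small hash distances mean near-duplicates, while larger ones suggest edits or an unrelated image.

```python
# Compare a suspect image against a known original using a perceptual hash.
# Unlike a cryptographic hash, a perceptual hash changes only slightly when an
# image is resized or re-compressed, so the Hamming distance between hashes is a
# rough measure of visual similarity.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("known_original.jpg"))  # hypothetical file
suspect = imagehash.phash(Image.open("suspect_image.jpg"))    # hypothetical file

distance = original - suspect  # Hamming distance between the 64-bit hashes
print(f"Perceptual hash distance: {distance}")

# Rough rules of thumb, not hard cut-offs.
if distance <= 5:
    print("Near-identical to the original.")
elif distance <= 15:
    print("Similar, but with noticeable changes - inspect closely.")
else:
    print("Substantially different - heavily edited, generated, or unrelated.")
```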

Specific Tools/Platforms

  • Intel FakeCatcher: This real-time deepfake detector uses physiological analysis to identify subtle inconsistencies in blood flow patterns. It is known for its speed and high accuracy rate (around 96%).
  • Microsoft Video Authenticator: A free tool analyzing videos and images to provide a percentage indicating the likelihood of manipulation. It leverages artifact detection techniques.
  • Sensity.ai: Offers a platform with real-time deepfake detection, liveness detection (distinguishes a real person from a video), and robust security measures.
  • Deepware Scanner: A cloud-based solution focused primarily on artifact detection within images and videos.
  • Reality Defender: This platform incorporates a multi-factor approach with physiological analysis, artifact detection, and source/context analysis for more holistic deepfake detection.

Important Notes:

  • No Single Perfect Solution: Deepfake detection is a cat-and-mouse game as technology on both sides continues to evolve. The best approach often involves a combination of tools and human judgment.
  • Research is Ongoing: Universities and tech companies are heavily invested in developing more robust detection methods. Expect new tools and advancements regularly.

PRO Tips For Online Video Calls:

  • To spot a deepfake during a video call, try asking the person to turn their head sideways. Deepfake technology struggles to accurately recreate side views of faces because it lacks enough data on how people’s profiles look.
  • Another tip is to watch for mismatches between sounds (like coughing or sneezing) and the video. If things don’t line up or if the video quality changes when someone waves their hand in front of their face, you might be dealing with a deepfake.
  • These simple checks can help identify fake videos, especially during online job interviews or meetings.

Remember:

  • Don’t Blindly Trust What You See: As technology advances, spotting deepfakes by eye alone will become more challenging.
  • Maintain Healthy Skepticism: Especially for content that evokes strong emotional reactions. Always verify information before accepting it as true.
  • Stay Updated: Keeping informed about the latest developments in deepfake technology can aid in detection efforts.

Worrying (Upcoming) Tech To Watch

Text (style) and audio (voices) are already pretty easy to clone, but the ability to produce high-quality video is also getting better and more accessible. A few examples:

OpenAI’s Sora, which can now generate very realistic text to video:

Very recently, China’s Shengshu Technology and Tsinghua University unveiled Vidu AI, a text-to-video model built to compete with Sora. It can generate 16-second clips at 1080p resolution, and you’ll notice some of the scenes aim to replicate the Sora demo…however it doesn’t quite match Sora’s level of quality and clip length yet.

Alibaba’s EMO, which can now transform still images into video, complete with believable face movements and emotions:

Microsoft, not to be left behind, recently released its Research division’s VASA-1 project, which uses advanced AI to create realistic, audio-synchronized talking faces from just a single still image input. Pretty freaky…luckily this hasn’t been released for public use yet.

Synthesia has also upped its game with Expressive Avatars:

Then there’s this face-swapping tech/model…the creator hasn’t revealed the technique or whether it was truly real-time, but it is certainly a bit worrying:

Finally, here’s a recent video where Reid Hoffman (founder of LinkedIn and prolific investor) interviewed his own digital avatar. The quality of this avatar, including the replication of his mannerisms, is quite incredible…if you didn’t know any better, you’d think he (it) was the real deal. Maybe we’re getting to a place where we simply have to ask people to do (or reveal) things that a fake avatar couldn’t, just so we can tell whether they’re real.

Conclusion

There’s no foolproof way to counter deepfakes…however, by adopting some of the above strategies, you’ll be better equipped to discern real from synthetic and protect yourself against misinformation. As deepfake technology continues to evolve, staying informed and vigilant is our best defense. If you have any new examples, tools or tips for spotting deepfakes, please leave a comment and let us know.
