Deepfakes Are No Longer Science Fiction
A few years ago, deepfake videos — AI-generated or AI-manipulated footage that makes real people appear to say or do things they never did — were a niche technical curiosity. Today they're circulating in political campaigns, social media feeds, and online news. The technology is improving rapidly, and the tools to create them are increasingly accessible to non-experts.
This doesn't mean you should distrust every video you see. It means you need a practical framework for evaluating what you're watching.
Visual Red Flags to Watch For
Even as deepfake quality improves, certain telltale signs remain difficult for AI to fully eliminate:
- Unnatural blinking: Early deepfakes rarely blinked. Newer ones blink, but the timing or frequency can still seem slightly off.
- Facial edge artifacts: Look at the hairline, ears, and jaw. AI-generated faces often have subtle blurring, flickering, or inconsistent detail at the boundaries between face and hair or background.
- Lighting inconsistency: The face may appear to be lit differently from the rest of the scene — a classic sign of composited imagery.
- Teeth and mouth: AI still struggles with the inside of a moving mouth. Look for teeth that seem unusually smooth or uniform, or that blur when the mouth moves.
- Skin texture: Deepfaked faces often have an unnaturally smooth, almost waxy quality, particularly in lower-resolution videos.
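To see how the blink cue above can be checked automatically rather than by eye, here is a minimal sketch of the widely used eye-aspect-ratio (EAR) approach. It assumes you already have six (x, y) eye landmarks per frame from a facial-landmark model (the 6-point eye layout used by common landmark detectors); obtaining those landmarks is outside this sketch.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Assumes the common 6-point layout: indices 0 and 3 are the
    horizontal corners, 1 and 2 the upper lid, 5 and 4 the lower
    lid. EAR drops sharply toward 0 when the eye closes, so a
    per-frame EAR series exposes blink timing and frequency.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series.

    A blink is a run of at least `min_frames` consecutive frames
    below `threshold` (both thresholds are illustrative defaults).
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A genuine talking-head clip typically shows a blink every few seconds; long no-blink stretches, or blinks with suspiciously uniform length and spacing, are worth a closer look.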
Audio Cues to Listen For
Video deepfakes are often paired with AI-generated or cloned audio. Signs of manipulated audio include:
- Slightly robotic or over-smooth vocal quality
- Unusual breathing patterns (or none at all)
- Lip sync that's close but not quite right
- Background noise that cuts out unnaturally when the person speaks
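The last cue above — background noise vanishing when the person speaks — is one of the few that is easy to quantify. As an illustration (not a detector), here is a sketch that measures how much of an audio track is *perfectly* silent; the sample data and thresholds are assumptions for the example. Real recordings carry constant low-level room noise, so long runs of true digital silence hint at generated or spliced audio.

```python
import math

def rms_frames(samples, frame_size=1024):
    """Root-mean-square energy of fixed-size frames of PCM
    samples (floats in [-1, 1]); trailing partial frame dropped."""
    frames = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        frames.append(math.sqrt(sum(s * s for s in frame) / frame_size))
    return frames

def dead_silence_ratio(samples, frame_size=1024, floor=1e-4):
    """Fraction of frames whose energy sits below `floor`.

    A microphone in a real room almost never produces frames of
    exact digital silence; a high ratio suggests noise was gated
    out or the track was synthesised. `floor` is an illustrative
    threshold, not a calibrated one.
    """
    frames = rms_frames(samples, frame_size)
    if not frames:
        return 0.0
    quiet = sum(1 for r in frames if r < floor)
    return quiet / len(frames)
```

In practice you would load the audio track with a decoding library and inspect where the silent frames fall — dead silence exactly between spoken phrases is the pattern described above.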
Context Checks Are More Reliable Than Visual Inspection
Here's the uncomfortable truth: as deepfake technology improves, visual and audio inspection alone will become less reliable. The more durable approach is contextual verification:
- Check the source: Where did this video first appear? A credible news organisation with editorial standards, or an anonymous social media account?
- Search for corroborating coverage: If a public figure genuinely said or did something newsworthy on video, it will be covered by multiple independent sources.
- Use reverse video search: Tools like InVID/WeVerify extract keyframes from a video that you can run through Google's reverse image search, revealing where the footage has appeared online and when it first surfaced.
- Check the metadata: The upload date, account history, and associated links can all provide context about whether a video is what it claims to be.
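Reverse image search on keyframes works because near-identical images can be matched by a compact "perceptual hash" rather than pixel-for-pixel. As a rough sketch of the idea (the real services use far more robust methods), here is the classic average-hash on an 8×8 grayscale thumbnail, with the pixel values supplied as a plain list for illustration:

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale image.

    `pixels` is a list of 64 brightness values (0-255). Bit i is 1
    if pixel i is brighter than the image mean. Near-identical
    images produce near-identical bit patterns, so a small Hamming
    distance between hashes suggests two frames share a source.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```

Recompression, resizing, and small overlays barely move the hash, which is why a re-uploaded copy of an old video can still be traced back to its first appearance.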
Tools That Can Help
| Tool | What It Does | Cost |
|---|---|---|
| InVID / WeVerify | Reverse video search, metadata extraction | Free |
| Google Reverse Image Search | Find where thumbnail images have appeared | Free |
| Sensity AI | Deepfake detection (enterprise-focused) | Paid |
| Microsoft Video Authenticator | Confidence scores for manipulated video | Free (limited access) |
The Most Important Skill: Slowing Down
The single most effective defence against deepfakes and manipulated media isn't a tool — it's a habit. Before sharing a video that triggers a strong emotional response, pause. Strong emotions (outrage, disbelief, excitement) are exactly the conditions under which we're most likely to share something false. Take 60 seconds to verify before you amplify.