How to tell real Minneapolis ICE surge videos from AI deepfakes — 5 key clues

a person recording a video on their phone during a protest
(Image credit: Bridget Bennett/Bloomberg/Getty)

As federal immigration enforcement activity tied to Operation Metro Surge has unfolded in Minneapolis, social media has been flooded with a mix of real on-the-ground footage and highly realistic AI-generated images and videos.

Unfortunately, it’s getting increasingly difficult to separate what actually happened from what was fabricated in an AI video generator, since tools like Grok, Veo 3.2 and Sora put that capability in just about anyone’s hands.

I'll admit, even I've shared videos that I thought were real. It can be very hard to tell, which is why researchers and fact-checkers have warned that convincing deepfakes tied to these events are spreading rapidly across platforms like X, TikTok and Facebook — sometimes racking up millions of views before being debunked.

If you’re trying to make sense of what you’re seeing online, here are five practical ways to spot AI-generated or manipulated content.

1. Odd movements, lighting or physics

Google Vlogger AI video

(Image credit: Google)

One of the most common giveaways of AI video is subtle visual “weirdness.” It's that uncanny valley effect of people moving in slightly unnatural ways, limbs that look stiff or warped, or shadows that don’t quite match the light source. These imperfections can be hard to catch at first glance, but they’re a classic deepfake red flag.

Audio can give off the same effect: voices often sound flat or oddly distant, even when the video is a close-up.

2. Strange or unreadable text in the background

Kling AI video

(Image credit: Kling AI video/Future)

AI often struggles with realistic text. If you spot street signs, badges, uniforms or labels that look misspelled, blurry or nonsensical, that’s a strong hint that the image or video may have been generated rather than filmed. Several viral clips related to Minneapolis have already been flagged for exactly this issue.

You may also notice physical oddities, such as vehicle doors that open the wrong way, too many door handles, or a vehicle missing its logo entirely.

3. Visible AI watermarks or tool branding

Runway AI video

(Image credit: Runway AI video)

Some deepfakes still carry subtle (or not-so-subtle) watermarks from the AI tools used to create them. If you see logos, faint branding or AI tags overlaid on a clip, that’s a clear sign it didn’t originate from a real camera.

That said, watermarks aren’t always visible: social media users sometimes cover them with emojis or text, or crop them out entirely.
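Beyond visible watermarks, provenance can sometimes be checked in a file's metadata: genuine camera footage and photos often carry EXIF data (camera model, capture time), while files rendered by AI tools frequently don't. As a minimal sketch (assuming you have the original file, not a re-encoded social media copy, since platforms routinely strip metadata on upload), here is a standard-library-only check for an EXIF segment in a JPEG:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's segments for an APP1 block carrying an EXIF header.

    Absence of EXIF proves nothing on its own (social platforms strip
    metadata), but its presence with plausible camera fields is a point
    in favor of real footage.
    """
    i = 2  # skip the 0xFFD8 start-of-image marker
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        # APP1 (0xFFE1) segments holding EXIF start with b"Exif\x00\x00"
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + seg_len  # jump past this segment to the next marker
    return False
```

Dedicated tools such as ExifTool do this far more thoroughly, and newer provenance standards like C2PA Content Credentials embed cryptographically signed origin data that some platforms are beginning to surface.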

4. No credible source

OpenAI Sora video AI

(Image credit: OpenAI Sora/August Kamp)

Be wary of posts that make dramatic claims but don’t link to a verified news organization, reporter or official source. Deepfake creators often pair misleading visuals with emotionally charged captions to maximize shares — even when the footage itself is fake.

You can also take a screenshot of the post, upload it directly into ChatGPT and ask it to find the source of the image. Often, ChatGPT can identify what the image depicts and share the original context.

5. Mismatch with verified reporting

Stonehenge (AI image)

(Image credit: Recraft AI/Future AI)

If reputable news outlets have already published confirmed footage from a scene, compare what you’re seeing to those clips. AI-altered videos may look similar at a glance, but closer inspection often reveals inconsistencies in angles, people, timing or surroundings.

Bonus tip: look for multiple confirmations

AI generated Otter video

(Image credit: Leonardo.ai/Ryan Morrison)

Before believing — or sharing — a viral clip, check whether at least two credible news organizations have independently verified it. If you can’t find any reliable confirmation, it’s safer to assume the footage could be manipulated.

Bottom line

Deepfakes can muddy the public record, drown out real eyewitness evidence and make it harder to understand unfolding events. As AI tools continue to improve, distinguishing fact from fabrication will only become more challenging, which is why developing a critical eye is more important than ever.



Amanda Caswell
AI Editor

