I’ve known about deepfakes for years, but I didn’t realize how sophisticated they had become until recently, when a friend sent me a “news” video to look at.
When I first watched it, it seemed legitimate. The lighting was right, the studio familiar, the delivery authentic. The comments section seemed genuine, with some viewers outraged and others supportive. No one questioned whether it was real.
I had to watch it several times before I started to see the flaws. The mouth movements looked slightly off during certain words. The neck was too stiff. Then I thought to check the URL and found it was not the newscaster’s regular site.
Even knowing what to look for, I found the fake hard to spot. How many people watched once, accepted it as truth, and shared it? The answer matters, because what started as interesting technology has become a tool for mass deception. We’re now in a period where seeing is no longer believing—and most people haven’t noticed yet.
What Deepfakes Are
Deepfakes are AI-generated media that mimic real people. In just a few years they’ve evolved from crude experiments to nearly perfect forgeries. The technology can swap faces, clone voices, and generate full videos from a text description. What required expert skills and expensive equipment in 2017 now takes only a laptop and freely available software.
The scale of the problem is hard to grasp. According to the cybersecurity firm DeepStrike, the number of deepfakes online surged from approximately 500,000 in 2023 to about 8 million in 2025. Three technical breakthroughs made this acceleration possible.
First, video generation models achieved what researchers call temporal consistency: coherent motion, stable facial features, and scenes that hold together from frame to frame. Second, voice cloning crossed what computer scientist Siwei Lyu calls the “indistinguishable threshold.” A few seconds of sampled audio are now enough to generate a convincing clone, complete with natural tone, rhythm, emotion, pauses, and breathing sounds. The clues that once gave away synthetic voices have mostly disappeared.
Third, consumer tools reduced the technical barrier to zero. Tools like OpenAI’s Sora 2 and Google’s Veo 3, along with a wave of startup products, mean anyone can describe an idea and let a large language model draft a script. Within minutes they can generate polished audiovisual material that looks and sounds real. The capacity to produce coherent, storyline-driven deepfakes at scale is now widely accessible.
Newscasters make ideal targets for deepfakes because we’re conditioned to trust them. Decades of footage exist for training AI models. They speak directly to audiences with authority. A fake news anchor doesn’t just spread a false story. It weaponizes the trust we place in journalism itself.
How to Spot a Deepfake
Detection is getting harder, but several clues remain visible to careful observers. Start with the face and eyes. Watch for unnatural blinking patterns (too frequent, too slow, or absent altogether) and eye movements that don’t track properly. Mouth synchronization often breaks down during complex consonants like p, b, and m. Teeth may appear blurred, and tongue movements might not match sounds.
Skin texture frequently looks too smooth or lacks natural pores. Hair shows blurring where the hairline meets the background, and it doesn’t move naturally. Lighting inconsistencies give fakes away: shadows that don’t match light sources, faces lit differently from their backgrounds, reflections in the eyes that don’t align with the scene. Physical details like hands, ears, and jewelry can appear warped or incomplete. The way someone tilts their head or moves their jaw may feel robotic.
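Some of these cues can even be measured rather than just eyeballed. As a rough illustration, here is a minimal Python sketch of the eye aspect ratio (EAR) heuristic often used in blink detection. It assumes you have already extracted six landmark points per eye for each frame using a face-landmark library such as MediaPipe or dlib (that step is not shown), and the 0.21 threshold is an approximate figure from the blink-detection literature, not a calibrated value.

```python
# Minimal sketch: flag unnatural blink rates from per-frame eye landmarks.
# Assumes six (x, y) points per eye in the standard EAR layout:
# p1..p6 = outer corner, upper-left, upper-right, inner corner,
#          lower-right, lower-left.
import math

def eye_aspect_ratio(eye):
    """Ratio of eye height to width; drops sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(per_frame_eyes, fps, ear_threshold=0.21):
    """Count EAR dips below the threshold and convert to a per-minute rate."""
    blinks, closed = 0, False
    for eye in per_frame_eyes:
        if eye_aspect_ratio(eye) < ear_threshold:
            if not closed:
                blinks, closed = blinks + 1, True
        else:
            closed = False
    minutes = len(per_frame_eyes) / fps / 60.0
    return blinks / minutes if minutes else 0.0
```

People at rest typically blink somewhere around 15 to 20 times per minute, so a talking head that blinks almost never, or far too often, is worth a second look. No single script settles anything; the point is that these cues are measurable, not just impressions.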
Audio offers its own signs. Listen for a slight robotic quality underneath natural speech. Breathing patterns that don’t align with speech reveal the fakery. Unnatural pauses or mispronunciations of familiar names indicate an AI model working from text.
Context verification matters. Check the URL for lookalike domain names (NBC-news.net rather than NBC.com). Confirm that the content appears on the official news site or through other legitimate sources. Familiar elements like station logos and on-screen text may be missing or incorrect. Run a reverse image search through Google Images or TinEye on video screenshots. Check whether the news organization or the person shown has addressed the content directly.
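The URL check in particular is easy to automate. Below is a minimal Python sketch, using only the standard library, of the domain comparison described above; the allowlist is purely illustrative, since in practice you would look up the outlet’s actual official domains.

```python
# Minimal sketch: does a link's hostname belong to a known official domain?
# Catches lookalikes such as "nbc-news.net" posing as "nbc.com".
from urllib.parse import urlparse

# Illustrative allowlist only; fill in the outlet's real domains.
OFFICIAL_DOMAINS = {"nbc.com", "nbcnews.com", "bbc.com", "reuters.com"}

def is_official(url: str) -> bool:
    hostname = (urlparse(url).hostname or "").lower()
    # Match only an exact official domain or a true subdomain of one
    # ("player.nbcnews.com"), never a merely similar-looking name.
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in OFFICIAL_DOMAINS
    )

print(is_official("https://www.nbcnews.com/politics/story"))  # True
print(is_official("https://NBC-news.net/breaking"))           # False: lookalike
```

A hyphen, an extra word, or a different top-level domain is enough to fool a quick glance, which is exactly what these fakes count on.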
Trust your instincts – if something feels off, investigate before sharing.
Why Deepfakes Threaten Democracy
The danger extends far beyond individual deception. Deepfakes are involved in over 30% of high-impact corporate impersonation attacks, but the political and knowledge threats run deeper.
Immediate harms include the spread of false information through fake announcements, fabricated statements, and manufactured events. People make financial, political, and personal decisions based on false “news.” Deepfakes target trust specifically because we’re conditioned to believe newscasters. When fake media mimics authoritative sources, it exploits decades of built-up credibility.
More insidious is what researchers call the “liar’s dividend.” This phenomenon allows bad actors to dismiss authentic, incriminating evidence as fabricated. For example, a politician caught on video taking a bribe can now claim “that’s a deepfake” or “fake news” and create enough doubt to evade accountability. As deepfakes proliferate, authentic evidence becomes questionable, real footage becomes negotiable, and legitimate journalism loses credibility. Truth stops being treated as objective.
During the 2024 elections, fears of a deepfake apocalypse didn’t fully materialize, but the groundwork for future interference was laid. A robocall featuring a fake Biden voice told New Hampshire voters not to vote in the Democratic primary. AI-generated images purporting to show Hurricane Helene disaster areas spread confusion. Viral deepfake videos and images misrepresented candidates’ actions. While these specific incidents may not have changed election outcomes, they normalized the tactic and tested what works.
Deepfakes pose a significant threat in crisis situations. Fake emergency declarations could trigger panic. Fabricated candidate statements timed for maximum damage would leave no time for rebuttal. Foreign actors already use deepfakes to destabilize democracies. In 2024, Russian intelligence services attempted to use AI to influence U.S. elections.
Social divisions deepen when each political side can dismiss inconvenient truths as fakes. Conspiracy thinking finds new fuel. We lose the shared factual foundation that democratic debate requires. Communities retreat into closed information bubbles where only “their” sources merit trust.
Deepfake technology is moving faster than detection methods. By 2026, deepfakes are expected to become synthetic performers capable of reacting to people in real time; interactive fake video calls and entirely AI-generated news broadcasts are approaching feasibility. The sheer volume already overwhelms fact-checkers, who cannot debunk content fast enough to keep up with its spread.
This creates a knowledge crisis. When nothing can be verified, truth belongs to whoever shouts the loudest, and authority matters more than facts. Authoritarians gain an advantage because confusion and doubt serve those who want to control information. Democracy requires informed citizens. Deepfakes make “informed” nearly impossible.
In 2025, a plaintiff submitted deepfake evidence as testimony in Mendones v. Cushman & Wakefield, resulting in the case being dismissed. Criminal justice systems must now consider the possibility that video, photo, or audio evidence—traditionally considered reliable—could be synthetically generated. This undermines the foundational assumption that courts can establish facts.
What You Can Do
Individual action matters despite the scale of the problem. Slow down before sharing content. Use the detection methods above: verify the URL, cross-reference with official sources, and look for visual and audio clues. Consult fact-checking sites like Snopes, FactCheck.org, and news organization verification pages like laura.getfact.ca. Educate your friends and family by sharing this knowledge with them.
Take the time to report suspected deepfakes on platforms like YouTube and to alert news organizations. Support quality journalism. Support legislation that holds the creators of malicious deepfakes accountable. Advocate for media literacy education in schools.
We still have a small window when awareness and advocacy can make a difference. The technology will continue to evolve, but informed, skeptical citizens remain the best defence. Real democracy depends on shared and accepted truth. Protecting that requires vigilance from all of us.
Without vigilance, we don’t just lose the ability to distinguish truth from lies. We surrender our capacity to make informed choices, and become prisoners of the best liars.
Postscript: You can check out this example of a deepfake. It’s not the one I was talking about (that has been removed), but it will give you an idea: https://www.youtube.com/watch?v=RCwXD7lFK64 Check the user comments. There is a disclaimer stating that the video is AI-generated, but people do not read it, and they believe what they are seeing is real.
