
Deepfakes: When Seeing Is No Longer Believing

  • Writer: Elle
  • 10 min read

Imagine watching a video of your favorite celebrity confessing to a crime they didn't commit. Or seeing a politician making outrageous statements they never actually said. Or receiving a video call from your best friend asking for money, except it's not really your friend at all.


Welcome to the age of deepfakes, where artificial intelligence can create fake videos, images, and audio so realistic that even experts struggle to tell them from the real thing.


In early 2024, a finance worker in Hong Kong transferred $25 million to scammers after joining a video call in which his company's CFO and several colleagues appeared to instruct him to make the transfer. Every other person on that call was fake. Every face, every voice, every gesture was generated by AI. The worker had no idea he had been talking to deepfakes until it was too late.


Deepfakes are one of the most fascinating and frightening technologies of our time. They can bring historical figures back to life for education. They can help actors perform in multiple languages. They can create art that's never been possible before.

They can also spread lies, ruin reputations, and undermine democracy.


So what are deepfakes? How do they work? And most importantly, how can you tell what's real and what's fake in a world where your eyes and ears can deceive you?


What Is a Deepfake?

The word "deepfake" combines two ideas: "deep learning" (a type of artificial intelligence) and "fake." It refers to synthetic media (videos, images, or audio) created or manipulated using AI to make something appear real when it's not.


Deepfakes can:

  • Swap one person's face onto another person's body

  • Make someone appear to say words they never spoke

  • Create entirely fictional people who never existed

  • Alter someone's facial expressions or body language

  • Clone someone's voice with just a few seconds of audio


The key difference between deepfakes and traditional photo or video editing is AI. Photoshopping an image requires skill and time. Creating a deepfake can now be done by anyone with basic computer skills using free apps.


Where Did Deepfakes Come From?

The technology behind deepfakes has been developing for decades, but the modern deepfake era began with a specific breakthrough and a troubling moment.

The Technology: GANs (2014)

In 2014, AI researcher Ian Goodfellow and his colleagues invented the Generative Adversarial Network, or GAN. This is the technology that makes most deepfakes possible.


Here's how GANs work (simplified):


Imagine two AI systems locked in a contest, like an art forger pitted against an art detective:

AI #1 (The Generator) tries to create a fake image or video. Its job is to make the fake as convincing as possible.


AI #2 (The Discriminator) examines the fake and tries to figure out what's wrong with it. Its job is to spot the forgery.

The Generator creates a fake. The Discriminator identifies flaws ("the eyes don't blink right" or "the lighting is off"). The Generator tries again, fixing those flaws. The Discriminator finds new problems. They keep going back and forth, thousands or millions of times.


Eventually, the Generator gets so good that the Discriminator can't tell what's fake anymore. And if an AI can't tell the difference, humans definitely can't.


This breakthrough made it possible to create incredibly realistic fake media.
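
To make the forger-versus-detective loop concrete, here is a minimal sketch of a GAN in PyTorch (an assumed framework; the idea isn't tied to any particular library). Instead of faces, this toy Generator learns to produce numbers drawn from a simple bell curve, but the back-and-forth training loop is exactly the process described above.

```python
# A toy GAN (PyTorch assumed): the Generator forges numbers from a bell curve
# centered at 4.0, and the Discriminator tries to tell them from real samples.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5001):
    real = torch.randn(64, 1) * 1.5 + 4.0    # "real" data: bell curve around 4.0
    fake = generator(torch.randn(64, 8))     # forgeries made from random noise

    # Train the Discriminator (the detective): label real as 1, fake as 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the Generator (the forger): try to make the detective answer "real" (1).
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

    if step % 1000 == 0:
        print(f"step {step}: fake mean = {fake.mean().item():.2f} (real mean is 4.0)")
```

After a few thousand rounds, the Generator's fakes cluster around the real data's average even though it never sees the real data directly; it improves only by learning to fool the Discriminator. Scale the same loop up from single numbers to millions of pixels and you have the core of face-generating deepfake models.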


The Moment: Reddit, 2017

In 2017, a person using the username "deepfakes" on Reddit began posting videos created using this technology. Unfortunately, these weren't educational or creative videos. They were pornographic videos where celebrities' faces had been swapped onto adult film actors' bodies without consent.


This was both the birth of the term "deepfake" and the first widespread recognition of how the technology could be misused. Although Reddit eventually banned the community, the word stuck. More importantly, the genie was out of the bottle. The technology was now accessible to anyone.


The Democratization: 2017-Present

Since 2017, deepfake creation has become easier and easier. Free software such as DeepFaceLab and Faceswap, along with numerous mobile apps, has put deepfake creation in anyone's hands. No programming knowledge required. No expensive equipment needed. Just download an app, upload some photos or videos, and let AI do the work.


In 2019, a Chinese app called Zao let users swap their faces into famous movie scenes from a single selfie. Within days, millions of people were using it.


Today, AI tools can create deepfakes in real-time during live video calls. The technology that took researchers years to develop is now available as a smartphone app.


How Deepfakes Are Made Today

Modern deepfakes generally follow this process:

Step 1: Collect training data. Gather photos and videos of the target person (the person you want to create or impersonate). The more data, the better the deepfake will be.

Step 2: Train the AI. Feed this data into deepfake software. The AI learns the person's facial features, expressions, movements, and (for audio) voice patterns.

Step 3: Generate the fake. The AI can now create new images, videos, or audio of that person doing or saying things they never actually did.

Step 4: Refine (optional). Manually edit the deepfake to fix any obvious glitches or artifacts.

This process used to take weeks and powerful computers. Now it can happen in minutes on a smartphone.
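
Sketched as code, those four steps look roughly like the outline below. Every function name here (collect_training_data, train_face_model, generate_fake) is a hypothetical placeholder standing in for what tools such as DeepFaceLab do internally; this is not any real library's API.

```python
# Hypothetical outline of the four-step pipeline above. The functions are
# placeholders, not a real tool's API; real software hides these stages behind a GUI.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class FaceModel:
    """Stands in for a trained face-swapping model."""
    person: str
    num_training_frames: int


def collect_training_data(folder: Path) -> list[Path]:
    # Step 1: gather every available photo/frame of the target person.
    return sorted(folder.glob("*.jpg"))


def train_face_model(person: str, frames: list[Path]) -> FaceModel:
    # Step 2: in a real tool this is hours of model training on a GPU;
    # here it only records how much data was available.
    return FaceModel(person=person, num_training_frames=len(frames))


def generate_fake(model: FaceModel, source_video: Path, output: Path) -> Path:
    # Step 3: render the target person's face onto the source footage.
    print(f"Rendering {model.person}'s face onto {source_video.name} -> {output.name} "
          f"(trained on {model.num_training_frames} frames)")
    return output


# Step 4 (refine) would be manual clean-up of any glitchy frames in the output.
frames = collect_training_data(Path("photos_of_target"))
model = train_face_model("Target Person", frames)
generate_fake(model, Path("source_clip.mp4"), Path("deepfake_output.mp4"))
```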


The Dark Side: Why Deepfakes Are Dangerous

The harmful uses of deepfakes fall into several categories, each with serious consequences:

1. Misinformation and Fake News

Deepfake videos of politicians, celebrities, and public figures can spread false information incredibly quickly. Examples include:

  • Fake videos of politicians making inflammatory statements

  • False endorsements (a celebrity appearing to promote a scam product)

  • Fabricated evidence of crimes or scandals


In 2022, a deepfake video purportedly showed Ukrainian President Volodymyr Zelensky telling his soldiers to surrender to Russia. The video was crude and quickly debunked, but it demonstrated how deepfakes could be weaponized in wartime.


2. Financial Fraud

Criminals use deepfakes to impersonate executives, friends, or family members to steal money. The Hong Kong case mentioned earlier isn't unique. There have been numerous cases of:

  • Fake video calls from "CEOs" instructing employees to transfer funds

  • Voice clones of family members calling with fake emergencies ("Grandma, I've been in an accident, send money!")

  • Impersonation of financial advisors or investment gurus promoting scams


In 2019, the CEO of a UK energy company transferred $243,000 after receiving a phone call in which an AI voice clone of his boss requested an urgent transfer.


3. Non-Consensual Pornography

This remains one of the most common and damaging uses of deepfakes. Victims (overwhelmingly women) have their faces placed on explicit content without their knowledge or consent. This:

  • Destroys reputations

  • Causes severe psychological trauma

  • Can lead to harassment, blackmail, and career damage

  • Affects celebrities, politicians, and ordinary people alike


A 2019 study by the research firm Deeptrace found that 96% of deepfake videos online were non-consensual pornography, with most targeting women.


4. Identity Theft and Authentication Bypass

Deepfakes can trick facial recognition systems and voice authentication security. Criminals can:

  • Unlock someone's phone using a deepfake of their face

  • Bypass bank security using voice clones

  • Create fake IDs or driver's licenses

  • Impersonate someone in online meetings


5. Bullying and Harassment

Students use deepfake apps to create embarrassing or harmful videos of classmates. This new form of cyberbullying can be devastating, especially when fake videos spread through social media.


6. Undermining Trust in Everything

Perhaps the deepest long-term harm is that deepfakes make people doubt everything they see and hear. Even real videos can be dismissed as "probably deepfakes." This erosion of trust in evidence and media has profound implications for democracy, journalism, and justice.


The Bright Side: Positive Uses of Deepfakes

Despite the dangers, deepfake technology has legitimate and beneficial applications:

1. Education

  • Bringing historical figures to life (imagine learning about ancient Rome from a deepfake of Julius Caesar)

  • Creating interactive educational content

  • Language learning with personalized AI tutors

2. Entertainment and Film

  • De-aging actors (making them look younger without prosthetics)

  • Recreating deceased actors for final scenes (with family permission)

  • Dubbing films into other languages while making actors' lip movements match

  • Creating special effects more affordably

3. Accessibility

  • Giving voice to people who have lost their ability to speak

  • Creating sign language interpreters

  • Translating content for deaf and hard-of-hearing communities

4. Art and Creativity

  • New forms of digital art

  • Creative expression and satire

  • Experimental filmmaking

5. Medical Training

  • Creating realistic medical scenarios for training

  • Simulating rare conditions

  • Practicing difficult conversations

6. Personalization

  • Customized video messages

  • Virtual try-ons for clothing or makeup

  • Avatar creation for gaming or virtual meetings


The technology itself is neutral. It's the use that determines whether it helps or harms.


How to Spot a Deepfake: Detection Tips

As deepfakes get better, detection gets harder. But there are still telltale signs you can look for:

Visual Clues

1. Watch the face closely

  • Is the skin too smooth or too wrinkly for the person's age?

  • Do facial features look weird when the person turns their head?

  • Are there weird shadows or lighting inconsistencies on the face?

2. Check the eyes

  • Do they blink naturally? Early deepfakes didn't blink correctly, and blink rate is still a useful cue (a simple automated check is sketched after this list)

  • Do the reflections in the eyes make sense for the lighting?

  • Does the person maintain natural eye contact?

3. Look at the mouth and teeth

  • When the person talks, do the lip movements match the words perfectly?

  • Do the teeth look realistic, or are they blurry?

  • Is there weird distortion around the mouth edges?

4. Examine hair and glasses

  • Hair is notoriously difficult for AI. Does it move naturally?

  • Are there weird artifacts or blurring around the hairline?

  • If the person wears glasses, does the glare look right? Does it change realistically when they move?

5. Check the background

  • Does the background look natural, or is it blurry?

  • Are there weird artifacts or distortions at the edges of the person?

  • When the person moves, does the background respond correctly?

6. Look for inconsistencies

  • Does the lighting make sense for the scene?

  • Are there color mismatches between the face and the body?

  • Do shadows fall where they should?
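
Some of these visual checks can even be roughed out in code. The sketch below is one way to automate the blink check from point 2 above, assuming OpenCV, dlib, and dlib's standard 68-point facial landmark model file (downloaded separately); the video filename is made up. It measures the eye aspect ratio frame by frame, counts blinks, and compares the rate against how often people normally blink.

```python
# Rough blink-rate check (assumes: opencv-python, dlib, and the standard
# shape_predictor_68_face_landmarks.dat model file downloaded separately).
import cv2
import dlib
import numpy as np

def eye_aspect_ratio(pts: np.ndarray) -> float:
    # pts: the 6 landmark points around one eye. The ratio drops sharply when the eye closes.
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("suspect_video.mp4")   # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
blinks, closed_frames, total_frames = 0, 0, 0
EAR_THRESHOLD = 0.21                          # typical closed-eye cutoff

while True:
    ok, frame = cap.read()
    if not ok:
        break
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        coords = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
        # Landmarks 36-41 and 42-47 outline the left and right eyes.
        ear = (eye_aspect_ratio(coords[36:42]) + eye_aspect_ratio(coords[42:48])) / 2.0
        if ear < EAR_THRESHOLD:
            closed_frames += 1
        else:
            if closed_frames >= 2:   # eye reopened after being closed: count one blink
                blinks += 1
            closed_frames = 0

cap.release()
minutes = total_frames / fps / 60.0
print(f"Detected {blinks} blinks in {minutes:.1f} min "
      f"(people typically blink roughly 15-20 times per minute)")
```

A low blink count is only a hint, not proof: modern deepfakes blink far more convincingly than early ones, which is exactly the arms-race problem discussed further down.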


Audio Clues

1. Listen for unnatural patterns

  • Robotic or monotone speech

  • Weird pauses or unnatural rhythm

  • Breathing sounds that don't match speaking

2. Check for consistency

  • Does the voice match the person's known voice in pitch, accent, and speaking style?

  • Are there background noises that seem off?


Context Clues

1. Consider the source

  • Who posted the video? Is it from a verified account?

  • Has anyone else confirmed this video exists?

  • Does the content match what you know about the person?

2. Verify with other sources

  • Has this been reported by reputable news sources?

  • Can you find the original source of the video?

  • Have fact-checkers examined it?

3. Question suspicious content

  • Does this video show the person doing something completely out of character?

  • Is it being shared to provoke strong emotions (outrage, fear, excitement)?

  • Does it seem designed to go viral?


Use Detection Tools

Several tools and websites can help detect deepfakes:

  • Microsoft Video Authenticator: Analyzes videos for manipulation

  • Deepware Scanner: Free online deepfake detector

  • Sensity: Platform for detecting synthetic media

  • Reality Defender: Commercial deepfake detection service


But remember: detection tools aren't perfect. They can give false positives (calling real videos fake) and false negatives (calling deepfakes real).


The Detection Arms Race

Here's the problem: every time detection technology improves, deepfake creation technology improves to evade it.


Early deepfakes didn't blink naturally, so detectors looked for abnormal blinking. Deepfake creators fixed the blinking problem. Detectors found new artifacts to look for. Creators fixed those too.


It's an endless arms race, like computer viruses and antivirus software. And right now, the creators have the advantage. Even experts with advanced AI tools struggle to identify the best deepfakes.


This is why awareness and critical thinking matter as much as technology.


What's Being Done About Deepfakes

Laws and Regulations

Many countries and states are creating laws specifically targeting malicious deepfakes:

  • Federal laws: The U.S. has introduced bills like the Deepfakes Accountability Act

  • State laws: California, Texas, Virginia, and others have laws against deepfake pornography and election interference

  • International efforts: The EU, China, and other countries have regulations addressing synthetic media


But enforcement is challenging, especially when deepfakes originate from other countries.


Platform Policies

Social media companies are taking action:

  • Labeling: Facebook, Twitter/X, and YouTube label AI-generated content

  • Removal: Platforms remove deepfakes that violate policies

  • Detection: Companies invest in AI tools to automatically detect deepfakes


Watermarking and Authentication

Some proposals would require:

  • Digital watermarks embedded in all AI-generated content

  • Blockchain-based verification of authentic media

  • Metadata proving a video's origin and editing history
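
To make the authentication idea concrete, here is a small sketch, assuming Python's cryptography package and a made-up filename, in which whoever captures a video signs a hash of the file with a private key, so that anyone holding the matching public key can later detect tampering. Emerging provenance standards such as C2PA carry richer signed metadata inside the file itself, but the underlying principle is the same.

```python
# Minimal media-signing sketch (assumes the 'cryptography' package; the video
# filename is hypothetical). Real provenance standards embed signed metadata in the file.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    # Hash the raw video bytes so even a one-pixel edit changes the digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# The camera (or publishing tool) holds the private key and signs at capture time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(file_digest("original_clip.mp4"))

# Anyone with the public key can later check whether the file has been altered.
try:
    public_key.verify(signature, file_digest("original_clip.mp4"))
    print("Signature valid: the file matches what was originally signed.")
except InvalidSignature:
    print("Signature check FAILED: the file was modified after signing.")
```

Note that a valid signature only proves the file hasn't changed since it was signed; it says nothing about whether the original footage was authentic in the first place.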


Education

Perhaps most important is teaching media literacy. People need to:

  • Question what they see online

  • Verify sources before believing or sharing

  • Understand how deepfakes work

  • Know how to check if content is authentic


What You Can Do

As a student in the age of deepfakes, here's how you can protect yourself and others:

1. Stay skeptical: Don't automatically believe videos just because they look real.

2. Verify before sharing: Check multiple sources before sharing something shocking or controversial.

3. Learn the signs: Practice identifying deepfakes using detection websites and quizzes.

4. Report fakes: If you encounter deepfakes being used to harm someone, report them to the platform and authorities.

5. Never create harmful deepfakes: Even as a joke, creating fake explicit content or impersonation videos can have serious legal and ethical consequences.

6. Protect your digital footprint: Be careful about what photos and videos of yourself you post online. They can be used to train deepfakes of you.

7. Speak up: If you see deepfakes being used to bully or harm someone, don't stay silent.


The Bottom Line

Deepfakes represent a fundamental challenge to the phrase "seeing is believing." In a world where AI can create convincing fake videos of anyone saying or doing anything, we can no longer trust our eyes and ears alone. The technology emerged from brilliant AI research (GANs in 2014) but was popularized through harmful uses (non-consensual pornography in 2017). It has since spread to become accessible to anyone with a smartphone.


The dangers are real: misinformation, fraud, harassment, and the erosion of trust in all media. But the technology also has legitimate uses in education, entertainment, accessibility, and art.


Detection is getting better, but creation is getting better faster. Technology alone won't solve the deepfake problem. We need laws, platform policies, authentication systems, and most importantly, a population educated in media literacy.


The next time you see a shocking video online, pause before you believe it. Check the source. Look for the telltale signs. Verify with other sources. Ask yourself: is this really real, or could it be a deepfake? In the age of synthetic media, critical thinking isn't just a useful skill. It's essential for navigating reality in a world where fake can look more real than real itself.


The future of truth depends on all of us learning to question what we see, verify what we hear, and think critically about the media we consume. Deepfakes have changed the game. Now it's up to us to learn the new rules.


Sources

Britannica. (2026). Deepfake. Retrieved from https://www.britannica.com/technology/deepfake

Brookings Institution. (2022). Artificial intelligence, deepfakes, and the uncertain future of truth. Retrieved from https://www.brookings.edu/articles/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/

ESET. How to detect deepfakes: A practical guide to spotting AI-Generated misinformation. Retrieved from https://www.eset.com/blog/en/home-topics/cybersecurity-protection/how-to-detect-deepfakes/

MIT Media Lab. Detect DeepFakes: How to counteract misinformation created by AI. Retrieved from https://www.media.mit.edu/projects/detect-fakes/overview/

Reality Defender. A Brief History of Deepfakes. Retrieved from https://www.realitydefender.com/insights/history-of-deepfakes

Scientific American. (2024). Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race. Retrieved from https://www.scientificamerican.com/article/detecting-deepfakes1/

U.S. Government Accountability Office. (2024). Science & Tech Spotlight: Combating Deepfakes. Retrieved from https://www.gao.gov/products/gao-24-107292

Wikipedia. (2026). Deepfake. Retrieved from https://en.wikipedia.org/wiki/Deepfake
