🎭 When Seeing Isn’t Believing: How to Outsmart Deepfakes
Three vivid deepfake incidents all point to the same conclusion - seeing is no longer believing. Explore our curated media and information literacy resources to learn how to spot manipulation, verify claims, and stay in control of what you trust.
📧 Deepfake CEO: YouTube Scam Targets Creators
Reports describe a phishing campaign using a “private” YouTube video with a convincing AI-generated version of CEO Neal Mohan announcing monetization changes and directing creators to fake login pages. While framed as a novel threat, it follows the same credential-theft pattern that has driven phishing for decades.
What’s new is presentation: deepfakes lower the skill and cost of making authority-themed scams look professional, and humans are poor at reliably spotting high-quality fakes. YouTube has clarified it will not use private videos for official messages and has issued guidance on verification, showing platforms are building clearer guardrails.
The long-term trend is more polished deception alongside stronger rules, MFA, and automated detection - yet enforcement at scale and public education remain unresolved.
Your Reality Check:
When headlines focus on flashy AI and deepfakes, they miss the real story: the scam itself hasn’t changed. The goal is still the same - create urgency, make you click, log in, or move money fast. The best defenses haven’t changed either. Treat urgency as a warning sign and verify claims through a channel you control.
💸 The $25 Million Video Call: When Your CFO Isn’t Your CFO
Coverage of the British engineering conglomerate Arup incident highlights its scale and drama: a Hong Kong finance employee joined a video call with an AI-generated “CFO” and colleagues and sent $25 million in 15 transfers, now labeled one of the largest documented deepfake scams. That framing is accurate but incomplete - millions of legitimate corporate payments and video calls occur daily, and most fraud still relies on simpler email-based schemes.
Surveys do show rapid growth in deepfake attempts, with many firms reporting incidents by 2024, yet counterexamples exist: a similar attack on WPP failed after staff escalated concerns. The pattern resembles an arms race, not a collapse of trust. Basic controls - callbacks, dual approvals, payment holds - can sharply reduce losses. The open question is how quickly such safeguards will reach mid-size firms, where future damage may concentrate.
Your Reality Check:
High-dollar headlines grab attention, but they rarely tell the whole story. The real question is always: “Out of how many?” In cases like Arup’s, the lesson isn’t that every video call is fake - it’s that without tedious but vital controls like multi-person approvals and out-of-band checks, companies rely heavily on employees never slipping up. Ignore the shock figure. Watch which systems improve afterward.
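Those "tedious but vital controls" have a simple logic worth seeing spelled out. Here is a minimal sketch of a dual-approval rule combined with an out-of-band check; all class and field names are illustrative assumptions, not any real payment system's API.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """Hypothetical payment request (illustrative only, not a real system)."""
    amount: float
    payee: str
    approvers: set = field(default_factory=set)
    verified_out_of_band: bool = False  # e.g. a callback to a number on file

    def approve(self, employee_id: str) -> None:
        # A set means repeat sign-offs by the same person don't count twice.
        self.approvers.add(employee_id)

    def can_release(self) -> bool:
        # Two *distinct* people must approve, AND the request must be
        # confirmed through a channel the requester does not control
        # (the callback, not the video call that made the request).
        return len(self.approvers) >= 2 and self.verified_out_of_band

req = PaymentRequest(amount=25_000_000, payee="vendor-x")
req.approve("alice")
print(req.can_release())  # False: one approver, no callback yet
req.approve("alice")      # same person again - still one approver
print(req.can_release())  # False
req.approve("bob")
req.verified_out_of_band = True
print(req.can_release())  # True: two people plus an out-of-band check
```

The point of the sketch is that no step depends on a human spotting the fake - even a perfect impersonation stalls at the callback.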
🏛️ Fake Diplomats on Zoom: Deepfakes Enter High Politics
The Ben Cardin incident is often cited as a warning of a coming deepfake-driven “information apocalypse.” An email appearing to come from former Ukrainian foreign minister Dmytro Kuleba led to a Zoom call with a convincing impersonator who asked about missile strikes and U.S. elections, triggering suspicion and an FBI inquiry. The case shows that politically motivated actors are testing targeted, high-level deepfakes. Yet the call was cut short, alerts were issued, and no policy changed.
Studies suggest AI misinformation is widespread but has had limited, unclear impact so far. At the same time, regulations and institutional safeguards are emerging, even as the long-term threat remains uncertain.
Your Reality Check:
Deepfakes in politics aim to hijack attention as much as elections. When coverage fixates on how realistic a fake looks, it suggests democracy is already being quietly steered. A better lens is to separate capability from impact: Did it change votes, policy, or trust? Were alarms triggered and responses effective? That perspective keeps fear in check while still sharpening vigilance and pushing for stronger safeguards.
