{
	"id": "e1d0f68c-b622-4a52-8ca5-c679fdc62375",
	"created_at": "2026-04-29T02:20:45.41461Z",
	"updated_at": "2026-04-29T08:21:15.841124Z",
	"deleted_at": null,
	"sha1_hash": "061730a63c9a698587009486cb44b05dc5f5ffa5",
	"title": "Generative AI Phishing: How to Defend in 2025",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 1092018,
	"plain_text": "Generative AI Phishing: How to Defend in 2025\r\nBy Adaptive Team\r\nPublished: 2025-08-29 · Archived: 2026-04-29 02:03:15 UTC\r\nYou receive an email from your CFO requesting your approval for a payment. Minutes later, your phone rings. It’s\r\ntheir voice, urgent and familiar, telling you to move fast. Soon, you’re on a Zoom call where the CFO and two\r\ncolleagues nod in agreement as they confirm the request.\r\nTurns out, none of it is real. Every email, voice, and video is the work of generative AI phishing. This is the new\r\nreality of phishing, where hackers now rely on AI technology to carry out cybercrime.\r\nUnfortunately, legacy training built around static email templates and annual refreshers simply can’t keep pace. AI\r\nis making attacks faster, more convincing, and harder to detect. Security leaders need a new playbook and tools to\r\navoid feeling helpless against these new AI phishing tactics.\r\nIn this article, we’ll review how generative AI phishing works, why traditional defenses fall short, and what\r\nmodern training approaches can protect your teams.\r\nWhat is AI Phishing? Defining the Threat\r\nAI phishing (or generative AI phishing) refers to cyberattacks that exploit generative artificial intelligence, such as\r\nlanguage models, voice and video synthesis, or autonomous agent frameworks, to craft and distribute phishing\r\ncampaigns.\r\nIn simple terms, it’s phishing supercharged by artificial intelligence. Instead of copy-paste scams full of spelling\r\nmistakes, AI tools can now:\r\nWrite polished emails that look like they came from your boss, HR, or a trusted vendor.\r\nClone someone’s voice to make a phone call “from the CEO” asking for an urgent payment sound real, a\r\npart of the growing wave of AI vishing and voice spoofing attacks.\r\nGenerate fake videos where an executive appears to give instructions on a Zoom call.\r\nBecause AI can do all this quickly and at scale, attackers no longer have to spend days crafting one convincing\r\nscam. They can generate thousands of unique, personalized ones in minutes.\r\nThis means instead of a generic “Dear customer” email, you might get one addressed directly to you, mentioning a\r\nreal client meeting or your company’s current initiative. That level of personalization makes the scam feel\r\nlegitimate and much harder to ignore.\r\nAnd they work. In one study, AI-generated phishing emails tricked 54% of participants into clicking. That’s the\r\nsame success rate as expert-crafted scams and over 3x higher than generic phishing attempts.\r\nInside a Modern AI Phishing Attack\r\nhttps://www.adaptivesecurity.com/blog/ai-phishing\r\nPage 1 of 9\n\nToday’s attacks aren’t clumsy “Nigerian prince” emails. They’re coordinated cons across multiple channels.\r\nHere’s how it typically unfolds:\r\nStep 1. Scouting the victim\r\nAI can quickly scrape your LinkedIn profile, company site, press releases, or social media activity to learn your\r\nboss’s name, your job title, and even the project you’re working on. \r\nThat makes the “fake” message appear as if it came from someone who knows you and wants to conduct business\r\nwith you.\r\nIn a recent study of AI-driven spear phishing, automated systems were able to build personalized vulnerability\r\nprofiles with 88% accuracy. In other words, the AI correctly gathered and used relevant details about its targets\r\nnearly nine times out of ten. \r\nStep 2. 
Step 2. The first hook: A believable email\r\n
Armed with these details, AI can now generate a convincing spear-phishing email. The email reads like something straight from your boss: “Hi Sarah, per John’s note on the Q3 vendor review, can you process the attached invoice today?”\r\n
It’s dangerous because it references real names or projects, sidestepping the usual red flags employees are trained to watch for.\r\n
Step 3. Turning up the pressure with a phone call\r\n
If the email fails, attackers have more tricks up their sleeves. Minutes later, you might get a phone call from a voice that sounds exactly like your CFO, urging you to approve the payment right away. But it’s not them; it’s an AI voice clone scam.\r\n
This isn’t science fiction. Researchers at an AI company in Toronto released a demo, “RealTalk: We Recreated Joe Rogan's Voice Using Artificial Intelligence,” in which they cloned podcaster Joe Rogan’s voice so convincingly that listeners could barely tell the difference.\r\n
AI has already made it possible to replicate tone, accent, and inflection, and scammers are putting it to use.\r\n
In one of the first documented AI voice deepfake scams, the CEO of a U.K. energy firm got a call from what sounded like his German boss. Following the urgent instructions, he transferred €220,000 (around $243,000) to what he thought was a supplier’s account.\r\n
This shows how a convincing voice can override skepticism and push victims into costly mistakes.\r\n
Step 4. Closing the deal with a fake video call\r\n
For high-stakes scams, attackers can even stage a deepfake video meeting. In 2024, an employee in Hong Kong joined a Zoom call with what looked like several senior colleagues, including the CFO. They were all fakes. Believing the instructions were genuine, he authorized a transfer of $25 million.\r\n
The scam succeeded because the video seemed to be proof. Seeing familiar faces nod in agreement can override any remaining doubts, even when the instructions seem unusual.\r\n
Experience the Adaptive platform\r\n
Take a free self-guided tour of the Adaptive platform and explore the future of security awareness training.\r\n
Where Traditional Security Awareness Training Misses the Mark\r\n
Most security awareness training still looks the same as it did a decade ago. It worked when phishing meant clumsy mass emails, but it doesn’t prepare people for AI-driven attacks that now use generative text and deepfakes.\r\n
Traditional programs fall short because:\r\n
Training cadence is too slow. Annual or quarterly modules can’t keep pace with phishing techniques that evolve monthly. By the time an employee reaches their next training cycle, new AI tactics, like deepfake phone calls, may already be circulating.\r\n
Over-reliance on static templates. Legacy training often uses generic “bank alert” or “password reset” emails as practice. These are easy to spot and give employees a false sense of confidence. Researchers found that hyper-personalized spear phishing emails were far more effective, especially when they included personal or company details.\r\n
Inability to simulate emerging techniques. Most awareness programs focus only on email. But real attacks now target individuals through multiple channels, including email, phone calls using an AI-cloned voice, or even fake video calls where attackers pose as executives.\r\n
Training programs need to evolve with the threat and shift from static templates to modern tools built specifically for AI phishing and deepfakes.\r\n
Platforms like Adaptive Security go beyond static templates by simulating deepfake audio, synthetic video, and AI-crafted spear phishing emails. Instead of theory, employees get hands-on practice handling these threats in a safe, realistic environment. When a real attempt occurs, they’re well prepared to deal with it.\r\n
How to Detect AI-Generated Phishing Attempts\r\n
AI-generated phishing is designed to look flawless. You won’t find broken English, obvious typos, or “Nigerian prince” giveaways. But there are still signs to watch for.\r\n
#1. Watch for unnatural timing or language\r\n
AI can generate convincing text, but it doesn’t always understand human context. That means messages sometimes arrive or are worded in ways that don’t quite fit.\r\n
Here are two dead giveaways to look out for:\r\n
Odd Timing: A “request from finance” might show up at 3:12 a.m. local time, even though your CFO never emails at that hour. Attackers often forget to match time zones when scheduling mass AI-driven sends.\r\n
Tone Mismatch: A message that’s grammatically perfect but too formal or too brief compared to the sender’s usual style.\r\n
#2. Validate voice and video with known protocols\r\n
Deepfake voicemails and video calls are among the hardest scams to detect because our first instinct is to trust what we can clearly see and hear.\r\n
It’s natural for these deepfake phishing techniques to override judgment, as we’ve already seen in real cases, from the U.K. energy executive duped by a cloned CEO’s voice to the Hong Kong employee tricked by a deepfake Zoom call.\r\n
So, how do you defend against something that feels real? The answer isn’t to rely on gut instinct alone, but to build verification protocols:\r\n
Confirm via a second channel: Verify high-risk requests (money transfers, credential resets, contract approvals) through a second, trusted channel.\r\n
Pause and verify: Encourage employees to pause, verify, and escalate suspicious requests, even when the request appears urgent.\r\n
Pro tip: Telling your employees about these risks isn’t enough. Make sure you let them experience the risks as well, safely.\r\n
Platforms like Adaptive Security offer training simulations that include deepfake audio and video scenarios. Employees hear a cloned voice or see a fake video message in a controlled environment, then practice applying the right verification.\r\n
This kind of hands-on exposure makes your team far more likely to pause and verify when the real thing happens.\r\n
Simulated phone phishing scenario where an attacker poses as AWS support (Source)\r\n
#3. Look for over-personalization\r\n
AI-driven phishing is infamous for being polished. Attackers use tools that scrape LinkedIn, company bios, and even leaked data to add personal touches that make emails seem authentic. But that very specificity is often the giveaway.\r\n
Imagine receiving an email like this:\r\n
“Hi Amanda, I saw your panel at RSA 2024 in San Francisco on May 7th about cloud security trends—great talk on cloud security. Can you forward the updated vendor contract for Acme Inc.?”\r\n
On the surface, it looks credible. But why would a genuine colleague recap information you both already know? This kind of unnecessary detail often indicates that AI has stitched together scraped data to make the message sound “authentic.”\r\n
Security teams at companies like Beazley and eBay have warned of exactly this trend, reporting a rising number of AI-generated phishing emails loaded with personalized details drawn from public profiles and online footprints.\r\n
So how do you differentiate between a genuine message and one that’s over-engineered by AI? Here are some red flags to watch out for:\r\n
Too Much Detail: Mentions of your specific role, projects, or public events that feel rehearsed.\r\n
Context Feels Forced: The tone doesn’t match how that person would normally write to you.\r\n
Validation Phrases: Lines like “Just to confirm…” or “As you might remember…” that feel engineered to build trust.\r\n
Whenever a message seems to be trying too hard to prove it “knows” you, verify through a second channel, such as a quick internal call. Watch for these same red flags on other messaging platforms as well, including Slack and calendar invites.\r\n
#4. Use anomaly detection tools\r\n
Even vigilant employees can miss a well-crafted phishing attempt. That’s why relying only on static filters (blocking known domains or keywords) isn’t a foolproof way to stop AI phishing attempts.\r\n
The smarter approach is to use anomaly detection, which builds a baseline of what “normal” looks like in your organization and flags behavior that falls outside those patterns.\r\n
For example, if your CFO usually logs in from New York during business hours, but suddenly there’s a login from Eastern Europe at midnight followed by an urgent wire request, anomaly detection will flag it.\r\n
Pro tip: Pair anomaly detection tools that spot unusual patterns with training tools. For example, Microsoft Defender or Google Workspace can flag when a login comes from an unexpected location, or when an email reads differently from how the sender usually writes.\r\n
An alert on its own doesn’t guarantee someone will react correctly, however. Your team still needs to know what to do in the moment. That’s where training platforms like Adaptive Security help.\r\n
Adaptive Security lets your team practice with simulated deepfake calls and AI-crafted emails, so when a real alert comes in, the scenario feels familiar and they know precisely how to protect themselves in that moment.\r\n
Is your business protected against deepfake attacks?\r\n
Demo the Adaptive Security platform and discover deepfake training and phishing simulations.\r\n
Why Adaptive Security is the Leading Defense Platform Against AI Phishing\r\n
Generative AI phishing scams are no longer clumsy attempts asking people to send money in exchange for a fake million-dollar payout. Today’s generative AI scams involve multi-channel attacks delivered through email, phone, or even fake video calls.\r\n
That’s where next-generation security awareness training tools from Adaptive Security help. Built specifically for AI phishing, Adaptive simulates the tactics attackers now use, including deepfake voicemail requests and AI-crafted spear phishing emails.\r\n
Training scenarios are role-based and context-specific, so a finance team might see invoice fraud attempts while IT staff might face credential harvesting lures. This realism lets employees practice responding under conditions that feel genuine.\r\n
This is why forward-thinking organizations are already moving away from legacy platforms like KnowBe4 and Proofpoint. Instead, they’re using Adaptive Security to give their teams experience with new AI-generated cyber threats. The result is staff who don’t freeze or fall for over-personalized details, but verify and respond correctly.\r\n
Adaptive Security AI voice phishing training module (Source)\r\n
Ready to see how your company can deal with evolving threats in real time? Request a demo and experience how Adaptive prepares your teams for the phishing threats of the AI era.\r\n
Frequently Asked Questions: AI Phishing\r\n
How can I tell if a phishing email is AI-generated?\r\n
Look for overly polished, hyper-personalized details that feel unnecessary, like references to a recent conference talk or your exact job title. Tone that’s too formal or phrasing that sounds rehearsed is another red flag.\r\n
What role do deepfakes play in phishing?\r\n
Deepfakes make phishing more convincing by exploiting trust in familiar voices and faces. Attackers can clone a CEO’s voice to request a wire transfer or use synthetic video to impersonate leaders on a call. To counter that, always confirm high-risk requests via a trusted second channel.\r\n
Is security awareness training actually effective against AI phishing attacks?\r\n
Yes, but only when it evolves alongside the threats. Modern platforms like Adaptive Security simulate AI-driven threats so employees practice responding under realistic conditions. That experience makes all the difference.\r\n
The Adaptive Security Team shares its expertise in cybersecurity insights and AI threat analysis with organizations.\r\n
Source: https://www.adaptivesecurity.com/blog/ai-phishing",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"origins": [
		"web"
	],
	"references": [
		"https://www.adaptivesecurity.com/blog/ai-phishing"
	],
	"report_names": [
		"ai-phishing"
	],
	"threat_actors": [
		{
			"id": "08c8f238-1df5-4e75-b4d8-276ebead502d",
			"created_at": "2023-01-06T13:46:39.344081Z",
			"updated_at": "2026-04-29T06:58:56.521699Z",
			"deleted_at": null,
			"main_name": "Copy-Paste",
			"aliases": [],
			"source_name": "MISPGALAXY:Copy-Paste",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		}
	],
	"ts_created_at": 1777429245,
	"ts_updated_at": 1777450875,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/061730a63c9a698587009486cb44b05dc5f5ffa5.pdf",
		"text": "https://archive.orkl.eu/061730a63c9a698587009486cb44b05dc5f5ffa5.txt",
		"img": "https://archive.orkl.eu/061730a63c9a698587009486cb44b05dc5f5ffa5.jpg"
	}
}