{
	"id": "7c35793d-4e3d-4104-930d-ebc9465a0358",
	"created_at": "2026-04-06T00:11:26.712907Z",
	"updated_at": "2026-04-10T03:31:39.419586Z",
	"deleted_at": null,
	"sha1_hash": "6a7651eab34cee2cc8375f453a6e23c49b92a1bc",
	"title": "Disrupting a Global Cybercrime Network Abusing Generative AI",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 779026,
	"plain_text": "Disrupting a Global Cybercrime Network Abusing Generative AI\r\nBy Steven Masada\r\nPublished: 2025-02-27 · Archived: 2026-04-05 18:24:10 UTC\r\nIn an amended complaint to recent civil litigation, Microsoft is naming the primary developers of malicious tools designed to bypass the guardrails of generative AI services, including Microsoft’s Azure OpenAI Service. We are pursuing this legal action now against identified defendants to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology.\r\nThe individuals named are: (1) Arian Yadegarnia aka “Fiz” of Iran, (2) Alan Krysiak aka “Drago” of the United Kingdom, (3) Ricky Yuen aka “cg-dot” of Hong Kong, China, and (4) Phát Phùng Tấn aka “Asakuri” of Vietnam. These actors are at the center of a global cybercrime network Microsoft tracks as Storm-2139. Members of Storm-2139 exploited exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services. They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content.\r\nThis activity is prohibited under the terms of use for our generative AI services and required deliberate efforts to bypass our safeguards. 
We are not naming specific celebrities to keep their identities private and have excluded synthetic imagery and prompts from our filings to prevent the further circulation of harmful content.\r\nStorm-2139: A global network of creators, providers, and end users.\r\nIn December 2024, Microsoft’s Digital Crimes Unit (DCU) filed a lawsuit in the Eastern District of Virginia alleging various causes of action against 10 unidentified “John Does” participating in activities that violate U.S. law and Microsoft’s Acceptable Use Policy and Code of Conduct. Through this initial filing, we were able to gather more information about the operations of the criminal enterprise.\r\nStorm-2139 is organized into three main categories: creators, providers, and users. Creators developed the illicit tools that enabled the abuse of generative AI services. Providers then modified and supplied these tools to end users, often with varying tiers of service and payment. Finally, users used these tools to generate violating synthetic content, often centered around celebrities and sexual imagery.\r\nBelow is a visual representation of Storm-2139, which displays internet aliases uncovered as part of our investigation as well as the countries in which we believe the associated personas are located.\r\nhttps://blogs.microsoft.com/on-the-issues/2025/02/27/disrupting-cybercrime-abusing-gen-ai/\r\nStorm-2139’s organizational structure.\r\nScreenshot of “Fiz’s” LinkedIn profile\r\nThrough its ongoing investigation, Microsoft has identified several of the above-listed personas, including, but not limited to, the four named defendants. While we have identified two actors located in the United States—specifically, in Illinois and Florida—those identities remain undisclosed to avoid interfering with potential criminal investigations. Microsoft is preparing criminal referrals to United States and foreign law enforcement representatives. 
\r\nCybercriminals react to Microsoft’s website seizure and court filing.\r\nAs part of our initial filing, the Court issued a temporary restraining order and preliminary injunction enabling Microsoft to seize a website instrumental to the criminal operation, effectively disrupting the group’s ability to operationalize their services. The seizure of this website and the subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another. We observed chatter about the lawsuit on the group’s monitored communication channels, speculating on the identities of the “John Does” and potential consequences.\r\nScreenshot of online chatter discussing “Fiz’s” real name.\r\nIn these channels, certain members also “doxed” Microsoft’s counsel of record, posting their names. Doxing can result in real-world harm, ranging from identity theft to harassment.\r\nScreenshot of a post on online channels providing information about the case lawyers.\r\nAs a result, Microsoft’s counsel received a variety of emails, including several from suspected members of Storm-2139 attempting to cast blame on other members of the operation.\r\nScreenshots of emails received by counsel of record.\r\nThis reaction underscores the impact of Microsoft’s legal actions and demonstrates how these measures can effectively disrupt a cybercriminal network by seizing infrastructure and creating a powerful deterrent among its members. 
\r\nContinuing our commitment to combatting the abuse of generative AI.\r\nWe take the misuse of AI very seriously, recognizing the serious and lasting impact of abusive imagery on victims. Microsoft remains committed to protecting users by embedding robust AI guardrails and safeguarding our services from illegal and harmful content. Last year, we committed to continuing to innovate on new ways to keep users safe by outlining a comprehensive approach to combatting abusive AI-generated content. We published a whitepaper with recommendations for U.S. policymakers on modernizing criminal law to equip law enforcement with the tools necessary to bring bad actors to justice. We also provided an update on our approach to intimate image abuse, detailing the steps we take to protect our services from such harm, whether synthetic or otherwise.\r\nAs we’ve said before, no disruption is complete in one day. Going after malicious actors requires persistence and ongoing vigilance. By unmasking these individuals and shining a light on their malicious activities, Microsoft aims to set a precedent in the fight against AI technology misuse.\r\nTags: AI, cybercrime, Microsoft Azure OpenAI Service, Microsoft Digital Crimes Unit, Responsible AI, The Digital Crimes Unit\r\nSource: https://blogs.microsoft.com/on-the-issues/2025/02/27/disrupting-cybercrime-abusing-gen-ai/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MISPGALAXY",
		"Malpedia"
	],
	"references": [
		"https://blogs.microsoft.com/on-the-issues/2025/02/27/disrupting-cybercrime-abusing-gen-ai/"
	],
	"report_names": [
		"disrupting-cybercrime-abusing-gen-ai"
	],
	"threat_actors": [
		{
			"id": "578ac880-0348-4fe4-aa0a-0dbd44070be6",
			"created_at": "2025-03-07T02:00:03.795776Z",
			"updated_at": "2026-04-10T02:00:03.820825Z",
			"deleted_at": null,
			"main_name": "Storm-2139",
			"aliases": [],
			"source_name": "MISPGALAXY:Storm-2139",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		}
	],
	"ts_created_at": 1775434286,
	"ts_updated_at": 1775791899,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/6a7651eab34cee2cc8375f453a6e23c49b92a1bc.pdf",
		"text": "https://archive.orkl.eu/6a7651eab34cee2cc8375f453a6e23c49b92a1bc.txt",
		"img": "https://archive.orkl.eu/6a7651eab34cee2cc8375f453a6e23c49b92a1bc.jpg"
	}
}