{
	"id": "0ac8efc5-3199-49a1-8998-3d9c8997ff1d",
	"created_at": "2026-04-06T00:14:26.935833Z",
	"updated_at": "2026-04-10T03:21:05.303418Z",
	"deleted_at": null,
	"sha1_hash": "97143798be820fc1acea7c1d42d9ea9ac94b9e42",
	"title": "Disrupting a covert Iranian influence operation",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 34078,
	"plain_text": "Disrupting a covert Iranian influence operation\r\nArchived: 2026-04-05 17:43:26 UTC\r\nOpenAI is committed to preventing abuse and improving transparency around AI-generated content. This includes\r\nour work to detect and stop covert influence operations (IO), which try to manipulate public opinion or influence\r\npolitical outcomes while hiding the true identity or intentions of the actors behind them. This is especially\r\nimportant in the context of the many elections being held in 2024. We have expanded our work in this area\r\nthroughout the year, including by leveraging our own AI models to better detect and understand abuse. \r\nThis week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert\r\nIranian influence operation identified as Storm-2035(opens in a new window). We have banned these accounts\r\nfrom using our services, and we continue to monitor for any further attempts to violate our policies. The operation\r\nused ChatGPT to generate content focused on a number of topics—including commentary on candidates on both\r\nsides in the U.S. presidential election – which it then shared via social media accounts and websites. \r\nSimilar to the covert influence operations we reported in May, this operation does not appear to have achieved\r\nmeaningful audience engagement. The majority of social media posts that we identified received few or no likes,\r\nshares, or comments. We similarly did not find indications of the web articles being shared across social media.\r\nUsing Brookings’ Breakout Scale(opens in a new window), which assesses the impact of covert IO on a scale\r\nfrom 1 (lowest) to 6 (highest), this operation was at the low end of Category 2 (activity on multiple platforms, but\r\nno evidence that real people picked up or widely shared their content). 
Our investigation benefited from\r\ninformation about the operation published by Microsoft last week. \r\nIt revealed that this operation used ChatGPT for two purposes: generating long-form articles and\r\nshorter social media comments. The first workstream produced articles on U.S. politics and global events,\r\npublished on five websites that posed as both progressive and conservative news outlets. The second workstream\r\ncreated short comments in English and Spanish, which were posted on social media. We identified a dozen\r\naccounts on X and one on Instagram involved in this operation. Some of the X accounts posed as progressives,\r\nand others as conservatives. They generated some of these comments by asking our models to rewrite comments\r\nposted by other social media users.\r\nThe operation generated content about several topics: mainly, the conflict in Gaza, Israel’s presence at the\r\nOlympic Games, and the U.S. presidential election—and to a lesser extent politics in Venezuela, the rights of\r\nLatinx communities in the U.S. (both in Spanish and English), and Scottish independence. It interspersed this\r\npolitical content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to\r\nbuild a following.\r\nNotwithstanding the lack of meaningful audience engagement resulting from this operation, we take seriously any\r\nefforts to use our services in foreign influence operations. Accordingly, after removing the accounts from our\r\nservices, we shared threat intelligence with government, campaign, and industry stakeholders as part of our work\r\nto support the wider community in disrupting this activity. 
OpenAI remains dedicated to uncovering and\r\nmitigating this type of abuse at scale by partnering with industry, civil society, and government, and by harnessing\r\nthe power of generative AI to be a force multiplier in our work. We will continue to publish findings like these to\r\npromote information-sharing and best practices.\r\nSource: https://openai.com/index/disrupting-a-covert-iranian-influence-operation/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"ETDA"
	],
	"references": [
		"https://openai.com/index/disrupting-a-covert-iranian-influence-operation/"
	],
	"report_names": [
		"disrupting-a-covert-iranian-influence-operation"
	],
	"threat_actors": [],
	"ts_created_at": 1775434466,
	"ts_updated_at": 1775791265,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/97143798be820fc1acea7c1d42d9ea9ac94b9e42.pdf",
		"text": "https://archive.orkl.eu/97143798be820fc1acea7c1d42d9ea9ac94b9e42.txt",
		"img": "https://archive.orkl.eu/97143798be820fc1acea7c1d42d9ea9ac94b9e42.jpg"
	}
}