{
	"id": "f1a7d65f-e961-406b-b93e-273aefc9ed42",
	"created_at": "2026-04-06T00:21:05.087309Z",
	"updated_at": "2026-04-10T03:20:32.13545Z",
	"deleted_at": null,
	"sha1_hash": "998d58b5124a816b5eae7f892bd6677408972cb1",
	"title": "New AMOS Infection Vector Highlights Risks around AI Adoption",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 967643,
	"plain_text": "New AMOS Infection Vector Highlights Risks around AI Adoption\r\nBy Ed Currie, Mikesh Nagar\r\nPublished: 2025-12-08 · Archived: 2026-04-05 16:39:31 UTC\r\nThis article was authored by Mikesh Nagar, Dave Waugh and Alessio Ragazzi of Kroll’s Threat Intelligence Team.\r\nKey Takeaways\r\nInvestigation into AMOS InfoStealer reveals initial infection source being ChatGPT\r\nVictims were tricked into believing they were running a command to fix a sound issue on their Mac device\r\nNew AMOS InfoStealer delivery vector highlights risk of growing trust placed in artificial intelligence (AI)\r\nDuring a recent investigation into AMOS InfoStealer, Kroll SOC along with Kroll Threat Intelligence Team have\r\ndiscovered a troubling new delivery vector that leverages the growing trust users place in AI tools. In this case,\r\nattackers leveraged ChatGPT as the source of guidance, tricking victims into initiating the infection, presenting it\r\nas a legitimate solution to a common technical problem. Victims were tricked into believing they were running a\r\nharmless command to fix a sound issue on their Mac device.\r\nWhat appeared to be a simple troubleshooting step was, in reality, a malicious command that installed AMOS\r\nInfoStealer. Once executed, the malware successfully exfiltrated sensitive data from the system. This tactic\r\nhighlights how attackers are increasingly exploiting the credibility of widely recognized platforms and tools to\r\nlower user suspicion and increase infection rates.\r\nBy framing the attack around a trusted AI brand and a relatable technical annoyance, the threat actors created a\r\nconvincing lure that bypassed the skepticism many users might have toward traditional phishing attempts.\r\nThe result was a seamless compromise that blended social engineering with technical exploitation, underscoring\r\nthe importance of user awareness and proactive security controls.\r\nFigure 1: Google Chrome Browsing History extract from infected Mac device \r\nFrom the Google Chrome Browsing History of the device, Kroll Threat Intelligence Team observed that the user\r\naccessed what appeared to be a legitimate ChatGPT session. The attackers cleverly framed the instruction as a\r\ntroubleshooting step, exploiting the user’s trust in both the ChatGPT brand and the plausibility of a common\r\ntechnical glitch.\r\nhttps://www.kroll.com/en/publications/cyber/new-amos-infection-vector-highlights-risks-around-ai-adoption\r\nPage 1 of 6\n\nFigure 2: ChatGPT Instructions shown to the user\r\nThe command line provided is an Indicator of Compromise (IOC) of AMOS InfoStealer. Attackers delivered this\r\ncommand to victims by instructing them to copy and paste it directly into the macOS terminal. Once executed, the\r\ncommand initiates the download of a malicious script, which is then used to install AMOS InfoStealer on the\r\nsystem. 
MITRE ATT\u0026CK IDs relevant to the infection vector (ChatGPT):\r\nUser Execution (T1204)\r\nMalicious File (T1204.002)\r\nApplies when a user is tricked into downloading and running a malicious file.\r\nMalicious Command (T1204.003)\r\nApplies when a user is convinced to execute a malicious command (e.g., via terminal or script).\r\nPhishing (T1566)\r\nApplies if the AI chat interaction is considered a social engineering vector (similar to phishing).\r\nApplication Layer Protocol (T1071)\r\nApplies if the malware is downloaded over HTTP/HTTPS or another application protocol.\r\nIngress Tool Transfer (T1105)\r\nCovers transferring tools or malware from an external system to the victim machine.\r\nCommand and Scripting Interpreter (T1059)\r\nApplies if the malicious command uses a shell (e.g., Bash, PowerShell) or scripting language.\r\nQuestioning ChatGPT on the Output\r\nOn further investigation, ChatGPT’s response when questioned about why it delivered the malicious output is notable:\r\nFigure 3: ChatGPT Conversation\r\nFigure 4: ChatGPT Conversation\r\nWhen directly asked why the response was given, ChatGPT said that it would not, under normal circumstances, deliver that type of response.\r\nThreat Actors’ Use of Google Ads\r\nIn recent years, threat actors have increasingly used Google Ads to conduct malvertising and phishing campaigns, and this is also the case for ChatGPT as an infection vector.\r\nDuring the investigation, it was discovered that attackers are abusing Google Ads to display the malicious ChatGPT chat at the top of search results. The use of the legitimate ChatGPT domain, in contrast to the typo-squatted or newly crafted domains observed in previous cases, makes it more difficult for users to detect malicious intent.\r\nFigure 5: Google Ad for ChatGPT chat\r\nRecreating the Lure\r\nThe Kroll Threat Intelligence Team attempted to recreate the chat instructions. It queried ChatGPT for the original prompt used to create the chat, which came back as: “Follow this method to get your Mac sound working again”.\r\nFigure 6: Response for original ChatGPT prompt\r\nWhen the Kroll Threat Intelligence Team attempted to replicate the prompt used by the threat actors, the results were very different from those the attackers had achieved. Instead of producing the malicious output that led to infection, ChatGPT responded with a safeguarding message designed to prevent harmful or unsafe instructions from being executed. This protective behavior is part of the platform’s built-in safety mechanisms, which are intended to stop users from being tricked into running dangerous commands.\r\nThe contrast between the attacker’s reported experience and our own highlights an important point: threat actors often manipulate context, presentation or even spoofed interfaces to bypass user scepticism.
In this case, although the malicious instructions were generated by ChatGPT itself, the threat actor had bypassed the guardrails of the AI agent. By presenting the command and instructions as though they came from a trusted AI assistant, the attackers lowered the victim’s guard and increased the likelihood of execution.\r\nLessons Learned\r\nThe AMOS InfoStealer case highlights several important takeaways for both defenders and everyday users.\r\nAttackers are increasingly exploiting the credibility of trusted brands, in this instance ChatGPT, to make malicious instructions appear legitimate. Social engineering continues to be highly effective, with simple lures such as fixing a sound issue convincing users to run dangerous commands.\r\nMalware delivery is also becoming more subtle, moving away from obvious phishing attachments and instead embedding malicious instructions into routine troubleshooting scenarios.\r\nWhile legitimate AI platforms enforce safeguards to block unsafe outputs, attackers may spoof or imitate these environments to bypass protections. Finally, user behaviour remains the most critical factor, as technical defences cannot always prevent a user from executing harmful commands if they believe the source is trustworthy.\r\nSignificance of AI in Corporate Environments\r\nThe significance of this attack is heightened by the prevalence of ChatGPT in corporate environments. Some interesting statistics include:\r\nCorporate Adoption\r\nForty-nine percent of companies are already using ChatGPT, and 93% of those plan to expand usage.\r\nOver 80% of Fortune 500 companies have integrated ChatGPT into workflows within nine months of its launch.\r\nEmployee Usage\r\nThirty-six percent of workers use ChatGPT at least monthly for work tasks; 22% use it daily.\r\nSurveys show 43% of employees have used ChatGPT for work-related tasks, including writing, debugging, and troubleshooting issues.\r\nIn the U.S., 28% of employed adults reported using ChatGPT for work activities as of March 2025.\r\nThis data highlights a significant concern because nearly half of businesses and a large portion of employees are actively using AI platforms like ChatGPT for work-related tasks, including technical troubleshooting. If such platforms were compromised or intentionally provided malicious commands, the impact could be widespread and severe.\r\nThe trust users place in these tools means they are likely to follow instructions without verifying them, creating an ideal vector for social engineering attacks. With adoption rates this high, a single malicious prompt could lead to mass malware infections, data breaches and operational disruptions across corporate environments.\r\nThis risk underscores the need for strict governance, user training and monitoring when integrating AI into critical workflows.\r\nRecommendations\r\nProvide training for staff to identify suspicious prompts and to avoid copying commands into terminals unless they come directly from trusted vendor documentation.\r\nEncourage users to confirm fixes through official support channels rather than third-party instructions.\r\nDeploy monitoring tools to detect unusual command execution and script downloads, especially on macOS systems.\r\nIntegrate known IOCs, such as malicious command lines, into threat intelligence feeds and detection rules (a brief illustrative sketch follows this list).\r\nCombine technical safeguards with strong awareness programs to reduce the success rate of social engineering attacks.
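As a rough illustration of the monitoring and IOC recommendations above, the Python sketch below compares live process command lines against an analyst-maintained list of known-bad command lines. The file name command_iocs.txt, the use of ps, and the plain substring matching are assumptions made for illustration rather than a prescribed tooling choice; in most environments this logic would live in EDR or SIEM detection rules.

#!/usr/bin/env python3
# Illustrative sketch only: match running process command lines against an
# analyst-maintained list of known-bad command-line IOCs (one per line).
import subprocess
from pathlib import Path

IOC_FILE = Path('command_iocs.txt')  # hypothetical, analyst-curated IOC list

def load_iocs():
    if not IOC_FILE.exists():
        return []
    return [line.strip() for line in IOC_FILE.read_text().splitlines() if line.strip()]

def running_commands():
    # BSD ps on macOS: '-axo command=' prints every process command line without a header.
    result = subprocess.run(['ps', '-axo', 'command='], capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

if __name__ == '__main__':
    iocs = load_iocs()
    for command in running_commands():
        for ioc in iocs:
            if ioc.lower() in command.lower():
                print(f'[!] IOC match: {ioc} in process command line: {command}')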
Source: https://www.kroll.com/en/publications/cyber/new-amos-infection-vector-highlights-risks-around-ai-adoption",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"Malpedia"
	],
	"references": [
		"https://www.kroll.com/en/publications/cyber/new-amos-infection-vector-highlights-risks-around-ai-adoption"
	],
	"report_names": [
		"new-amos-infection-vector-highlights-risks-around-ai-adoption"
	],
	"threat_actors": [],
	"ts_created_at": 1775434865,
	"ts_updated_at": 1775791232,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/998d58b5124a816b5eae7f892bd6677408972cb1.pdf",
		"text": "https://archive.orkl.eu/998d58b5124a816b5eae7f892bd6677408972cb1.txt",
		"img": "https://archive.orkl.eu/998d58b5124a816b5eae7f892bd6677408972cb1.jpg"
	}
}