{
	"id": "b3e48124-60e1-49c9-b7b0-d88714234cbb",
	"created_at": "2026-04-06T00:11:23.432192Z",
	"updated_at": "2026-04-10T13:12:34.133909Z",
	"deleted_at": null,
	"sha1_hash": "4dee3cbe33d4d1644943798069eebc833d329dd5",
	"title": "AI-Poisoning \u0026 AMOS Stealer: The Biggest Mac Threat | Huntress",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 2465443,
	"plain_text": "AI-Poisoning \u0026 AMOS Stealer: The Biggest Mac Threat | Huntress\r\nArchived: 2026-04-05 18:51:37 UTC\r\nSummary\r\nOn December 5, 2025, Huntress triaged an Atomic macOS Stealer (AMOS) alert that initially appeared routine: data\r\nexfiltration, standard AMOS persistence, and no unusual infection chain indicators in the telemetry. We expected to find the\r\nstandard delivery vectors: a phishing link, a trojanized installer, maybe a ClickFix lure. None of those were present.\r\nThose expectations weren't arbitrary. Over the past year, macOS-stealer activity has increasingly relied on trusted workflows\r\nand social engineering rather than traditional malware downloads. One prominent example is the rise of "ClickFix" attacks,\r\nwhich exploit users' trust in seemingly harmless "prove you're human" prompts (such as CAPTCHA). Victims unknowingly\r\nexecute arbitrary commands they believe are part of a legitimate user authentication chain. While this incident is not\r\nClickFix, the historical pattern helps explain why we initially expected to find a user-executed command lure instead of a\r\nfile-based delivery vector.\r\nInstead, what we found was a simple Google search, followed by a conversation with ChatGPT:\r\nThe victim had searched "Clear disk space on macOS." Google surfaced two highly ranked results at the top of the page, one\r\ndirecting the end user to a ChatGPT conversation and the other to a Grok conversation. Both were hosted on their respective\r\nlegitimate platforms. Both conversations offered polite, step-by-step troubleshooting guidance. Both included\r\nmacOS Terminal commands presented as "safe system cleanup" instructions.\r\nThe user clicked the ChatGPT link, read through the conversation, and executed the provided command. 
They believed they\r\nwere following advice from a trusted AI assistant, delivered through a legitimate platform, surfaced by a search engine they\r\nuse every day. Instead, they had just executed a command that downloaded an AMOS stealer variant that silently harvested\r\ntheir password, escalated to root, and deployed persistent malware.\r\nNo malicious download. No security warnings. No bypassing macOS's built-in protections. Just a search, a click, and a\r\ncopy-paste that led to a full-blown, persistent data leak.\r\nThis campaign represents a fundamental evolution in social engineering: attackers are no longer just mimicking trusted\r\nplatforms; they're actively using them, poisoning search results to ensure their malicious "help" appears as the first answer\r\nvictims find. Malware no longer needs to masquerade as "clean" software when it can masquerade as help.\r\nInitial access: AI/SEO poisoning\r\nOne search to steal them all:\r\nFigure 1: Search results showing AI conversations recommending the user download the infostealer\r\nThe infection began with a search query anyone with a Mac might type: "Clear disk space on macOS." This isn't a niche\r\ntechnical query or a red-flag phrase; it's exactly what a normal user would search when their storage is full, and they're\r\nlooking for help.\r\nhttps://www.huntress.com/blog/amos-stealer-chatgpt-grok-ai-trust\r\nPage 1 of 13\n\nDuring our investigation, the Huntress team reproduced these poisoned results across multiple variations of the same\r\nquestion, "how to clear data on iMac," "clear system data on iMac," "free up storage on Mac," confirming this isn't an\r\nisolated result but a deliberate, widespread poisoning campaign targeting common troubleshooting queries.\r\nGoogle's response looked like this:\r\nFigure 2: Top search results and highly ranked links via Google Search\r\nTwo highly ranked results appeared near the top of the page:\r\nA ChatGPT conversation: "How to delete system data on Mac 
- How to clear storage on Mac?"\r\nA Grok conversation: "How to clear storage on Mac? - Guide Clear Space - Clear space safely."\r\nBoth appeared above organic results. Both pointed to what appeared to be legitimate AI-generated troubleshooting guides hosted on grok.com and chatgpt.com, domains users have been conditioned to trust.\r\nInside the poisoned conversation\r\nThe user clicked the ChatGPT link and was directed to a shared conversation that resembled a typical interaction with\r\nChatGPT. The interface was authentic because it was, in fact, a real ChatGPT conversation, hosted on OpenAI's platform,\r\ncreated by an attacker and then weaponized through SEO manipulation.\r\nFigure 3: ChatGPT conversation guiding the user to download the AMOS macOS infostealer\r\nThe conversation followed the familiar pattern of most AI conversations, and everything about this presentation screams\r\nlegitimacy:\r\nProfessional formatting: Numbered steps, emoji indicators, code blocks with syntax highlighting.\r\nReassuring language: "safely removes," "does not touch your personal data," "does not modify system settings."\r\nPlausible technical content: The use of Terminal for system maintenance isn't inherently suspicious.\r\nTrusting the familiarity of the format and the domain, the victim read through the instructions, saw nothing alarming, and\r\nexecuted the command exactly as written. 
Unfortunately, that was all it took to infect their Mac with 24/7 data-stealing\r\nmalware that persists until removed.\r\nThe deception behind the delivery\r\nThis attack works because it exploits multiple layers of trust simultaneously:\r\nSearch engine trust: Users trust search engines to surface helpful, vetted results, and results at the top carry implied\r\nendorsement.\r\nPlatform trust: The links point to chatgpt.com and grok.com, both legitimate domains users have been conditioned\r\nto trust for technical guidance.\r\nFormat trust: The conversation looks exactly like thousands of other ChatGPT interactions people see daily.\r\nNothing suspicious.\r\nContent trust: The instructions seem reasonable. Users know that Terminal commands exist for system\r\nmaintenance.\r\nBehavior trust: Users routinely copy and paste Terminal commands from trusted sources, such as Stack Overflow,\r\nApple Support forums, Reddit threads, and AI-generated conversations.\r\nBut this attack doesn't break any of these trust layers; it weaponizes them all.\r\nReplication across platforms\r\nDuring our research for this article, Huntress identified multiple variations of the same conversations and confirmed that this\r\npoisoning isn't isolated to ChatGPT. We found identical malicious instructions hosted on Grok, and various versions of the\r\nbase64-encoded URL being shared via the instructions on both sites:\r\nFigure 4: Grok-generated chat to download the AMOS stealer\r\nThe Grok conversation mirrored the ChatGPT version, with the same formatting and reassuring language; the only\r\nsignificant difference was a slightly different base64-encoded payload URL. 
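For triage, such base64 blobs can be decoded safely instead of being piped into a shell. A minimal Python sketch; the blob below is a hypothetical stand-in, not the actual payload URL:

```python
import base64

# Stand-in for the base64 blob embedded in the malicious one-liner.
# The real campaign's blob decoded to hxxps[://]putuartana[.]com/cleangpt;
# an inert example URL is used here instead.
blob = base64.b64encode(b'https://example.invalid/cleangpt').decode()

# Decode-and-inspect rather than decode-and-execute.
url = base64.b64decode(blob).decode()
print(url)  # https://example.invalid/cleangpt
```

Decoding rather than executing lets an analyst compare the payload URLs embedded in the mirrored ChatGPT and Grok conversations without detonating anything.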
The mirroring confirmed to our team that attackers are\r\nsystematically weaponizing multiple AI platforms with SEO poisoning; the campaign is not isolated to a single platform,\r\npage, or query, ensuring victims encounter poisoned instructions regardless of which tool they trust. Multiple AI-style conversations are being surfaced organically through standard search terms, each pointing victims toward the same\r\nmulti-stage macOS stealer.\r\nWhy this works so well\r\nTraditional malware delivery requires users to ignore warning signs:\r\nAllow unknown files → Override Gatekeeper to bypass native OS security\r\nInstall suspicious software → Click through security warnings\r\nGrant elevated permissions → Approve system extensions\r\nThis attack requires:\r\nSearch\r\nClick\r\nCopy-paste\r\nNo warnings. No downloads. No red flags. The entire infection chain appears to be normal and safe behavior, because it is in\r\nevery other context. Users aren't being careless. They're not ignoring security prompts. They're following instructions from a\r\ntrusted AI platform, delivered through a search engine they use daily, for a task that legitimately requires Terminal access.\r\nThis is social engineering at its best: the attack is indistinguishable from the help it impersonates.\r\nWhat happens after the copy-paste\r\nOnce the victim executed the command, a multi-stage infection chain began. 
The base64-encoded string in the Terminal\r\ncommand decoded to a URL hosting a malicious bash script, the first stage of an AMOS deployment designed to harvest\r\ncredentials, escalate privileges, and establish persistence without ever triggering a security warning.\r\nOnce the command is executed, here's how the infection unfolds.\r\nTechnical analysis\r\nIn previous articles, Trend Micro has covered the stealer itself quite extensively; however, we will provide a high-level\r\noverview of the AMOS variant used in this case.\r\nStage 1: Credential harvesting and silent privilege escalation\r\nThe command served to the victim in this case was the following, with the base64-encoded blob decoding to\r\nhxxps[://]putuartana[.]com/cleangpt (as of the release of this article, two more C2 URLs were used for delivery).\r\nThe remote loader, called update, is fetched and runs a bash script that requests the user's credentials by simply asking for\r\nthe "System Password". Once entered, the AMOS stealer looks to verify the password supplied by the victim.\r\nFigure 5: The bash script used to escalate to root\r\nThe password prompt is not an actual system dialog, does not display macOS authentication branding, and, to date, has not\r\nleveraged AppleScript.\r\nBehind the scenes, the loader silently validates the supplied password using: dscl -authonly \u003cusername\u003e \u003cpassword\u003e\r\nThe dscl command is part of a script that will curl a new copy of the update binary to the /tmp directory, remove the\r\nquarantine extended attribute (an odd addition by the attacker here, as a curl command does not apply the quarantine\r\nextended attribute), and update the permissions to allow for execution. This is not unique to this instance of AMOS.\r\nThis command performs silent credential validation with Directory Services without prompting for a graphical password. 
It\r\nconfirms whether the supplied password is correct, but does so entirely in the background: no system UI, no Touch ID\r\nfallback, no visible authentication challenge. Once a valid password is supplied, the script writes it in plaintext to a hidden\r\nfile in the /tmp directory, called /tmp/.pass. When the malware moves to the next stage, it moves this file to the user's home\r\ndirectory.\r\nThe confirmed credentials are then immediately weaponized. The loader pipes the stored password into sudo -S, which\r\naccepts passwords via stdin rather than requiring interactive entry: cat /tmp/.pass | sudo -S \u003cprivileged_command\u003e\r\nThis allows the attacker to execute subsequent commands with root privileges without requiring further user interaction.\r\nFrom the victim's perspective, they entered their password once for what seemed like a system maintenance task, but behind\r\nthe scenes, that credential has enabled complete administrative control of the endpoint.\r\nOnce it has downloaded the update binary to /tmp, it removes the quarantine extended attribute from the binary (which\r\nApple applies to newly downloaded content) and sets the permissions to allow the binary to execute.\r\nBefore executing the next stage, it runs anti-VM logic to verify that it is not running inside a virtual machine.\r\nFigure 6: Anti-VM logic to ensure the malware is not running inside a virtual machine\r\nStage 2: Loader and payload deployment\r\nWith authorized credential access secured, the AppleScript loader proceeds to download and install the core stealer payload.\r\nThe payload, an ad hoc signed Mach-O executable, is copied to a hidden location in the user's home directory:\r\n/Users/$USER/.helper\r\nThe filename .helper is deliberately generic and innocuous. 
The leading dot makes it hidden by default in Finder and\r\nstandard directory listings, reducing the likelihood of user detection.\r\nThe loader will then look for two applications in the /Applications directory: Ledger Wallet and Trezor Suite. If they exist, it\r\nwill overwrite them with a trojanized copy that is ad-hoc signed. The trojanized applications then prompt the user, claiming that, for\r\nsecurity reasons, the seed phrase needs to be re-entered. Additionally, an app bundle resource is included that posts this data\r\nto the attacker-controlled web server.\r\nFigure 7: The AppleScript checks for Ledger Wallet and Trezor Suite and replaces them with a trojanized version\r\nThe stealer is compiled as a native macOS binary, not a cross-platform script or interpreted payload. This approach avoids\r\ndependencies on Python, Node.js, or other runtimes that security tools might flag as risky. It also integrates cleanly with\r\nmacOS APIs for keychain access and GUI interaction.\r\nAfter collecting all the data, the malware stages it in /tmp/out.zip before exfiltrating it to the C2 server.\r\nStage 3: Persistence via GUI-context watchdog loops\r\nFor persistence, the attacker drops a standard macOS LaunchDaemon that calls a\r\nbash script every time the machine reboots. The plist, located at /Library/LaunchDaemons/com.finder.helper.plist, is quite\r\nbarebones.\r\nFigure 8: Persistent LaunchDaemon called `com.finder.helper.plist`\r\nThe LaunchDaemon's responsibility is to run this hidden .agent script, an AppleScript-based watchdog loop that runs in the\r\nbackground. This .agent file was initially dropped as part of the first-stage dropper, an AppleScript called 'update'. 
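Figure 8 appears only as an image in the original post. As an illustrative sketch, not the recovered artifact, a barebones RunAtLoad LaunchDaemon of the kind described would look roughly like the following (the label matches the IOC list; the script path is an assumed placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.finder.helper</string>
	<!-- Path to the hidden watchdog script; placeholder, not the observed location -->
	<key>ProgramArguments</key>
	<array>
		<string>/bin/bash</string>
		<string>/path/to/.agent</string>
	</array>
	<!-- Runs at every boot, matching the described persistence behavior -->
	<key>RunAtLoad</key>
	<true/>
</dict>
</plist>
```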
\r\nFigure 9: Script to persist the .helper binary\r\nThis loop operates as follows:\r\nEvery second, the script checks which user is currently logged into the GUI session by querying /dev/console.\r\nIf a user session is active (and it's not the root user), the script relaunches .helper under that user's context using sudo\r\n-u.\r\nIf .helper is killed or crashes, it is automatically restarted within one second.\r\nThe loop runs continuously, ensuring persistent execution across reboots, logouts, and manual termination attempts.\r\nThis persistence strategy is operationally significant because it guarantees continuous execution within the user's GUI\r\nsession rather than at the system level. Running in this context enables access to session-specific credential stores, browser\r\ndatabases, and authenticated application data that are unavailable to background daemons without visible re-authentication.\r\nBy minimizing reliance on traditional plist-based persistence and maintaining user-context relaunching, the malware can\r\nremain active and continue harvesting sensitive information long after its initial installation, even if standard plist or\r\nLaunchAgent audits reveal no anomalies.\r\nStealer capabilities\r\nThe core AMOS payload maintains its focus on high-value data exfiltration, targeting:\r\nCryptocurrency wallets: Electrum, Exodus, MetaMask, Ledger Live, Coinbase Wallet, and other popular wallet\r\napplications\r\nBrowser credential databases: Saved passwords, cookies, autofill data, and session tokens from all major browsers\r\nKeychain access: Queries macOS Keychain for application passwords, Wi-Fi credentials, and certificates\r\nFile system enumeration: Searches for wallet files, configuration files, and other sensitive documents\r\nExfiltration: Packages all harvested data and transmits it to attacker-controlled servers\r\nThreat actor evolution\r\nThis campaign 
highlights several meaningful shifts in macOS stealer tradecraft. Two key delivery traits differentiate this\r\ncampaign from traditional macOS stealer deployment.\r\nAI trust exploitation\r\nAttackers mimic the tone, formatting, and instructional style of legitimate AI troubleshooting content.\r\nUsers confidently execute Terminal commands recommended by ChatGPT or Grok without validating safety.\r\nSEO poisoning ensures malicious AI-style "advice" appears as the first trusted answer during routine searching.\r\nAMOS has previously leveraged SEO-boosted placement, as documented in Jamf's macOS infostealer research; however,\r\nthis is the first time we have observed the family using AI-formatted troubleshooting content as the initial lure for execution.\r\nCopy/paste into Terminal\r\nmacOS Gatekeeper does not inspect shell scripts or one-liners, allowing them to execute without prompts or\r\nwarnings.\r\nEliminates the need for installers, cracked apps, or Gatekeeper overrides (e.g., right-clicking and opening or\r\ndragging into Terminal).\r\nInstead of downloading an application or DMG, victims compromise themselves by copying a command\r\ndirectly from the browser into Terminal.\r\nDetection and mitigation recommendations\r\nTraditional signature-based detection will struggle with this campaign because the initial infection vector, a user-executed\r\nTerminal command, appears identical to legitimate administrative tasks. 
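Some behavioral checks are easy to automate. As one hedged example, a Python sketch that flags hidden, executable regular files under a directory; the function name and demo paths are illustrative, and a real hunt would add allow-listing for legitimate dotfiles:

```python
import os
import stat
import tempfile

def find_hidden_executables(root):
    # Flag dot-prefixed regular files with the owner-execute bit set,
    # the pattern used by the hidden .helper payload described above.
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.startswith('.'):
                continue
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if stat.S_ISREG(mode) and mode & stat.S_IXUSR:
                hits.append(path)
    return hits

# Demo in a throwaway directory (paths illustrative).
demo = tempfile.mkdtemp()
open(os.path.join(demo, '.profile'), 'w').close()   # hidden, not executable
suspect = os.path.join(demo, '.helper')
open(suspect, 'w').close()
os.chmod(suspect, 0o755)                            # hidden and executable
print(find_hidden_executables(demo))                # only the .helper path
```

On a real endpoint the same walk would be pointed at each user's home directory and the results triaged against known-good dotfiles.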
Focus detection efforts on behavioral anomalies:\r\nFor defenders:\r\nMonitor for osascript requesting user credentials\r\nMonitor for unusual dscl -authonly usage, especially in user-initiated bash scripts\r\nMonitor for system_profiler usage related to emulator detection (anti-analysis)\r\nAudit processes launched with passwords piped to sudo -S via stdin\r\nWatch for hidden executables in users' home directories\r\nIf a filename is prefixed with a period (.), it will be hidden from both the Finder's default view and a basic ls\r\ncommand in Terminal. This is a common way malware tries to hide from the end user's view.\r\nFor end users:\r\nNever execute Terminal commands you do not fully understand, even if they appear to come from trusted sources\r\nBe suspicious of any utility you do not completely understand that requests your password\r\nUse strong, random passwords and a password manager to limit account exposure if malware executes\r\nConclusion\r\nThe AI-poisoning delivery path used by this AMOS campaign represents a meaningful shift in social engineering tradecraft.\r\nAttackers are no longer trying to overcome user skepticism; they are leveraging user trust directly.\r\nTraditional malware delivery battles against instinct. Phishing emails feel suspicious. Cracked installers trigger warnings.\r\nBut copying a Terminal command from our trusted AI friend ChatGPT? That feels productive. That feels safe. That feels like\r\na simple solution to an annoying problem.\r\nThis strategy is a breakthrough, as attackers have discovered a delivery channel that not only bypasses security controls but\r\nalso circumvents the human threat model entirely. The technical sophistication compounds the problem: silent credential\r\nharvesting, GUI-context persistence, trojanized wallets, and native execution. But the real story isn't what happens after\r\ninfection. 
It's how easily the infection begins.\r\nAs AI assistants become embedded in daily workflows and operating systems, we expect this delivery method to proliferate.\r\nIt's too effective, too scalable, and too difficult to defend against with traditional controls. Defensive strategies must evolve\r\nbeyond static artifact monitoring to include behavioral detection of anomalous authentication patterns, unusual process\r\nexecution chains, and deviations from baseline shell behavior.\r\nHowever, technology alone won't solve this problem. Users need to understand that platform trust does not automatically\r\ntransfer to user-generated content. The most dangerous exploits don't target code; they target behavior and people. In\r\n2025 and beyond, that means exploiting our relationship with AI and our willingness to trust instructions simply because\r\nthey're formatted like help.\r\nMalware no longer needs to resemble legitimate software. It just needs to be helpful.\r\nIOCs\r\nFiles\r\nName | SHA256 | Notes\r\ncom.finder.helper.plist | 276db4f1dd88e514f18649c5472559aed0b2599aa1f1d3f26bd9bc51d1c62166 | Persistent LaunchDaemon that runs .agent\r\n.pass | N/A | Hidden file containing the user's password\r\n.username | N/A | Hidden file containing the username\r\n.id | |\r\n.helper | ab60bb9c33ccf3f2f9553447babb902cdd9a85abce743c97ad02cbc1506bf9eb | Mach-O executable\r\nLedger Wallet.app | | Trojanized ad hoc signed application bundle overwriting an existing legitimate installation\r\n.agent | e1ca6181898b497728a14a5271ce0d5d05629ea4e80bb745c91f1ae648eb5e11 | AppleScript that relaunches .helper\r\nupdate | 340c48d5a0c32c9295ca5e60e4af9671c2139a2b488994763abe6449ddfc32aa | First-stage payload\r\nupdate | 68017DF4A49E315E49B6E0D134B9C30BAE8ECE82CF9DE045D5F56550D5F59FE1 | First-stage payload\r\nInfrastructure\r\nIP | Notes\r\n45.94.47[.]205 | Gate\r\n45.94.47[.]186 | 
C2\r\nhxxps[://]wbehub[.]org | botUrl\r\nhxxps[://]sanchang[.]org | Ledger Wallet seed app data exfil\r\nSource: https://www.huntress.com/blog/amos-stealer-chatgpt-grok-ai-trust",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"Malpedia"
	],
	"origins": [
		"web"
	],
	"references": [
		"https://www.huntress.com/blog/amos-stealer-chatgpt-grok-ai-trust"
	],
	"report_names": [
		"amos-stealer-chatgpt-grok-ai-trust"
	],
	"threat_actors": [
		{
			"id": "08c8f238-1df5-4e75-b4d8-276ebead502d",
			"created_at": "2023-01-06T13:46:39.344081Z",
			"updated_at": "2026-04-10T02:00:03.294222Z",
			"deleted_at": null,
			"main_name": "Copy-Paste",
			"aliases": [],
			"source_name": "MISPGALAXY:Copy-Paste",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "aa73cd6a-868c-4ae4-a5b2-7cb2c5ad1e9d",
			"created_at": "2022-10-25T16:07:24.139848Z",
			"updated_at": "2026-04-10T02:00:04.878798Z",
			"deleted_at": null,
			"main_name": "Safe",
			"aliases": [],
			"source_name": "ETDA:Safe",
			"tools": [
				"DebugView",
				"LZ77",
				"OpenDoc",
				"SafeDisk",
				"TypeConfig",
				"UPXShell",
				"UsbDoc",
				"UsbExe"
			],
			"source_id": "ETDA",
			"reports": null
		},
		{
			"id": "8941e146-3e7f-4b4e-9b66-c2da052ee6df",
			"created_at": "2023-01-06T13:46:38.402513Z",
			"updated_at": "2026-04-10T02:00:02.959797Z",
			"deleted_at": null,
			"main_name": "Sandworm",
			"aliases": [
				"IRIDIUM",
				"Blue Echidna",
				"VOODOO BEAR",
				"FROZENBARENTS",
				"UAC-0113",
				"Seashell Blizzard",
				"UAC-0082",
				"APT44",
				"Quedagh",
				"TEMP.Noble",
				"IRON VIKING",
				"G0034",
				"ELECTRUM",
				"TeleBots"
			],
			"source_name": "MISPGALAXY:Sandworm",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "f9806b99-e392-46f1-9c13-885e376b239f",
			"created_at": "2023-01-06T13:46:39.431871Z",
			"updated_at": "2026-04-10T02:00:03.325163Z",
			"deleted_at": null,
			"main_name": "Watchdog",
			"aliases": [
				"Thief Libra"
			],
			"source_name": "MISPGALAXY:Watchdog",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "3a0be4ff-9074-4efd-98e4-47c6a62b14ad",
			"created_at": "2022-10-25T16:07:23.590051Z",
			"updated_at": "2026-04-10T02:00:04.679488Z",
			"deleted_at": null,
			"main_name": "Energetic Bear",
			"aliases": [
				"ATK 6",
				"Blue Kraken",
				"Crouching Yeti",
				"Dragonfly",
				"Electrum",
				"Energetic Bear",
				"G0035",
				"Ghost Blizzard",
				"Group 24",
				"ITG15",
				"Iron Liberty",
				"Koala Team",
				"TG-4192"
			],
			"source_name": "ETDA:Energetic Bear",
			"tools": [
				"Backdoor.Oldrea",
				"CRASHOVERRIDE",
				"Commix",
				"CrackMapExec",
				"CrashOverride",
				"Dirsearch",
				"Dorshel",
				"Fertger",
				"Fuerboos",
				"Goodor",
				"Havex",
				"Havex RAT",
				"Hello EK",
				"Heriplor",
				"Impacket",
				"Industroyer",
				"Karagany",
				"Karagny",
				"LightsOut 2.0",
				"LightsOut EK",
				"Listrix",
				"Oldrea",
				"PEACEPIPE",
				"PHPMailer",
				"PsExec",
				"SMBTrap",
				"Subbrute",
				"Sublist3r",
				"Sysmain",
				"Trojan.Karagany",
				"WSO",
				"Webshell by Orb",
				"Win32/Industroyer",
				"Wpscan",
				"nmap",
				"sqlmap",
				"xFrost"
			],
			"source_id": "ETDA",
			"reports": null
		},
		{
			"id": "a66438a8-ebf6-4397-9ad5-ed07f93330aa",
			"created_at": "2022-10-25T16:47:55.919702Z",
			"updated_at": "2026-04-10T02:00:03.618194Z",
			"deleted_at": null,
			"main_name": "IRON VIKING",
			"aliases": [
				"APT44 ",
				"ATK14 ",
				"BlackEnergy Group",
				"Blue Echidna ",
				"CTG-7263 ",
				"ELECTRUM ",
				"FROZENBARENTS ",
				"Hades/OlympicDestroyer ",
				"IRIDIUM ",
				"Qudedagh ",
				"Sandworm Team ",
				"Seashell Blizzard ",
				"TEMP.Noble ",
				"Telebots ",
				"Voodoo Bear "
			],
			"source_name": "Secureworks:IRON VIKING",
			"tools": [
				"BadRabbit",
				"BlackEnergy",
				"GCat",
				"NotPetya",
				"PSCrypt",
				"TeleBot",
				"TeleDoor",
				"xData"
			],
			"source_id": "Secureworks",
			"reports": null
		},
		{
			"id": "b3e954e8-8bbb-46f3-84de-d6f12dc7e1a6",
			"created_at": "2022-10-25T15:50:23.339976Z",
			"updated_at": "2026-04-10T02:00:05.27483Z",
			"deleted_at": null,
			"main_name": "Sandworm Team",
			"aliases": [
				"Sandworm Team",
				"ELECTRUM",
				"Telebots",
				"IRON VIKING",
				"BlackEnergy (Group)",
				"Quedagh",
				"Voodoo Bear",
				"IRIDIUM",
				"Seashell Blizzard",
				"FROZENBARENTS",
				"APT44"
			],
			"source_name": "MITRE:Sandworm Team",
			"tools": [
				"Bad Rabbit",
				"Mimikatz",
				"Exaramel for Linux",
				"Exaramel for Windows",
				"GreyEnergy",
				"PsExec",
				"Prestige",
				"P.A.S. Webshell",
				"AcidPour",
				"VPNFilter",
				"Neo-reGeorg",
				"Cyclops Blink",
				"SDelete",
				"Kapeka",
				"AcidRain",
				"Industroyer",
				"Industroyer2",
				"BlackEnergy",
				"Cobalt Strike",
				"NotPetya",
				"KillDisk",
				"PoshC2",
				"Impacket",
				"Invoke-PSImage",
				"Olympic Destroyer"
			],
			"source_id": "MITRE",
			"reports": null
		}
	],
	"ts_created_at": 1775434283,
	"ts_updated_at": 1775826754,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/4dee3cbe33d4d1644943798069eebc833d329dd5.pdf",
		"text": "https://archive.orkl.eu/4dee3cbe33d4d1644943798069eebc833d329dd5.txt",
		"img": "https://archive.orkl.eu/4dee3cbe33d4d1644943798069eebc833d329dd5.jpg"
	}
}