{
	"id": "423b1c66-016d-44e1-b3a7-07d8ac488a35",
	"created_at": "2026-04-06T00:21:44.543319Z",
	"updated_at": "2026-04-10T03:38:20.268787Z",
	"deleted_at": null,
	"sha1_hash": "0bb772498f35bee9c9b361a0e6dbcc210bdec0e4",
	"title": "GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 1580651,
	"plain_text": "GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI\r\nTools\r\nBy Google Threat Intelligence Group\r\nPublished: 2025-11-05 · Archived: 2026-04-05 15:15:18 UTC\r\nExecutive Summary\r\nBased on recent analysis of the broader threat landscape, Google Threat Intelligence Group (GTIG) has identified\r\na shift that occurred within the last year: adversaries are no longer leveraging artificial intelligence (AI) just for\r\nproductivity gains, they are deploying novel AI-enabled malware in active operations. This marks a new\r\noperational phase of AI abuse, involving tools that dynamically alter behavior mid-execution.\r\nThis report serves as an update to our January 2025 analysis, \"Adversarial Misuse of Generative AI,\" and details\r\nhow government-backed threat actors and cyber criminals are integrating and experimenting with AI across the\r\nindustry throughout the entire attack lifecycle. Our findings are based on the broader threat landscape.\r\nAt Google, we are committed to developing AI responsibly and take proactive steps to disrupt malicious activity\r\nby disabling the projects and accounts associated with bad actors, while continuously improving our models to\r\nmake them less susceptible to misuse. We also proactively share industry best practices to arm defenders and\r\nenable stronger protections across the ecosystem. Throughout this report we’ve noted steps we’ve taken to thwart\r\nmalicious activity, including disabling assets and applying intel to strengthen both our classifiers and model so it’s\r\nprotected from misuse moving forward. Additional details on how we’re protecting and defending Gemini can be\r\nfound in this white paper, “Advancing Gemini’s Security Safeguards.”\r\nKey Findings\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 1 of 19\n\nFirst Use of \"Just-in-Time\" AI in Malware: For the first time, GTIG has identified malware families,\r\nsuch as PROMPTFLUX and PROMPTSTEAL, that use Large Language Models (LLMs) during\r\nexecution. These tools dynamically generate malicious scripts, obfuscate their own code to evade\r\ndetection, and leverage AI models to create malicious functions on demand, rather than hard-coding them\r\ninto the malware. While still nascent, this represents a significant step toward more autonomous and\r\nadaptive malware.\r\n\"Social Engineering\" to Bypass Safeguards: Threat actors are adopting social engineering-like pretexts\r\nin their prompts to bypass AI safety guardrails. We observed actors posing as students in a \"capture-the-flag\" competition or as cybersecurity researchers to persuade Gemini to provide information that would\r\notherwise be blocked, enabling tool development.\r\nMaturing Cyber Crime Marketplace for AI Tooling: The underground marketplace for illicit AI tools\r\nhas matured in 2025. We have identified multiple offerings of multifunctional tools designed to support\r\nphishing, malware development, and vulnerability research, lowering the barrier to entry for less\r\nsophisticated actors.\r\nContinued Augmentation of the Full Attack Lifecycle: State-sponsored actors including from North\r\nKorea, Iran, and the People's Republic of China (PRC) continue to misuse Gemini to enhance all stages of\r\ntheir operations, from reconnaissance and phishing lure creation to command and control (C2)\r\ndevelopment and data exfiltration.\r\nThreat Actors Developing Novel AI Capabilities \r\nFor the first time in 2025, GTIG discovered a code family that employed AI capabilities mid-execution to\r\ndynamically alter the malware’s behavior. Although some recent implementations of novel AI techniques are\r\nexperimental, they provide an early indicator of how threats are evolving and how they can potentially integrate\r\nAI capabilities into future intrusion activity. Attackers are moving beyond \"vibe coding\" and the baseline observed\r\nin 2024 of using AI tools for technical support. We are only now starting to see this type of activity, but expect it\r\nto increase in the future.\r\nMalware Function Description Status\r\nFRUITSHELL\r\nReverse\r\nShell\r\nPublicly available reverse shell written in PowerShell\r\nthat establishes a remote connection to a configured\r\ncommand-and-control server and allows a threat actor\r\nto execute arbitrary commands on a compromised\r\nsystem. Notably, this code family contains hard-coded prompts meant to bypass detection or analysis\r\nby LLM-powered security systems.\r\nObserved in\r\noperations\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 2 of 19\n\nPROMPTFLUX Dropper\r\nDropper written in VBScript that decodes and\r\nexecutes an embedded decoy installer to mask its\r\nactivity. Its primary capability is regeneration, which\r\nit achieves by using the Google Gemini API. It\r\nprompts the LLM to rewrite its own source code,\r\nsaving the new, obfuscated version to the Startup\r\nfolder to establish persistence. PROMPTFLUX also\r\nattempts to spread by copying itself to removable\r\ndrives and mapped network shares.\r\nExperimental\r\nPROMPTLOCK Ransomware\r\nCross-platform ransomware written in Go, identified\r\nas a proof of concept. It leverages an LLM to\r\ndynamically generate and execute malicious Lua\r\nscripts at runtime. Its capabilities include filesystem\r\nreconnaissance, data exfiltration, and file encryption\r\non both Windows and Linux systems.\r\nExperimental\r\nPROMPTSTEAL Data Miner\r\nData miner written in Python and packaged with\r\nPyInstaller. It contains a compiled script that uses the\r\nHugging Face API to query the LLM Qwen2.5-\r\nCoder-32B-Instruct to generate one-line Windows\r\ncommands. Prompts used to generate the commands\r\nindicate that it aims to collect system information and\r\ndocuments in specific folders. PROMPTSTEAL then\r\nexecutes the commands and sends the collected data\r\nto an adversary-controlled server.\r\nObserved in\r\noperations\r\nQUIETVAULT\r\nCredential\r\nStealer\r\nCredential stealer written in JavaScript that targets\r\nGitHub and NPM tokens. Captured credentials are\r\nexfiltrated via creation of a publicly accessible\r\nGitHub repository. In addition to these tokens,\r\nQUIETVAULT leverages an AI prompt and on-host\r\ninstalled AI CLI tools to search for other potential\r\nsecrets on the infected system and exfiltrate these\r\nfiles to GitHub as well.\r\nObserved in\r\noperations\r\nTable 1: Overview of malware with novel AI capabilities GTIG detected in 2025\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 3 of 19\n\nExperimental Malware Using Gemini for Self-Modification to Evade Detection\r\nIn early June 2025, GTIG identified experimental dropper malware tracked as PROMPTFLUX that suggests\r\nthreat actors are experimenting with LLMs to develop dynamic obfuscation techniques. PROMPTFLUX is written\r\nin VBScript and interacts with Gemini's API to request specific VBScript obfuscation and evasion techniques to\r\nfacilitate \"just-in-time\" self-modification, likely to evade static signature-based detection.\r\nFurther examination of PROMPTFLUX samples suggests this code family is currently in a development or testing\r\nphase since some incomplete features are commented out and a mechanism exists to limit the malware's Gemini\r\nAPI calls. The current state of this malware does not demonstrate an ability to compromise a victim network or\r\ndevice. We have taken action to disable the assets associated with this activity. \r\nThe most novel component of PROMPTFLUX is its \"Thinking Robot\" module, designed to periodically query\r\nGemini to obtain new code for evading antivirus software. This is accomplished using a hard-coded API key to\r\nsend a POST request to the Gemini API endpoint. The actor specifies the use of the \" gemini-1.5-flash-latest \"\r\nmodel; the \" -latest \" tag ensures the malware always calls the most current stable release, making the tool more\r\nresilient to model deprecation. The prompt sent to the LLM is highly specific and machine-parsable, requesting\r\nVBScript code for antivirus evasion and instructing the LLM to output only the code itself. \r\nAlthough the self-modification function ( AttemptToUpdateSelf ) is commented out, its presence, combined with\r\nthe active logging of AI responses to \" %TEMP%\\thinking_robot_log.txt \", clearly indicates the author's goal of\r\ncreating a metamorphic script that can evolve over time.\r\nFigure 1: VBS \"StartThinkingRobot\" function\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 4 of 19\n\nWe identified multiple variations of PROMPTFLUX using LLM-driven code regeneration, including one with the\r\n\"Thinking Robot\" function replaced with a new \"Thinging\" function. This function leverages a prompt to instruct\r\nthe Gemini API to rewrite the malware's entire source code on an hourly basis to evade detection. The prompt\r\ndirects the LLM to act as an \"expert VBScript obfuscator,\" ensuring the new variant remains viable by embedding\r\nthe original decoy payload, the hard-coded API key, and the complete self-regeneration logic, thereby establishing\r\na recursive cycle of mutation.\r\nAlthough unattributed to a specific threat actor, the filenames associated with PROMPTFLUX highlight behaviors\r\ncommonly associated with financially motivated actors. Specifically, varied social engineering lures including\r\n\" crypted_ScreenRec_webinstall \" highlight a broad, geography- and industry-agnostic approach designed to\r\ntrick a wide range of users.\r\nWhile PROMPTFLUX is likely still in research and development phases, this type of obfuscation technique is an\r\nearly and significant indicator of how malicious operators will likely augment their campaigns with AI moving\r\nforward.\r\nMitigations\r\nOur intelligence also indicates this activity is in a development or testing phase, as opposed to being used in\r\nthe wild, and currently does not have the ability to compromise a victim network or device. Google has taken\r\naction against this actor by disabling the assets associated with their activity. Google DeepMind has also used\r\nthese insights to further strengthen our protections against such misuse by strengthening both Google’s\r\nclassifiers and the model itself. This enables the model to refuse to assist with these types of attacks moving\r\nforward.\r\nLLM Generating Commands to Steal Documents and System Information\r\nIn June, GTIG identified the Russian government-backed actor APT28 (aka FROZENLAKE) using new malware\r\nagainst Ukraine we track as PROMPTSTEAL and reported by CERT-UA as LAMEHUG. PROMPTSTEAL is a\r\ndata miner, which queries an LLM (Qwen2.5-Coder-32B-Instruct) to generate commands for execution via the\r\nAPI for Hugging Face, a platform for open-source machine learning including LLMs. APT28's use of\r\nPROMPTSTEAL constitutes our first observation of malware querying an LLM deployed in live operations. \r\nPROMPTSTEAL novelly uses LLMs to generate commands for the malware to execute rather than hard coding\r\nthe commands directly in the malware itself. It masquerades as an \"image generation\" program that guides the\r\nuser through a series of prompts to generate images while querying the Hugging Face API to generate commands\r\nfor execution in the background.\r\nMake a list of commands to create folder C:\\Programdata\\info and\r\nto gather computer information, hardware information, process and\r\nservices information, networks information, AD domain information,\r\nto execute in one line and add each result to text file\r\nc:\\Programdata\\info\\info.txt. Return only commands, without markdown\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 5 of 19\n\nFigure 2: PROMPTSTEAL prompt used to generate command to collect system information\r\nMake a list of commands to copy recursively different office and\r\npdf/txt documents in user Documents,Downloads and Desktop\r\nfolders to a folder c:\\Programdata\\info\\ to execute in one line.\r\nReturn only command, without markdown.\r\nFigure 3: PROMPTSTEAL prompt used to generate command to collect targeted documents\r\nPROMPTSTEAL likely uses stolen API tokens to query the Hugging Face API. The prompt specifically asks the\r\nLLM to output commands to generate system information and also to copy documents to a specified directory.\r\nThe output from these commands are then blindly executed locally by PROMPTSTEAL before the output is\r\nexfiltrated. Our analysis indicates continued development of this malware, with new samples adding obfuscation\r\nand changing the C2 method.\r\nSocial Engineering to Bypass Safeguards\r\nGuided by our AI Principles, Google designs AI systems with robust security measures and strong safety\r\nguardrails. Threat actors are adopting social engineering-like pretexts in their prompts to bypass AI safety\r\nguardrails. We observed actors posing as participants in a \"capture-the-flag\" competition for cybersecurity\r\nresearchers to persuade Gemini to provide information that would otherwise be blocked, enabling vulnerability\r\nexploitation and tool development. In addition to disrupting these actors, we use these insights to continuously\r\nimprove our classifiers and strengthen the security of our products.\r\nCapture-the-Flag: Identifying as a CTF Player to Enable Research for Mass Exploitation\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 6 of 19\n\nA China-nexus threat actor misused Gemini to enhance the effectiveness of their campaigns by crafting lure\r\ncontent, building technical infrastructure, and developing tooling for data exfiltration. \r\nIn one interaction, the actor asked Gemini to identify vulnerabilities on a compromised system, but received a\r\nsafety response from Gemini that a detailed response would not be safe. The actor then continued, reframing the\r\nprompt by depicting themselves as a participant in a capture-the-flag (CTF) exercise, a gamified cybersecurity\r\ncompetition to enhance skills and techniques. When prompted to help in a CTF exercise, Gemini returned helpful\r\ninformation that could be misused to exploit the system.\r\nThe actor appeared to learn from this interaction and used the CTF pretext in support of phishing, exploitation,\r\nand web shell development. The actor prefaced many of their prompts about exploitation of specific software and\r\nemail services with comments such as \"I am working on a CTF problem\" or \"I am currently in a CTF, and I saw\r\nsomeone from another team say …\" This approach provided advice on the next exploitation steps in a \"CTF\r\nscenario.\"\r\nMitigations\r\nGemini’s safety and security guardrails provided safety responses during this activity and Google took further\r\naction against the actor to halt future activity. It’s also important to note the context of these prompts, which if\r\nnormally posed by a participant of the CTF vs. a threat actor, would be benign inquiries. This nuance in AI use\r\nhighlights critical differentiators in benign vs. misuse of AI that we continue to analyze to balance Gemini\r\nfunctionality with both usability and security. Google has taken action against this actor by disabling the assets\r\nassociated with its activity and sharing insights with Google DeepMind to further strengthen our protections\r\nagainst such misuse. We have since strengthened both classifiers and the model itself, helping it to deny\r\nassistance with these types of attacks moving forward.\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 7 of 19\n\nFigure 4: A China-nexus threat actor’s misuse of Gemini mapped across the attack lifecycle\r\nStudent Error: Developing Custom Tools Exposes Core Attacker Infrastructure\r\nThe Iranian state-sponsored threat actor TEMP.Zagros (aka MUDDYCOAST, Muddy Water) used Gemini to\r\nconduct research to support the development of custom malware, an evolution in the group’s capability. They\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 8 of 19\n\ncontinue to rely on phishing emails, often using compromised corporate email accounts from victims to lend\r\ncredibility to their attacks, but have shifted from using public tools to developing custom malware including web\r\nshells and a Python-based C2 server. \r\nWhile using Gemini to conduct research to support the development of custom malware, the threat actor\r\nencountered safety responses. Much like the previously described CTF example, Temp.Zagros used various\r\nplausible pretexts in their prompts to bypass security guardrails. These included pretending to be a student\r\nworking on a final university project or \"writing a paper\" or \"international article\" on cybersecurity.\r\nIn some observed instances, threat actors' reliance on LLMs for development has led to critical operational\r\nsecurity failures, enabling greater disruption.\r\nThe threat actor asked Gemini to help with a provided script, which was designed to listen for encrypted requests,\r\ndecrypt them, and execute commands related to file transfers and remote execution. This revealed sensitive, hard-coded information to Gemini, including the C2 domain and the script’s encryption key, facilitating our broader\r\ndisruption of the attacker’s campaign and providing a direct window into their evolving operational capabilities\r\nand infrastructure.\r\nMitigations\r\nThese activities triggered Gemini’s safety responses and Google took additional, broader action to disrupt the\r\nthreat actor’s campaign based on their operational security failures. Additionally, we’ve taken action against\r\nthis actor by disabling the assets associated with this activity and making updates to prevent further misuse.\r\nGoogle DeepMind has used these insights to strengthen both classifiers and the model itself, enabling it to\r\nrefuse to assist with these types of attacks moving forward.\r\nPurpose-Built Tools and Services for Sale in Underground Forums\r\nIn addition to misusing existing AI-enabled tools and services across the industry, there is a growing interest and\r\nmarketplace for AI tools and services purpose-built to enable illicit activities. Tools and services offered via\r\nunderground forums can enable low-level actors to augment the frequency, scope, efficacy, and complexity of\r\ntheir intrusions despite their limited technical acumen and financial resources. \r\nTo identify evolving threats, GTIG tracks posts and advertisements on English- and Russian-language\r\nunderground forums related to AI tools and services as well as discussions surrounding the technology. Many\r\nunderground forum advertisements mirrored language comparable to traditional marketing of legitimate AI\r\nmodels, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance\r\nfor prospective customers interested in their offerings.\r\nAdvertised Capability Threat Actor Application \r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 9 of 19\n\nDeepfake/Image Generation\r\nCreate lure content for phishing operations or bypass know your customer\r\n(KYC) security requirements\r\nMalware Generation\r\nCreate malware for specific use cases or improve upon pre-existing\r\nmalware\r\nPhishing Kits and Phishing\r\nSupport\r\nCreate engaging lure content or distribute phishing emails to a wider\r\naudience\r\nResearch and Reconnaissance Quickly research and summarize cybersecurity concepts or general topics\r\nTechnical Support and Code\r\nGeneration\r\nExpand a skill set or generate code, optimizing workflow and efficiency\r\nVulnerability Exploitation\r\nProvide publicly available research or searching for pre-existing\r\nvulnerabilities\r\nTable 2: Advertised capabilities on English- and Russian-language underground forums related to AI tools and\r\nservices\r\nIn 2025 the cyber crime marketplace for AI-enabled tooling matured, and GTIG identified multiple offerings for\r\nmultifunctional tools designed to support stages of the attack lifecycle. Of note, almost every notable tool\r\nadvertised in underground forums mentioned their ability to support phishing campaigns. \r\nUnderground advertisements indicate many AI tools and services promoted similar technical capabilities to\r\nsupport threat operations as those of conventional tools. Pricing models for illicit AI services also reflect those of\r\nconventional tools, with many developers injecting advertisements into the free version of their services and\r\noffering subscription pricing tiers to add on more technical features such as image generation, API access, and\r\nDiscord access for higher prices.\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 10 of 19\n\nFigure 5: Capabilities of notable AI tools and services advertised in English- and Russian-language underground\r\nforums\r\nGTIG assesses that financially motivated threat actors and others operating in the underground community will\r\ncontinue to augment their operations with AI tools. Given the increasing accessibility of these applications, and\r\nthe growing AI discourse in these forums, threat activity leveraging AI will increasingly become commonplace\r\namongst threat actors.\r\nContinued Augmentation of the Full Attack Lifecycle\r\nState-sponsored actors from North Korea, Iran, and the People's Republic of China (PRC) continue to misuse\r\ngenerative AI tools including Gemini to enhance all stages of their operations, from reconnaissance and phishing\r\nlure creation to C2 development and data exfiltration. This extends one of our core findings from our January\r\n2025 analysis Adversarial Misuse of Generative AI.\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 11 of 19\n\nExpanding Knowledge of Less Conventional Attack Surfaces\r\nGTIG observed a suspected China-nexus actor leveraging Gemini for multiple stages of an intrusion campaign,\r\nconducting initial reconnaissance on targets of interest, researching phishing techniques to deliver payloads,\r\nsoliciting assistance from Gemini related to lateral movement, seeking technical support for C2 efforts once inside\r\na victim’s system, and leveraging help for data exfiltration.\r\nIn addition to supporting intrusion activity on Windows systems, the actor misused Gemini to support multiple\r\nstages of an intrusion campaign on attack surfaces they were unfamiliar with including cloud infrastructure,\r\nvSphere, and Kubernetes. \r\nThe threat actor demonstrated access to AWS tokens for EC2 (Elastic Compute Cloud) instances and used Gemini\r\nto research how to use the temporary session tokens, presumably to facilitate deeper access or data theft from a\r\nvictim environment. In another case, the actor leaned on Gemini to assist in identifying Kubernetes systems and to\r\ngenerate commands for enumerating containers and pods. We also observed research into getting host permissions\r\non MacOS, indicating a threat actor focus on phishing techniques for that system.\r\nMitigations\r\nThese activities are similar to our findings from January that detailed how bad actors are leveraging Gemini\r\nfor productivity vs. novel capabilities. We took action against this actor by disabling the assets associated with\r\nthis actor’s activity and Google DeepMind used these insights to further strengthen our protections against\r\nsuch misuse. Observations have been used to strengthen both classifiers and the model itself, enabling it to\r\nrefuse to assist with these types of attacks moving forward.\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 12 of 19\n\nFigure 6: A suspected China-nexus threat actor’s misuse of Gemini across the attack lifecycle\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 13 of 19\n\nNorth Korean Threat Actors Misuse Gemini Across the Attack Lifecycle \r\nThreat actors associated with the Democratic People's Republic of Korea (DPRK) continue to misuse generative\r\nAI tools to support operations across the stages of the attack lifecycle, aligned with their efforts to target\r\ncryptocurrency and provide financial support to the regime. \r\nSpecialized Social Engineering\r\nIn recent operations, UNC1069 (aka MASAN) used Gemini to research cryptocurrency concepts, and perform\r\nresearch and reconnaissance related to the location of users’ cryptocurrency wallet application data. This North\r\nKorean threat actor is known to conduct cryptocurrency theft campaigns leveraging social engineering, notably\r\nusing language related to computer maintenance and credential harvesting. \r\nThe threat actor also generated lure material and other messaging related to cryptocurrency, likely to support\r\nsocial engineering efforts for malicious activity. This included generating Spanish-language work-related excuses\r\nand requests to reschedule meetings, demonstrating how threat actors can overcome the barriers of language\r\nfluency to expand the scope of their targeting and success of their campaigns. \r\nTo support later stages of the campaign, UNC1069 attempted to misuse Gemini to develop code to steal\r\ncryptocurrency, as well as to craft fraudulent instructions impersonating a software update to extract user\r\ncredentials. We have disabled this account.\r\nMitigations\r\nThese activities are similar to our findings from January that detailed how bad actors are leveraging Gemini\r\nfor productivity vs. novel capabilities. We took action against this actor by disabling the assets associated with\r\nthis actor’s activity and Google DeepMind used these insights to further strengthen our protections against\r\nsuch misuse. Observations have been used to strengthen both classifiers and the model itself, enabling it to\r\nrefuse to assist with these types of attacks moving forward.\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 14 of 19\n\nUsing Deepfakes\r\nBeyond UNC1069’s misuse of Gemini, GTIG recently observed the group leverage deepfake images and video\r\nlures impersonating individuals in the cryptocurrency industry as part of social engineering campaigns to\r\ndistribute its BIGMACHO backdoor to victim systems. The campaign prompted targets to download and install a\r\nmalicious \"Zoom SDK\" link.\r\nFigure 7: North Korean threat actor’s misuse of Gemini to support their operations\r\nAttempting to Develop Novel Capabilities with AI\r\nUNC4899 (aka PUKCHONG), a North Korean threat actor notable for their use of supply chain compromise,\r\nused Gemini for a variety of purposes including developing code, researching exploits, and improving their\r\ntooling. The research into vulnerabilities and exploit development likely indicates the group is developing\r\ncapabilities to target edge devices and modern browsers. We have disabled the threat actor’s accounts.\r\nFigure 8: UNC4899 (aka PUKCHONG) misuse of Gemini across the attack lifecycle\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 15 of 19\n\nCapture-the-Data: Attempts to Develop a “Data Processing Agent”\r\nThe use of Gemini by APT42, an Iranian government-backed attacker, reflects the group's focus on crafting\r\nsuccessful phishing campaigns. In recent activity, APT42 used the text generation and editing capabilities of\r\nGemini to craft material for phishing campaigns, often impersonating individuals from reputable organizations\r\nsuch as prominent think tanks and using lures related to security technology, event invitations, or geopolitical\r\ndiscussions. APT42 also used Gemini as a translation tool for articles and messages with specialized vocabulary,\r\nfor generalized research, and for continued research into Israeli defense. \r\nAPT42 also attempted to build a “Data Processing Agent”, misusing Gemini to develop and test the tool. The\r\nagent converts natural language requests into SQL queries to derive insights from sensitive personal data. The\r\nthreat actor provided Gemini with schemas for several distinct data types in order to perform complex queries\r\nsuch as linking a phone number to an owner, tracking an individual's travel patterns, or generating lists of people\r\nbased on shared attributes. We have disabled the threat actors’ accounts.\r\nMitigations\r\nThese activities are similar to our findings from January that detailed how bad actors are leveraging Gemini\r\nfor productivity vs. novel capabilities. We took action against this actor by disabling the assets associated with\r\nthis actor’s activity and Google DeepMind used these insights to further strengthen our protections against\r\nsuch misuse. Observations have been used to strengthen both classifiers and the model itself, enabling it to\r\nrefuse to assist with these types of attacks moving forward.\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 16 of 19\n\nFigure 9: APT42’s misuse of Gemini to support operations\r\nCode Development: C2 Development and Support for Obfuscation\r\nThreat actors continue to adapt generative AI tools to augment their ongoing activities, attempting to enhance their\r\ntactics, techniques, and procedures (TTPs) to move faster and at higher volume. For skilled actors, generative AI\r\ntools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity. These\r\ntools also afford lower-level threat actors the opportunity to develop sophisticated tooling, quickly integrate\r\nexisting techniques, and improve the efficacy of their campaigns regardless of technical acumen or language\r\nproficiency. \r\nThroughout August 2025, GTIG observed threat activity associated with PRC-backed APT41, utilizing Gemini for\r\nassistance with code development. The group has demonstrated a history of targeting a range of operating systems\r\nacross mobile and desktop devices as well as employing social engineering compromises for their operations.\r\nSpecifically, the group leverages open forums to both lure victims to exploit-hosting infrastructure and to prompt\r\ninstallation of malicious mobile applications.\r\nIn order to support their campaigns, the actor was seeking out technical support for C++ and Golang code for\r\nmultiple tools including a C2 framework called OSSTUN by the actor. The group was also observed prompting\r\nGemini for help with code obfuscation, with prompts related to two publicly available obfuscation libraries.\r\nFigure 10: APT41 misuse of Gemini to support operations\r\nInformation Operations and Gemini\r\nGTIG continues to observe IO actors utilize Gemini for research, content creation, and translation, which aligns\r\nwith their previous use of Gemini to support their malicious activity. We have identified Gemini activity that\r\nindicates threat actors are soliciting the tool to help create articles or aid them in building tooling to automate\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 17 of 19\n\nportions of their workflow. However, we have not identified these generated articles in the wild, nor identified\r\nevidence confirming the successful automation of their workflows leveraging this newly built tooling. None of\r\nthese attempts have created breakthrough capabilities for IO campaigns.\r\nMitigations\r\nFor observed IO campaigns, we did not see evidence of successful automation or any breakthrough\r\ncapabilities. These activities are similar to our findings from January that detailed how bad actors are\r\nleveraging Gemini for productivity vs. novel capabilities. We took action against this actor by disabling the\r\nassets associated with this actor’s activity and Google DeepMind used these insights to further strengthen our\r\nprotections against such misuse. Observations have been used to strengthen both classifiers and the model\r\nitself, enabling it to refuse to assist with these types of attacks moving forward.\r\nBuilding AI Safely and Responsibly \r\nWe believe our approach to AI must be both bold and responsible. That means developing AI in a way that\r\nmaximizes the positive benefits to society while addressing the challenges. Guided by our AI Principles, Google\r\ndesigns AI systems with robust security measures and strong safety guardrails, and we continuously test the\r\nsecurity and safety of our models to improve them. \r\nOur policy guidelines and prohibited use policies prioritize safety and responsible use of Google's generative AI\r\ntools. Google's policy development process includes identifying emerging trends, thinking end-to-end, and\r\ndesigning for safety. We continuously enhance safeguards in our products to offer scaled protections to users\r\nacross the globe.  \r\nAt Google, we leverage threat intelligence to disrupt adversary operations. We investigate abuse of our products,\r\nservices, users, and platforms, including malicious cyber activities by government-backed threat actors, and work\r\nwith law enforcement when appropriate. Moreover, our learnings from countering malicious activities are fed\r\nback into our product development to improve safety and security for our AI models. These changes, which can be\r\nmade to both our classifiers and at the model level, are essential to maintaining agility in our defenses and\r\npreventing further misuse.\r\nGoogle DeepMind also develops threat models for generative AI to identify potential vulnerabilities, and creates\r\nnew evaluation and training techniques to address misuse. In conjunction with this research, Google DeepMind\r\nhas shared how they're actively deploying defenses in AI systems, along with measurement and monitoring tools,\r\nincluding a robust evaluation framework that can automatically red team an AI vulnerability to indirect prompt\r\ninjection attacks. \r\nOur AI development and Trust \u0026 Safety teams also work closely with our threat intelligence, security, and\r\nmodelling teams to stem misuse.\r\nThe potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs\r\nsecurity standards for building and deploying AI responsibly. That's why we introduced the Secure AI Framework\r\n(SAIF), a conceptual framework to secure AI systems. We've shared a comprehensive toolkit for developers with\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 18 of 19\n\nresources and guidance for designing, building, and evaluating AI models responsibly. We've also shared best\r\npractices for implementing safeguards, evaluating model safety, and red teaming to test and secure AI systems. \r\nGoogle also continuously invests in AI research, helping to ensure AI is built responsibly, and that we’re\r\nleveraging its potential to automatically find risks. Last year, we introduced Big Sleep, an AI agent developed by\r\nGoogle DeepMind and Google Project Zero, that actively searches and finds unknown security vulnerabilities in\r\nsoftware. Big Sleep has since found its first real-world security vulnerability and assisted in finding a vulnerability\r\nthat was imminently going to be used by threat actors, which GTIG was able to cut off beforehand. We’re also\r\nexperimenting with AI to not only find vulnerabilities, but also patch them. We recently introduced CodeMender,\r\nan experimental AI-powered agent utilizing the advanced reasoning capabilities of our Gemini models to\r\nautomatically fix critical code vulnerabilities. \r\nAbout the Authors\r\nGoogle Threat Intelligence Group focuses on identifying, analyzing, mitigating, and eliminating entire classes of\r\ncyber threats against Alphabet, our users, and our customers. Our work includes countering threats from\r\ngovernment-backed attackers, targeted zero-day exploits, coordinated information operations (IO), and serious\r\ncyber crime networks. We apply our intelligence to improve Google's defenses and protect our users and\r\ncustomers. \r\nPosted in\r\nThreat Intelligence\r\nSource: https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nhttps://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/\r\nPage 19 of 19",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"Malpedia",
		"MISPGALAXY"
	],
	"references": [
		"https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools/"
	],
	"report_names": [
		"threat-actor-usage-of-ai-tools"
	],
	"threat_actors": [
		{
			"id": "02e1c2df-8abd-49b1-91d1-61bc733cf96b",
			"created_at": "2022-10-25T15:50:23.308924Z",
			"updated_at": "2026-04-10T02:00:05.298591Z",
			"deleted_at": null,
			"main_name": "MuddyWater",
			"aliases": [
				"MuddyWater",
				"Earth Vetala",
				"Static Kitten",
				"Seedworm",
				"TEMP.Zagros",
				"Mango Sandstorm",
				"TA450"
			],
			"source_name": "MITRE:MuddyWater",
			"tools": [
				"STARWHALE",
				"POWERSTATS",
				"Out1",
				"PowerSploit",
				"Small Sieve",
				"Mori",
				"Mimikatz",
				"LaZagne",
				"PowGoop",
				"CrackMapExec",
				"ConnectWise",
				"SHARPSTATS",
				"RemoteUtilities",
				"Koadic"
			],
			"source_id": "MITRE",
			"reports": null
		},
		{
			"id": "d0e8337e-16a7-48f2-90cf-8fd09a7198d1",
			"created_at": "2023-03-04T02:01:54.091301Z",
			"updated_at": "2026-04-10T02:00:03.356317Z",
			"deleted_at": null,
			"main_name": "APT42",
			"aliases": [
				"UNC788",
				"CALANQUE"
			],
			"source_name": "MISPGALAXY:APT42",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "2ed8d590-defa-4873-b2de-b75c9b30931e",
			"created_at": "2023-01-06T13:46:38.730137Z",
			"updated_at": "2026-04-10T02:00:03.08136Z",
			"deleted_at": null,
			"main_name": "MuddyWater",
			"aliases": [
				"TEMP.Zagros",
				"Seedworm",
				"COBALT ULSTER",
				"G0069",
				"ATK51",
				"Mango Sandstorm",
				"TA450",
				"Static Kitten",
				"Boggy Serpens",
				"Earth Vetala"
			],
			"source_name": "MISPGALAXY:MuddyWater",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "0106b19a-ac99-4bc9-90b9-4647bfc5f3ce",
			"created_at": "2023-11-08T02:00:07.144995Z",
			"updated_at": "2026-04-10T02:00:03.425891Z",
			"deleted_at": null,
			"main_name": "TraderTraitor",
			"aliases": [
				"Pukchong",
				"Jade Sleet",
				"UNC4899"
			],
			"source_name": "MISPGALAXY:TraderTraitor",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "4d5f939b-aea9-4a0e-8bff-003079a261ea",
			"created_at": "2023-01-06T13:46:39.04841Z",
			"updated_at": "2026-04-10T02:00:03.196806Z",
			"deleted_at": null,
			"main_name": "APT41",
			"aliases": [
				"WICKED PANDA",
				"BRONZE EXPORT",
				"Brass Typhoon",
				"TG-2633",
				"Leopard Typhoon",
				"G0096",
				"Grayfly",
				"BARIUM",
				"BRONZE ATLAS",
				"Red Kelpie",
				"G0044",
				"Earth Baku",
				"TA415",
				"WICKED SPIDER",
				"HOODOO",
				"Winnti",
				"Double Dragon"
			],
			"source_name": "MISPGALAXY:APT41",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "99c7aace-96b1-445b-87e7-d8bdd01d5e03",
			"created_at": "2025-08-07T02:03:24.746965Z",
			"updated_at": "2026-04-10T02:00:03.640335Z",
			"deleted_at": null,
			"main_name": "COBALT ILLUSION",
			"aliases": [
				"APT35 ",
				"APT42 ",
				"Agent Serpens Palo Alto",
				"Charming Kitten ",
				"CharmingCypress ",
				"Educated Manticore Checkpoint",
				"ITG18 ",
				"Magic Hound ",
				"Mint Sandstorm sub-group ",
				"NewsBeef ",
				"Newscaster ",
				"PHOSPHORUS sub-group ",
				"TA453 ",
				"UNC788 ",
				"Yellow Garuda "
			],
			"source_name": "Secureworks:COBALT ILLUSION",
			"tools": [
				"Browser Exploitation Framework (BeEF)",
				"MagicHound Toolset",
				"PupyRAT"
			],
			"source_id": "Secureworks",
			"reports": null
		},
		{
			"id": "e698860d-57e8-4780-b7c3-41e5a8314ec0",
			"created_at": "2022-10-25T15:50:23.287929Z",
			"updated_at": "2026-04-10T02:00:05.329769Z",
			"deleted_at": null,
			"main_name": "APT41",
			"aliases": [
				"APT41",
				"Wicked Panda",
				"Brass Typhoon",
				"BARIUM"
			],
			"source_name": "MITRE:APT41",
			"tools": [
				"ASPXSpy",
				"BITSAdmin",
				"PlugX",
				"Impacket",
				"gh0st RAT",
				"netstat",
				"PowerSploit",
				"ZxShell",
				"KEYPLUG",
				"LightSpy",
				"ipconfig",
				"sqlmap",
				"China Chopper",
				"ShadowPad",
				"MESSAGETAP",
				"Mimikatz",
				"certutil",
				"njRAT",
				"Cobalt Strike",
				"pwdump",
				"BLACKCOFFEE",
				"MOPSLED",
				"ROCKBOOT",
				"dsquery",
				"Winnti for Linux",
				"DUSTTRAP",
				"Derusbi",
				"ftp"
			],
			"source_id": "MITRE",
			"reports": null
		},
		{
			"id": "156b3bc5-14b7-48e1-b19d-23aa17492621",
			"created_at": "2025-08-07T02:03:24.793494Z",
			"updated_at": "2026-04-10T02:00:03.634641Z",
			"deleted_at": null,
			"main_name": "COBALT ULSTER",
			"aliases": [
				"Boggy Serpens ",
				"ENT-11 ",
				"Earth Vetala ",
				"ITG17 ",
				"MERCURY ",
				"Mango Sandstorm ",
				"MuddyWater ",
				"STAC 1171 ",
				"Seedworm ",
				"Static Kitten ",
				"TA450 ",
				"TEMP.Zagros ",
				"UNC3313 ",
				"Yellow Nix "
			],
			"source_name": "Secureworks:COBALT ULSTER",
			"tools": [
				"CrackMapExec",
				"Empire",
				"FORELORD",
				"Koadic",
				"LaZagne",
				"Metasploit",
				"Mimikatz",
				"Plink",
				"PowerStats"
			],
			"source_id": "Secureworks",
			"reports": null
		},
		{
			"id": "2a24d664-6a72-4b4c-9f54-1553b64c453c",
			"created_at": "2025-08-07T02:03:24.553048Z",
			"updated_at": "2026-04-10T02:00:03.787296Z",
			"deleted_at": null,
			"main_name": "BRONZE ATLAS",
			"aliases": [
				"APT41 ",
				"BARIUM ",
				"Blackfly ",
				"Brass Typhoon",
				"CTG-2633",
				"Earth Baku ",
				"GREF",
				"Group 72 ",
				"Red Kelpie ",
				"TA415 ",
				"TG-2633 ",
				"Wicked Panda ",
				"Winnti"
			],
			"source_name": "Secureworks:BRONZE ATLAS",
			"tools": [
				"Acehash",
				"CCleaner v5.33 backdoor",
				"ChinaChopper",
				"Cobalt Strike",
				"DUSTPAN",
				"Dicey MSDN",
				"Dodgebox",
				"ForkPlayground",
				"HUC Proxy Malware (Htran)"
			],
			"source_id": "Secureworks",
			"reports": null
		},
		{
			"id": "0b212c43-009a-4205-a1f7-545c5e4cfdf8",
			"created_at": "2025-04-23T02:00:55.275208Z",
			"updated_at": "2026-04-10T02:00:05.270553Z",
			"deleted_at": null,
			"main_name": "APT42",
			"aliases": [
				"APT42"
			],
			"source_name": "MITRE:APT42",
			"tools": [
				"NICECURL",
				"TAMECAT"
			],
			"source_id": "MITRE",
			"reports": null
		},
		{
			"id": "730dfa6e-572d-473c-9267-ea1597d1a42b",
			"created_at": "2023-01-06T13:46:38.389985Z",
			"updated_at": "2026-04-10T02:00:02.954105Z",
			"deleted_at": null,
			"main_name": "APT28",
			"aliases": [
				"Pawn Storm",
				"ATK5",
				"Fighting Ursa",
				"Blue Athena",
				"TA422",
				"T-APT-12",
				"APT-C-20",
				"UAC-0001",
				"IRON TWILIGHT",
				"SIG40",
				"UAC-0028",
				"Sofacy",
				"BlueDelta",
				"Fancy Bear",
				"GruesomeLarch",
				"Group 74",
				"ITG05",
				"FROZENLAKE",
				"Forest Blizzard",
				"FANCY BEAR",
				"Sednit",
				"SNAKEMACKEREL",
				"Tsar Team",
				"TG-4127",
				"STRONTIUM",
				"Grizzly Steppe",
				"G0007"
			],
			"source_name": "MISPGALAXY:APT28",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "3c430d71-ab2b-4588-820a-42dd6cfc39fb",
			"created_at": "2022-10-25T16:07:23.880522Z",
			"updated_at": "2026-04-10T02:00:04.775749Z",
			"deleted_at": null,
			"main_name": "MuddyWater",
			"aliases": [
				"ATK 51",
				"Boggy Serpens",
				"Cobalt Ulster",
				"G0069",
				"ITG17",
				"Mango Sandstorm",
				"MuddyWater",
				"Operation BlackWater",
				"Operation Earth Vetala",
				"Operation Quicksand",
				"Seedworm",
				"Static Kitten",
				"T-APT-14",
				"TA450",
				"TEMP.Zagros",
				"Yellow Nix"
			],
			"source_name": "ETDA:MuddyWater",
			"tools": [
				"Agentemis",
				"BugSleep",
				"CLOUDSTATS",
				"ChromeCookiesView",
				"Cobalt Strike",
				"CobaltStrike",
				"CrackMapExec",
				"DCHSpy",
				"DELPHSTATS",
				"EmPyre",
				"EmpireProject",
				"FruityC2",
				"Koadic",
				"LOLBAS",
				"LOLBins",
				"LaZagne",
				"Living off the Land",
				"MZCookiesView",
				"Meterpreter",
				"Mimikatz",
				"MuddyC2Go",
				"MuddyRot",
				"Mudwater",
				"POWERSTATS",
				"PRB-Backdoor",
				"PhonyC2",
				"PowGoop",
				"PowerShell Empire",
				"PowerSploit",
				"Powermud",
				"QUADAGENT",
				"SHARPSTATS",
				"SSF",
				"Secure Socket Funneling",
				"Shootback",
				"Smbmap",
				"Valyria",
				"chrome-passwords",
				"cobeacon",
				"prb_backdoor"
			],
			"source_id": "ETDA",
			"reports": null
		},
		{
			"id": "dcbff54d-13ec-40b5-b3b9-b74a315669e1",
			"created_at": "2026-02-03T02:00:03.428641Z",
			"updated_at": "2026-04-10T02:00:03.937539Z",
			"deleted_at": null,
			"main_name": "UNC1069",
			"aliases": [
				"MASAN",
				"CryptoCore"
			],
			"source_name": "MISPGALAXY:UNC1069",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "e3767160-695d-4360-8b2e-d5274db3f7cd",
			"created_at": "2022-10-25T16:47:55.914348Z",
			"updated_at": "2026-04-10T02:00:03.610018Z",
			"deleted_at": null,
			"main_name": "IRON TWILIGHT",
			"aliases": [
				"APT28 ",
				"ATK5 ",
				"Blue Athena ",
				"BlueDelta ",
				"FROZENLAKE ",
				"Fancy Bear ",
				"Fighting Ursa ",
				"Forest Blizzard ",
				"GRAPHITE ",
				"Group 74 ",
				"PawnStorm ",
				"STRONTIUM ",
				"Sednit ",
				"Snakemackerel ",
				"Sofacy ",
				"TA422 ",
				"TG-4127 ",
				"Tsar Team ",
				"UAC-0001 "
			],
			"source_name": "Secureworks:IRON TWILIGHT",
			"tools": [
				"Downdelph",
				"EVILTOSS",
				"SEDUPLOADER",
				"SHARPFRONT"
			],
			"source_id": "Secureworks",
			"reports": null
		},
		{
			"id": "ae320ed7-9a63-42ed-944b-44ada7313495",
			"created_at": "2022-10-25T15:50:23.671663Z",
			"updated_at": "2026-04-10T02:00:05.283292Z",
			"deleted_at": null,
			"main_name": "APT28",
			"aliases": [
				"APT28",
				"IRON TWILIGHT",
				"SNAKEMACKEREL",
				"Group 74",
				"Sednit",
				"Sofacy",
				"Pawn Storm",
				"Fancy Bear",
				"STRONTIUM",
				"Tsar Team",
				"Threat Group-4127",
				"TG-4127",
				"Forest Blizzard",
				"FROZENLAKE",
				"GruesomeLarch"
			],
			"source_name": "MITRE:APT28",
			"tools": [
				"Wevtutil",
				"certutil",
				"Forfiles",
				"DealersChoice",
				"Mimikatz",
				"ADVSTORESHELL",
				"Komplex",
				"HIDEDRV",
				"JHUHUGIT",
				"Koadic",
				"Winexe",
				"cipher.exe",
				"XTunnel",
				"Drovorub",
				"CORESHELL",
				"OLDBAIT",
				"Downdelph",
				"XAgentOSX",
				"USBStealer",
				"Zebrocy",
				"reGeorg",
				"Fysbis",
				"LoJax"
			],
			"source_id": "MITRE",
			"reports": null
		},
		{
			"id": "d2516b8e-e74f-490d-8a15-43ad6763c7ab",
			"created_at": "2022-10-25T16:07:24.212584Z",
			"updated_at": "2026-04-10T02:00:04.900038Z",
			"deleted_at": null,
			"main_name": "Sofacy",
			"aliases": [
				"APT 28",
				"ATK 5",
				"Blue Athena",
				"BlueDelta",
				"FROZENLAKE",
				"Fancy Bear",
				"Fighting Ursa",
				"Forest Blizzard",
				"G0007",
				"Grey-Cloud",
				"Grizzly Steppe",
				"Group 74",
				"GruesomeLarch",
				"ITG05",
				"Iron Twilight",
				"Operation DealersChoice",
				"Operation Dear Joohn",
				"Operation Komplex",
				"Operation Pawn Storm",
				"Operation RoundPress",
				"Operation Russian Doll",
				"Operation Steal-It",
				"Pawn Storm",
				"SIG40",
				"Sednit",
				"Snakemackerel",
				"Sofacy",
				"Strontium",
				"T-APT-12",
				"TA422",
				"TAG-0700",
				"TAG-110",
				"TG-4127",
				"Tsar Team",
				"UAC-0028",
				"UAC-0063"
			],
			"source_name": "ETDA:Sofacy",
			"tools": [
				"ADVSTORESHELL",
				"AZZY",
				"Backdoor.SofacyX",
				"CHERRYSPY",
				"CORESHELL",
				"Carberp",
				"Computrace",
				"DealersChoice",
				"Delphacy",
				"Downdelph",
				"Downrage",
				"Drovorub",
				"EVILTOSS",
				"Foozer",
				"GAMEFISH",
				"GooseEgg",
				"Graphite",
				"HATVIBE",
				"HIDEDRV",
				"Headlace",
				"Impacket",
				"JHUHUGIT",
				"JKEYSKW",
				"Koadic",
				"Komplex",
				"LOLBAS",
				"LOLBins",
				"Living off the Land",
				"LoJack",
				"LoJax",
				"MASEPIE",
				"Mimikatz",
				"NETUI",
				"Nimcy",
				"OCEANMAP",
				"OLDBAIT",
				"PocoDown",
				"PocoDownloader",
				"Popr-d30",
				"ProcDump",
				"PythocyDbg",
				"SMBExec",
				"SOURFACE",
				"SPLM",
				"STEELHOOK",
				"Sasfis",
				"Sedkit",
				"Sednit",
				"Sedreco",
				"Seduploader",
				"Shunnael",
				"SkinnyBoy",
				"Sofacy",
				"SofacyCarberp",
				"SpiderLabs Responder",
				"Trojan.Shunnael",
				"Trojan.Sofacy",
				"USB Stealer",
				"USBStealer",
				"VPNFilter",
				"Win32/USBStealer",
				"WinIDS",
				"Winexe",
				"X-Agent",
				"X-Tunnel",
				"XAPS",
				"XTunnel",
				"Xagent",
				"Zebrocy",
				"Zekapab",
				"carberplike",
				"certutil",
				"certutil.exe",
				"fysbis",
				"webhp"
			],
			"source_id": "ETDA",
			"reports": null
		},
		{
			"id": "f32df445-9fb4-4234-99e0-3561f6498e4e",
			"created_at": "2022-10-25T16:07:23.756373Z",
			"updated_at": "2026-04-10T02:00:04.739611Z",
			"deleted_at": null,
			"main_name": "Lazarus Group",
			"aliases": [
				"APT-C-26",
				"ATK 3",
				"Appleworm",
				"Citrine Sleet",
				"DEV-0139",
				"Diamond Sleet",
				"G0032",
				"Gleaming Pisces",
				"Gods Apostles",
				"Gods Disciples",
				"Group 77",
				"Guardians of Peace",
				"Hastati Group",
				"Hidden Cobra",
				"ITG03",
				"Jade Sleet",
				"Labyrinth Chollima",
				"Lazarus Group",
				"NewRomanic Cyber Army Team",
				"Operation 99",
				"Operation AppleJeus",
				"Operation AppleJeus sequel",
				"Operation Blockbuster: Breach of Sony Pictures Entertainment",
				"Operation CryptoCore",
				"Operation Dream Job",
				"Operation Dream Magic",
				"Operation Flame",
				"Operation GhostSecret",
				"Operation In(ter)caption",
				"Operation LolZarus",
				"Operation Marstech Mayhem",
				"Operation No Pineapple!",
				"Operation North Star",
				"Operation Phantom Circuit",
				"Operation Sharpshooter",
				"Operation SyncHole",
				"Operation Ten Days of Rain / DarkSeoul",
				"Operation Troy",
				"SectorA01",
				"Slow Pisces",
				"TA404",
				"TraderTraitor",
				"UNC2970",
				"UNC4034",
				"UNC4736",
				"UNC4899",
				"UNC577",
				"Whois Hacking Team"
			],
			"source_name": "ETDA:Lazarus Group",
			"tools": [
				"3CX Backdoor",
				"3Rat Client",
				"3proxy",
				"AIRDRY",
				"ARTFULPIE",
				"ATMDtrack",
				"AlphaNC",
				"Alreay",
				"Andaratm",
				"AngryRebel",
				"AppleJeus",
				"Aryan",
				"AuditCred",
				"BADCALL",
				"BISTROMATH",
				"BLINDINGCAN",
				"BTC Changer",
				"BUFFETLINE",
				"BanSwift",
				"Bankshot",
				"Bitrep",
				"Bitsran",
				"BlindToad",
				"Bookcode",
				"BootWreck",
				"BottomLoader",
				"Brambul",
				"BravoNC",
				"Breut",
				"COLDCAT",
				"COPPERHEDGE",
				"CROWDEDFLOUNDER",
				"Castov",
				"CheeseTray",
				"CleanToad",
				"ClientTraficForwarder",
				"CollectionRAT",
				"Concealment Troy",
				"Contopee",
				"CookieTime",
				"Cyruslish",
				"DAVESHELL",
				"DBLL Dropper",
				"DLRAT",
				"DRATzarus",
				"DRATzarus RAT",
				"Dacls",
				"Dacls RAT",
				"DarkComet",
				"DarkKomet",
				"DeltaCharlie",
				"DeltaNC",
				"Dembr",
				"Destover",
				"DoublePulsar",
				"Dozer",
				"Dtrack",
				"Duuzer",
				"DyePack",
				"ECCENTRICBANDWAGON",
				"ELECTRICFISH",
				"Escad",
				"EternalBlue",
				"FALLCHILL",
				"FYNLOS",
				"FallChill RAT",
				"Farfli",
				"Fimlis",
				"FoggyBrass",
				"FudModule",
				"Fynloski",
				"Gh0st RAT",
				"Ghost RAT",
				"Gopuram",
				"HARDRAIN",
				"HIDDEN COBRA RAT/Worm",
				"HLOADER",
				"HOOKSHOT",
				"HOPLIGHT",
				"HOTCROISSANT",
				"HOTWAX",
				"HTTP Troy",
				"Hawup",
				"Hawup RAT",
				"Hermes",
				"HotCroissant",
				"HotelAlfa",
				"Hotwax",
				"HtDnDownLoader",
				"Http Dr0pper",
				"ICONICSTEALER",
				"Joanap",
				"Jokra",
				"KANDYKORN",
				"KEYMARBLE",
				"Kaos",
				"KillDisk",
				"KillMBR",
				"Koredos",
				"Krademok",
				"LIGHTSHIFT",
				"LIGHTSHOW",
				"LOLBAS",
				"LOLBins",
				"Lazarus",
				"LightlessCan",
				"Living off the Land",
				"MATA",
				"MBRkiller",
				"MagicRAT",
				"Manuscrypt",
				"Mimail",
				"Mimikatz",
				"Moudour",
				"Mydoom",
				"Mydoor",
				"Mytob",
				"NACHOCHEESE",
				"NachoCheese",
				"NestEgg",
				"NickelLoader",
				"NineRAT",
				"Novarg",
				"NukeSped",
				"OpBlockBuster",
				"PCRat",
				"PEBBLEDASH",
				"PLANKWALK",
				"POOLRAT",
				"PSLogger",
				"PhanDoor",
				"Plink",
				"PondRAT",
				"PowerBrace",
				"PowerRatankba",
				"PowerShell RAT",
				"PowerSpritz",
				"PowerTask",
				"Preft",
				"ProcDump",
				"Proxysvc",
				"PuTTY Link",
				"QUICKRIDE",
				"QUICKRIDE.POWER",
				"Quickcafe",
				"QuiteRAT",
				"R-C1",
				"ROptimizer",
				"Ratabanka",
				"RatabankaPOS",
				"Ratankba",
				"RatankbaPOS",
				"RawDisk",
				"RedShawl",
				"Rifdoor",
				"Rising Sun",
				"Romeo-CoreOne",
				"RomeoAlfa",
				"RomeoBravo",
				"RomeoCharlie",
				"RomeoCore",
				"RomeoDelta",
				"RomeoEcho",
				"RomeoFoxtrot",
				"RomeoGolf",
				"RomeoHotel",
				"RomeoMike",
				"RomeoNovember",
				"RomeoWhiskey",
				"Romeos",
				"RustBucket",
				"SHADYCAT",
				"SHARPKNOT",
				"SIGFLIP",
				"SIMPLESEA",
				"SLICKSHOES",
				"SORRYBRUTE",
				"SUDDENICON",
				"SUGARLOADER",
				"SheepRAT",
				"SierraAlfa",
				"SierraBravo",
				"SierraCharlie",
				"SierraJuliett-MikeOne",
				"SierraJuliett-MikeTwo",
				"SimpleTea",
				"SimplexTea",
				"SmallTiger",
				"Stunnel",
				"TAINTEDSCRIBE",
				"TAXHAUL",
				"TFlower",
				"TOUCHKEY",
				"TOUCHMOVE",
				"TOUCHSHIFT",
				"TOUCHSHOT",
				"TWOPENCE",
				"TYPEFRAME",
				"Tdrop",
				"Tdrop2",
				"ThreatNeedle",
				"Tiger RAT",
				"TigerRAT",
				"Trojan Manuscript",
				"Troy",
				"TroyRAT",
				"VEILEDSIGNAL",
				"VHD",
				"VHD Ransomware",
				"VIVACIOUSGIFT",
				"VSingle",
				"ValeforBeta",
				"Volgmer",
				"Vyveva",
				"W1_RAT",
				"Wana Decrypt0r",
				"WanaCry",
				"WanaCrypt",
				"WanaCrypt0r",
				"WannaCry",
				"WannaCrypt",
				"WannaCryptor",
				"WbBot",
				"Wcry",
				"Win32/KillDisk.NBB",
				"Win32/KillDisk.NBC",
				"Win32/KillDisk.NBD",
				"Win32/KillDisk.NBH",
				"Win32/KillDisk.NBI",
				"WinorDLL64",
				"Winsec",
				"WolfRAT",
				"Wormhole",
				"YamaBot",
				"Yort",
				"ZetaNile",
				"concealment_troy",
				"http_troy",
				"httpdr0pper",
				"httpdropper",
				"klovbot",
				"sRDI"
			],
			"source_id": "ETDA",
			"reports": null
		}
	],
	"ts_created_at": 1775434904,
	"ts_updated_at": 1775792300,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/0bb772498f35bee9c9b361a0e6dbcc210bdec0e4.pdf",
		"text": "https://archive.orkl.eu/0bb772498f35bee9c9b361a0e6dbcc210bdec0e4.txt",
		"img": "https://archive.orkl.eu/0bb772498f35bee9c9b361a0e6dbcc210bdec0e4.jpg"
	}
}