{
	"id": "a333169d-e1e8-4415-89a9-fa64c60e1dda",
	"created_at": "2026-04-06T00:19:46.131004Z",
	"updated_at": "2026-04-10T03:36:06.914084Z",
	"deleted_at": null,
	"sha1_hash": "fb23aa3fc7c4f0f5af9cbf87b085e5438f0a4c60",
	"title": "Pointer: Hunting Cobalt Strike globally",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 2125940,
	"plain_text": "Pointer: Hunting Cobalt Strike globally\r\nBy Pavel Shabarkin\r\nPublished: 2021-11-21 · Archived: 2026-04-05 22:28:12 UTC\r\n14 min read\r\nSep 16, 2021\r\nIntroduction\r\nCobalt Strike is a commercial, full-featured remote access tool that bills itself as “adversary simulation software designed to execute targeted attacks and emulate the post-exploitation actions of advanced threat actors”. Cobalt Strike’s interactive post-exploitation capabilities cover the full range of ATT\u0026CK tactics, all executed within a single, integrated system.\r\nIn addition to its own capabilities, Cobalt Strike leverages other well-known tools such as Metasploit and Mimikatz.\r\nCobalt Strike is a legitimate security tool used by penetration testers and red teamers to emulate threat actor activity in a network. Lately, however, the tool has been hijacked and abused by cybercriminals.\r\nOur goal was to develop a tool that helps identify default Cobalt Strike servers exposed on the Internet. We strongly believe that understanding and mapping adversaries and their use of Cobalt Strike can improve defenses and strengthen an organization’s detection \u0026 response controls. Blocking, mapping and tracking adversaries is a good start.\r\nhttps://medium.com/@shabarkin/pointer-hunting-cobalt-strike-globally-a334ac50619a\r\nPage 1 of 17\r\nPointer logo\r\nTool Development\r\nA review of existing Cobalt Strike detection tools and public research showed that current tools can only scan a small number of potential Cobalt Strike instances (1–5k hosts). Our goal was to increase the scanning capabilities and validate several million potential Cobalt Strike instances in less than an hour.\r\nTo achieve this goal within a reasonable timeframe and on a small budget, it was necessary to adapt and scale the current understanding of the Cobalt Strike hunting methodology. 
The following content assumes an understanding of what Cobalt Strike is and how to locate and identify Cobalt Strike instances. Before going into the details of the tool and its components, let’s take a look at the general architecture.\r\nArchitecture review\r\nScanning a large number of hosts in a reasonable amount of time does not scale on a single machine: it runs into physical, cost and power limitations. Unless you have a great home lab and the bandwidth to support it, personal computing cannot really solve the scaling problem, so the decision was made to use AWS to scale affordably and achieve the desired goals.\r\nGeneral architecture review\r\nThe tool is built on and heavily uses AWS SQS, Lambda and DynamoDB.\r\nThe Pointer client parses a local JSON file with a list of IPs, optimally splits them into packets (10–20 IPs), and then adds the packets to the SQS queue for processing.\r\nThe SQS queue is set up to invoke a Lambda function for each packet in the queue. The Lambda function (Pointer server) performs the actual scanning of the provided packet of IPs and saves results to DynamoDB.\r\nIn cases where Lambda fails or throws an error, packets are returned to the SQS queue to wait for a retry. If a packet fails a second time, a new Lambda function is launched that logs the failed packet to DynamoDB for further analysis and rescans each IP individually to locate the failed IPs.\r\nCode Review\r\nThe scan functionality of the “Pointer server” consists of 4 parts:\r\n1. Port Scanning (Port Workers)\r\n2. HTTP Webservice scan (HTTP Workers)\r\nCertificate parsing\r\nJARM parsing\r\n3. HTTPS Webservice scan (HTTPS Workers)\r\n4. Beacon Parsing (Beacon Workers)\r\nThe tool was designed with an asynchronous approach to IP processing. 
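Before diving into the Lambda internals, the client-side packet splitting mentioned above can be sketched as follows. This is a minimal illustration under my own naming (chunkIPs is not the tool's actual function); pushing each packet to SQS via the AWS SDK's SendMessage is assumed rather than shown:

```go
package main

import "fmt"

// chunkIPs splits a flat target list into packets of at most size n,
// mirroring how the Pointer client portions IPs before queueing them.
func chunkIPs(ips []string, n int) [][]string {
	var packets [][]string
	for len(ips) > 0 {
		m := n
		if len(ips) < m {
			m = len(ips)
		}
		packets = append(packets, ips[:m])
		ips = ips[m:]
	}
	return packets
}

func main() {
	ips := make([]string, 45)
	for i := range ips {
		ips[i] = fmt.Sprintf("203.0.113.%d", i)
	}
	packets := chunkIPs(ips, 20)
	fmt.Println(len(packets), len(packets[0]), len(packets[2])) // 3 20 5
}
```

Each packet then becomes one SQS message, which is what lets a failed Lambda invocation retry only its own 10–20 targets.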
Each scan probe stands as an independent unit, which is then processed by a Worker. The probes include port scanning, Certificate Issuer parsing, JARM parsing, webservice scanning, and Beacon parsing. Once a probe completes, its result is sent to the corresponding controller, which writes the result to the global map. After all scan workers are done, the data is sorted and ordered before being combined into the Target structure. Overall, this reduces delays, since each service (ip:port) has its own scan pipeline.\r\nInternal architecture of Lambda function\r\nDetailed review\r\nInitially, the Lambda function launches the Port, HTTP, HTTPS, and Beacon workers. The number of workers depends on the level of internal concurrency (a controllable CLI parameter). Each type of worker is allocated resources in proportion to the number of probes it performs on average.\r\nEach targeted IP address is scanned on 27 predefined ports; this list includes common ports on which Cobalt Strike beacons are hosted. The “launcher” sends each service (ip:port) to the Port Workers through the portChannel Golang channel.\r\nCode snippet of the Service Launcher\r\nPort workers then scan the individual ports. If a port is open, the worker sends the service to the HTTP Worker and Output controller through the httpChannel and outputChannel Golang channels. 
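A single port worker in this style might look like the sketch below. Channel and function names are illustrative rather than Pointer's actual code, and a throwaway local listener stands in for a real target so the example is self-contained:

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// portWorker dials each service ("ip:port") received on in. Open services
// are forwarded to the HTTP worker and, tagged "Service|", to the output
// controller, mirroring the fan-out described above. This sketch runs a
// single worker, so it also closes the downstream channels when done.
func portWorker(in <-chan string, httpCh, outCh chan<- string) {
	for svc := range in {
		conn, err := net.DialTimeout("tcp", svc, 2*time.Second) // the tuned Port timeout
		if err != nil {
			continue // closed or filtered port: nothing to report
		}
		conn.Close()
		httpCh <- svc
		outCh <- "Service|" + svc
	}
	close(httpCh)
	close(outCh)
}

func main() {
	// A throwaway local listener stands in for a live target.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		for {
			c, err := ln.Accept()
			if err != nil {
				return
			}
			c.Close()
		}
	}()

	in := make(chan string, 1)
	httpCh := make(chan string, 1)
	outCh := make(chan string, 1)
	in <- ln.Addr().String()
	close(in)
	portWorker(in, httpCh, outCh)
	fmt.Println(strings.Split(<-outCh, ":")[0]) // Service|127.0.0.1
}
```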
If the port is closed, the Port Worker exits the function.\r\nCode snippet of Port Worker\r\nAll workers send their results through a single Golang channel, outputChannel , which are then processed by the output controller and saved to the global map (the Sorter struct).\r\nEach result produced by the workers has its own type tag (e.g. \"Service|\" , \"Certificate|\" , \"Jarm|\" , …), ensuring that the ValidateOutput function can sort the results by type.\r\nCode snippet of Output Controller\r\nThe HTTP worker waits for an IP and port tuple (service) to be provided by the Port Worker via the httpChannel . If the HTTP Worker receives port 50050, it attempts the following actions:\r\nParse the certificate issuer -\u003e identify the default self-signed Cobalt Strike certificate\r\nParse the JARM signature -\u003e detect malicious JARM signatures\r\nFor other services, it performs a web request to analyse response behaviour. A Beacon’s HTTP/HTTPS indicators are controlled by a malleable C2 profile; if the server uses the default malleable C2 profile, it responds with a 404 status code and 0 content-length for requests made to the root web endpoint. 
(http://domain.com/)\r\nIf the request to the targeted web service fails, the HTTP Worker sends the service through the httpsChannel channel to the HTTPS Worker, which performs the web request over the HTTPS protocol.\r\nCode snippet of the HTTP Worker\r\nInspired by the “Analyzing Cobalt Strike for Fun and Profit” research and its accompanying Python tool for Cobalt Strike beacon parsing, we integrated similar logic into our tool (written in Golang).\r\nTo the researcher who worked out how the beacon is packed, how to parse it, how to decrypt it, and how to work with it in general: you did a great job, a big thank you!\r\nAll identified web services configured with the default malleable C2 profile are sent to the Beacon Workers. The Beacon Worker attempts to parse the beacon config. 
If the parsing succeeds, the Beacon Worker sends the CobaltStrikeBeaconStruct struct to the Beacon controller through the beaconStructChannel channel, and the beacon location URI to the output controller through the outputChannel channel.\r\nCode snippet of Beacon Worker\r\nCode snippet of Beacon Controller\r\nWhen all workers finish their scans, the Sort method maps all scan results gathered by the output controller into an array of the CobaltStrikeStruct type:\r\nCode snippet of CobaltStrikeStruct data type\r\nThe Probability field is assigned when the Voter function calls the internal method Vote for each CobaltStrikeStruct object within the array.\r\nIf the certificate issuer matches the default Cobalt Strike self-signed certificate, the Vote method assigns a 100% probability that it is a Cobalt Strike server. The same applies if the Beacon Worker successfully parses the beacon config hosted on the web service.\r\nA default web service response and a malicious JARM signature alone cannot give us the same confidence, because other web services can also respond with 0 content length and a 404 status code, and unrelated servers can be configured with the same TLS options (which is what a JARM signature captures). If the Vote method matches only those two indicators, it assigns a 70% probability to the object.\r\nIf none of these indicators match, it is probably not a Cobalt Strike server. 
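The voting logic can be sketched roughly as below; the struct and its fields are illustrative simplifications, not the tool's actual CobaltStrikeStruct definition:

```go
package main

import "fmt"

// indicators is a hypothetical, simplified view of what the Vote method
// weighs; the field names are mine, not the tool's.
type indicators struct {
	DefaultCert   bool // certificate issuer matches the default self-signed cert
	BeaconParsed  bool // a beacon config was successfully extracted
	MaliciousJARM bool // JARM matches a known Cobalt Strike signature
	DefaultReply  bool // root endpoint answered 404 with zero content length
}

// vote mirrors the scoring described above: a default certificate or a
// parsed beacon is conclusive (100%), while a malicious JARM plus default
// response behaviour alone earns only 70%, since both can occur on
// unrelated servers.
func vote(in indicators) int {
	switch {
	case in.DefaultCert || in.BeaconParsed:
		return 100
	case in.MaliciousJARM && in.DefaultReply:
		return 70
	default:
		return 0
	}
}

func main() {
	fmt.Println(vote(indicators{BeaconParsed: true}))                     // 100
	fmt.Println(vote(indicators{MaliciousJARM: true, DefaultReply: true})) // 70
	fmt.Println(vote(indicators{MaliciousJARM: true}))                    // 0
}
```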
But, again, this tool targets only Cobalt Strike servers with default malleable C2 profile configurations.\r\nCode snippet of the Vote method\r\nDynamoDB Component\r\nWe chose DynamoDB to store scan results. DynamoDB can handle more than 10 trillion requests per day and supports peaks of more than 20 million requests per second. That is exactly what we needed! We wanted to scan 20,000–25,000 targets per 60 seconds, which is about 40k–50k write requests to the database.\r\nIn the first implementation, the Output and Beacon workers exceeded DynamoDB rate limits because they performed a write request to the DynamoDB table for each target object separately, and, in addition, we used the default DynamoDB configuration. The default capacity configuration could not handle that many requests, but increasing the capacity would mean paying more for autoscaling during constant scanning. Further examination of the AWS documentation revealed that AWS provides a batch write to DynamoDB. Each Lambda invocation has 10–20 targets to scan (depending on the packet size), so this should reduce the number of requests to the DynamoDB tables by a factor of 10–20.\r\nWe found that DynamoDB’s BatchWriteItemInput allows writing up to 25 items and up to 16 MB in one request. The batch write implementation significantly decreased the number of requests and removed the rate limiting issue at the default configuration level. We did not have to pay for unnecessary autoscaling.\r\nThis method has the disadvantage that if one of the items in the batch fails to be written, the whole batch will not be saved. 
(The partition key must be unique and not already exist in the table, but this is suitable in our case, as we filter our targets by unique values before launching the scans.)\r\nCode snippet of the WriteBatchTarget function\r\nAlso, for unpredictable cases where the default capacity configuration cannot handle a large number of requests, we configure autoscaling:\r\nAWS Console → DynamoDB → choose the Table → Edit Capacity → increase Read / Write Capacity to 10–15.\r\nTo enable autoscaling, we must grant the required permissions to the DynamoDB service role.\r\nLambda Component\r\nAWS Lambda is an interesting service. We wanted to try Lambda as the core service for our scans, but we did not want a crazy bill at the end of the month, so we had several things to figure out:\r\n1. How much memory to allocate for Lambda execution\r\n2. What default timeout to set for Lambda execution\r\n3. How to manage Lambda concurrency\r\n4. What internal concurrency would suit our model\r\n5. What request timeouts would suit our model\r\n6. What packet size would suit our model\r\nAnd the most difficult question: how to set everything up so that it would be efficient, cheap, and have a minimal loss rate?\r\nLambda memory allocation\r\nIt was interesting to research how AWS allocates memory and CPU for Lambda functions, because it is physically impossible to divide 1/10 of a CPU. 
AWS can, however, allocate 1/10 of the CPU’s time to a single function, and ten such functions can share the same CPU core (check the research that explains how AWS Lambda allocates CPU).\r\nThe only controllable parameter in AWS for Lambda functions is memory usage:\r\nExample of the memory configuration in AWS Console\r\nWe designed our model with a multithreaded architecture: the more cores we have, the better the performance we can potentially obtain. The nasty catch is the price of this luxury.\r\nWe cannot directly control the number of cores we want to use; CPU performance scales with the memory configuration. Lambda functions always expose 2 vCPU cores regardless of the allocated memory, and the additional cores are throttled at lower memory configurations. By increasing the memory allocation, we obtain more usable cores. I found research showing how the number of vCPUs and the multithreaded computation power vary depending on the memory configuration.\r\nThe price of an AWS Lambda function is based on the function runtime (in milliseconds) multiplied by the allocated memory (fixed prices per MB). So, allocating 3008 MB for the Lambda function we get 2 vCPUs, while allocating 3009 MB we get 3 vCPUs. By allocating 3009 MB of memory, we could gain more performance at almost the same price.\r\nAccording to the research, we get a better multithreading performance gain with each spike transition (jump in cores). 
But for our model we do not need more than 3 cores, so 3009 MB is enough for our purposes.\r\nCorrelation between Lambda memory configuration and number of cores\r\nBy the way, we decided to measure the computational power ourselves, and practical tests showed that the power spike between 3008–3009 MB is bigger than the one between 5307–5308 MB. This once again confirms that the 3009 MB memory configuration is the best choice for us.\r\nCode snippet for measuring the multithreaded computation power\r\nInternal parameter tuning\r\nOnce we had a solid understanding of how many resources to use, we started tuning the other parameters to suit our model.\r\nIn my opinion, the lifetime of the Pointer Lambda function should not exceed 60 seconds, because otherwise it would not be a true serverless tool with easy management, resilience to errors and an autoscaled architecture.\r\nWith a memory configuration of 3009 MB and a default timeout of 60 seconds for one Lambda execution, we can scan 10–20 targets in a single packet.\r\nIf a Lambda execution fails, we do not want to rescan all the targets inside the packet again. By having fewer targets in the packet, we minimize the probability that the packet will fail. 
Therefore, the optimal size, in my opinion, is 10–20 targets. If a Lambda execution does fail, we must rescan all targets inside the packet, even those that were scanned successfully, which is another reason to keep packets small.\r\nWith the Lambda configuration parameters defined, the remaining parameters were tuned through a large number of tests:\r\nTargets per packet: 20 (items)\r\nConcurrency: 140 (items)\r\nLambda Memory: 3009 (MB)\r\nLambda Timeout: 60 (sec)\r\nHTTP timeout: 4 (sec)\r\nPort timeout: 2 (sec)\r\nBeacon timeout: 10 (sec)\r\nLambda Concurrency\r\nAWS Lambda provides autoscaling for function instances, but we simply cannot deploy as many instances as we want: we are limited by the AWS region quota (all the Lambda functions of an account share a pool of 1,000 unreserved concurrent executions). Thus, having only one deployed function, we could get 1,000 concurrent executions at the same time.\r\nDorking potential Cobalt Strike servers through Shodan, we could retrieve around 200–300k potential targets. However, we are designing the tool to scan 2M–10M targets. For example, 2M targets is about 100k packets (20 targets per packet), which means 100k Lambda function invocations. If the Lambda function is invoked 100k times, the Lambda poller would process only 1k requests at a time, and the rest would simply be throttled. So even if we increased the AWS region quota, it would not be enough.\r\nSo the question arises: how can we manage the invocation process? The answer is simple: SQS.\r\nSQS Component\r\nConfiguration\r\nAmazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. 
This means we can send all our packets to the queue and let SQS manage the Lambda invocation process. The SQS behaviour can be configured according to our needs:\r\n1. We can control the number of retries.\r\nWe can configure the maximum number of retries SQS performs if a batch of messages (packets) fails.\r\nSQS delivers messages in batches, and we can control how many messages per batch are passed to the Lambda function; with a batch size of 1, each invocation receives exactly one message.\r\nIn our model we decided that if a message fails more than once, it is sent to the Dead-Letter-Queue (DLQ). We designed the DLQ to redirect failed messages to a Lambda function with the same logic as the core one, except that before any scanning activity it writes the failed packet to the DynamoDB table and rescans each target separately.\r\n2. Visibility timeout\r\nThe visibility timeout sets the length of time during which a message received from the queue (by one Lambda function) will not be visible to other consumers. If the Lambda function fails to process and delete the message before the visibility timeout expires, the message becomes visible in the queue again.\r\n3. SQS batch size\r\nWe configured the SQS batch size to a single message.\r\nSQS \u0026 Lambda autoscaling\r\nFor standard queues, Lambda uses long polling to poll the queue until it becomes active. When messages are available, Lambda reads up to 5 batches and sends them to our Lambda function. If messages are still available, Lambda increases the number of processes reading batches by up to 60 more instances per minute. The maximum number of batches that can be processed simultaneously by the event source mapping is 1000. 
This means we reach full scanning power only after about 16 minutes of continuous scanning.\r\nResults\r\nAt the first launch, when we ran a scan of 160k targets, we were able to identify 1,700 Cobalt Strike servers and parse 1,400 of their beacon configurations within 40 minutes. The Pointer tool produces its best performance when the target set exceeds 500k. Scanning 160k targets took a little longer because 1,000 concurrent Lambda executions were reached only about 30 minutes after the tool was launched. For the current implementation, the cost of scanning 250k targets is about $20; however, we are looking for a solution that will make it cheaper.\r\nTargets table [sample]\r\nWe developed 2 tables: the first for identified Cobalt Strike servers, and the second for parsed beacon configurations. Identified Cobalt Strike servers are described by 7 features:\r\nIP address as a unique sorting key\r\nProbability that it is an actual Cobalt Strike server (for easier filtering)\r\nJARM signature\r\nCertificate Issuer\r\nOpened Ports\r\nResponse behaviour\r\nLinks to the beacon configurations that we parsed and saved to another table\r\nHere is an example of the Cobalt Strike server table:\r\nTable of parsed Cobalt Strike targets\r\nBeacons table [sample]\r\nThe Beacon configuration table uses the uri feature as its unique sorting key, while the rest of the features are the actual parsed beacon configurations.\r\nHere is an example of the table with parsed beacon configurations:\r\nThe full versions of the tables can be found here:\r\nSpreadsheet with 1709 Cobalt Strike servers\r\nSpreadsheet with 1473 Beacon configurations\r\nData analysis\r\nWe are using the collected data to map attacker infrastructure and understand how the attackers operate 
Cobalt Strike.\r\nWe know that threat intelligence groups track specific ransomware groups with the help of watermarks. For example:\r\nSodinokibi (Watermark 452436291)\r\nAPT 27 (Watermark 305419896)\r\nBased on the beacons’ spawnto locations, blue teams can develop detection controls.\r\nLocation of servers (IP) (hosting providers)\r\nWatermarks\r\nCountries\r\nSpawn location\r\nSample of the Dork database (not completed)\r\nSource: https://medium.com/@shabarkin/pointer-hunting-cobalt-strike-globally-a334ac50619a",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"Malpedia"
	],
	"references": [
		"https://medium.com/@shabarkin/pointer-hunting-cobalt-strike-globally-a334ac50619a"
	],
	"report_names": [
		"pointer-hunting-cobalt-strike-globally-a334ac50619a"
	],
	"threat_actors": [
		{
			"id": "610a7295-3139-4f34-8cec-b3da40add480",
			"created_at": "2023-01-06T13:46:38.608142Z",
			"updated_at": "2026-04-10T02:00:03.03764Z",
			"deleted_at": null,
			"main_name": "Cobalt",
			"aliases": [
				"Cobalt Group",
				"Cobalt Gang",
				"GOLD KINGSWOOD",
				"COBALT SPIDER",
				"G0080",
				"Mule Libra"
			],
			"source_name": "MISPGALAXY:Cobalt",
			"tools": [],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "5c13338b-eaed-429a-9437-f5015aa98276",
			"created_at": "2022-10-25T16:07:23.582715Z",
			"updated_at": "2026-04-10T02:00:04.675765Z",
			"deleted_at": null,
			"main_name": "Emissary Panda",
			"aliases": [
				"APT 27",
				"ATK 15",
				"Bronze Union",
				"Budworm",
				"Circle Typhoon",
				"Earth Smilodon",
				"Emissary Panda",
				"G0027",
				"Group 35",
				"Iron Taurus",
				"Iron Tiger",
				"Linen Typhoon",
				"LuckyMouse",
				"Operation DRBControl",
				"Operation Iron Tiger",
				"Operation PZChao",
				"Operation SpoiledLegacy",
				"Operation StealthyTrident",
				"Red Phoenix",
				"TEMP.Hippo",
				"TG-3390",
				"ZipToken"
			],
			"source_name": "ETDA:Emissary Panda",
			"tools": [
				"ASPXSpy",
				"ASPXTool",
				"Agent.dhwf",
				"AngryRebel",
				"Antak",
				"CHINACHOPPER",
				"China Chopper",
				"Destroy RAT",
				"DestroyRAT",
				"FOCUSFJORD",
				"Farfli",
				"Gh0st RAT",
				"Ghost RAT",
				"HTTPBrowser",
				"HTran",
				"HUC Packet Transmit Tool",
				"HighShell",
				"HttpBrowser RAT",
				"HttpDump",
				"HyperBro",
				"HyperSSL",
				"HyperShell",
				"Kaba",
				"Korplug",
				"LOLBAS",
				"LOLBins",
				"Living off the Land",
				"Mimikatz",
				"Moudour",
				"Mydoor",
				"Nishang",
				"OwaAuth",
				"PCRat",
				"PlugX",
				"ProcDump",
				"PsExec",
				"RedDelta",
				"SEASHARPEE",
				"Sensocode",
				"SinoChopper",
				"Sogu",
				"SysUpdate",
				"TIGERPLUG",
				"TVT",
				"Thoper",
				"Token Control",
				"TokenControl",
				"TwoFace",
				"WCE",
				"Windows Credential Editor",
				"Windows Credentials Editor",
				"Xamtrav",
				"ZXShell",
				"gsecdump",
				"luckyowa"
			],
			"source_id": "ETDA",
			"reports": null
		}
	],
	"ts_created_at": 1775434786,
	"ts_updated_at": 1775792166,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/fb23aa3fc7c4f0f5af9cbf87b085e5438f0a4c60.pdf",
		"text": "https://archive.orkl.eu/fb23aa3fc7c4f0f5af9cbf87b085e5438f0a4c60.txt",
		"img": "https://archive.orkl.eu/fb23aa3fc7c4f0f5af9cbf87b085e5438f0a4c60.jpg"
	}
}