{
	"id": "3846713c-31a2-4341-9474-883c5acfc3de",
	"created_at": "2026-04-06T00:16:21.130523Z",
	"updated_at": "2026-04-10T13:12:53.412733Z",
	"deleted_at": null,
	"sha1_hash": "18e8492bf01604e0b0fff23c3dc0cf99d841d3d8",
	"title": "Behind the scenes in the Expel SOC: Alert-to-fix in AWS",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 1759607,
	"plain_text": "Behind the scenes in the Expel SOC: Alert-to-fix in AWS\r\nBy Jon Hencinski, Anthony Randazzo, Sam Lipton, Lori Easterly\r\nPublished: 2020-07-28 · Archived: 2026-04-05 20:46:18 UTC\r\nOver the July 4th holiday weekend our SOC spotted a coin-mining attack in a customer’s Amazon Web Services\r\n(AWS) environment. The attacker compromised the root IAM user access key and used it to enumerate the\r\nenvironment and spin up ten (10) c5.4xlarge EC2s to mine Monero.\r\nWhile this was just a coin miner, it was root key exposure. The situation could have easily gotten out of control\r\npretty quickly. It took our SOC 37 minutes to go from alert-to-fix. That’s 37 minutes to triage the initial lead (a\r\ncustom AWS rule using CloudTrail logs), declare an incident and tell our customer how to stop the attack.\r\nJon’s take: Alert-to-fix in 37 minutes is quite good. Recent industry reporting indicates that most incidents\r\nare contained on a time basis measured in days not minutes. Our target is that 75 percent of the time we go\r\nfrom alert-to-fix in less than 30 minutes. Anything above that automatically goes through a review process\r\nthat we’ll talk about more in a bit.\r\nHow’d we pull it off so quickly? Teamwork.\r\nWe get a lot of questions about what detection and response looks like in AWS, so we thought this would be a\r\ngreat opportunity to take you behind the scenes. In this post we’ll walk you through the process from alert-to-fix\r\nhttps://expel.io/blog/behind-the-scenes-expel-soc-alert-aws/\r\nPage 1 of 10\n\nin AWS over a holiday weekend. 
You’ll hear from the SOC analysts and Global Response Team who worked on the incident.\r\nBefore we tell you how it went down, here’s the high-level play-by-play:\r\nTriage, investigation and remediation timeline\r\nNow we’ll let the team tell the story.\r\nSaturday, July 4, 2020\r\nInitial Lead: 12:19:37 AM ET\r\nBy Sam Lipton and Lori Easterly – SOC analysts\r\nOur shift started at 8:45 pm ET on Friday, July 3. Like many organizations, we’ve been working fully remotely since the middle of March. We jumped on the Zoom call for shift handoff, reviewed open investigations, weekly alert trending and general info for situational awareness. Things were (seemingly) calm.\r\nWe anticipated a quieter shift. On a typical Friday night into Saturday morning, we’ll handle about 100 alerts. It’s not uncommon for us to spot an incident on a Friday evening/Saturday morning, but it’s not the norm. It’s usually slower on the weekend; there are fewer active users and devices.\r\nOur shift started as we expected, slow and steady. Then suddenly, as is the case in security operations, that all changed.\r\nWe spotted an AWS alert based on CloudTrail logs telling us that EC2 SSH key pairs were created for the root access key from a suspicious source IP address using the AWS Golang SDK:\r\nInitial lead into the AWS coin-mining incident\r\nThe source IP address in question was allocated to a cloud hosting provider that we hadn’t previously seen create SSH key pairs via the ImportKeyPair API in this customer’s AWS environment (especially from the root account!).
The SSH key pair alert was followed shortly thereafter by AWS GuardDuty alerts for an EC2 instance communicating with a cryptocurrency server (monerohash[.]com on TCP port 7777).\r\nWe jumped into the SIEM, queried CloudTrail logs and quickly found that the EC2 instances communicating with monerohash[.]com were the same EC2 instances associated with the SSH key pairs that were just detected.\r\nCorroborating AWS GuardDuty alert\r\nAs our CTO Peter Silberman says, it was time to buckle up and “pour some Go Fast” on this.\r\nWe’ve talked about our Expel robots in a previous post. As a quick refresher, our robot Ruxie (yes, we give our robots names) automates investigative workflows to surface more details to our analysts. In this event, Ruxie pulled up the API calls made by the principal (in this context, “interesting” is mostly anything that isn’t Get*, List*, Describe* or Head*).\r\nAWS alert decision support – Tell me what other interesting API calls this AWS principal made\r\nThis made it easy for us to understand what happened:\r\nThe root AWS access key was potentially compromised.\r\nThe root access key was used to access the AWS environment from a cloud hosting environment using the AWS Golang SDK.
It was then used to create SSH keys, spin up EC2 instances via the RunInstances API call and create new security groups, likely to allow inbound access from the Internet.\r\nWe inferred that the root access key was likely compromised and used to deploy coin miners.\r\nYep, time to escalate this to an incident, take a deeper look, engage the customer and notify the on-call Global Response Team Incident Handler.\r\nPagerDuty escalation to Global Response Team: 12:37:00 AM ET\r\nOur Global Response Team (GRT) consists of senior and principal-level analysts who serve as incident responders for critical incidents. AWS root key exposure introduces a high level of risk for any customer, so we made the call to engage the GRT on call using PagerDuty. The escalation goes out to a Slack channel that’s monitored by the management team to track utilization.\r\nPagerDuty escalation out to the GRT on-call\r\nIncident declaration: 12:39:21 AM ET\r\nA few minutes after the initial lead landed in Expel Workbench – 19 minutes to be exact – we notified the customer that there was a critical security incident in their AWS environment involving the root access key, which had been used to spin up new EC2 instances to perform coin mining. Simultaneously, we jumped into our SIEM and queried CloudTrail logs to help answer:\r\nDid the attacker compromise any other AWS accounts?\r\nHow long has the attacker had access?\r\nWhat did the attacker do with the access?\r\nHow did the attacker compromise the root AWS access key?\r\nAt 12:56:43 AM ET we provided the first remediation actions to our customer to help contain the incident in AWS based on what we knew.
This included:\r\nSteps on how to delete and remove the stolen root access key; and\r\nInstructions on how to terminate the EC2 instances spun up by the attacker.\r\nWe felt pretty good at this point – we had a good understanding of what happened. The customer acknowledged the critical incident and started working on remediation, while the GRT Incident Handler was inbound to perform a risk assessment.\r\nAlert-to-fix in 37 minutes. Not a bad start to our shift.\r\nGlobal Response Team enters the chat: 12:42:00 AM ET\r\nBy Anthony Randazzo – Global Response Team Lead\r\nI usually keep my phone on silent, but PagerDuty has a vCard that allows you to set an emergency contact. This bypasses your phone’s notification settings so that if you receive a call from this contact, your phone rings (whether it’s in silent mode or not).\r\nWe call it the SOC “bat phone.”\r\nThis wasn’t the first time I was paged in the middle of the night. I grabbed my phone, saw the PagerDuty icon and answered.\r\nThere’s a lot of trust in our SOC. I knew immediately that if I was being paged, the shift analysts were confident that there was something brewing that needed my attention.\r\nI made my way downstairs to my office and hopped on Zoom to get a quick debrief from the analysts about what alerts came in and what they were able to discover through their initial response. Now that I’m finally awake, it’s time to surgically determine the full extent of what happened.\r\nAs the GRT incident handler, it’s important not only to perform a proper technical response to the incident, but also to understand the risk.
That way, we can thoroughly communicate with our customer at any given time throughout the incident, and continue to do so until we’re able to declare that the incident is fully contained.\r\nAt this point, we have the answers to most of our investigative questions, courtesy of the SOC shift analysts:\r\nDid the attacker compromise any other AWS accounts? There is no evidence of this.\r\nHow long has the attacker had access? This access key was not observed in use for the previous 30 days.\r\nWhat did the attacker do with the access? The attacker generated a bunch of EC2 instances and enabled an ingress rule to SSH in and install CoinMiner malware.\r\nHow did the attacker compromise the root AWS access key? We don’t know and may never know.\r\nMy biggest concern at this point was communicating to the customer that the access key remediation needed to occur as soon as possible. While this attack was an automated coin-miner bot, there was still an unauthorized attacker with an intent of financial gain lurking somewhere, holding root access to an AWS account containing proprietary and potentially sensitive information.\r\nThere are a lot of “what ifs” floating around in my head.
What if the attacker realizes they have a root access key?\r\nWhat if the attacker decides to start copying our customer’s EBS volumes or RDS snapshots?\r\nIncident contained: 02:00:00 AM ET\r\nBy 2:00 am ET we had the incident fully scoped, which meant we understood:\r\nWhen the attack started\r\nHow many IAM principals the attacker compromised\r\nWhich AWS EC2 instances the attacker compromised\r\nThe IP addresses used by the attacker to access AWS (ASN: AS135629)\r\nThe domain and IP address resolutions to the coin-mining pool (monerohash[.]com:7777)\r\nAnd the API calls made by the attacker using the root access key\r\nAt this point I focused on using what we understood about the attack to deliver complete remediation steps to our customer. This included:\r\n1. A full list of all EC2 instances spun up by the attacker, with details on how to terminate them\r\n2. AWS security groups created by the attacker and how to remove them\r\n3. 
Checking in on the status of the compromised root access key\r\nI provided a summary of everything we knew about the attack to our customer, did one last review of the remediation steps for accuracy and chatted with the SOC over Zoom to make sure we set the team up for success if the attacker came back.\r\nFor reference, below are the MITRE ATT\u0026CK Enterprise and Cloud Tactics observed during Expel’s response:\r\nInitial Access: Valid Accounts\r\nExecution: Scripting\r\nPersistence: Valid Accounts, Redundant Access\r\nCommand and Control: Uncommonly Used Port\r\nWith the incident now under control, I resolved the PagerDuty escalation and called it a morning.\r\nPagerDuty escalation resolution at 2:07 am ET\r\nTuesday, July 7th\r\nBy Jon Hencinski – Director of Global Security Operations\r\nCritical incident hot wash: 10:00:00 AM ET\r\nFor every critical incident we’ll perform a lightweight 15-minute “hot wash.” We use this time to come together as a team to reflect and learn. NIST has some opinions on what you should ask; at Expel we mainly focus on asking ourselves:\r\nHow quickly did we detect and respond? Was this within our internal target?\r\nDid we provide the right remediation actions to our customer?\r\nDid we follow the process, and was it effective?\r\nDid we fully scope the incident?\r\nIs any training required?\r\nWere we effective? If not, what steps do we need to take to improve?\r\nIf you’re looking for an easy way to get started with a repeatable incident hot wash, steal this:\r\nIncident hot wash document template. Steal me!\r\nThe bottom line: celebrate what went well and don’t be afraid to talk about where you need to improve.
Each incident is an opportunity to advance your skills and train your investigative muscle.\r\nLessons Learned\r\nWe were able to help our customer get the situation under control pretty quickly, but there were still some really interesting observations:\r\nIt’s entirely possible that the root access key was scraped and passed off to the bot to spin up miners right before this was detected. We didn’t see any CLI, console or other interactive activity, fortunately.\r\nThe attacker definitely wasn’t worried about setting off any sort of billing or performance alarms, given the size of these EC2s.\r\nThis was the first time we saw an attacker bring their own SSH key pairs that were uniquely named. Usually we see these generated in the bot automation run via the CreateKeyPair API.\r\nThe CoinMiner was likely installed via SSH remote access (as part of the bot). We didn’t have local EC2 visibility to confirm, but an ingress rule was created in the bot automation to allow SSH from the Internet.\r\nThis was also the first time we’d observed a bot written in the AWS Golang software development kit (SDK). This is interesting because, as defenders, it’s easy to suppress alerts based on user-agents, particularly SDKs we don’t expect to be used in attacks.\r\nWe’ll apply these lessons learned, continue to improve our ability to spot evil quickly in AWS and mature our response procedures.\r\nWhile we felt good about taking 37 minutes to go from alert-to-fix in AWS in the early morning hours, especially during a holiday, we don’t plan on letting it get to our heads. We hold that highly effective SOCs are the right combination of people, tech and process.\r\nReally great security is a process; there is no end state – the work to improve is never done!\r\nDid you find this behind-the-scenes look into our detection and response process helpful?
If so, let us know and we’ll plan to continue pulling the curtain back in the future!\r\nSource: https://expel.io/blog/behind-the-scenes-expel-soc-alert-aws/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"origins": [
		"web"
	],
	"references": [
		"https://expel.io/blog/behind-the-scenes-expel-soc-alert-aws/"
	],
	"report_names": [
		"behind-the-scenes-expel-soc-alert-aws"
	],
	"threat_actors": [
		{
			"id": "d90307b6-14a9-4d0b-9156-89e453d6eb13",
			"created_at": "2022-10-25T16:07:23.773944Z",
			"updated_at": "2026-04-10T02:00:04.746188Z",
			"deleted_at": null,
			"main_name": "Lead",
			"aliases": [
				"Casper",
				"TG-3279"
			],
			"source_name": "ETDA:Lead",
			"tools": [
				"Agentemis",
				"BleDoor",
				"Cobalt Strike",
				"CobaltStrike",
				"RbDoor",
				"RibDoor",
				"Winnti",
				"cobeacon"
			],
			"source_id": "ETDA",
			"reports": null
		},
		{
			"id": "75108fc1-7f6a-450e-b024-10284f3f62bb",
			"created_at": "2024-11-01T02:00:52.756877Z",
			"updated_at": "2026-04-10T02:00:05.273746Z",
			"deleted_at": null,
			"main_name": "Play",
			"aliases": null,
			"source_name": "MITRE:Play",
			"tools": [
				"Nltest",
				"AdFind",
				"PsExec",
				"Wevtutil",
				"Cobalt Strike",
				"Playcrypt",
				"Mimikatz"
			],
			"source_id": "MITRE",
			"reports": null
		}
	],
	"ts_created_at": 1775434581,
	"ts_updated_at": 1775826773,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/18e8492bf01604e0b0fff23c3dc0cf99d841d3d8.pdf",
		"text": "https://archive.orkl.eu/18e8492bf01604e0b0fff23c3dc0cf99d841d3d8.txt",
		"img": "https://archive.orkl.eu/18e8492bf01604e0b0fff23c3dc0cf99d841d3d8.jpg"
	}
}