{
	"id": "d5e24fef-3491-4ea0-9a24-1cc92bd0e909",
	"created_at": "2026-04-06T00:18:32.176397Z",
	"updated_at": "2026-04-10T03:23:52.062695Z",
	"deleted_at": null,
	"sha1_hash": "78d0df692e9f7687b861b4edc4f07ed50a293678",
	"title": "Managed Detection \u0026 Response for AWS",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 1036626,
	"plain_text": "Managed Detection \u0026 Response for AWS\r\nBy Anthony Randazzo, Britton Manahan, Sam Lipton\r\nPublished: 2020-04-28 · Archived: 2026-04-05 23:21:15 UTC\r\nDetection and response in cloud infrastructure is a relatively new frontier. On top of that, there aren’t many\r\ncompromise details publicly available to help shape the detection strategy for anyone running workloads in the\r\ncloud.\r\nThat’s why our team here at Expel is attempting to bridge the gap between theory and practice.\r\nOver the years, we’ve detected and responded to countless Amazon Web Services (AWS) incidents, ranging from\r\npublic S3 bucket exposures to compromised EC2 instance credentials and RDS ransomware attacks. Recently, we\r\nidentified an incident involving the use of compromised AWS access keys.\r\nIn this post, we’ll walk you through how we caught the problem, what we observed in our response, how we\r\nkicked the bad guy out and the lessons we learned along the way.\r\nCompromised AWS access keys: How we caught ‘em\r\nWe first determined there was something amiss thanks to an Expel detection using CloudTrail logs. Here at Expel,\r\nwe encourage many of our customers who run on AWS to use Amazon GuardDuty. But we’ve also taken it upon\r\nourselves to develop detection use cases against CloudTrail logs. GuardDuty does a great job of identifying\r\ncommon attacks, and we’ve also found CloudTrail logs to be a great source of signal for additional alerting that’s\r\nmore specific to an AWS service or an environment.\r\nIt all started with the alert below, telling us that EC2 SSH access keys were being generated\r\n(CreateKeyPair/ImportKeyPair) from a suspicious source IP address.\r\nhttps://expel.io/blog/finding-evil-in-aws/\r\nPage 1 of 7\n\nInitial lead Expel alert\r\nHow’d we know it was suspicious?\r\nWe’ve created an orchestration framework that allows us to launch actions when certain things happen. 
In this\r\ncase, when an alert fired, an Expel robot picked it up and added additional information. This robot uses a third-party enrichment service for IPs (in this case, our friends at ipinfo.io). More on our robots here shortly.\r\nKeep in mind that these are not logins to AWS per se. These are authenticated API calls with valid IAM\r\nuser access keys. API access can be restricted at the IP layer, but it can be a little burdensome to manage in\r\nthe IAM Policy. As you can see in the alert shown above, there was no MFA enforced for this API call.\r\nAgain, this was not a login, but you can also enforce MFA for specific API calls through the IAM Policy.\r\nWe’ve observed only a few AWS customers using either of these controls.\r\nAnother interesting detail from this alert was the use of the AWS Command Line Interface (CLI). This isn’t\r\ncompletely out of the norm, but it heightened our suspicion a bit because it’s less common than console (UI) or\r\nAWS SDK access. Additionally, we found this user hadn’t used the AWS CLI in recent history, potentially\r\nindicating a new person was using these credentials. The manual creation of an access key was also an atypical\r\naction versus leveraging infrastructure as code to manage keys (e.g., CloudFormation or Terraform).\r\nTaking all of these factors into consideration, we knew we had an event worthy of additional investigation.\r\nCue the robots, answer some questions\r\nOur orchestration workflows are critically important – they tackle highly repetitive tasks (that is, answering the questions\r\nan analyst would ask about an alert) on our behalf as soon as the alert fires. We call these workflows our robots.\r\nWhen we get an AWS alert from a customer’s environment, we have three consistent questions we like to answer\r\nto help our analysts determine if it’s worthy of additional investigation (decision support):\r\nDid this IAM principal (user, role, etc.) 
assume any other roles?\r\nWhat AWS services does this principal normally interact with?\r\nWhat interesting API calls has this principal made?\r\nSo, when the initial lead alert for the SSH key generation came in, we quickly understood that role assumption\r\nwas not in play for this compromise. If the user had assumed roles, it would have been key to identify and include\r\nthem in the investigation. Instead, we saw the image below:\r\nExpel AWS AssumeRole Robot\r\nOnce we knew access was limited to this IAM user, we wanted to know what AWS services this principal\r\ngenerally interacts with. Understanding this helps us spot outlier activity that’s considered unusual for that\r\nprincipal. Seeing the very limited API calls to other services further indicated that something nefarious might be\r\ngoing on.\r\nExpel AWS Service Interaction Robot\r\nFinally, we wanted to see what interesting API calls the principal made. From a detection perspective, we define\r\ninteresting API calls in this context to be mostly anything that isn’t Get*, List*, Describe* and Head*. This\r\nenrichment returned 344 calls to the AuthorizeSecurityGroupIngress API from the AWS CLI user-agent. This\r\nwas really the tipping point for considering this a security incident.\r\nExpel AWS Interesting API Robot\r\nHow we responded\r\nAfter we spotted the attack, we needed to scope this incident and provide the measures for containment. We\r\nframed our response by asking the primary investigative questions. Our initial response was going to be limited to\r\ndetermining what happened in the AWS control plane (API). 
CloudTrail was our huckleberry for answering most\r\nof our questions.\r\nWhat credentials did the attacker have access to?\r\nHow long has the attacker had access?\r\nWhat did the attacker do with the access?\r\nHow did the attacker get credentials?\r\nWhat credentials did the attacker have access to?\r\nBy querying historical CloudTrail events for signs of this attacker, Expel was able to identify that they had access\r\nto a total of eight different IAM user access keys, and were active from two different IP addresses. If we recall from earlier,\r\nwe were able to use our robot to determine that no successful AssumeRole calls were made, limiting our response\r\nto these IAM users.\r\nHow long has the attacker had access?\r\nCloudTrail indicated that most of the access keys had not been used by anyone else in the past 30 days, so we\r\ncan infer that the attacker likely discovered the keys recently.\r\nWhat did the attacker do with the access?\r\nBased on observed API activity, the attacker had a keen interest in S3, EC2 and RDS services, as we observed\r\nListBuckets, DescribeInstances and DescribeDBInstances calls for each access key, indicating an attempt to see\r\nwhich of these resources was available to the compromised IAM user.\r\nAs soon as the attacker identified a key with considerable permissions, DescribeSecurityGroups was called to\r\ndetermine the level of application tier access (firewall access) into the victim’s AWS environment. Once these\r\ngroups were enumerated, the attacker “backdoored” all of the security groups with a utility similar to aws_pwn’s\r\nbackdoor_all_security_group script. This allowed for any TCP/IP access into the victim’s environment.\r\nAdditional AuthorizeSecurityGroupIngress calls were made for specific ingress rules for port 5432 (PostgreSQL)\r\nand port 1253, amounting to hundreds of unique Security Group rules created. 
These enabled the attacker to gain\r\nnetwork access to the environment and created additional risks by exposing many AWS service instances (EC2,\r\nRDS, etc.) to the internet.\r\nA subsequent DescribeInstances call identified the EC2 instances available to the IAM user. The attacker then\r\ncreated an SSH key pair (our initial lead alert for CreateKeyPair) for an existing EC2 instance. This instance was\r\nnot running at the time, so the attacker turned it on via a RunInstances call. Ultimately, this series of actions\r\nresulted in command line access to the targeted EC2 instance, at which point visibility can be a challenge without\r\nadditional OS logging or security products to investigate instance activity.\r\nHow did the attacker get credentials?\r\nWhile frustrating, it’s not always feasible to identify the root cause of an incident for a variety of reasons. For\r\nexample, sometimes the technology simply doesn’t produce the data necessary to determine the root cause. In this\r\ncase, using the tech we had available to us, we weren’t able to determine how the attacker gained credentials, but\r\nwe have the following suspicions:\r\nGiven multiple credentials were compromised, it’s likely they were found in a public Git repository,\r\nan exposed database or somewhere similar.\r\nIt’s also possible credentials were lifted from developer machines directly, for example the AWS\r\ncredentials file.\r\nWe attempted to confirm these, but couldn’t get to an answer in this case. 
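As an aside, one quick hygiene check along these lines (a sketch, not the tooling used in this case) is scanning code and configs for long-term AWS access key IDs, which follow AWS's documented "AKIA" prefix format:

```python
import re

# Sketch of a credential-hygiene check: scan text (code, configs, dumps)
# for long-term AWS access key IDs. The "AKIA" prefix and 20-character
# length follow AWS's documented access key ID format.
AKID_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_access_key_ids(text):
    """Return any candidate IAM user access key IDs found in text."""
    return AKID_RE.findall(text)

# AKIAIOSFODNN7EXAMPLE is AWS's documented example key ID.
sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE"
print(find_access_key_ids(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Dedicated secret scanners cover far more patterns, but even a check this simple catches keys committed to a repository in plain sight.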
Though unfortunate, it offers an\r\nopportunity to work with the victim to improve visibility.\r\nFor reference, below are the MITRE ATT\u0026CK cloud tactics observed during Expel’s response.\r\nInitial Access: Valid Accounts\r\nPersistence: Valid Accounts, Redundant Access\r\nPrivilege Escalation: Valid Accounts\r\nDefense Evasion: Valid Accounts\r\nDiscovery: Account Discovery\r\nCloud Security Threat Containment\r\nBy thoroughly scoping the attacker’s activities, we were able to deliver clear remediation steps. This included:\r\nDeleting the compromised access keys for the eight involved IAM user accounts;\r\nSnapshotting (to preserve additional forensic evidence) and rebuilding the compromised EC2 instance;\r\nDeleting the SSH keys generated by the attacker;\r\nAnd deleting the hundreds of Security Group ingress rules created by the attacker.\r\nResilience: Helping our customer improve their security posture\r\nWhen we say incident response isn’t complete without fixing the root of the problem – we mean it.\r\nOne of the many things that makes us different at Expel is that we don’t just throw alerts over the fence. That\r\nwould only be sort of helpful to our customers and would put us in a position where we’d have to tackle the same issue\r\non another day … and likely on many more days after that.\r\nWe’re all about efficiency here. That’s why we provide clear recommendations for how to solve issues and what\r\nactions a customer can take to prevent these kinds of attacks in the future. Everybody wins (except for the bad\r\nguys).\r\nWhile we weren’t certain how the access keys were compromised in the first place, below are the resilience\r\nrecommendations we gave our customer once the issue was resolved.\r\nExpel AWS Resilience (1)\r\nIf the IAM user is unused, then it probably doesn’t need to remain active in your account. 
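A check like that can be sketched as follows (a hypothetical example with an assumed 30-day threshold; in practice the last-used timestamps would come from IAM's ListAccessKeys and GetAccessKeyLastUsed APIs):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a stale-access-key audit. The 30-day threshold
# is an assumption; real metadata would come from IAM's ListAccessKeys
# and GetAccessKeyLastUsed APIs rather than a hand-built list.
STALE_AFTER = timedelta(days=30)

def stale_key_ids(keys, now):
    """Return IDs of access keys unused within the threshold (or never used)."""
    flagged = []
    for key in keys:
        last_used = key.get("last_used")  # None means never used
        if last_used is None or now - last_used > STALE_AFTER:
            flagged.append(key["id"])
    return flagged

now = datetime(2020, 4, 28, tzinfo=timezone.utc)
keys = [
    {"id": "key-recently-used", "last_used": now - timedelta(days=2)},
    {"id": "key-stale", "last_used": now - timedelta(days=90)},
    {"id": "key-never-used", "last_used": None},
]
print(stale_key_ids(keys, now))  # ['key-stale', 'key-never-used']
```

Keys flagged this way are candidates for deactivation or deletion, which shrinks the attack surface a stolen credential can exploit.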
We made this\r\nrecommendation because these access keys hadn’t been in use by anyone other than the attacker in the\r\nprevious 30 days.\r\nExpel AWS Resilience (2)\r\nSince no legitimate user had used these access keys in at least 30 days, it was time to do some tidying up, so to speak. If you need that user, rotate the access\r\nkeys on a regular basis.\r\nExpel AWS Resilience (3)\r\nWe noticed that this IAM user had far too many EC2 permissions and thought this resilience measure was\r\nin order. We also shared that it would be far safer to delegate those EC2 permissions with an IAM role.\r\nLessons learned\r\nFortunately, we were able to disrupt this attack before there was any serious damage, but it highlighted the very\r\nreal fact that cloud infrastructure – whether you’re running workloads on AWS or somewhere else – is a prime\r\ntarget for attackers. As with every incident, we took some time to talk through what we discovered through this\r\ninvestigation and are sharing our key lessons here.\r\nAWS customers must architect better security “in” the cloud. That is, create greater visibility into\r\nEC2 instances and the other infrastructure that falls on the customer’s side of the shared responsibility model.\r\nYou can’t find evil if the analysts don’t know what to look for – train, train some more, and then when\r\nyou’re done training, train again. Special thanks to Scott Piper (@0xdabbad00) and Rhino Security Labs\r\n(@RhinoSecurity) for their contributions to AWS security research.\r\nWhile security in the cloud is still relatively in its infancy, the same can be said for attacker\r\nbehaviors – much of what we observed here and in the past consisted of elementary attack patterns.\r\nThere are additional automated enrichment opportunities. 
We’ve started working on a new AWS\r\nrobotic workflow to summarize historical API usage data for the IAM principal and will compare it to the\r\naccess parameters of the alert.\r\nBe on the lookout for an additional blog post in the future on our automated AWS alert enrichments. Until then,\r\ncheck out our other blogs to learn more about how we leverage AWS cloud security for our customers, along with\r\ntips and tricks for ramping up your own org’s security when it comes to cloud.\r\nSource: https://expel.io/blog/finding-evil-in-aws/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://expel.io/blog/finding-evil-in-aws/"
	],
	"report_names": [
		"finding-evil-in-aws"
	],
	"threat_actors": [
		{
			"id": "d90307b6-14a9-4d0b-9156-89e453d6eb13",
			"created_at": "2022-10-25T16:07:23.773944Z",
			"updated_at": "2026-04-10T02:00:04.746188Z",
			"deleted_at": null,
			"main_name": "Lead",
			"aliases": [
				"Casper",
				"TG-3279"
			],
			"source_name": "ETDA:Lead",
			"tools": [
				"Agentemis",
				"BleDoor",
				"Cobalt Strike",
				"CobaltStrike",
				"RbDoor",
				"RibDoor",
				"Winnti",
				"cobeacon"
			],
			"source_id": "ETDA",
			"reports": null
		}
	],
	"ts_created_at": 1775434712,
	"ts_updated_at": 1775791432,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/78d0df692e9f7687b861b4edc4f07ed50a293678.pdf",
		"text": "https://archive.orkl.eu/78d0df692e9f7687b861b4edc4f07ed50a293678.txt",
		"img": "https://archive.orkl.eu/78d0df692e9f7687b861b4edc4f07ed50a293678.jpg"
	}
}