{
	"id": "020d92e7-e89e-42b4-a555-5e4fb7e7d87a",
	"created_at": "2026-04-06T00:14:47.437631Z",
	"updated_at": "2026-04-10T03:20:32.101755Z",
	"deleted_at": null,
	"sha1_hash": "07dadef3a5524e639ac442701ca720041eaba6ce",
	"title": "S3 Ransomware Part 2: Prevention and Defense",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 682152,
	"plain_text": "S3 Ransomware Part 2: Prevention and Defense\r\nBy Spencer Gietzen\r\nPublished: 2019-06-10 · Archived: 2026-04-05 20:20:17 UTC\r\nThis is part two in a two-part series on S3 Ransomware. Part One discusses the attack vector of S3 Ransomware\r\nand demonstrates a proof of concept.\r\nNote: This post not only discusses defense mechanisms against S3 ransomware, but it also touches on important\r\ngeneral security hygiene you should be following in your AWS accounts.\r\nPart 1 Recap: S3 Ransomware\r\nAs demonstrated in part one of this blog, S3 ransomware is an attack that can have an extremely high impact on a\r\ncompany. S3 ransomware is when an attacker is able to gain access to a victim’s S3 buckets, then they replace\r\neach object with a new copy of itself, but encrypted with the attacker’s KMS key. The victim would no longer be\r\nable to access their own S3 objects and would need to submit to the attackers demands in order to get them back\r\n(or risk the extra time it might take to go to the authorities or AWS incident response).\r\nS3 Ransomware Security Controls\r\nWhile S3 ransomware can be fairly straightforward for an attacker to perform, with the right defenses in place,\r\nyou can protect yourself. There are many different ways to defend against and prevent S3 ransomware, and this\r\npost aims to outline those various methods.\r\nDefense and Prevention Methods\r\nThere is no single method that acts as a “silver bullet” when preventing and defending against S3 ransomware\r\ngiven the cost, effort, and impact each method has on its own. Instead, the following methods are meant to be a\r\ncollection of good practices to follow. 
Not every method is a good fit for every environment, so it is important to understand them all and choose the best methods to implement in your own environment.\r\nMethod #1: Follow Security Best Practices to Prevent Unauthorized Access to Your Account\r\nThis is the most obvious defense, in that it essentially means “don’t let unauthorized people into your environment”, but that is much easier said than done. There is a laundry list of things to do for this defense, but some important ones are outlined here:\r\n- Have all users use temporary credentials to access the environment instead of long-lived credentials (use IAM roles instead of IAM users).\r\n- Create pre-receive hooks in your Git repositories (more info here) to monitor for commits that may accidentally contain credentials and reject them before a developer pushes their access keys or other secrets.\r\n- Perform regular phishing/social engineering training with your team to educate employees on how to spot targeted attacks.\r\n- Enforce multi-factor authentication (MFA) everywhere possible for everyone (both for the AWS web console and for AWS access keys)! This makes it more difficult (but still not impossible) for an attacker who has stolen credentials to actually use them.\r\n- Enforce long, complex passwords/passphrases, and if MFA is not being enforced everywhere, enforce password expiration at a reasonable interval while disallowing repeat passwords.\r\n- Ensure that strong application security is in place for any application that has AWS access. 
This can help prevent something like a server-side request forgery (SSRF) attack against an EC2 instance’s metadata service, or a local file read/remote code execution vulnerability, from exposing credentials stored by the AWS CLI or in environment variables.\r\n- Regularly audit and monitor the IAM access that is delegated within your accounts/organization (with something like Security Monkey by Netflix).\r\nAdditionally, consider reading this blog post on how AWS accounts are compromised.\r\nMethod #2: Follow the Principle of Least Privilege\r\nThis defense involves setting up your environment and delegating access in such a way that no one has more access than they need. For any principal (user, role, group, etc.), grant only the IAM permissions that principal actually needs, and allow those permissions only on the resources it needs to work with.\r\nFor example, a simple “s3:PutObject” permission granted on any resource (“*”) could enable a massive ransomware attack across every bucket in your account. By limiting the resource to a specific bucket, such as “arn:aws:s3:::example_bucket”, only that specific bucket could be targeted.\r\nThe principle of least privilege should be applied at both the IAM policy level and the resource policy level (such as an S3 bucket policy) to be most effective.\r\nMethod #3: Logging and Monitoring Activity in Your Account\r\nYou should always have AWS CloudTrail enabled in your account. Ideally, CloudTrail should cover all regions in all of your accounts, log read and write management events, log data events for your S3 buckets and Lambda functions, enable log file encryption, and enable log file validation.\r\nDepending on your budget, it might not always make sense to log data events for every bucket in your account, because it can get fairly expensive if the buckets are frequently accessed. 
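\r\nTo control those costs, data-event logging can be scoped to specific buckets instead of all of them. As a rough sketch, an event selector like the following could be applied to a trail (for example via CloudTrail’s PutEventSelectors API); the bucket name is a placeholder, and the trailing slash on the ARN scopes logging to every object in just that one bucket:\r\n```json\r\n[\r\n  {\r\n    \"ReadWriteType\": \"All\",\r\n    \"IncludeManagementEvents\": true,\r\n    \"DataResources\": [\r\n      {\r\n        \"Type\": \"AWS::S3::Object\",\r\n        \"Values\": [\"arn:aws:s3:::example-sensitive-bucket/\"]\r\n      }\r\n    ]\r\n  }\r\n]\r\n```\r\n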
If you aren’t logging data events for every bucket, it is very important that the buckets you are logging are the ones that contain sensitive content. This way, you can view fine-grained details of who is accessing your data, how they’re doing it, and where they’re doing it from.\r\nLog file validation in CloudTrail helps ensure that your log files have not been modified before you read them, so you can fully trust what you are looking at. This is important to enable, but be sure to actually verify your logs (with something like “aws cloudtrail validate-logs”), as you won’t be alerted of modifications to your logs, even with the setting enabled.\r\nIn addition to CloudTrail, a tool like Amazon GuardDuty should be enabled to monitor for malicious activity within your account. These alerts, along with your CloudTrail logs, should be exported to an external SIEM for further inspection and monitoring.\r\nIf you use AWS Organizations, you should enable CloudTrail and GuardDuty at the organization level and apply them to your child accounts. This way, attackers in child accounts cannot modify or disable those important settings.\r\nDepending on your budget and other factors, consider enabling other types of logs and monitoring tools for your other resources. This might include Elastic Load Balancer access logs, host-based logs for your EC2 instances, or services like Amazon Inspector or AWS Config.\r\nMethod #4: S3 Object Versioning and MFA Delete\r\nThis is perhaps the most important, but potentially very expensive, defense method against S3 ransomware specifically. S3 Object Versioning allows S3 objects to be “versioned”, which means that if a file is modified, both the old and new copies are kept in the bucket as a sort of “history”. 
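\r\nBoth versioning and MFA delete live in the bucket’s versioning configuration. As a sketch, the configuration below (e.g., as passed to the PutBucketVersioning API) would turn both on; per AWS’s documentation, the MFA delete setting can only be changed by the root user supplying a valid MFA token:\r\n```json\r\n{\r\n  \"Status\": \"Enabled\",\r\n  \"MFADelete\": \"Enabled\"\r\n}\r\n```\r\n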
The same thing happens if a file is uploaded with the same name as a file that already exists in the bucket. An example scenario would be a versioned bucket that CloudTrail logs are stored in: if an attacker modified a log file to remove traces of their activity, the defender could compare the old version of the file with the current version to see exactly what the attacker removed.\r\nS3 Object Versioning is not enough on its own, though, because in theory an attacker could simply disable versioning and overwrite/delete any existing versions in the bucket without worrying about new versions being created. To combat this, AWS offers multi-factor authentication delete on S3 buckets. Having MFA delete enabled forces MFA to be used to do either of the following two things:\r\n1. Change the versioning state of the specified S3 bucket (i.e., disable versioning)\r\n2. Permanently delete an object version\r\nIf both versioning and MFA delete are enabled on a bucket, an attacker would need to compromise the root user and their MFA device to disable versioning and MFA delete on the bucket. This is possible in theory, but in practice very unlikely.\r\nMethod #5: Bucket Policies and ACLs\r\nIt is imperative not to make your buckets publicly accessible. Buckets and the objects in them can be made publicly accessible in a variety of ways, but most often it is the bucket ACL or bucket policy that is the culprit.\r\nYou should avoid using the bucket ACL entirely in most cases, though sometimes that is not possible for a few different reasons. It is a legacy way of managing access to your bucket and does not allow fine-grained access control. 
If you grant someone access to “List objects” in an ACL, they can enumerate all the objects in the bucket in a single API call. If you grant someone access to “Write objects” in an ACL, they can create, overwrite, and delete objects in the bucket. Those two alone are reason enough to avoid ACLs.\r\nYou should use bucket policies instead of ACLs because they allow much more fine-grained permissions management. Instead of granting a user both list and read permissions in the ACL, you could grant them only one of the two permissions in the bucket policy. You can impose further restrictions as well, such as read access to only a few objects in the bucket. For example, if you have a website that reads S3 objects directly from your bucket, it does not need permission to list every object in the bucket, because it should already know what it is fetching. In this case, you could grant it just “s3:GetObject” in the bucket policy instead of list and read in the ACL. Instead of granting create/overwrite and delete object permissions through the ACL, you could grant one or the other and impose further restrictions, just like above.\r\nAnother feature of S3 bucket policies is the ability to enforce a specific type of encryption, such as forcing any uploaded file to be encrypted with AES256, or with a specific AWS KMS key. This can be used to prevent S3 ransomware because, ideally, the attacker won’t have access to modify the bucket’s policy.\r\nYou could set your bucket policy to only allow objects to be uploaded with your specific KMS key. In that case, the attacker would not be able to use a KMS key that you haven’t specified in the policy, which they would need to do to ransomware the bucket. 
They would then get an access denied error, ultimately preventing the attack.\r\nHere is an example S3 bucket policy that forces file uploads to use a certain KMS key for encryption (the bucket name and key ARN are placeholders):\r\n{\r\n  \"Version\": \"2012-10-17\",\r\n  \"Statement\": [\r\n    {\r\n      \"Effect\": \"Deny\",\r\n      \"Principal\": \"*\",\r\n      \"Action\": \"s3:PutObject\",\r\n      \"Resource\": \"arn:aws:s3:::example_bucket/*\",\r\n      \"Condition\": {\r\n        \"StringNotEquals\": {\r\n          \"s3:x-amz-server-side-encryption-aws-kms-key-id\": \"arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID\"\r\n        }\r\n      }\r\n    }\r\n  ]\r\n}\r\nMore information on S3 actions, resources, and condition keys can be found here.\r\nMethod #6: Account-Wide S3 Public Access Settings\r\nIf your concern is that your developers might grant public access to your buckets, you can use the account-wide option to block public ACLs and public policies from taking effect. Note that this applies to every bucket in your account and might cause problems if you rely on cross-account/public access to specific buckets in your account.\r\nIf you can verify that it is alright to block public access to all of your buckets, you should consider enabling the account-wide S3 public access settings for your account.\r\nThe screenshot above shows the public access settings for an example account in the AWS web console. In this example, all existing public bucket ACLs and all existing public bucket policies will be blocked, meaning that they won’t actually grant public access because of this higher-tier public access setting. Additionally, new ACLs and policies that grant public access will be blocked as well.\r\nMethod #7: Backups\r\nFinally, of course, you need to back up your data! Whether that is done with an MFA-protected, versioned bucket, data replicated across buckets or accounts, or even local copies, it is extremely important. 
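\r\nAs one sketch of the replication option, a configuration like the following (the role and bucket ARNs are placeholders) could be applied to a versioned source bucket, e.g. via the PutBucketReplication API; note that S3 replication requires versioning to be enabled on both the source and destination buckets:\r\n```json\r\n{\r\n  \"Role\": \"arn:aws:iam::111122223333:role/example-replication-role\",\r\n  \"Rules\": [\r\n    {\r\n      \"ID\": \"backup-all-objects\",\r\n      \"Priority\": 1,\r\n      \"Status\": \"Enabled\",\r\n      \"Filter\": {},\r\n      \"DeleteMarkerReplication\": { \"Status\": \"Disabled\" },\r\n      \"Destination\": { \"Bucket\": \"arn:aws:s3:::example-backup-bucket\" }\r\n    }\r\n  ]\r\n}\r\n```\r\nReplicated copies kept in a separate, locked-down account are much harder for an attacker in the source account to reach.\r\n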
With sufficiently backed-up data, you can simply ignore the attacker (after your incident response plan has been put into action, of course) and restore from your backups, as if you had never been ransomed at all.\r\nRecovering from a Successful Ransomware Attack\r\nIf you’ve been targeted with an S3 ransomware attack, how you respond will likely depend on a few different factors. AWS Security is aware of the risk of this attack vector, but it is uncertain what their role in helping you could be, if any. If you can accept the extra time it will take, it would likely be best to contact the authorities, though the delay involved may be too much for your business to accept. For that reason, it is best to implement a strong defense against this attack so that you don’t end up in a situation where you need to weigh your potential options and their risks.\r\nIncident Response Plan\r\nIf you have been targeted, the first step is to enact your incident response plan. Doing this will reduce the attacker’s blast radius, get them out of the environment, and determine what they gained access to and attacked. This is one reason why strong logging and monitoring in an AWS environment are essential. Knowing what the attackers have accessed will help you determine your next steps.\r\nScript for Checking Bucket Configurations\r\nWe wrote a script that checks the important settings on all buckets in an AWS account. This includes checks for object versioning and MFA delete on each bucket. You can find this script on our GitHub. 
It also has an option to enable object versioning for any buckets that don’t have it enabled already.\r\nThis screenshot shows some example output of running the script against a vulnerable AWS account.\r\nWhen opening the CSV file where the results were output, you will see something like the following screenshot (S3 bucket names are censored):\r\nThere are a few arguments to be aware of when running this script:\r\n-p/--profile: The AWS CLI profile to use for authentication with AWS (~/.aws/credentials).\r\n-b/--buckets: A comma-separated list of S3 buckets to check. These should be owned by you, or at least be buckets you are authorized to test.\r\n-e/--enable-versioning: If this argument is passed in, the script will attempt to enable object versioning on any buckets that don’t have it enabled.\r\nThe following screenshot shows the usage of the enable-versioning argument (S3 bucket names are censored again).\r\nThe CSV file will report “Enabled” for object versioning for any bucket that had its versioning successfully enabled. If the script fails to enable versioning (because of something like a permissions error), it will move on to the next bucket and mark the failed bucket with its original versioning setting (“Disabled”/“Suspended”).\r\nConclusion\r\nS3 ransomware can be fairly straightforward for an attacker to perform, but there are a variety of both easy and difficult defense mechanisms that defenders can put in place. 
At the lowest level, it is simple for a defender to enable versioning and MFA delete on an S3 bucket, which would effectively prevent ransomware in a majority of cases.\r\nIt is extremely important to implement defense mechanisms for your AWS environment and sensitive S3 buckets. While it might not be necessary to implement every method outlined above, it is imperative to determine which attack vectors and entry points you are most susceptible to, and therefore need to protect against.\r\nWhile the methods outlined above will be very useful in helping prevent S3 ransomware, there are always more methods of defense, prevention, and detection out there. We encourage readers to contact us with other ideas so we can add them to this post (credited to you) and share them with other people who are trying to defend their own environments.\r\nSource: https://rhinosecuritylabs.com/aws/s3-ransomware-part-2-prevention-and-defense/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://rhinosecuritylabs.com/aws/s3-ransomware-part-2-prevention-and-defense/"
	],
	"report_names": [
		"s3-ransomware-part-2-prevention-and-defense"
	],
	"threat_actors": [],
	"ts_created_at": 1775434487,
	"ts_updated_at": 1775791232,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/07dadef3a5524e639ac442701ca720041eaba6ce.pdf",
		"text": "https://archive.orkl.eu/07dadef3a5524e639ac442701ca720041eaba6ce.txt",
		"img": "https://archive.orkl.eu/07dadef3a5524e639ac442701ca720041eaba6ce.jpg"
	}
}