{
	"id": "c684e738-f9dc-4774-b7c7-6146713f2566",
	"created_at": "2026-04-06T00:12:38.842821Z",
	"updated_at": "2026-04-10T03:20:22.35207Z",
	"deleted_at": null,
	"sha1_hash": "0f1a19e1a053b255a4b47b2db9858569fa0a7441",
	"title": "Google Cloud Platform (GCP) | Bucket Enumeration and Privilege Escalation",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 230552,
	"plain_text": "Google Cloud Platform (GCP) | Bucket Enumeration and Privilege\r\nEscalation\r\nBy Spencer Gietzen\r\nPublished: 2019-02-26 · Archived: 2026-04-05 21:10:06 UTC\r\nIntro: Google Cloud Platform (GCP) Security\r\nFor those unfamiliar, GCP is a cloud platform that offers a variety of cloud-computing solutions for businesses of any size to\r\ntake advantage of. Most people would put it up in the “big 3” cloud providers that are available, those being Amazon Web\r\nServices (AWS), Microsoft Azure, and GCP.\r\nCloud security is an extremely important field of study and it is becoming more and more critical for users of these cloud\r\nplatforms to understand it and embrace it. GCP security is one area of research that is seemingly untouched compared to its\r\ncompetitor, AWS. This could be for a lot of different reasons, with the main reason likely being the market share that each of\r\nthem control. According to an article recently released on ZDNet.com, AWS has the largest market share, followed by\r\nMicrosoft Azure, followed by GCP, so it makes sense that AWS security (and Azure security to a much smaller degree)\r\nwould be far ahead of where GCP is. This doesn’t mean GCP is “insecure” at all, but more so that there is less external, 3rd-party research into GCP security.\r\nAs it is an up-and-comer, Rhino has been researching GCP security behind-the-scenes, with our first official blog post on the\r\ntopic relating to Google Storage security.\r\nGoogle Storage / Bucket Security\r\nGoogle Storage is a service offering through GCP that provides static file hosting within resources known as “buckets”. If\r\nyou’re familiar with AWS, Google Storage is GCP’s version of AWS Simple Storage Service (S3) and an S3 bucket would\r\nbe equivalent to a Google Storage bucket across the two clouds.\r\nGCP Bucket Policies and Enumeration\r\nGoogle Storage buckets permissions policies can get very fine-grained though. 
By design, they can be exposed to a variety of sources (other accounts, organizations, users, etc.), which includes being open to the public internet or to all authenticated GCP users.\r\nFor this reason, we have decided to release a tool that has been used internally for reconnaissance at Rhino for some time, known as GCPBucketBrute.\r\nAnother Cloud Bucket Enumerator?\r\nSure, you could say that, but GCPBucketBrute brings something new to the idea of “bucket bruteforcing” by expanding it to a cloud other than AWS. There are countless AWS S3 bucket enumerators out there online, but none (that we could find, at least) that target other similar storage services, such as Google Storage.\r\nThis tool is necessary because of the lack of multi-cloud research: as time goes on, more and more companies are expanding their environments across multiple cloud providers, and companies that have never used the cloud are making the leap.\r\nAnother benefit of GCPBucketBrute is that it allows you to check every discovered bucket for the privileges you have on it, so you can determine your access and whether the bucket is vulnerable to privilege escalation. This is outlined further below.\r\nIt provides a thorough and customizable interface to find (and abuse) open/misconfigured Google Storage buckets. Here at Rhino, this tool has proven to be a necessity in our Google Cloud penetration tests, web application penetration tests, and red team engagements.\r\nThe Tool: GCPBucketBrute\r\nGCPBucketBrute is currently available on our GitHub: https://github.com/RhinoSecurityLabs/GCPBucketBrute\r\nThe tool is written in Python 3 and only requires a few libraries, so it is simple to install. To do so, follow these steps:\r\n1. git clone https://github.com/RhinoSecurityLabs/GCPBucketBrute.git\r\n2. 
cd GCPBucketBrute \u0026\u0026 pip3 install -r requirements.txt\r\nhttps://rhinosecuritylabs.com/gcp/google-cloud-platform-gcp-bucket-enumeration/\r\nPage 1 of 6\n\nWith it installed, you can move down to the next section of this post to get started, or you can run the following command to check out the help output on your own:\r\npython3 gcpbucketbrute.py --help\r\nHow it Works\r\nInstead of using the “gsutil” (Google Storage Utility) CLI program to perform the enumeration, it was found to be a lot faster to just hit the HTTP endpoint of each bucket we are looking for to check for its existence, because there is no overhead from the “gsutil” CLI involved in that case. It also uses subprocesses instead of threads for concurrent execution by design.\r\nWhen the script starts, it will generate a list of permutations based on the keyword that you supply to the “-k/--keyword” argument. Then, it will start bruteforcing buckets by sending HTTP requests to the Google APIs, determining the existence of a bucket based on the HTTP response code. By making HTTP HEAD requests instead of HTTP GET requests, we can make sure the HTTP response does not contain a body, while still getting valid response codes. Although the difference may be negligible, it is theoretically faster for a smaller response (i.e. one without a body, from a HEAD request) to arrive and be parsed than a bigger response (i.e. one with a body, from a GET request). Using HEAD requests also allows Google’s servers to do a little less work when processing our requests, which is helpful at a mass scale.\r\nEach HTTP HEAD request will be made to the following URL: “https://www.googleapis.com/storage/v1/b/BUCKET_NAME”, where “BUCKET_NAME” is replaced by the current guess.\r\nIf the HTTP response code is “404” or “400”, then the bucket does not exist. 
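In code, this existence check might be sketched roughly as follows (a minimal illustration using only the Python standard library, not GCPBucketBrute’s actual implementation; the function names are our own):

```python
import urllib.request
import urllib.error

GCS_API = "https://www.googleapis.com/storage/v1/b/{}"

def status_means_exists(status_code):
    # Per the testing described above: 404 and 400 mean the bucket
    # does not exist; any other response code means it exists.
    return status_code not in (400, 404)

def bucket_exists(name):
    # A HEAD request keeps the response body empty while still
    # returning a meaningful status code.
    req = urllib.request.Request(GCS_API.format(name), method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return status_means_exists(resp.status)
    except urllib.error.HTTPError as e:
        return status_means_exists(e.code)
```
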
Based on what was discovered during testing, any other HTTP response code we encountered indicates that the bucket exists.\r\nFor any bucket that is discovered, the Google Storage TestIamPermissions API will be used to determine what level of access (if any) we have to the target bucket. If credentials are passed into the script, then the results of both an authenticated and an unauthenticated TestIamPermissions call will be output to the screen for comparison, so you can see the difference between access granted to allAuthenticatedUsers and allUsers. If no credentials are passed in, only the unauthenticated check will be made and output (to the screen, and to the out-file if one is passed into the “-o/--out-file” argument).\r\nIf it is found that the user has any permissions (authenticated or not) on the bucket, then all of those permissions will be output. Prior to this, the tool will check for a few common misconfigurations and will output a separate line to make things clearer. For example, a bucket that grants the “storage.objects.list” permission to “allUsers” would output the message “UNAUTHENTICATED LISTABLE (storage.objects.list)” prior to outputting all the permissions. This just makes it more obvious when the target bucket is misconfigured.\r\nThe list of permissions is also checked to see if the user has access to escalate their permissions on the bucket by modifying the bucket policy. 
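That flagging logic might be sketched like this (a hypothetical helper; only the “UNAUTHENTICATED LISTABLE (storage.objects.list)” message is taken from the tool’s actual output, and the other names are our own):

```python
# Hypothetical sketch of the misconfiguration checks described above.
# The permission strings are real Google Storage permissions, but this
# helper is illustrative, not GCPBucketBrute's actual code.
def flag_misconfigurations(permissions, authenticated=False):
    prefix = "AUTHENTICATED" if authenticated else "UNAUTHENTICATED"
    flags = []
    if "storage.objects.list" in permissions:
        flags.append(f"{prefix} LISTABLE (storage.objects.list)")
    if "storage.buckets.setIamPolicy" in permissions:
        # Write access to the bucket policy means privilege escalation.
        flags.append(f"{prefix} PRIVILEGE ESCALATION (storage.buckets.setIamPolicy)")
    return flags
```
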
More on this is written below.\r\nQuickstart Examples\r\nNote: If you don’t pass in any authentication-related arguments, you will be prompted by the script for what you want to do (service account, access token, default credentials, unauthenticated).\r\nScan for buckets using “netflix” as the keyword, using 5 concurrent subprocesses (default), prompting for the authentication type to check authenticated-list permissions on any found buckets (default):\r\npython3 gcpbucketbrute.py -k netflix\r\nScan for buckets using “google” as the keyword, using 10 concurrent subprocesses, while staying unauthenticated:\r\npython3 gcpbucketbrute.py -k google -s 10 -u\r\nScan for buckets using “apple” as the keyword, using 5 concurrent subprocesses (default), prompting for the authentication type to check authenticated-list permissions on any found buckets (default), and outputting results to “out.txt”:\r\npython3 gcpbucketbrute.py -k apple -o out.txt\r\nScan for buckets using “android” as the keyword, using 5 concurrent subprocesses (default), while authenticating with a GCP service account whose credentials are stored in sa.pem:\r\npython3 gcpbucketbrute.py -k android -f sa.pem\r\nScan for buckets using “samsung” as the keyword, using 8 concurrent subprocesses, while authenticating with a GCP service account whose credentials are stored in service-account.pem:\r\npython3 gcpbucketbrute.py -k samsung -s 8 -f service-account.pem\r\nReviewing GCP Buckets in Alexa Top 10k\r\nFollowing the trend of our S3 bucket enumeration blog post, we went ahead and used GCPBucketBrute to scan the top 10,000 sites according to Alexa.com’s top sites list. This process entailed grabbing the top 10,000 websites, stripping the top-level domains (TLDs), then running GCPBucketBrute against each of the base domains. 
For example, something like “netflix.com” was stripped of “.com” and we just used “netflix” as the keyword. GCPBucketBrute will automatically remove any duplicates from its wordlist, then remove any entries that are less than 3 characters or greater than 63 characters in length, because Google Storage places those restrictions on bucket names.\r\nThese were our findings:\r\n18,618 total buckets were discovered\r\n29 buckets of the total 18,618 (~0.16%) allowed read access to all authenticated GCP users, but not unauthenticated users (allAuthenticatedUsers)\r\n715 buckets of the total 18,618 (~3.84%) allowed read access to any user on the web (allUsers)\r\nThe remaining 17,874 (~96%) were locked down\r\nIn addition to looking for publicly available buckets, we decided to check every bucket found for privilege escalation as well.\r\nGoogle Storage Bucket Privilege Escalation\r\nJust like AWS S3 buckets can be vulnerable to privilege escalation through misconfigured bucket ACLs (discussed in-depth here), Google Storage buckets can be vulnerable to the same sort of attack.\r\nSimilar to how GCPBucketBrute checks for open Google Storage buckets through a direct HTTP request to “https://www.googleapis.com/storage/v1/b/BUCKET_NAME/o”, we can check for the bucket’s policy by making direct HTTP requests to “https://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam”, or we can use the “gsutil” CLI tool to run “gsutil iam get gs://BUCKET_NAME”. If “allUsers” or “allAuthenticatedUsers” are allowed to read the bucket policy, we will receive a valid response when pulling the bucket policy; otherwise, we will get access denied.\r\nThe bucket policy is helpful, but that requires that we have the “storage.buckets.getIamPolicy” permission, which we might not have. What if there were a way to determine what permissions we are granted without needing to look at the bucket policy?\r\nWait, there is! 
The Google Storage “TestIamPermissions” API allows us to supply a bucket name and a list of Google Storage permissions, and it will respond with the permissions we (the user making the API request) have on that bucket. This completely bypasses the requirement of viewing the bucket policy and could potentially even give us better information (in the case of a custom role being used).\r\nTo determine what permissions we have on a bucket, we can make a request to a URL similar to “https://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam/testPermissions?permissions=storage.objects.list”, where it will respond and let us know if we have the “storage.objects.list” permission on the bucket “BUCKET_NAME”. The permissions parameter can be passed multiple times to check multiple permissions at once, and the Google Storage Python library supports the test_iam_permissions API (even though “gsutil” does not).\r\nGCPBucketBrute will use the current credentials (or none/anonymous if running an unauthenticated scan) to determine what privileges we are granted on every bucket that is discovered. As mentioned above, for buckets that we have no access to, nothing will be output. For buckets that we have some access to, it will output a list of the permissions we have. For buckets that we have enough access to escalate our privileges and become a full bucket administrator, it will output the permissions and a message indicating it is vulnerable to privilege escalation.\r\nThe following screenshot shows what is output when finding a bucket with a few privileges alongside a bucket that is vulnerable to privilege escalation.\r\nFor our scan of the Alexa top 10,000, we used the TestIamPermissions API to check what access we were granted and to see if any of the buckets were vulnerable to privilege escalation. 
For each bucket, that means we made an authenticated request (with our personal GCP credentials) to see what access was granted to “allAuthenticatedUsers”. To also confirm what access was granted to unauthenticated users (allUsers), we ran the same check while unauthenticated. Although it is a very serious misconfiguration to grant all GCP users and/or all unauthenticated users high-level bucket privileges, it turned out to be more common than we thought.\r\nOut of our Alexa top 10,000 scan, we discovered 13 buckets that were vulnerable to privilege escalation to a full bucket admin (~0.07% of the total buckets found) and 21 buckets that were already granting the public internet full bucket admin privileges (~0.11% of the total buckets found).\r\nFor the buckets that were reported vulnerable to privilege escalation, this essentially meant that the bucket policy allowed either “allUsers” or “allAuthenticatedUsers” to write to the bucket policy (the storage.buckets.setIamPolicy permission). This allowed us to add “allUsers”/“allAuthenticatedUsers” to the policy as bucket owners, granting us full access to the bucket.\r\nFor the buckets that were discovered to be vulnerable to public privilege escalation, we reported the finding to the companies that owned them (where we could identify the owner; the rest were reported directly to Google).\r\nTo perform the privilege escalation, we followed these steps:\r\n1. Scanned for existing buckets given a keyword we supplied\r\n2. For any buckets found, checked what privileges were granted to “allUsers” or “allAuthenticatedUsers” by using the TestIamPermissions API as both an authenticated and an unauthenticated user. If it was found that they had permission to write to the bucket policy (storage.buckets.setIamPolicy), we would have privilege escalation. 
A vulnerable bucket policy might look something like this:\r\n{\"bindings\":[{\"members\":[\"allAuthenticatedUsers\",\"projectEditor:my-test-project\",\"projectOwner:my-test-project\r\nIgnoring most of what is defined in this policy, we can see that the “allAuthenticatedUsers” group is a member of the role “roles/storage.legacyBucketOwner”. If we look at what permissions that role is granted, we see the following:\r\nstorage.buckets.get\r\nstorage.buckets.getIamPolicy\r\nstorage.buckets.setIamPolicy\r\nstorage.buckets.update\r\nstorage.objects.create\r\nstorage.objects.delete\r\nstorage.objects.list\r\nThis means that we can read (storage.buckets.getIamPolicy) and write (storage.buckets.setIamPolicy) to the bucket’s policy, and we can create, delete, and list objects within the bucket.\r\nWe can see the same information by visiting the URL below (note that the “storage.objects.getIamPolicy” and “storage.objects.setIamPolicy” permissions are omitted because they will throw an error on any bucket that is set up to disable object-level permissions. 
For buckets that enable object-level permissions, those values can be included).\r\nhttps://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam/testPermissions?permissions=storage.buckets.delete\u0026permissions=storage.buckets.get\u0026permissions=storage.buckets.getIamPolicy\u0026permissions=storage.buckets.setIamP\r\nIf we look at the permissions granted by the Storage Admin role instead, we can see that it grants these privileges:\r\nfirebase.projects.get\r\nresourcemanager.projects.get\r\nresourcemanager.projects.list\r\nstorage.buckets.create\r\nstorage.buckets.delete\r\nstorage.buckets.get\r\nstorage.buckets.getIamPolicy\r\nstorage.buckets.list\r\nstorage.buckets.setIamPolicy\r\nstorage.buckets.update\r\nstorage.objects.create\r\nstorage.objects.delete\r\nstorage.objects.get\r\nstorage.objects.getIamPolicy\r\nstorage.objects.list\r\nstorage.objects.setIamPolicy\r\nstorage.objects.update\r\nThere are more privileges granted to this role than by the role that “allAuthenticatedUsers” is currently a member of, so why don’t we change that?\r\nWith the “gsutil” Google Storage CLI program, we can run the following command to grant “allAuthenticatedUsers” access to the “Storage Admin” role, thus escalating the privileges we are granted on the bucket:\r\ngsutil iam ch group:allAuthenticatedUsers:admin gs://BUCKET_NAME\r\nNow if we look at the bucket policy again, we can see the following added to it (because the “ch” command appends to the policy instead of overwriting it):\r\n{\"members\":[\"group:allAuthenticatedUsers\"],\"role\":\"roles\\/storage.admin\"}\r\nAnd just like that, we have escalated our privileges from a Storage Legacy Bucket Owner to a Storage Admin on a bucket that we don’t even own!\r\nOne of the main attractions of escalating from a LegacyBucketOwner to Storage Admin is the ability to use the “storage.buckets.delete” privilege. 
In theory, you could delete the bucket after escalating your privileges, then create the bucket in your own account to steal the name.\r\nNow, if we review the privileges we are granted with the TestIamPermissions API again, we see that a few extras are added from the new role we used. Note that not all the privileges allowed by that role will be listed when using the TestIamPermissions API (such as resourcemanager.projects.list), because permissions that are not Google Storage-specific aren’t supported by the API.\r\nNote: The “gsutil iam ch” command requires permission to read the target bucket’s policy, because it first reads the policy, then adds your addition to it, then writes the new policy. You might not always have this read permission, even if you have the storage.buckets.setIamPolicy permission. In those cases, you would need to overwrite the existing policy and risk causing errors in the target environment, such as if you accidentally revoke access from something that needs it.\r\nDisclaimer: Privilege escalation was not actually performed on any of the vulnerable buckets; instead, it was only confirmed that the vulnerability existed.\r\ntl;dr:\r\nThe Google Storage TestIamPermissions API can be used to determine what level of access we are granted to a specific bucket, regardless of whether we can view the bucket policy itself. 
This allows us to detect when we can write to a bucket’s policy to grant ourselves a higher level of access to the target bucket.\r\nConclusion\r\nEven though buckets are created private by default, time and time again we see users misconfiguring the permissions on their assets and exposing them to malicious actors on the public internet, and simple APIs like the Google Storage TestIamPermissions API just make it easier.\r\nGCPBucketBrute is available right now on our GitHub: https://github.com/RhinoSecurityLabs/GCPBucketBrute\r\nHere at Rhino Security Labs, we perform Google Cloud Platform penetration tests to detect and report on misconfigurations like these from within your environment, rather than from an external perspective. If you want to get started, check out our GCP Penetration Testing Services Page.\r\nFor updates and announcements about our research and offerings, you can follow us on Twitter @RhinoSecurity, and you can follow the author of this post/GCPBucketBrute @SpenGietz.\r\nSource: https://rhinosecuritylabs.com/gcp/google-cloud-platform-gcp-bucket-enumeration/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://rhinosecuritylabs.com/gcp/google-cloud-platform-gcp-bucket-enumeration/"
	],
	"report_names": [
		"google-cloud-platform-gcp-bucket-enumeration"
	],
	"threat_actors": [],
	"ts_created_at": 1775434358,
	"ts_updated_at": 1775791222,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/0f1a19e1a053b255a4b47b2db9858569fa0a7441.pdf",
		"text": "https://archive.orkl.eu/0f1a19e1a053b255a4b47b2db9858569fa0a7441.txt",
		"img": "https://archive.orkl.eu/0f1a19e1a053b255a4b47b2db9858569fa0a7441.jpg"
	}
}