AWS finally adds default privacy setting to S3 buckets

Richi Jennings, your humble blogwatcher, dba RJA

Finally! Amazon Web Services is tackling the public bucket problem.

AWS is adding strong protections against accidentally making an S3 storage bucket public. This has been the cause of much heartbreak for the likes of Alteryx and the NSA.

Security by obscurity is no defense. So in this week’s Security Blogwatch, we get offensive.

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: Retro-futurism 

Why’s it taken so long?

What’s the craic? Shaun Nichols offers security-blanket policies:

Amazon … is taking steps to halt the epidemic of data leaks caused by the S3 cloud buckets it hosts being accidentally left wide open to the internet by customers. … With the protections in place, objects placed in the buckets are blocked from enabling public access or cross-account access.

The idea … is to make it clear to both admins and end users of S3 buckets that public access is intended to be very limited in scope. … Dozens of high-profile exposure incidents have been traced back to S3 buckets and objects that were improperly configured. … Researchers continue to come across storage silos that, for one reason or another … allow public access.

I’ve been living under a rock; is this a thing? Eduard Kovacs calls it a New Feature for Preventing Data Leaks:

Improperly configured Simple Storage Service (S3) buckets can expose an organization’s sensitive files, as demonstrated by several incidents involving companies such as Viacom, Verizon, Accenture, Booz Allen Hamilton, and Dow Jones.

Amazon S3 Block Public Access [provides] settings for blocking existing public access and ensuring that public access is not granted to new items. … The new settings can be accessed from the S3 console, the command-line interface (CLI) or the S3 APIs.
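For the API route, here is a minimal boto3 sketch (the bucket name is hypothetical) that reads back whatever Block Public Access configuration a bucket already has, which is a handy first step for auditing:

```python
# Minimal sketch: read a bucket's Block Public Access settings with boto3.
# "my-example-bucket" is a hypothetical name; substitute your own.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    resp = s3.get_public_access_block(Bucket="my-example-bucket")
    print(resp["PublicAccessBlockConfiguration"])
except ClientError as err:
    # Buckets with no configuration at all raise this error code.
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print("No Block Public Access settings on this bucket")
    else:
        raise
```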

What’s S3 doing about it? Amazon’s Jeff Barr drinks in the limelight: [You’re fired—Ed.]

We want to make sure that you use public buckets and objects as needed, while giving you tools to make sure that you don’t make them publicly accessible due to a simple mistake. … I have two options for managing public ACLs and two for managing public bucket policies:

Block new public ACLs and uploading public objects: … To protect against future attempts to use ACLs to make buckets or objects public.

Remove public access granted through public ACLs: … Overrides any current or future public access settings for current and future objects in the bucket.

Block new public bucket policies: … Disallows the use of new public bucket policies.

Block public and cross-account access to buckets that have public policies: … Can be used to protect buckets that have public policies while you work to remove the policies.

Going forward, buckets that you create using the S3 Console will have all four of the settings enabled. … You will need to disable one or more of the settings in order to make the bucket public. … If you are using AWS Organizations, you can use a Service Control Policy (SCP) to restrict the settings.
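In API terms, Barr's four checkboxes correspond to the fields of a single PublicAccessBlockConfiguration. A minimal boto3 sketch (hypothetical bucket name) that switches all four on for an existing bucket:

```python
# Minimal sketch: enable all four Block Public Access settings on one bucket.
# "my-example-bucket" is a hypothetical name; substitute your own.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="my-example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # block new public ACLs and public-object uploads
        "IgnorePublicAcls": True,       # remove the effect of existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # block public and cross-account access where a public policy exists
    },
)
```

The CLI's s3api put-public-access-block command takes the same four flags.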

Clear as mud? Even worse, according to ams6110:

It's still too confusing. Too much terminology, too many settings. 3 pages and over half a dozen screenshots to explain how to make a bucket private. Too complicated.

Phrases like "Block public and cross-account access to buckets that have public policies" are not … understandable.

Hide the fine-grained control in an "Advanced" panel, for those who really need it.

But at least it’s consistently confusing, says driverdan:

All of AWS' access control is too confusing. … It's hard to remember how to configure IAM and ACLs. I have to read the docs almost every time I change something just to be sure I don't screw it up.

But why are there so many public buckets being discovered all the time? Some of the fault lies in third-party software, as Rainer recounts:

Our ticketing system can store BLOBs in S3 buckets. But their support made it clear that the bucket has to be completely public.

Their support said it wasn't a big deal because the actual URL of the bucket was "not public." We store the BLOBs on the local filesystem now.

Eyeroll. Or there’s this, from fredley:

I could have had this a few weeks ago, when I realised popular S3 integration tool 'django-storages' sets all objects' ACL as public-read by default.
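If you're on an affected django-storages version, the usual workaround is to set the ACL explicitly in Django settings rather than trusting the default; a minimal sketch (bucket name hypothetical):

```python
# settings.py sketch: stop django-storages from defaulting objects to public-read.
# Bucket name is hypothetical; adjust for your project and storages version.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-example-bucket"
AWS_DEFAULT_ACL = "private"       # older versions fell back to public-read when unset
AWS_QUERYSTRING_AUTH = True       # serve objects via signed URLs rather than public reads
```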

I know, right? But this Anonymous Coward foresees unintended consequences:

So what happens when your app completely fails because the developers never built in authentication, and just relied on it being public?

It's possible the buckets were made public and don't need to be and this would work without a problem. The more likely scenario is that somewhere, some piece … isn't authenticating, and is relying on the buckets to be public. Finding that piece is often non-trivial.
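For what it's worth, the usual way to unpick that dependency once you do find it is to swap public reads for expiring presigned URLs; a minimal boto3 sketch, with hypothetical bucket and object names:

```python
# Minimal sketch: hand out a time-limited presigned URL instead of a public object.
# Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/2018-11.pdf"},
    ExpiresIn=3600,  # link stays valid for one hour
)
print(url)
```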

So how do the new settings fix this? Michael Hoffmann explainifies:

If you are delegating control over a bucket within an account, you end up with some herp-derp for whom "IAM 101" might as well have been in Minoan Linear A who, after 2 failed attempts at secure access, just sets public on their bucket.

I believe in the UK the favourite term is now "backstop."

But why-oh-why-oh-why has it taken so long for this to be the default for new buckets? Well, it kinda always was, says cjcampbell:

[But] it was far too easy to enable public access through a bucket policy that seemed sane to the untrained eye. … I figure it's safest to assume that someone will eventually screw up the policy.

I'm happy to add one more guardrail (with much lower overhead).

Meanwhile, ctilsie242 blames agile DevOps CI/CD millennial “morons”:

It is default-deny. In the past, you were presented with the option of making it … public. [But] I think people got confused [and] set it to public, assuming that was what was needed to give other members of their AWS account access.

In my experience … the person with AWS access oftentimes has no clue what they are doing, is likely using the root account itself rather than a sub-account with admin privs, and just needs things to work so the dev team can get their code going. Their goal is to get stuff up and running, even if it means ignoring security issues, since the SCRUM master and their boss are going to call them out on missed deliverables on a daily basis.

But missed security guidelines and S3 buckets left public aren't something the developer faces direct consequences for.

The moral of the story? It’s child’s play to find “hidden” public buckets. So use the tools to secure them already.

And finally …

Retro-futurism from Future Punk



You have been reading Security Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or sbw@richi.uk. Ask your doctor before reading. Your mileage may vary. E&OE.

Image source: Pexels (CC0)
