
I think it really depends on a number of factors, but even pretty smart people can make stupid mistakes, especially when it comes to security in AWS. I'm familiar with several cases where engineers fired up old AMIs in publicly routable subnets and had the instances compromised within an hour because they were running old, vulnerable software. There are some basic rules that help avoid issues like these, though as organizations scale they eventually need to enforce them more strictly: disallowing teams from provisioning their own VPCs, disallowing publicly routed subnets, and establishing decent auth infrastructure are all a good start that will work for a long time with minimal friction for users. I'm a strong believer that security is a UX problem: doing the Right Thing should be easier than doing the Lazy/Bad Thing. So if people are having trouble doing things the right way, I've messed up and need to improve usability and meet my users where they are in order to achieve my own goal of a secured infrastructure.
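For example, the "no publicly routed subnets" rule lends itself to a periodic audit. A minimal sketch in Python, assuming subnet records shaped roughly like the output of EC2's DescribeSubnets call (the subnet IDs and data here are made up for illustration):

```python
# Flag subnets that auto-assign public IPs on launch -- the "publicly
# routable subnet" foot-gun described above. In practice the records
# would come from an EC2 DescribeSubnets call; here they're hardcoded.

def find_public_subnets(subnets):
    """Return the IDs of subnets where instances get public IPs by default."""
    return [s["SubnetId"] for s in subnets if s.get("MapPublicIpOnLaunch")]

subnets = [
    {"SubnetId": "subnet-aaa", "MapPublicIpOnLaunch": True},
    {"SubnetId": "subnet-bbb", "MapPublicIpOnLaunch": False},
]
print(find_public_subnets(subnets))  # ['subnet-aaa']
```

A report like this run on a schedule (or wired into provisioning as a hard block) is one way to make the safe path the default rather than relying on everyone remembering the rule.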

All I'm saying is that in a shared responsibility model, giving people responsibility and autonomy also imposes responsibilities on the platform providers, and every policy works fine until it doesn't.


