On Cybersecurity and Being Targeted (kennethreitz.org)
281 points by kenneth_reitz on Aug 10, 2016 | hide | past | favorite | 117 comments


I applaud the fast response, especially contacting a security engineer to be removed from the GitHub organization. One particular bit stuck out to me.

>>In this instance, though, the attack vector was DNS. My account at the not-so-incredibly-common DNSimple.com did not use a highly secure password. I didn’t think it was necessary, as in my mind, the only reason that the security of an account like that would be at risk would be if I was the explicit target of an attack. Once again, I thought to myself “That’s something that only happens to other people”.

Kenneth used a randomly generated password and two-factor authentication on his GitHub account, which is great! But on DNSimple he chose to forego better security because the account seemed an unlikely target.

It is not enough to use some strong passwords for the things you think are sensitive. Every weak password is a weak link in your total identity chain.

The best way to use a password manager is to never allow yourself to create a password that isn't randomly generated. Even if the site or account in question appears innocuous or insignificant, and even if it won't accept a password of your manager's default strength, commit to going through this process 100% of the time.
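Most managers automate this, but the principle is easy to sketch. A minimal illustration in Python using the stdlib `secrets` module (the function name, lengths, and alphabets here are my own choices, not from any particular manager):

```python
import secrets
import string

def random_password(length=24,
                    alphabet=string.ascii_letters + string.digits + string.punctuation):
    """Return a password drawn uniformly at random from the given alphabet."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Some sites reject symbols or cap the length; shrink the alphabet or length
# to fit the site's rules rather than reverting to a memorized password.
site_limited = random_password(length=16,
                               alphabet=string.ascii_letters + string.digits)
```

The point is that even the "weak site" password comes out of the generator, never out of your head.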

Yes, it's a usability pain to constantly use a browser extension to log in. But that pain is nothing compared to the stress of a compromise or targeted attack.

Until password management or authentication is substantially overhauled on the web, the best way to protect yourself is constant, militant vigilance with passwords. I don't know any of my passwords at all, and what's more, I even have randomly generated answers to security questions.

Also, where possible, use two-factor authentication. You can use SMS, Authy, Google Authenticator, a Yubikey, whatever. Just turn the damn thing on and use it if it's available to you.


Some people forget that email almost always works as a way to reset passwords, and they also forget that DNS registrars and DNS hosts control where your email goes (if you're using your own domain).

Your email domain is a link in the authentication security chain for everything that'll let you send a "reset password" email. If someone can subvert your domain's MX records (by breaching your registrar account or your dns hosting, or by subverting the dns of the outgoing mail servers of the service they're trying to attack) then you've lost. If _any_ of those parts have weak authentication, your whole auth chain is weak.

(How much would you bet against me being able to change your cellphone account password and get your sms 2FA redirected if I can subvert the email account associated with your cellphone account? The OP is only lucky that his attacker wasn't quite sophisticated enough to attempt to take control of his cellphone account before firing off the Github 2FA SMS...)


> Some people forget that email almost always works as a way to reset passwords

I recently switched distros on my work laptop and went on holiday. I didn't take the backup drive from the old install with me. I use KeePass. My company uses LastPass.

I had to log in to LastPass, so I grabbed my username and password from KeePass and tried to log in to their website. I was then greeted by a request for a "Sesame" code. For those who don't use LastPass on Linux (maybe other platforms as well), Sesame is an OTP generator installed on the user's computer.

I was stumped. I hadn't copied the Sesame install to the new distribution. Luckily, LastPass provided me with a helpful link "Disable Sesame for this account". They sent me an email to confirm that I really wanted to disable Sesame, I clicked the helpful link, and upon next log-in, just my username/password were sufficient.

The LastPass enterprise administrators (people in the company with admin access to our LastPass account) did get an email saying that I had requested the deactivation of Sesame on my account. They were... "surprised" to hear that I did it myself.

Edit: typo.


That's a LastPass bug; if your 2FA can be disabled with nothing more than an email challenge-response, your second factor is just email.

For comparison, when I had trouble with Linode's 2FA - I set up a new phone and wiped the old one before realizing that Google Authenticator doesn't include TOTP secrets in iPhone backups, in effect destroying my 2FA token - and my scratch codes failed to work, I had to provide government ID matching my account information before their support department would disable 2FA on the account. I'm not overjoyed by the fact that an implementation error prevented my scratch codes from working, but I was pleased by the nature of the verification required to disable 2FA so I could log in again and re-enable it. Linode may have had some security problems in the last few years, but this at least they got exactly right.


Keep in mind too that this argument chains - if your 2FA can be disabled with access to your phone/SMS, and your phone account can be updated with an email challenge response (or just a confident voice-call to your carrier's customer service), it's just one extra level of hoop-jumping for your attacker.


>> You can use SMS

Well, it's not recommended:

>> Due to the risk that SMS messages may be intercepted or redirected, implementers of new systems SHOULD carefully consider alternative authenticators. If the out of band verification is to be made using a SMS message on a public mobile telephone network, the verifier SHALL verify that the pre-registered telephone number being used is actually associated with a mobile network and not with a VoIP (or other software-based) service. It then sends the SMS message to the pre-registered telephone number. Changing the pre-registered telephone number SHALL NOT be possible without two-factor authentication at the time of the change. OOB using SMS is deprecated, and may no longer be allowed in future releases of this guidance.

Source: NIST (https://pages.nist.gov/800-63-3/sp800-63b.html)


http://www.itnews.com.au/news/telcos-declare-sms-unsafe-for-...

The telco industry tells the banking industry not to consider SMS secure...


So they are finally admitting that SS7 is a huge security problem, to the point of recommending that no one use it for 2FA anymore... but they are not going to try to fix SS7.


It's not just the technical problems with SS7 - it's a business issue.

The telcos' prime motivation is to make it easy for their customers to make more calls and send more text messages - that's what makes them money. Making it difficult to port phone numbers away from competitors, or making it difficult for customers to redirect phone calls or text messages, is bad for business, and will lose them customers and/or increase customer support costs.

There's absolutely no upside for a telco in doing any of that - they don't really care if it makes their product less secure for the use-cases of third parties who aren't paying them. The telcos never signed up to provide a secure channel for your bank to send secrets to you, and the banks aren't offering to pay them for it. The telcos' customers are paying, and to an overwhelming degree they demonstrate that they prefer convenience over security - you get pissed off, and potentially change carriers, very quickly if you can't call them up from a friend's phone when yours gets broken/stolen and get your calls/texts redirected to another number immediately. The fact that that same ability gives social engineers a way to get hold of banking PINs and internet-service 2FA secrets is a vanishingly small concern for the people paying the telcos.

So they just don't care. Not their problem. Sucks to be a bank.


Isn't migrating to a pure LTE network a legitimate option for fixing SS7?

Not that I think we'll be rid of SS7 even a decade from now, but there is a reasonable path to take, yes?


My idea is get rid of minutes and text and simply have data on your phone. Then you can use the call/sms service of your choice to provide those features over https. Basically unbundle the whole deal. The carriers would never go for it but I think it would work.


I've got a nephew who sort-of does that with an iPod touch. It doesn't even have a cellular data connection - he relies on glomming on to free wifi at school/home/friend's houses/the library/the shopping mall/wherever, and gets iMessage and Skype only when he's got wifi - which is "good enough" for him to not even bother upgrading to a phone with a pre-paid account... (He's even managed to convince his Mom to leave the wifi hotspot on her phone running all the time, so he can make/receive calls in the car...)


SS7 attacks get a lot of news coverage but they still aren't practical for most non-government attackers. What generally happens is that they trick the phone provider into issuing them a new SIM card for the account or redirecting the number to one they control.


TOTP is overall more secure, but if Kenneth had used TOTP in this case, he would not have found out about the attack.


Also, TOTP is kind of secure if you never log into the website from the device that has the authenticator. Otherwise, it is pretty vulnerable to attacks that target the client.

IMO the real issue here is that almost all websites treat email as a good way to verify someone's identity when they want to reset their password, while in reality email is a terrible medium. Attacks on DNS work, and a lot of SMTP traffic is still plaintext. The way to secure email is to encrypt and sign at the endpoints, but of course websites don't do this in the password-reset flow.


Yes, yes NIST has many recommendations, and technically SMS is insecure.

That said, many websites use a Twilio two-factor integration and don't support anything else. If that's the case, you should still use it.


Yep, I've witnessed telcos being socially engineered into forwarding numbers so attackers could intercept SMS messages.


A long time ago I used "rings" of passwords, where the least secure sites shared one, medium secure sites shared another, and most secure sites shared a third.

This had some disadvantages, not least that I was not always good at predicting which sites would end up being security-critical. (I probably signed up for 10 random web forums in 2009, and nothing at the time made news.ycombinator.com the obviously-important-to-me one.)

These days, though, password managers. Use them 100% of the time. Turn 2FA on everywhere; mandate it administratively (ideally in can't-be-ignored settings) for company systems.


One exception, though, is to keep a simple password you'll remember for access to the client-side-encrypted cloud backups of your password manager DB. Otherwise you could end up having lost all of your passwords, and you're shit out of luck when you lose your computer and your local backup.


> One exception, though, is to keep a simple password you'll remember for access to the client-side-encrypted cloud backups of your password manager DB. Otherwise you could end up having lost all of your passwords, and you're shit out of luck when you lose your computer and your local backup.

The problem is that if you can remember a password, then a computer can guess it, and if you store something in the cloud then a computer can run offline attacks against it all day long. And as time goes by, those offline attacks only get faster. This is one of the reasons that Mozilla's new Firefox accounts & sync system is insecure and unsuitable for storing passwords or other private data.

Sure, if it's an epic passphrase run through PBKDF2, bcrypt, scrypt or Argon2 with a truly evil work factor (like, on the order of hours), then _maybe_ it's suitable for securing a backup of the keys to one's kingdom. Maybe.
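To make the "evil work factor" concrete: the knob lives in the KDF parameters. A hedged sketch using stdlib `hashlib.scrypt` (the parameter values are illustrative, not a recommendation; for an offline backup you would push `n` far higher than an interactive login would tolerate):

```python
import hashlib
import os

def derive_backup_key(passphrase: bytes, salt: bytes, n: int = 2**14) -> bytes:
    """Stretch a passphrase into a 32-byte key; cost scales with n."""
    # n is the CPU/memory cost factor; doubling it roughly doubles the
    # attacker's per-guess cost too. maxmem must cover about 128 * r * n bytes.
    return hashlib.scrypt(passphrase, salt=salt, n=n, r=8, p=1,
                          maxmem=128 * 8 * n * 2, dklen=32)

salt = os.urandom(16)   # stored alongside the ciphertext; need not be secret
key = derive_backup_key(b"an epic passphrase, not this one", salt)
```

The same derivation with the same salt always yields the same key, which is what lets you decrypt the backup later; only the guessing cost changes with `n`.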

Better, I think, is to encrypt the backup with a secure key, then encrypt the secure key with a memorable password, then use k-of-n secret sharing to give shares of the encrypted key to some number of trusted people. Up to k-1 of those people may disclose their shares without endangering the encrypted secret, and if all k do disclose them, it is still protected by the memorable password.
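The k-of-n step can be sketched with Shamir's Secret Sharing over a prime field. This is a minimal, unaudited Python illustration under the assumption that the secret fits below a 127-bit prime; real use should go through a vetted library:

```python
import secrets

_PRIME = 2**127 - 1  # Mersenne prime; shares live in GF(_PRIME)

def make_shares(secret: int, k: int, n: int):
    """Split secret into n shares; any k of them reconstruct it."""
    assert 0 <= secret < _PRIME and 1 <= k <= n
    # Random degree-(k-1) polynomial with constant term = secret.
    coeffs = [secret] + [secrets.randbelow(_PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation mod _PRIME
            acc = (acc * x + c) % _PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % _PRIME
                den = den * (xi - xj) % _PRIME
        secret = (secret + yi * num * pow(den, -1, _PRIME)) % _PRIME
    return secret
```

Fewer than k shares reveal nothing about the secret (any value is equally consistent with them), which is what makes handing shares to friends tolerable.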

The problem is that in a world of cloud backups, a mistake tomorrow can endanger yesterday's backups.


>The problem is that in a world of cloud backups, a mistake tomorrow can endanger yesterday's backups.

This. This is the big problem with password managers. If your master password ever gets compromised, you can't effectively change it in all backups and replicas - new and old, local and cloud, on media that may or may not support reliable wiping. You either can't track them or can't control them. Even periodic rotation of your master password is almost pointless.


I keep a note in the safe at work providing me with sufficient hints about my password manager password to be able to recreate it - without it being something that helps anybody else (think something along the lines of "every third word of the first two stanzas of that poem my second girlfriend loved").

I have the encrypted file synced across two phones, an iPad, a work machine (including its on-site and off-site backups), and two home machines (including a Dropbox account, a Time Machine networked backup drive, and a weekly rsync of the Time Machine backup to off-site storage).

I did once leave a bag containing my laptop, and my 60G iPod containing my laptop's only backup (including my only copy of my password manager file and a fair bit of confidential work-related source code) under an outside table at a cafe right before closing time. I spent a sleepless night worried as hell about it, but a friendly and familiar waitress who kind-of knew me had found it and stashed it behind the counter with a note to her manager describing me, so I got it back the next morning. That made me a) very happy, and b) somewhat more careful and paranoid about my backup and password storage habits...


It would be interesting to see a Shamir's Secret Sharing implementation applied to a bunch of multi-factor authentication devices.


Keep it simple but long - see the famous, relevant xkcd, "correct horse battery staple". My randomly generated passwords are usually 16 characters, but my password manager master password is 40+ chars, and much easier to remember.
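The xkcd approach is easy to reproduce: pick words uniformly at random from a large list. A hedged sketch (the tiny wordlist below is a stand-in for illustration; a real one such as the EFF diceware list has 7776 words, roughly 12.9 bits of entropy per word):

```python
import math
import secrets

# Stand-in wordlist for illustration only; use a real list of thousands of words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "canyon", "mirth", "plasma", "quill", "saffron", "tundra"]

def passphrase(n_words=6, wordlist=WORDS):
    """Join uniformly random words; entropy = n_words * log2(len(wordlist))."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

def entropy_bits(n_words, wordlist=WORDS):
    return n_words * math.log2(len(wordlist))
```

With a 7776-word list, six words gives about 77 bits - more than a random 12-character mixed-case alphanumeric password, and far easier to remember.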


> It is not enough to use some strong passwords for the things you think are sensitive. Every weak password is a weak link in your total identity chain.

I get what you're saying here, but this is patently false. My HN account is not a weak link in my "total identity chain" in any way that matters.

Yes, obviously defaulting to strong passwords everywhere is a better default because you can make mistakes when judging how important a specific site is, but that doesn't mean that every site is critical.


I believe parent's thinking is that a total identity chain comprises all human-influencing bits of identity. Most of these social engineering attacks fuzz boundaries in human-arbitrated authentication systems. And I wouldn't categorically rule out any portion of my identity as never being useful to an attacker.


You're right that not every website is critically important, but that also wasn't my point. My real point is that it is important to treat every website as though it were, because leaving yourself that judgment call to make is dangerous.

I think we mostly agree here though :).


Yet with access to your HN account, a malicious person could damage your reputation and career, and I can imagine kinds of social engineering that make "weak link" seem plausible.


> Until password management or authentication are substantially overhauled on the web, the most optimal solution for protecting yourself is constant, militant vigilance with passwords. I don't know any of my passwords at all, and what's more, I even have randomly generated answers to security questions.

This is a good point. Those "On what street did you grow up?" security questions do not increase security. If answered truthfully, they decrease it, since a lot of this stuff can be looked up online. It boggles the mind why a company would go to the trouble of encouraging users to come up with a strong password, then turn around and deliberately provide a softer attack vector.


I recently had to answer some security questions with my bank to verify myself. The trick was that they were all multiple choice, and the bank actually said "we pulled these from public records". Someone earlier mentioned doing the same thing but filling their questions with gibberish, so when support read out the options, the gibberish stood out significantly.


Helpfully, NIST just recently deprecated knowledge-based authentication (security questions) so we should be seeing less of this as time goes on.


> Also, where possible, use two-factor authentication.

I have turned on two-factor wherever possible, but I am thinking about turning it off again for PayPal. It sends an SMS every time I log in, even in their app or on a PC I have logged in on before. I wish they would keep a device authenticated at least a month or so, only asking for the password on each login.


Unfortunately, I had to disable SMS 2FA on PayPal. I got locked out multiple times, for days, because their system had issues and wouldn't send me the SMS code. Eventually I had to call support and get them to disable it so I could get into the account again. And I left it disabled.

Paypal is one of those services where it's very dumb not to have 2FA, but until they overhaul their 2FA system, or support something like a Yubikey/Nitrokey/U2F and Google Auth, I won't risk using it again.


Good on Kenneth for being quick on the draw. I love 'requests'.

If you're a developer of a popular open-source project, this should serve as a warning to make sure you have multi-factor authentication on, yes, but it's even better to learn from this and come up with incident response plans with your core maintainer base. Ask among yourselves:

1. Do we have the ability to detect an overt breach like this one?

2. Do we have the ability to detect a covert breach (e.g. are our builds reproducible, auditable? Are our binaries signed? Do we know who our committers are?)

3. Do we have a consistent way to message users of the project of the compromise?

4. Do we have a way to deprecate/mark as tainted compromised versions of our module/package/application?

GitHub offers some technology to help in this regard. Sign your release tags, at a minimum [1]; sign your commits with developer keys if you're paranoid. [2]

As FOSS becomes more widely used in the enterprise, I suspect these attacks will become less of a rarity.

[1] https://news.ycombinator.com/item?id=11494997

[2] https://help.github.com/articles/signing-commits-using-gpg/


It's odd not to examine the "contacted a friend at GitHub" part. On the one hand, it's all too common to see this as the only escalation path at a modern tech company. On the other hand, at companies without strong internal controls, it raises the question of how to authenticate yourself to the friend at the company - especially in what the author describes as a stressful 10 minutes.

We know from postmortems that the error-handling code tends to be among the least-tested parts of a codebase, which leads to cascading failure. I wonder if an even wilier attacker could have leveraged the analogous failure here.


> It's odd not to examine the "contacted a friend at GitHub" part.

Authentication aside: what does somebody do to talk to GitHub if you don't have a friend at GitHub that's willing to chat with you? Would Kenneth have been given the Source IP address of the attacker if he didn't know someone there?


>what does somebody do to talk to GitHub if you don't have a friend at GitHub that's willing to chat with you?

Write a blog post on Medium with the perfect click-bait title to make it go viral. Hope a github engineer reads it and gets back to you.


GitHub has a support system and, in my experience, they reply quite fast.


I don't see how "contacted a friend" enables an exploit - in this case (and many others I've heard on the 'net) this channel only provides info or verification about the breach, and specifically doesn't provide, and IMHO won't provide, either of (a) any security credentials or (b) resetting any security credentials.

The question isn't about how to authenticate yourself to the friend at the company, the issue is that the friend in the company shouldn't (and shouldn't be able to) perform privilege escalation. They can tell you what has/hasn't been done with your account, but then you have to login or reset password the normal way in any case.


Maybe a bit too hidden for those critical 10 minutes, but the device login information is readily available in the Security tab of your GitHub account:

https://github.com/settings/security


In this case he was locked out and didn't want to trigger a recovery email that the attacker could use.


The usual way - social engineering.

Spoof an inside line's callerID and say "Hi, it's Dave. What's the root password?"

;-)


Oh boy another plug for 2FA. I won't deny the obvious security advantages it confers, but that well has been poisoned a long time ago.

Call me paranoid, but I have a hard time seeing the push for 2FA as anything other than a plot to collect valuable user data. As with any good lie, it's mostly true - 2FA does improve security - but what happens when a company goes bankrupt and sells off its assets?

Moreover, I can't help but question the actual necessity of this security feature. The OP's mess could have been avoided if he'd ... you know ... systematically chosen secure passwords.

>Turn on two-factor authentication. Right now.

I'll pass, thanks.

P.S.: thanks for Requests!


> Call me paranoid, but I have a hard time seeing the push for 2FA as anything other than a plot to collect valuable user data

Exactly. Mostly when I see companies like Google, Facebook, etc. constantly trying to trick me into activating it. And yes, I say trick: the option to ignore/skip is always hidden or disguised, totally ignoring UX and accessibility needs.

That, and the fact that the input fields already come populated with my phone number ...just waiting for my consent... this does not inspire trust, no.


Another interesting thing most people don't realize is that if you switch IP addresses, Google demands a phone number for Gmail login, even if the account has no 2FA and no phone number was specified initially. "Give us some phone number to log in." Isn't that strange from a security perspective?


Exactly: the dark-pattern design speaks to the true motivation behind this push... and it ain't security.


2FA doesn't automatically require handing over any user data. SMS-based 2FA requires giving up your mobile phone number, but that's not the only option available to you. TOTP gives the provider absolutely nothing, and has the added benefit that it can't be compromised by a call to your mobile network or by SIM cloning.

You make this sound like it's part of a conspiracy to force 2FA on the unsuspecting masses so that the Illuminati can sell your contact details to aliens from Mars, or something. That's a bit insulting to the original poster, who is sharing useful information learned through real world experience.


> but it's not like that's the only option available to you

A lot of the time, though, it really is. I always prefer TOTP over SMS, but it's rare to be given the option.


> The OP's mess could have been avoided if he'd ... you know ... systematically chosen secure passwords.

In this exact scenario that is true, but in general, not necessarily. He could have had a very secure password on his DNS provider, and through some other method that account could still have been compromised (social engineering, SQL injection, poor brute-force rate limiting, some other attack vector). And once the DNS account is compromised, a password reset on GitHub is trivial if there isn't 2FA on the GitHub account - or on any other account linked to that email.


I can make the same argument for 2FA: social engineering can defeat it, or I can turn to some other attack vector.

This isn't to say that 2FA doesn't add security. The point is rather twofold:

1. with a little extra attention, I can secure my system without 2FA and achieve comparable security

2. it's absolutely scandalous that the proverbial well has been poisoned and we are, by virtue of the fact that extra attention is required and that most people won't put in the work, in a less secure state. Again, this is due to the irresponsible data-policy of the overwhelming majority of service providers.

So again: 2FA is a valid security layer, but I'll pass. :/


> with a little extra attention, I can secure my system without 2FA and achieve comparable security

How? Let's play devil's advocate here and pretend your email is compromised (account takeover or redirect). What have you done on any of the services you use that gives you security comparable to enabling 2FA on them, and that now prevents the attacker from resetting your password?


>How?

By hardening your email server. Yes, assuming the email server is compromised implies a compromised system in this scenario, but that is tautological and therefore uninteresting.

The extra attention should go towards hardening the weak link, in this case the email server that, de facto, provides authentication services.

Server hardening is a vast topic, so I leave it as an exercise for the reader to research what can be done. Surely you don't deny that "a little extra attention" to server configuration can provide additional security without necessarily resorting to 2FA?

But again, 2FA is valid security, so why not host it yourself? I have nothing against 2FA in principle, but I don't trust Google, Github, Facebook et al with my phone number.

While we're on the subject, the inherent insecurity of the SMS protocol should also be weighed in any serious endorsement of phone-based 2FA: http://www.itnews.com.au/news/telcos-declare-sms-unsafe-for-...


So you're running your own DNS server then too right? A hardened email server doesn't prevent a targeted attacker from just redirecting your mx records (as happened to OP).

> but I don't trust Google, Github, Facebook et al with my phone number

Github doesn't require a phone number to use 2FA. And if you don't trust Google or Facebook with your phone number then that means you don't use either of them at all? Because otherwise holding back your phone number from them is a rather pointless exercise as they already know everything about you [1]

[1] https://en.wikipedia.org/wiki/AOL_search_data_leak


I'm assuming a setup similar to that of the OP's.

>And if you don't trust Google or Facebook with your phone number then that means you don't use either of them at all?

More or less. Email I can handle, and there's stuff I stupidly gave up in the past, but I see no reason to provide them with additional info.

>Github doesn't require a phone number to use 2FA

Are you referring to the application? That might be a fair compromise depending on what kinds of permissions the application requires.

>A hardened email server doesn't prevent a targeted attacker from just redirecting your mx records (as happened to OP).

What on earth are you on about? Of course it does. The OP had a crappy password, which is about as fundamental as it gets with regards to server hardening...


If you don't understand the different types of 2FA available, I don't think you're really in a position to be making blanket statements about if people should be using it or not.

"The application" is an open standard called TOTP, which doesn't require any specific application. I just set up 2FA for my personal GitHub account using a TOTP app on my Pebble watch. The actual crypto for the 2FA is done locally on the watch, no other device can spoof it, and GitHub get exactly zero feedback or information from this - other than the 2FA code that I type, of course.

There's other ways to do 2FA - U2F is an upcoming standard and people have been using dongle-generated codes from RSA for years.

Before making pronouncements that go against current best practices for end users, you really need a much better understanding of what you're talking about. Saying nonsense in a confident voice is enough to convince people that your terrible advice is worth following, and could leave them with worse information security as a result.
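For the curious, TOTP (RFC 6238) is small enough to sketch from the stdlib. The only shared state is the base32 secret exchanged at enrollment, which is exactly why the provider learns nothing else about you (illustrative code, not a hardened implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Both sides compute the same code from the shared secret and the clock; verifying servers typically accept a window of a time step or two to absorb clock drift.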


> What on earth are you on about?

You can have the most secure/hardened email server on the planet but if I can change your mx record at your dns provider it will do you no good. And there are multiple attacks against DNS that don't require me to target your provider even (e.g. https://en.wikipedia.org/wiki/DNS_spoofing)

And once all your emails are being sent to me, I can own every account you've signed up for using that email (through reset password). And currently the only reliable defence against that is 2FA.


Having a hardened email server and good passwords does nothing for your security if the attacker targets and takes over either:

* Your DNS Records - MX records point to your email server, the attacker just changes the record to point to their email server, taking your hardened email server out of the loop.

* Your Domain Registrar - the attacker can just change your DNS records to point to their own DNS servers (with their own MX records pointing to their own email server).

Do you trust your DNS provider is completely secure from hacking and social engineering?

Do you trust your Domain registrar to adequately secure their systems from hacking and social engineering? Remember, you only pay them ~$10/year for your domain.


In theory, if the receiving end enforces TLS and if the sending end checks that the certificate is valid for the domain, you don't need to trust the DNS or the registrar. But nobody does that, and you still have to trust the sender.


I'm beginning to think that a possible solution is to use an email account entirely different from your primary email for all website registrations.

Which leads me to another idea. If websites supported separate email accounts for messaging and for authentication/resets, and kept the second account name hidden from the general public, it would (somewhat) help in a lot of cases.

Also, I would be much more likely to do 2FA on more accounts if websites gave me the option to use printed lists of tokens instead of my phone number.


Umm what valuable data exactly does TOTP (which for example Github uses) give the companies?


Kenneth should repeat N's big takeaway:

• Avoid using email addresses on your own custom domain (e.g. yourname@yourdomain.com) for any login purposes. It basically opens you up to these kinds of attacks (where a hacker breaks into your domain-name account and forwards your custom email to their own).

Read N's story at https://medium.com/@N/how-i-lost-my-50-000-twitter-username-...


A better takeaway is to use a better registrar with good security.

I almost lost an old @gmail.com address when they asked me to verify a really old phone number for no reason. Never trusted them again.


But what is a better registrar? I've heard most all of the main players seem to have social engineering issues, regardless of twofactor, and google domains is still US only


Cloudflare made a product exactly for this purpose: https://blog.cloudflare.com/introducing-cloudflare-registrar...


Gandi.net, maybe. I haven't used them; a friend said I should consider moving there. Any thoughts on this? What alternatives are there that offer 2FA and operate in the EU?


I have been using Gandi for 10 years and they are definitely great. I see no reason not to use them, except maybe that they usually aren't the cheapest.


Gandi is great! I'm a long time user and highly recommend them.


Gotta say I disagree with this. I'd say what it means is: recognize that DNS security is important and treat it as such.

The reason I disagree is that if you use something like Gmail or outlook.com for logins/password resets, there is a nasty potential problem: if the provider locks you out of your account, you could be completely stuffed.

There have been cases of people losing access to their accounts because of ToS breaches in the past. If that account is your login account for other systems you could also lose access to those.


But I don't want to use @gmail.com because that ties me down to the vendor.

What am I supposed to use then?


I'm rather not a fan of Google in general, and particularly not these days (though they're hardly the only tech company backing the TPP).

That said: on account of size, targeting, procedures, and what I find are generally fairly diligent employees on the tech side (design, products, ads, and gov't rel'n are another story), you're probably as safe with Google as with any other large vendor.

That said, the basic problem here -- getting locked out of your account or profile, or allowing the wrong person in -- is a HYUUUGE problem. And the 2nd Amendment people can't do anything about it either, to continue the allusion....

I wrote of my own "I've been locked out of a Google account" account, well, twice. It's been pretty annoying (particularly as I'm paranoid and don't trust Google to know who I really am, because reasons). It's been resolved within a few days, though it leaves me scratching my head a bit.

As I noted the first time, and have adopted as a slogan for this type of event, "Who are you is the most expensive question in information technology. No matter how you get it wrong, you're fucked." See: https://redd.it/2w618r https://redd.it/3mo7l6

Unfortunately, that issue is paired with another, also sloganed and given to much use: Data are liability.

If you hold data about people, or state they consider important (e.g., a widely used codebase), or other elements, then you've got control point others may well find they wish to avail themselves of.

I don't have solutions to either of these problems (I'm paranoid, not narcissistically delusional). I can see the shapes of possible solutions, including reducing attack surfaces and possibly having a more widely distributed and socially-integrated identity verification mechanism. Or offering far more services as stateless and without locally-maintained data, at least in cleartext.

Better notifications, recovery, and encryption methods for mail would also help -- capture of email accounts would matter far less if they were encrypted to keys held only by the user (and absolutely not on the control path involved in accessing or specifying them, such as MXs).


>It's been resolved within a few days

Could you share how you resolved it? I've been in that situation recently. I was forced to change the VPN I'd used for a long time to access my Gmail account. And I didn't have 2FA enabled because I didn't want to give out my phone number.


In one case, personal appeal to a Googler.

In another, fallback/recovery ultimately worked, but I needed to try from several devices.

The "security questions" proved worse than useless. Unless exercised periodically, I think people forget or lose the answers (or even the questions). Worse, vendors change their strategies.

Several of my Google IDs started from entirely different services, with different rules. And privacy guidelines. E.g., YouTube's old "never use your real name" advice.

How quaint!

Identity is weird.


It'd be an imperfect solution, but use Gmail (or equivalent) address(es) for logins elsewhere, and then your personal email communications via a custom domain hosted with your provider of choice (e.g. FastMail, Zoho, StartMail, Tutanota, etc.) Preferably, the Gmail addresses should not be directly tied to your identity IRL, such that your email is secured but somewhat sandboxed from the large data piles companies store about Your Name.


I'm using a POP3 mailbox hosted by the registrar of my custom domain and I download all the messages onto my computer. No web mail. I'm trusting the DNS of my registrar in the same way I would trust that of FastMail, Zoho, etc. Is that any different?


That's kind of an "all your eggs in one basket" approach - you're relying completely on the security of one 3rd party (your registrar). Whether this is better or worse than relying on two 3rd parties (your registrar and a different email provider) is a good question.

Are you sure your POP3 mailbox is using encryption for the password? The original port-110 POP3 protocol sends it in cleartext unless an STLS command is sent (which is MITM-able); POP3S over port 995 will be encrypted (but then you need to consider whether all the software in the chain is actually checking SSL certs and their chains...)
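For the implicit-TLS variant, a minimal Python sketch (host and credentials are placeholders): `POP3_SSL` handed a default context verifies the certificate chain and hostname, so the password never travels in cleartext.

```python
import poplib
import ssl

def message_count(host: str, user: str, password: str) -> int:
    # Implicit TLS on port 995 (poplib.POP3_SSL_PORT); the default
    # context rejects self-signed or hostname-mismatched certificates.
    conn = poplib.POP3_SSL(host, poplib.POP3_SSL_PORT,
                           context=ssl.create_default_context())
    try:
        conn.user(user)
        conn.pass_(password)
        count, _octets = conn.stat()
        return count
    finally:
        conn.quit()
```

Unlike STLS on port 110, there is no cleartext negotiation phase for middleboxes to tamper with: the connection is TLS from the first byte.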


You could use @[IP address]?

No guarantees this will pass whatever-service's arbitrary validation filters though.


Or that you can maintain that IP address if it was directly assigned to you by IANA/RIPE. (And in some cases, even if it WAS assigned to you).

@[IP address] is a really bad idea.


Yes, as it turns out using the internet requires reliance on third parties. Is the IP address of your mailserver as easy to compromise as your domain name? I don't know, it depends on your ISP, but it almost certainly requires a targeted attack. Using email to "secure" access to accounts is already "a really bad idea", so what can one do?


IP address is harder to compromise, however, unlike domain names you cannot guarantee to own it permanently.


> unlike domain names you cannot guarantee to own it permanently.

I'm not convinced this is a property of domain names. It seems like as long as you pay your bills, your provider stays in business, and no governments get involved, both these identifiers are effectively permanent. Your IP is likely provided by the same people who own or co-locate your physical server anyway. In any case, you're likely to get enough advance notice of ip changes to update your various accounts.


Build your own security mechanisms (alerting, etc.)


It seems he got his account back https://twitter.com/N


Some domain registrars/DNS management services support multi-factor authentication. If yours does not, you should migrate to one that does.

DNS is the foundation upon which everything else is built. And, it's been my experience that DNS and email attacks are very common.

If an attacker can compromise DNS and email, then they can compromise all the higher-level services that send password resets by email (twitter, github, facebook, whatever).


I'm still trying to wrap my head around DigitalOcean emailing me the root password to my new instances.


If you specify a keypair to be used, they won't do that.


The correct solution is to display it to the user on an HTTPS page.


Why do you think this is more correct than the user providing the public side of an ssh key pair?


If DigitalOcean and your mail server both talk TLS, what's the problem?


Some random old bit of Cisco gear sitting anywhere in between with its default configuration set to strip out STARTTLS commands...

http://www.cisco.com/c/en/us/about/security-center/intellige...

"When Cisco ASA is configured for ESMTP inspection, the ASA is not able to examine the TLS session because it is encrypted. Therefore the ASA will prevent the establishment of the STARTTLS session and allow the SMTP endpoints to determine whether the SMTP session should continue in clear text (that is, with no privacy)."

(I once billed a client just over $30k to investigate/diagnose/resolve that problem - there was a piece of Cisco gear on the edge of their network that nobody ever admitted to even knowing existed which was stripping out the STARTTLS instruction between a webapp running inside their own datacenter and their own 3rd party mail service - and everybody was pointing their fingers at _me_ for the mail not coming through encrypted... Twitch. Twitch. Twitch...)
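One way to catch this kind of stripping from a given network path is simply to ask the server whether it still advertises STARTTLS in its EHLO response; if gear in the middle is rewriting the capability list, the check comes back negative even though the real server supports it. A rough sketch (host is a placeholder):

```python
import smtplib

def starttls_advertised(host: str, port: int = 25) -> bool:
    # If an in-path device (like the ASA config above) strips the
    # STARTTLS capability from the EHLO response, has_extn() reports
    # False from this vantage point even though the server supports it.
    with smtplib.SMTP(host, port, timeout=10) as server:
        server.ehlo()
        return server.has_extn("starttls")
```

Running this from inside and outside the suspect network segment and comparing the answers would have localized the problem described above much faster than finger-pointing did.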


Ah. So we need more of that Postini option to require STARTTLS is successful.


The problem is that email is insecure. It isn't good security practice to send root passwords over email under any circumstances.


It'd be nice if you could flip a setting on GitHub so that password-reset emails are encrypted with a GPG key. They already have an interface for uploading GPG keys.


Thanks for sharing your story, Kenneth. Unfortunately it will be a common one... Maintainers of open source projects will be increasingly targeted by sophisticated hacking teams, sometimes government-funded. They will often win, but the best thing you can do for yourself and your users is to practice good security hygiene, and this story is a perfect example of why. Strong random passwords everywhere (no repeated passwords) and 2-factor auth should be the minimum. Thankfully there are plenty of free apps out there that help you manage this process. Nobody can have perfect security, but you can easily raise the bar high enough to force an attacker to move elsewhere. Also, the OP's password was most likely taken from the recently leaked LinkedIn breach data (educated guess).


Github authors - Sign your commits and tags with a PGP signature. https://help.github.com/articles/signing-commits-using-gpg/

It doesn't look like the authors/contributors of requests are signing their commits either.


Is there a comprehensive list somewhere of which websites/services support 2FA and where to go to enable it on each one?



Thanks, that's just what I was looking for!


How could such a list possibly stay current short of having a web crawler and natural language processor? Or did you not mean an exhaustive list of all websites?



Seems I misread the question as asking for a complete list of websites that allow 2FA and websites that do not, the latter requiring an exhaustive listing of all sites on the internet.

> figurative language

I read the Wikipedia article. It indicates our friend rcthompson expected his question to be interpreted literally, not figuratively. He did not use a simile, nor metaphor, nor any of the other techniques Wikipedia lists.

Perhaps you're suggesting that "comprehensive" should be interpreted as "many" or "a large variety" when the proper meaning is implausible. In that case, what use is the word "comprehensive"?


Given that the literal reading of my question is absurd for the reasons you cite, the obvious interpretation is that my use of "comprehensive" was shorthand for "as comprehensive as possible".


Fair enough.

I should point out that I've seen lots of comments lately that actually are absurd. Forgive me for the mistake?


Kind of related: does anyone know if it's possible to mandate two-factor auth across a GitHub organisation? I know you can see whether it's enabled on your users list, but that's a bit arduous. It seems like any one user without 2FA enabled would be the weakest link otherwise.


Looking at the two best guesses: a reasonable assumption, if the Certifi bundle was in fact the target of this attack, is that some consumer of that bundle is the true target of this attack.

(Incidentally: I'm not familiar with what the Certifi bundle is, and some quick DDGing didn't turn it up.)

As a recent convo I'd had here on HN turned up, key management is a crucial element of PKI, which includes not only SSH and PGP, but the CA-based measures: SSL and TLS.

Your web link is only as secure as the least-paranoid developer's MX registrations in your entire development toolchain.


Yes, you should enable two-factor authentication for all your important accounts. My iCloud account was compromised two months ago; after that I turned on two-factor auth for all my important accounts.


Does anyone know the Twitter handle in question? I'm curious to read about that incident.

(edit: Oops, I guess I didn't realize the bold were hyperlinks in the article. Thanks for the pointer.)


Yes, it's @n. This is linked from the article: https://medium.com/@N/how-i-lost-my-50-000-twitter-username-...



Is it possible to enforce two-factor authentication for all developers with merge rights on github?

Also, is it possible to check whether someone has 2-factor authentication enabled?


In an organisation on github you can see all members who don't have 2FA (indicated by a red !) - https://github.com/orgs/<org-name>/people

No way to mandate 2FA yet though.
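You can at least script the audit instead of eyeballing the red exclamation marks: the GitHub REST API exposes the same information via a `filter=2fa_disabled` query on the org-members endpoint (visible to org owners only). A stdlib-only sketch with placeholder org and token:

```python
import json
import urllib.request

def members_without_2fa(org: str, token: str) -> list:
    # GET /orgs/{org}/members?filter=2fa_disabled lists members who
    # have not enabled two-factor auth; requires an org-owner token.
    req = urllib.request.Request(
        "https://api.github.com/orgs/%s/members?filter=2fa_disabled" % org,
        headers={"Authorization": "token " + token},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return logins(json.load(resp))

def logins(members: list) -> list:
    """Pull the login names out of the API's member objects."""
    return [m["login"] for m in members]
```

Run on a schedule, this at least turns "no way to mandate 2FA" into "no way for anyone to quietly stay without it".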


Would using 2FA on the DNS provider have prevented this? It's unclear to me how exactly the attacker got into DNSsimple.


Password reuse with some other site that had its database stolen.


It would be nice if popular packages had to be audited by the community before being pushed; that would make it harder to pull off attacks with such a large set of possible targets (all the tech companies).


Wait a minute, why does requests come with a cert bundle?


Just wanted to say I love requests. Thanks for it.


Don't insurance companies require you to undergo some kind of internet security audit?

I mean, if this doesn't happen, and if governments don't take steps to improve the situation in the next 10 or 15 years, won't things get bad enough that politicians notice?


Thanks for sharing!



