The amazing thing to me is that I don't know anybody who thought it was coming in the next couple of years who based their estimate on anything but hope and magic.
I'd agree that even a bit of the right sort of analysis would have shown this wasn't going to happen easily. But I think the kind of thinking that imagines it as possible is extremely easy to fall into.
We've all been on software projects where the launch date was the earliest day nobody could prove it was impossible to finish.
Sure, and I'm pretty sure a lot of software engineers have given or agreed to completely unrealistic estimates even after having had this experience. If anything, the "software crisis" - the inability of software engineers to provide good estimates even after hard experience and warnings - is evidence that over-optimistic thinking is natural for human beings in this situation.
My theory is that when a human is asked about the difficulty of a mental task, they can essentially only fall back on a basic language-expression of that task and rationally try to calculate with it - the brilliance and the limitation of us language-generating, language-using creatures is the ability to distill a complex and messy task into a fairly simple sequence of symbols. But that distillation only works within a complex and dynamic context, most of which we screen out. A command like "Drive to the store" doesn't take into account all the compensations needed to do the driving, just as "Create an inventory system" fails to specify all the client-specific "gotchas" involved in such a task.
So going from our normal language-distillation process to a complete software system that doesn't have the continual dynamic compensation of our intelligence is an inherently difficult problem - even though we can probably only do it at all because of our "language facility".
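To make that concrete with a toy sketch (everything below is hypothetical, just an illustration of the point, not anyone's real system): take the one-sentence spec "decrement stock when an item sells".

    def sell_item(inventory: dict, sku: str, quantity: int) -> None:
        """Naive reading of the spec: subtract what was sold."""
        inventory[sku] -= quantity

    def sell_item_with_gotchas(inventory: dict, sku: str, quantity: int) -> None:
        """The same spec once some of the screened-out context is spelled out."""
        if quantity <= 0:
            raise ValueError("quantity must be positive")  # returns? mis-keyed sales?
        if sku not in inventory:
            raise KeyError(f"unknown SKU: {sku}")  # discontinued? relabeled stock?
        if inventory[sku] < quantity:
            raise RuntimeError(f"cannot oversell {sku}")  # backorder? partial fill? race with another till?
        inventory[sku] -= quantity
        # Still unhandled: concurrent sales, multiple warehouses, units vs.
        # cases, audits - the client-specific "gotchas" the sentence never carried.

The second version is still a tiny fraction of a real system, and the sentence was the same both times.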
> the inability of software engineers to provide good estimates
I don't think it's inability. I think they're embedded in a system that doesn't encourage or reward honesty or accuracy.
I agree it's easy for people to say, "Gosh, how hard can that be?" Which is where we get the classic, "I could build that in a week" estimate. But I don't agree that there's something inevitable about people going from a finger-in-the-air SWAG to a "we'll have self-driving cars in 2021" business plan.
Normally I'd say that it's a straight-up failure of both management practice and the professional standards of the engineers. Which is true. But I think the problem is that a lot of fundraising in the "Uber for X" era is only slightly less of a con than Theranos. So I think it's more correctly seen as a failure of founders and VCs.
At some point, when you're talking about a concrete and complete plan, you likely go from naive self-deception to the willful deception of others.
Of course, one should consider how people's tendency to naive self-deception makes things easier for willful deceivers.
As a scheme for selling an impossible dream takes shape, I suspect the actors have a strange and contradictory mindset. A rational conman is going to sell a smallish scam and vanish. The folks running Theranos certainly deceived many, but they also rode the train far past the point where they could avoid being caught in the wreck (so they wound up ruined, facing jail time, etc.). Maybe they believed that if they kept the charade going long enough they could make the original scheme work, but they could also have had a sort of thinking that simply doesn't look at the possibility of failure once they have decided to seek success.