Sure, predictions are hard. We get them wrong on both the upside (over-hype) and the downside. As a result, "AI isn't going to happen" really is not a good argument against discussing its potential risks. (... and I say that as someone who is an "AI is around the corner" skeptic.)
Yet that doesn't change the basic risk calculus. In his previous post, Sam advocated imposing draconian licensing and oversight requirements on what would, in practice, be the majority of non-trivial CS research. He advocated this on the basis of the potential risk that hypothetical, as-yet-undeveloped AIs might pose to human beings.
I did a short post on it here: http://adamierymenko.com/did-sam-altman-of-y-combinator-just...
In addition to what I wrote there, I think the risk of dramatically slowing progress in CS/AI also has to be taken into account. There is the risk of doing, and there is the risk of not doing.
The problem is that we currently face a number of existential risks -- like catastrophic economic collapse due to fossil fuel depletion -- where most of the risk falls in the "risk of not doing" category. We know with total certainty that if we continue business as usual, our civilization will collapse. It's simple physics and high-school math: exponential growth in consumption of a finite resource, with no substitution or path to replacement, can only end one way.
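The "high-school math" behind that claim is easy to make concrete. Integrating exponentially growing consumption C0·e^(rt) until the cumulative total equals the reserves R gives a closed-form depletion time T = ln(1 + rR/C0)/r. A minimal sketch, using purely illustrative numbers (not real reserve estimates):

```python
import math

def depletion_time(reserves, consumption, growth_rate):
    """Years until a finite resource is exhausted when annual
    consumption grows exponentially at `growth_rate` (0.02 = 2%/yr).

    Setting the integral of C0 * e^(r*t) from 0 to T equal to the
    reserves R and solving yields:  T = ln(1 + r*R/C0) / r
    """
    return math.log(1 + growth_rate * reserves / consumption) / growth_rate

# Illustrative only: reserves equal to 100x current annual consumption
# last 100 years at flat consumption, but far less under growth.
print(round(depletion_time(100.0, 1.0, 0.02)))  # ~55 years at 2% growth
print(round(depletion_time(100.0, 1.0, 0.05)))  # ~36 years at 5% growth
```

The point of the formula is that the depletion time grows only logarithmically with reserves: even a tenfold increase in the resource base buys surprisingly little time under sustained exponential growth.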
Smart computers might help us crack tough problems like fusion, safe and scalable fission, and better batteries to make renewable energy practical. That in turn might help us avoid a real, tangible, non-hypothetical existential risk. I see no reason to hamstring that kind of progress to defend against extremely hypothetical, low-probability risks.
That's why I consider Sam's suggestions to regulate CS research more dangerous than any risk posed by speculative AI scenarios.
I am not opposed to all regulation, but I am opposed to regulations based on extremely hypothetical, hand-wavy risks, and to regulations that are virtually impossible to define accurately or enforce fairly. Regulations should be clear, objective, rationally justified by tangible problems or risks, and minimal. We should regulate, say, the use of nuclear materials, because we know for a fact that they are dangerous. We should have financial regulations because we know financial fraud has happened and will keep happening without them. And so on. But I positively cringe at broad, ill-defined regulations based on fear-mongering and "precautionary principle" thinking -- a.k.a. institutionalized paranoia and cowardice. Such regulations can do nothing but halt progress in the name of vague fears.
Make no mistake: Sam's proposal in his previous post would halt all non-trivial CS research, or at least slow it to such a crawl that it would effectively stop. It would also cause a mass exodus from the field, since nobody wants to work under that kind of regime. And given that CS is now the primary driver of progress in other fields, it would likely also halt major progress in energy, materials, propulsion, transportation, etc.
In the blog post linked above, I take this in an almost conspiratorial direction and speculate that it is some sort of political power play to lock down the field. The reason is that I find it hard to believe that someone of Sam's intellect and education would not realize the implications of what he's suggesting.