I have doubts about the Einstein quote. Atoms had been split at will for decades by then.
Szilárd hadn't yet proposed a theory of nuclear chain reactions, but according to some citations of the quote, Einstein didn't say it until 1934 - which was after Szilárd's idea.
I don't have a problem with the possibility of superhuman intelligence. I do have a problem with the fact that currently we have no idea what the concept even means - and right now, more immediate cybersecurity issues are being neglected.
Computers are already better than humans at many activities. From playing chess to landing planes to learning how to play a video game - a computer with the right software is much better at these than an average human, and is often at least as good as the best humans.
Take that to the black-hat corner, and worms and botnets are already a serious problem.
We don't need to wait for the Internet to become sentient and start talking to us in a deep echoey robot voice to worry about cyberthreats.
There's more than enough to deal with already. And if you're going to try to regulate and contain a future AI, making current systems as secure as possible seems like a realistic place to start.
"I have doubts about the Einstein quote. Atoms had been split at will for decades by then."
Not in a chain reaction. When Szilárd described the concept of a chain reaction to Einstein, Einstein was shocked. He said "I never thought of that!"
Until then, nuclear physics was purely an academic enterprise. There were few applications for radioactive materials. Radioactive decay just happened at its own slow pace, and not much could be done with it. X-rays could be used to pump the process, but less energy came out than what was put in. Suddenly the nuclear physicists realized they had a tiger by the tail. This was going to change the world, not necessarily for the better.
Like @TheOtherHobbes above, I was skeptical of the Einstein quote ("Wasn't Einstein presciently aware of where nuclear fission technology was going?").
But, poking around a bit, I came to the same understanding you have. Here's some more of the timeline:
The quote in the OP (which I can't find online; the Einstein archives at Caltech are, alas, not indexed) about Einstein's skepticism about nuclear energy is dated 1932. The first demonstrations of nuclear fission came years later, in late 1938 and into 1939. And as you said, Einstein is reported to have said "I had not thought of that" regarding the chain reaction.
The fabled Einstein-Szilárd letter to Franklin Roosevelt, warning about the Nazis getting the atomic bomb, was written in August 1939 (http://en.wikipedia.org/wiki/Einstein–Szilárd_letter), and then relayed to Roosevelt in October, after the flurry of activity due to the Nazis invading Poland had died down.
Many good points. I can't argue for or against your point about resource allocation because I have no idea what resources are available. I can't even argue for regulation, because I know so little about the current landscape. But I can say that AI is possible and people are working on it, therefore people should be encouraged to discuss its potential threats and safeguards.
My opinions currently stop at: this is an important topic, and anyone who is interested should be exploring it via their chosen medium.