The post I was replying to said that "current ML techniques aren't anywhere near creating general intelligence". I wasn't suggesting the doomsday scenarios are likely, just that some people, Jeff Hawkins at least, believe they are close to developing general intelligence.
Indeed, to quote Hawkins in the very article you linked to:
"What is new is that intelligent machines will soon be a reality, and this has people thinking seriously about the consequences."
I don't have enough expertise in ML to judge whether he's correct, but I'd be curious to hear from the OP because it seems to contradict his claim that current ML techniques are limited to finding optima in functions over high dimensional spaces.
So essentially the way ML works is you have some error function that you test your output against and then you have some model (neural networks and their variants seem to be performing the best currently, but different models do better at different tasks) that can be thought of as a function between inputs and outputs. Typically you have lots of inputs (for example a picture could be represented as an array of pixels, so you'd have one input for every pixel). The model then guesses how to transform the input into an output and measures the result against the error function. The goal is then to improve the model iteratively (and hopefully not overfit!) to eventually minimize the error.
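That loop can be sketched in a few lines. This is just a toy illustration of the idea above (not any particular library's or Hawkins's approach): a one-parameter "model" maps inputs to outputs, an error function scores its guesses, and gradient descent iteratively adjusts the parameter to minimize the error.

```python
def model(w, x):
    """A one-parameter 'model': predict y = w * x."""
    return w * x

def error(w, data):
    """Mean squared error of the model's predictions over the data."""
    return sum((model(w, x) - y) ** 2 for x, y in data) / len(data)

def train(data, w=0.0, lr=0.01, steps=1000):
    """Gradient descent: repeatedly nudge w downhill on the error surface."""
    for _ in range(steps):
        # derivative of the mean squared error with respect to w
        grad = sum(2 * (model(w, x) - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Data generated by y = 3x; training should recover w close to 3.
data = [(x, 3 * x) for x in range(1, 6)]
w = train(data)
```

Real systems replace the one-parameter model with millions of weights (as in a neural network), but the shape of the loop — guess, measure error, adjust — is the same.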
I'm not very familiar with Numenta or Hawkins, so take what I'm saying with a grain of salt, however, I think there are two important things to consider.
First, you can have intelligence without having any sort of general intelligence. For example, the best neural networks for handwritten digit recognition perform better at that task than I or most other humans can. However, if you asked that same network to predict whether an image is of a dog or a cat, it would do terribly, at least until you retrained it and tweaked it a bit, and even then it wouldn't have human-level performance. Similarly, the best computers can beat any human at chess. These are all demonstrations of "intelligence", and I'm sure you can think of plenty more.
Second, the problem isn't so much that any given task is very difficult (although many are), but that generalizing from the specific cases we're good at (namely classification) to arbitrary tasks requiring intelligence is a really hard problem, and we don't seem to be getting much better at it. We're pretty good at learning a model for a narrowly defined task, but if you take that model and use it on something it wasn't designed for, it will just give you garbage.
So I'd say that intelligent machines are already something we're seeing, but machines that learn how to navigate the world and retrain themselves on arbitrary problems are a long way off, and that seems likely to remain true for the foreseeable future.