I wonder what the possibilities are for adding a neuromorphic chip to a normal stack (CPU, GPU, NPU) for specialized tasks such as image/video recognition. GPUs are a decent precedent here: they similarly need specialized code compared to CPUs.
This is something I'm interested in discovering as well. I view most of these developments as modular components that could be used in conjunction with existing processor pipelines. For instance, with these 'neural' chips, I could imagine an existing processor querying the neural chip to look for particular activation patterns. Though I'm not too sure of the language one would use to specify which patterns to look for... Perhaps you could extract the parameters from the neural chip itself through a learning process, then use those to bootstrap things a bit and know what to look for? I'd imagine a lot of formal research is still needed here.
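To make the idea concrete, here's a rough sketch of what that host-side query might look like. Everything here is invented for illustration: the chip interface, the "template" pattern format, and the similarity threshold are all assumptions, with the chip simulated as a plain object exposing its activations.

```python
# Hypothetical sketch: a host process querying a simulated "neural chip"
# for a learned activation pattern. Interface and pattern format are
# assumptions, not a real API.

import math

def cosine(a, b):
    """Cosine similarity between two activation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SimulatedNeuralChip:
    """Stand-in for a neuromorphic coprocessor exposing raw activations."""
    def __init__(self, activations):
        self.activations = activations  # e.g. per-neuron firing rates

    def read_activations(self):
        return self.activations

def query_pattern(chip, template, threshold=0.9):
    """Host-side query: does the chip's current state match a learned template?"""
    return cosine(chip.read_activations(), template) >= threshold

# Template assumed to be extracted offline through some learning process.
cat_template = [0.9, 0.1, 0.8, 0.0]
chip = SimulatedNeuralChip([0.85, 0.15, 0.75, 0.05])
print(query_pattern(chip, cat_template))  # True: activations resemble the template
```

The interesting open question is exactly the one above: where does `cat_template` come from, and in what language does the host express "patterns like this"?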
Neat developments, excited to see how they shake out.
One possibility is to use neuromorphic chips as souped-up branch predictors -- instead of predicting one bit, as a branch predictor does, predict all the bits relevant for speculative execution. That could enable large-scale automatic parallelization.
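A toy version of that idea, in pure Python rather than hardware (the stride predictor, the pipeline shape, and the squash-on-mispredict logic are all my own simplifications): predict the full value a dependent stage is waiting on, start the downstream work speculatively, then validate against the real result and squash on a mispredict.

```python
# Toy illustration of value prediction enabling speculation -- not a real
# microarchitecture. Predict the whole value, not just one branch bit.

def slow_dependency(x):
    """The computation whose result a later stage waits on."""
    return x * 2

def predictor(history):
    """Guess the next value from history (naive stride prediction)."""
    if len(history) >= 2:
        return history[-1] + (history[-1] - history[-2])
    return 0

def speculative_pipeline(inputs):
    history, results, squashes = [], [], 0
    for x in inputs:
        predicted = predictor(history)   # guess the dependency's output
        speculative = predicted + 1      # downstream work can start early
        actual = slow_dependency(x)      # real result arrives later
        if predicted != actual:          # validate; squash on mismatch
            speculative = actual + 1
            squashes += 1
        history.append(actual)
        results.append(speculative)
    return results, squashes

results, squashes = speculative_pipeline([1, 2, 3, 4, 5])
print(results, squashes)  # [3, 5, 7, 9, 11] 2
```

A neuromorphic predictor would presumably replace the `predictor` function with something far richer than a stride table; the squash-and-replay machinery around it is the same.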
That definitely seems plausible. From what I've seen, there's a lot of work at the moment on languages and toolkits that automatically target heterogeneous platforms -- this could slot into that nicely.
Just an uneducated wild thought.