> Or here is another question: how do you keep incentive to optimize Emscripten generated code so that eventually you can kill "use asm" and go full speed without it?
Why is that important? I don't get it.
Emscripten -> JS is lossy; it throws away information that the code generator could use to generate more efficient code. A clever JIT can re-discover that information by observing the code in action. And sure, that's a satisfying challenge from a VM implementor's perspective, but why is it a priori worse to tunnel that information through via asm.js rather than throwing it away?
Even if the VM is perfect at this, there is a non-zero runtime cost that, I would expect, scales linearly in the size of the application.
I believe it can be done, and because it can be done I don't see why it should not be done. Why keep two front ends (two parsers, even!), two separate IR generators, and so on in the system if you can have one?
I also think that such code can and does occur in real-world applications. And I want any JS code to run as fast as it can, without requiring people to rewrite anything.
It is true that dynamic compilation incurs certain overhead and requires warm-up. But it is also true that AOT compilation is not cheap either (which is why a special API to cache generated code is being suggested).