> It uses a subset of the hugely successful and important LLVM IR to represent bytecode. IR that's way more suitable for the task than a subset of JS.
Speaking as someone who works with LLVM IR on a daily basis, I really dislike the idea of shipping compiler IR to users. Believe it or not, asm.js is actually significantly closer to the ideal bytecode that I would ship, except for surface syntax.
> And the sandboxing model PNaCl uses also makes way more sense than asm.js's use of typed arrays to represent the heap. Where is your memory model, asm.js? Do you think it's an easy problem to solve?
You could implement asm.js in the exact same way. There's nothing stopping you from doing that.
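For context, asm.js's "heap" is simply a typed-array view over a single ArrayBuffer, with every load and store going through that view. A minimal sketch (the module and function names here are made up for illustration, not taken from any real codebase):

```javascript
// Minimal asm.js-style module: the linear "heap" is one ArrayBuffer,
// and all memory accesses are typed-array reads/writes into it.
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  var H32 = new stdlib.Int32Array(heap);

  // Store a 32-bit int at byte offset p.
  function store(p, v) {
    p = p | 0;
    v = v | 0;
    H32[p >> 2] = v;
  }

  // Load a 32-bit int from byte offset p.
  function load(p) {
    p = p | 0;
    return H32[p >> 2] | 0;
  }

  return { store: store, load: load };
}

var heap = new ArrayBuffer(0x10000); // 64 KiB of linear memory
var m = MiniModule({ Int32Array: Int32Array }, {}, heap);
m.store(4, 42);
console.log(m.load(4)); // 42
```

Because out-of-range typed-array accesses can't escape the buffer, the sandboxing falls out of JS semantics rather than requiring a separate verifier; an engine is free to implement the same buffer with guard pages or other native tricks underneath.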
> I don't see threads with shared memory coming to asm.js in _years_, while PNaCl already has them.
Years? Not according to any discussions I've been privy to.
> So 2x native vs. 1.15x native is a huge difference.
I have not verified the PNaCl performance numbers, but I would be surprised if they counted compilation time. The asm.js performance benchmarks do count compilation time. So I suspect the apples-to-apples gap is actually much smaller.
>It's much simpler than LLVM and reuses the components of the JavaScript engine that already must exist in browsers.
But Google says they use a simplified subset of LLVM IR. In addition, as far as I understand, PNaCl has its own target triple that pins down things like endianness, pointer size, etc., which makes the IR much more portable.
> You could implement asm.js in the exact same way. There's nothing stopping you from doing that.
Yes, I don't dispute that you can keep evolving JS to be closer to a real bytecode; I'm just arguing that in my opinion it's not the right way to go, and moreover I think it will fail. People tried to retrofit the JVM as a bytecode for C/C++ too, and where did those efforts lead?
In contrast, LLVM IR /is already/ an IR for C/C++. In its full form it's a compiler IR, but making it more stable and portable seems like a much smaller task than retrofitting all the features needed for native execution onto JS.
> I have not verified the PNaCl performance numbers, but I would be surprised if they counted compilation time. The asm.js performance benchmarks do count compilation time. So I suspect the apples-to-apples gap is actually much smaller.
Compilation time matters only the first time the app is run, according to Google. If you have a game or a large application, you run it more than once. Since their compilation generates a native binary, they just need to load it the next time, so compilation cost drops to zero. I don't know how their first-time compilation will compare to asm.js, but all subsequent runs are likely faster.
LLVM IR is a compiler IR: http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/0437...