
The Elusive Universal Web Bytecode


URL: http://mozakai.blogspot.com/2013/05/the-elusive-universal-web-bytecode.html


It's often said that the web needs a bytecode. For example, the very first comment on a very recent article about video codecs on the web says:

    A proper standardized bytecode for browsers would (most likely) allow developers a broader range of languages to choose from as well as hiding the source code from the browser/viewer (if that's good or not is subjective of course).

And other comments continue with:

    Just to throw a random idea out there: LLVM bytecode. That infrastructure already exists, and you get to use the ton of languages that already have a frontend for it (and more in the future, I'm sure). [..] I also despise javascript as a language and wish someone would hurry up replacing it with a bytecode so we can use decent languages again. [..] Put a proper bytecode engine in the browser instead, and those people that love javascript for some unknowable reason could still use it, and the rest of us that use serious languages could use them too. [..] Honestly, .Net/Mono would probably be the best bet. It's mature, there are tons of languages targeting it, and it runs pretty much everywhere already as fast as native code.

Ignoring the nonproductive JS-hating comments, the basic point is that people want to use various languages on the web, and they want those languages to run fast. Bytecode VMs have been very popular since Java in the 90's, and they show that multiple languages can run in a single VM while maintaining good performance, so asking for a bytecode for the web seems to make sense at first glance.

But already in the quotes above we see the first problem: Some people want one bytecode, others want another, for various reasons. Some people just like the languages on one VM more than another. Some bytecode VMs are proprietary or patented or tightly controlled by a single corporation, and some people don't like some of those things. So we don't actually have a candidate for a single universal bytecode for the web. What we have is a hope for an ideal bytecode - and multiple potential candidates.

Perhaps, though, not all of the candidates are relevant? We need to pin down the criteria for determining what counts as a "web bytecode". The requirements mentioned by those requesting it include:

  • Support all the languages
  • Run code at high speed
To those we can add two additional requirements that are not mentioned in the above quotations, but are often heard:
  • Be a convenient compiler target
  • Have a compact format for transfer
In addition we must add the requirements that anything that runs on the web must fulfill:
  • Be standardized
  • Be platform-independent
  • Be secure
JavaScript can already do the last three (it's already on the web, so it has to). Can it do the first four? I would say yes:
  • Support all the languages: A huge list of languages can be compiled into JavaScript, including major ones like C, C++, Java, C#, and LLVM bytecode. There are some rough edges - porting an app often requires changing some amount of code - but nothing that can't be improved with more work, if the relevant communities focus on it. C++-to-JavaScript compilers like Emscripten and Mandreel have years of work behind them and are fairly mature (for example, see the Emscripten list of demos). GWT (for Java) has likewise been used in production for many years; the situation for C# is perhaps not quite as good, but improving, and even things like Qt can be compiled into JavaScript. For C#, Qt, etc., it really just depends on the relevant community treating the web as one of its targets: we know how to do this stuff, and we know it can work.
  • Run code at high speed: It turns out that C++ compiled to JavaScript can run at about half the speed of native code, which in some cases outperforms Java, and this is expected to improve further. Those numbers come from using the asm.js subset of JavaScript, which structures the compiler output into something a JS engine can optimize easily (a small example of the style appears after this list). It's still JavaScript, so it runs everywhere and has full backwards compatibility, but it can be run at near-native speed already today.
  • Be a convenient compiler target: First of all, the long list of languages from before shows that many people have successfully targeted JavaScript - the best proof that it is a practical compiler target. Also, many languages compile into either C or LLVM bytecode, we have more than one compiler capable of compiling those to the web, and one of them is open source, so all those languages have an easy path. Finally, while compiling into a "high-level" language like JavaScript is quite convenient, there are downsides, in particular the lack of low-level control flow primitives like goto; this is addressed by reusable open source libraries like the Relooper (a sketch of the underlying technique appears after this list).
  • Have a compact format for transfer: It seems intuitive that a high-level language like JavaScript cannot be compact - it's human-readable, after all. But it turns out that JS as a compiler target is already quite small, in fact comparable to native code when both are gzipped (a simple way to check this yourself appears after this list). Also, even in the largest and most challenging examples, like Unreal Engine 3, the time spent parsing JS into an AST does not need to be high: in that demo it takes just 10 seconds on my machine to both parse and fully optimize the output of over 1 million lines of C++ (and much of that optimization work would be needed no matter what format the code arrived in, because any format shipped on the web must be a portable one).
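
To make the asm.js point above concrete, here is a minimal sketch of the style (the module and function names are my own illustration, not from any real codebase). Types are expressed through coercions: "x|0" marks a 32-bit integer and unary "+" marks a double, which is what lets an engine compile the code ahead of time:

    function MiniModule(stdlib) {
      "use asm";
      var sqrt = stdlib.Math.sqrt;
      function hypot(x, y) {
        x = +x;                  // parameter typed as double
        y = +y;
        return +sqrt(x * x + y * y);
      }
      function add(a, b) {
        a = a | 0;               // parameter typed as 32-bit int
        b = b | 0;
        return (a + b) | 0;      // result coerced back to int
      }
      return { hypot: hypot, add: add };
    }

    var m = MiniModule({ Math: Math });  // still plain JS: runs in any engine
    // m.add(2, 3) === 5; m.hypot(3, 4) === 5

Note that nothing here is new syntax; an engine without asm.js support simply runs it as ordinary JavaScript.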
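
And here is a rough sketch of how goto-style control flow can be expressed without goto, in the spirit of the Relooper's fallback strategy (the function and block numbering are invented for illustration): each basic block becomes a switch case inside a loop, and "goto" becomes assigning to a label variable and continuing:

    function sumDown(n) {
      var label = 0;             // which basic block runs next
      var result = 0;
      top: while (true) {
        switch (label) {
          case 0:                // entry block: loop test
            if (n > 0) { label = 1; continue top; }
            label = 2; continue top;
          case 1:                // body block
            result = result + n;
            n = n - 1;
            label = 0; continue top;   // like "goto entry"
          case 2:                // exit block
            break top;
        }
      }
      return result;             // sumDown(3) === 6
    }

The Relooper's real job is to avoid this pattern wherever possible, recovering native JavaScript loops and ifs, which JS engines optimize far better than a big switch.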
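
If you want to check the size claim yourself, a few lines of Node.js will do it (zlib.gzipSync needs a reasonably recent Node, and the file names are placeholders for your own builds):

    var fs = require('fs');
    var zlib = require('zlib');

    function gzippedSize(path) {
      return zlib.gzipSync(fs.readFileSync(path)).length;
    }

    console.log('compiled JS, gzipped:  ' + gzippedSize('app.js'));
    console.log('native binary, gzipped: ' + gzippedSize('app.native'));
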
So arguably JavaScript is already very close to providing what a bytecode VM is supposed to offer, as measured by the seven requirements above. And of course this is not the first time that has been said; see here for a previous discussion from November 2010. In the 2.5 years since then, the case for this approach has gotten significantly stronger: JavaScript's performance on compiled code has improved substantially, and compilers to JavaScript can handle very large C++ applications like Unreal Engine 3, both as mentioned before. At this point the main missing pieces are, first, improving support for languages that are not yet fully mature on the web, and second, a few platform limitations that affect performance, notably the lack of SIMD and of threads with shared state.

Can JavaScript fill the gaps of SIMD and mutable-memory threads? Time will tell, and I think these things would take significant effort, but I believe it is clear that to standardize them would be orders of magnitude simpler and more realistic than to standardize a completely new bytecode. So a bytecode has no advantage there.

Some of the motivation for a new bytecode appears to come from an elegance standpoint: "JavaScript is hackish", "asm.js is a hack", and so forth, whereas a new from-scratch bytecode would (presumably) be a thing of perfection. That's an understandable sentiment, but technology is full of entrenched, inelegant designs - witness the persistence of x86, C++, and so forth (some would add imperative programming to that list). Nor is this only true of technology: no natural language has the elegance of Esperanto, and our current legal and political systems are far from what a from-scratch redesign would arrive at. Large, long-standing institutions are easier to improve continuously than to replace outright. It's not surprising that the same holds for the web.

(Note that I'm not saying we shouldn't try to design something new. We should. But at the same time we shouldn't stop improving the current situation gradually. My point is that the latter is more likely to succeed.)

Elegance aside, could a from-scratch VM be better than JavaScript? In some ways of course it could, as with any from-scratch redesign. But I'm not sure it could be fundamentally better in substantial ways. The main problem is that we simply don't know how to create a perfect "one bytecode to rule them all" that is

  • Fast - runs all languages at their maximal speed
  • Portable - runs on all CPUs and OSes
  • Safe - sandboxable so it cannot be used to get control of users' machines
The elusive perfect universal bytecode would need to do all three, but it seems to me that we can only pick two.

Why is this so, when supposedly the CLR and JVM show that the trifecta is possible? The fact is that they do not, if you really take "fast" to mean what I wrote above: "runs all languages at their maximal speed" - that's what I mean by "perfect" in the context of the last paragraph. For example, you can run JavaScript on the JVM, but it won't come close to the speed of a modern JS VM. (There are examples of promising work like SPUR, but that was done before the leaps in JS performance that came with Crankshaft, TypeInference, IonMonkey, DFG, etc.)

The basic problem is that to run a dynamic language at full speed, you need to do the things that JavaScript engines, LuaJIT, etc. do, which include self-modifying code (architecture-specific polymorphic inline caches, or PICs) and even entire interpreters written in hand-optimized assembly. Making those things portable and safe is quite hard: when you make them portable and safe, you make them more generic, pretty much by definition. But CPUs differ enough that doing generic things can mean slower performance.
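
To give a feel for the inline-cache idea, here is a toy, plain-JS illustration of the concept only - real engines emit specialized machine code instead:

    // Remember the "shape" of the last object seen at a property access,
    // and take a fast path when the next object matches it.
    function makeCachedGetter(prop) {
      var cachedShape = null;    // stands in for an engine-internal hidden class
      return function (obj) {
        var shape = Object.getPrototypeOf(obj);  // crude stand-in for a shape
        if (shape === cachedShape) {
          return obj[prop];      // fast path: cache hit
        }
        cachedShape = shape;     // slow path: full lookup, then cache
        return obj[prop];
      };
    }

    var getX = makeCachedGetter('x');
    getX({ x: 1 });  // miss: fills the cache
    getX({ x: 2 });  // hit: same shape as the previous object

In a real engine the fast path is a few machine instructions patched directly into the generated code, which is exactly the kind of thing that is hard to express in a portable, safe bytecode.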

The problems don't stop there. A single "bytecode to rule them all" must make decisions about its basic types. LuaJIT and several JavaScript VMs represent numbers using a form of NaN-boxing, which uses invalid bit patterns in doubles to store values of other types. Whether to NaN-box (and in what way) is typically a design decision for an entire VM. NaN-boxing might be well and good for JS and Lua, but it might slow down other languages. Another example is how strings are implemented: IronPython (Python on .NET) ran into issues because Python expects strings to behave differently than .NET strings do.
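
NaN-boxing is easy to demonstrate in JavaScript itself, since typed arrays let us look at the bits of a double (this sketch assumes a little-endian machine, and the tag layout is invented for illustration):

    var buf = new ArrayBuffer(8);
    var f64 = new Float64Array(buf);
    var u32 = new Uint32Array(buf);

    // Any 64-bit pattern with all-ones exponent bits and a nonzero mantissa
    // is a NaN, so the unused payload bits can carry a value plus a type tag.
    function box(payload) {
      u32[0] = payload;          // low 32 bits: the payload
      u32[1] = 0x7FF80000;       // high bits: quiet-NaN pattern (+ room for a tag)
      return f64[0];
    }

    function unbox(d) {
      f64[0] = d;
      return u32[0];
    }

    var b = box(42);
    // isNaN(b) === true, yet unbox(b) === 42

A VM that commits to this representation gets cheap dynamic typing for a doubles-plus-pointers language, at the cost of constraining how every other language's values must be encoded.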

Yet another area where decisions must be made is garbage collection. Different languages have different usage patterns, determined both by the language itself and by the culture around it. For example, the new garbage collector planned for LuaJIT 3.0 - a complete redesign from scratch - is not going to be a copying GC, while other VMs do use copying GCs. Another concern is finalization: some languages allow hooking into object destruction, either before or after the object is GC'd, while others disallow such things entirely, and a design decision on that matter has performance implications. So it is doubtful that a single GC could be truly optimal for all languages, in the sense of being "perfect" and letting everything run at maximal speed.
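
As a concrete aside on finalization: JavaScript itself long had no hook at all, and the FinalizationRegistry API (added to the language much later than this article) illustrates how weak such hooks tend to be in GC'd VMs - the callback runs at some unspecified time after collection, if ever:

    // Assumes an engine with FinalizationRegistry support.
    const registry = new FinalizationRegistry(function (label) {
      console.log(label + ' was garbage-collected');
    });

    let obj = { big: new Array(1e6) };
    registry.register(obj, 'obj');
    obj = null;  // the callback may run later, or not at all - no guarantees

Languages that promise prompt, ordered destructors cannot be served at full speed by such a mechanism, which is exactly the kind of design tension described above.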

So any VM must make decisions and tradeoffs about fundamental features. There is no obvious optimal solution that is right for everything. If there were, all VMs would look the same, but they very much do not. Even relatively similar VMs like the JVM and CLR (which are similar for obvious historic reasons) have fundamental differences.

Perhaps a single VM could include all the possible basic types - both "normal" doubles and ints, and NaNboxed doubles? Both Pascal-type strings and C-type strings? Both asynchronous and synchronous APIs for everything? Of course all these things are possible, but they make things much more complicated. If you really want to squeeze every last ounce of performance out of your VM, you should keep it simple - that's what LuaJIT does, and very well. Trying to support all the things will lead to compromises, which goes against the goal of a VM that "runs all languages at their maximal speed".

(Of course there is one way to support all the things at maximal speed: use a native platform as your VM. x86 can run Java, LuaJIT and JS all at maximal speed, almost by definition. It can even be sandboxed in various ways. But it gives up the second property above, being platform-independent.)

Could we perhaps just add another VM like the CLR alongside JavaScript, and get the best of both worlds that way, instead of putting everything we need in one VM? That sounds like an interesting idea at first, but it has real technical difficulties: the two VMs would need to share the DOM and each other's objects, and coordinating two garbage collectors (collecting cycles that span both heaps, for example) is complex and would likely regress existing performance.

Do we actually need "maximal speed"? How about just "reasonable speed"? In the last few paragraphs I've been talking about a "perfect" bytecode VM that can run everything at maximal speed; my point is that no such VM exists, so we can't hold out for one. But with some compromise we definitely can have a VM that runs many things at very high speed. The JVM and CLR are examples, and as mentioned before, so are JavaScript VMs: they run one very popular dynamic language at maximal speed, and they run statically typed code compiled from C++ about as well as - or even better than - some bytecode VMs (with the already-discussed caveats of SIMD and shared-mutable threads).

For that reason, switching from JavaScript to another VM would not be a strictly better solution in all respects, but instead just shift us to another compromise. For example, JavaScript itself would be slower on the CLR, but C# would be faster, and I'm not sure which of the two can run C++ faster, but my bet is that both can run it at about the same speed.

So I don't think there is much to gain, technically speaking, from considering a new bytecode for the web. The only clear advantage such an approach could give is perhaps a more elegant solution, if we started from scratch and designed a new solution with less baggage. That's an appealing idea, and in general elegance often leads to better results, but as argued earlier there would likely be no significant technical advantages to elegance in this particular case - so it would be elegance for elegance's sake.

I purposefully said a new bytecode in the last paragraph, because we already have JavaScript, which I have claimed comes quite close to providing all the advantages a bytecode VM could. Note that this wasn't inevitable: not every language can straightforwardly serve as a general compilation target for other languages. It just so happens that JavaScript had just enough low-level support - bitwise operations being 32-bit, for example, as the snippet below shows - to make it a practical compiler target for C, C++ and LLVM IR, which made it worth investing in projects like the Relooper that work around its other limitations. Combined with the ongoing speed competition among JavaScript engines, the result is that we now have JavaScript VMs that can run multiple languages at high speed.
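
Here is that 32-bit behavior in concrete form (a small self-contained illustration; the function name is mine):

    var x = 0xFFFFFFFF;          // 4294967295, stored as a double
    console.log(x | 0);          // -1: bitwise ops reinterpret as signed 32-bit

    // 32-bit wrapping addition, just as compiled C code needs it:
    function addI32(a, b) {
      return (a + b) | 0;
    }
    console.log(addI32(0x7FFFFFFF, 1));  // -2147483648: wraps like a C int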

In summary, we already have what practically amounts to a bytecode VM in our browsers. The work is not complete, though: while we can port many languages very well right now, support for others is not quite there yet. If you like a language that is not yet supported on the web, and you want it to run there, please contribute to the relevant open source project working on that (or start one if there isn't one already). There is no silver bullet here - no other bytecode VM such that, if only we adopted it, we would get all the languages and libraries we want on the web "for free" - there is work that needs to be done. But in recent years we have made huge progress in this area, both in infrastructure for compiling code to JavaScript and in JavaScript VMs themselves. Let's work together to finish that for all languages.

