Hacker News
Brendan Eich: WebAssembly is a game-changer (infoworld.com)
261 points by alex_hirner 12 hours ago | past | web | 231 comments

Personally, I think this is terrible (and it really is a game-changer, only not the kind that I'd be happy about). The further we get away from the web as a content delivery vehicle, and the more it becomes a delivery vehicle for executables that only run as long as you are on a page, the more we will lose the things that made the web absolutely unique. For once, the content was the important part and the reader was in control; for once, universal accessibility was on the horizon and the peer-to-peer nature of the internet had half a chance of making the web a permanent read/write medium.

It looks very much as if we're going to lose all of that to vertical data silos that will ship you half-an-app that you can't use without the associated service. We'll never really know what we lost.

It's sad that we don't seem to be able to have the one without losing the other. Theoretically it should be possible, but for some reason the trend is definitely in the direction of a permanent eradication of the 'simple' web, where pages rather than programs were the norm.

Feel free to call me a digital Luddite, I just don't think this is what we had in mind when we heralded the birth of the www.

I hate to break it to you, but we're already there. People have been using the web to deliver desktop-like applications for the past decade. Over that period, the number of people connected to the Internet has more than doubled [1] and will continue to increase. Whether or not an application is delivered through a browser or natively is inconsequential to most of these users. If we look at the list of the most popular websites [2], we see mostly content-delivery platforms (Google, Bing, Wikipedia, etc.) alongside some popular web apps which resemble desktop software in complexity (Facebook, YouTube, Windows Live, etc.).

So we have two paths forward. One, we could try to influence the habits of billions of Internet users who use desktop-like web applications, in an attempt to restore the document-based nature of the web; or two, we could provide an alternative to the artifice of modern JavaScript development, one which allows better applications to be written and distributed to the users who rely on them. The latter initiative is the more realistic and productive one, in my opinion.

WebAssembly will not lead to the end-of-times of the Internet as a content delivery vehicle. It is a net positive for the parts of the web that do not fulfill that purpose. If you're worried about the free and open web as a publishing platform, look more to governments and corporations around the world that collude to limit our freedom of expression (Facebook, we're all looking at you [3]).

[1] http://www.internetlivestats.com/internet-users/

[2] https://en.wikipedia.org/wiki/List_of_most_popular_websites

[3] http://www.theatlantic.com/technology/archive/2016/02/facebo...

> Whether or not an application is delivered through a browser or natively is inconsequential to most of these users.

This is probably the most true statement in your entire comment.

Running with the assumption that of course we should ignore the users who aren't "most" users (not a safe assumption)... I'd go farther, though. Whether or not these users are getting access to the utility they're looking for through:

(1) a web page with some layer of interactivity on top of more-or-less legible document semantics or

(2) a giant-blob-of-code SPA

is also irrelevant, right? Except to some minority of power users who can actually get more utility when someone uses a more basic semantic-document resource-oriented approach.

So it isn't the users who are driving the "let's use the web as a VM for desktop apps!" trend. It's the developers. And it isn't the billions of Internet users whose habits would need to change.

I'd guess the most ready response to this would be something like "How would you have the current Facebook/YouTube/WebMail Experience without this Desktop-Experience-Focus?" But as far as I can tell, the essential utility involved here (and to a large extent, the best things about the experience) hasn't really changed since a FB page was actually a legible document (a milestone we left down the road a looooong time ago).

The document-centered approach does have its limits. There are some applications it will not support. First-person shooters.... sure, deploy a black-box binary to a runtime.

Facebook, YouTube, and web mail are not really in that category at this point, and developer choices are the primary thing driving the march away from documents here.

> So it isn't the users who are driving the "let's use the web as a VM for desktop apps!" trend. It's the developers.

Really? I was in a public middle school this afternoon that, just a few years ago, was having to buy expensive computers and licenses for MS-Word. Only a few could be bought, and each was its own little separately licensed, separately maintained world that easily became unusable. If a server had problems, your work required access to a specific machine. Access required lots of waiting.

And now? Chromebooks for a tenth the price (no need to support full, expensive Windows/OS X), which they bought in order to "use the Web as a VM for desktop apps" (the horror!) such as Google's free equivalents of Word, Excel, and PowerPoint. Now everyone gets access, and they can even continue working on their own documents after school from the public library, a home tablet, or any number of other options made possible by the Web (and ever-improving hardware options).

And do you imagine that our science teacher and her students are more interested in A) "pages" about physics with "more-or-less legible document semantics", or B) live physics simulations to experiment with on their Chromebooks and phones? Are uncaring developers the only ones driving these poor users to the latter?

The problem with this "apps are oppression, simple docs for the masses!" meme is that the narrative excludes a vast number of those "underserved" who want good apps more than simple docs and need the Web-as-VM to finally make it possible.

> The problem with this "apps are oppression, simple docs for the masses!"

I kind of knew someone was going to confuse my argument and think I was arguing against web applications. Or against the browser as a platform.

It's a sign of how deep a certain kind of thinking goes in the industry. If we're talking about orienting around semantic documents and resources... we can't be talking about real applications! There's just no way to create interactive applications around those!

(And conversely, unless we're talking about application frameworks and lots of code (preferably not written in JavaScript, of course, which everyone knows isn't a serious language compared to Python), unless we're working with GUI toolkit metaphors, unless the finished product simply isn't meant to ever be read by a human, we can't be talking about an actual application, right?)

> And do you imagine that our science teacher and her students are more interested in A) "pages" about physics with "more-or-less legible document semantics", or B) live physics simulations to experiment with on their Chromebooks and phones? Are uncaring developers the only ones driving these poor users to the latter?

As the astute reader would note I said in my earlier post, some applications really don't fit inside a document/resource-oriented paradigm. Some simulations for sure, including first-person shooters and other games.

Other simulations, of course, fit rather nicely as a document with a layer of interactivity, and in that case, yes, the problem would indeed be uncaring developers. Perhaps particularly those who think interactive and semantic are exclusive.

> was having to buy expensive computers and licenses for MS-Word

How did the developers force that one upon you?

The developers (and commercial environment) hadn't come up with anything better yet, so the local OS with local MS Word application was the best choice.

Now things are different.

It's only a small minority of users who are affected, but historically, how much innovation happened thanks to the web's openness and that small minority's tinkering?

The majority of web "innovation" already existed in Xerox PARC's hypermedia applications.

The web is just a worse experience of what Xerox already had.

We have been doing this for -decades- with Shockwave, Java applets, and ActiveX controls.

But they always end up abused. Everyone hates all of those, and that's good, because it causes people to steer away from them. Instead of throwing our hands up and giving up, we should continue to try to keep those out.

How long before the first WebAssembly exploit?

(And what advantage does WebAssembly have over Java applets? We've been down this road.)

The advantage? Instead of having access to a large, stable ecosystem with 20+ years of engineering behind it (Java/JVM), you'll get to use fun new half-baked tools that barely work to build your WebAssembly apps.

In another 20 years' time, we might be back at the level of productivity we had 20 years ago with Visual Basic.

The toolchain is C/C++/LLVM, which is pretty well baked.
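
For what it's worth, the usual entry point to that toolchain is Emscripten's emcc driver; a typical invocation looks something like this (file name hypothetical; the USE_SDL setting is Emscripten's documented way to pull in its SDL2 port):

```shell
# Compile a C/SDL program for the web. The .html output target emits a
# harness page that loads the generated JavaScript/asm.js alongside it.
emcc main.c -O2 -s USE_SDL=2 -o index.html
```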

The toolchain, yes. I'm not sure about the libraries, especially the ones that provide the glue to the JS world (or the DOM, directly). I heard there is something, say, SDL support for asm.js or Emscripten, but I don't think there's a large and stable ecosystem. More like "some stuff patched to some extent".

Oh, and I think in webdev there are a lot of... err... modern, creative and innovative guys and gals out there who are always happy to rewrite the world again, because last week's tech isn't cool anymore.

Worse than that: in 2016 it still hasn't matched what VB and Delphi were capable of in the '90s.

I am really happy that the current customer projects I am involved with are all native mobile and desktop applications.

Never used Delphi, but after reading about it I would love an equivalent for web dev. I've written small abstractions over Rails and/or React, but it's still too low-level for my work.

The advantage is that in time people agree about web standards.

Good luck getting Visual Basic adopted as a standard ;)

Fun fact: Netscape promised to include VBScript in Navigator, but it never happened.

We've gotten worse.

You really think closed-source Oracle is going to make applets better? People don't like applets because the experience sucks, the tooling generally lacks, and the libraries are old.

You say that webdev is Applet 2.0? OK, sure; at least it's open and moving instead of EOL'd [1], and the experience is better, or else wouldn't we all still be using applets?

[1] http://www.v3.co.uk/v3-uk/news/2443810/oracle-signals-the-en...

Oracle isn't. Sun dropped the ball.

My point is we could've embedded the JVM in the browser "correctly" instead of reinventing the wheel with another bytecode format.

webasm is not a bytecode format

Yes, it's an IR. It still has all of the issues a bytecode format has, and none of the size advantages.

No, it is Applet 5.0.

Applet 2.0 was Flash, followed by Applet 3.0 aka Silverlight, followed by Applet 4.0 aka Emscripten/asm.js.

Imagine you can run (efficiently) Eclipse or Lucene in any browser.

Someone try to do that with WebKit:


Eclipse (Che) is already coming to the browser!


Eww. One of the most beautiful parts of Eclipse is the extensive level of plugins. A browser-based IDE means more painful maintenance, more painful extensibility, and a generally worse user experience. How am I going to tie into native binaries for C/C++ compilation like I can with Eclipse as it is? Or the many plugins that give static checking, etc.? A web-based IDE is a step backwards.


> (And what advantage does WebAssembly have over Java applets? We've been down this road.)

The same advantage as Javascript, I'd assume: easy access to the DOM and the concomitant natural integration into a webpage. Java applets were like a portal into a weird world that started with a security warning and all of the widgets looked wrong.

Well, unlike third-party plugins (such as Java, Flash, etc.), wasm will be executed by the browser (like JS) and likely won't have the same privileges that Java/Flash have while running.

Why would WebAssembly be any more prone to exploits than JavaScript?

Because there will probably be bugs that make it possible to break out of the sandbox and run arbitrary code on the target machine. Just like Java applets.

There's no reason to suspect that browser implementors would sandbox wasm any less strictly than JS. Heck, there's no reason to suspect that they wouldn't just re-use the existing JS sandbox.

Thanks, kibwen. I'll make a stronger statement. By definition, wasm and JS are two syntaxes (initially co-expressive, wasm and asm.js) for one VM.

Do people actually read docs any longer? https://github.com/WebAssembly/ has some, my blog covered the 1VM requirement. There won't be a new "sandbox". JS and wasm interoperate over shared objects.
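
To make the "one VM" point concrete, here is a minimal sketch (not from the docs themselves; the bytes hand-assemble a single exported add function following the wasm binary format, and the JS API shown is the proposed standard one): the module is compiled and instantiated by the same engine, in the same sandbox, that runs ordinary scripts, and its exports are plain JS functions.

```javascript
// Hand-assembled minimal wasm module exporting add(a, b) -> a + b on i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                                 // magic "\0asm"
  0x01, 0x00, 0x00, 0x00,                                 // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,  // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                 // one function, of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,  // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                           // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                     // local.get 0, local.get 1, i32.add, end
]);

// Same engine, same sandbox, same object model as any script:
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // → 5
```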

> How long before the first WebAssembly exploit?

It will happen...

> And what advantage does WebAssembly have over Java applets?

Microsoft supports asm.js.

Do people outside of the "The Web is for documents only!" camp really hate Flash? I've had many good web experiences thanks to Flash (granted, mostly movies and games, but still...)

People on HN seem to discount how absolutely amazing it is to have a VM that can almost seamlessly connect to a network and pull down awesome programs!

I'm a big fan of WebAssembly, but I've hated Flash for a long time. Flash apps tended to peg the CPU and shorten battery life on my laptop, even though most were inconsequential to the web pages that used them (ads). Constant security holes left me paranoid. Crashes left me frustrated; I can't count how many times Flash crashed on me. The UI of most Flash apps left much to be desired. From a technical perspective, it was also a deeply flawed design with a mediocre implementation from a single vendor. Most video sites which used Flash were far inferior to their HTML+JavaScript counterparts; only the biggest sites ever had decent Flash players (Vimeo, YouTube), and I noticed that some Flash video players couldn't really handle full-screen video with the performance that they should. Ugh. I was glad, so many years ago, when Flash stopped being bundled with the OS, so I didn't need to take any extra steps to avoid it.

The main thing I didn't like about those was that they were proprietary single vendor blobs that stagnated and ran poorly.

Well, it's no different. Say, ActiveX controls were just x86 binary blobs. Java applets were JVM binary blobs. (Flash applets were a somewhat worse kind of binary blob, with a proprietary and undocumented bytecode.) WebAssembly is... again, binary blobs.

The runtime (which is what I was talking about) isn't a blob.

Plus, minified and obfuscated JavaScript is already quite common. See Closure and asm.js.

Well, my point is OpenJDK/IcedTea-Web isn't a blob either, but it doesn't make Java applets any better. Neither is Wine (which can host ActiveX controls), but I'm not going to try it at home.

And I agree that modern minified, packed and obfuscated JavaScript is also more like a binary blob than source code. WebAssembly is just a logical conclusion of this.
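
A toy illustration of that point (hypothetical function, but the transformation is what any minifier performs): the shipped form is semantically identical to the source, yet "view source" on it tells you roughly as much as a disassembly would.

```javascript
// Readable source, as the author wrote it:
function clampToRange(value, lowerBound, upperBound) {
  // Pin value into [lowerBound, upperBound].
  return Math.min(Math.max(value, lowerBound), upperBound);
}

// What typically ships after minification: same behavior, but the names
// and structure that made it legible are gone.
const c=(v,l,h)=>Math.min(Math.max(v,l),h);

console.log(clampToRange(42, 0, 10)); // → 10
console.log(c(42, 0, 10));            // → 10
```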

I guess I fail to see how the end result of this will end up any different.

Considering WebAssembly is a feature that is going to be standardised and supported in all JavaScript engines, while those other things were either proprietary runtimes or browser-specific APIs, I'd say you have no reason to think this will end up like them.

I wonder if we will see iOS/Android APK apps running in the browser soon.

We've already got DOS, the Linux kernel, and Windows 95 running in the browser as very cool demo projects.

Maybe an Angry Birds APK running unmodified inside the browser at close to device speed would be next?

They (APKs) already do in Chrome.

Except that security constraints currently require APKs to be packaged as Chrome extensions; they can't be served from the web directly.

> I wonder if we will see ios/android apk app running in browser soon.

or apt-get, yum or pacman...

You can already do this with App Runtime for Chrome.

Eh, I think things were a lot worse during the heyday of Flash and Java applets. Sites have been delivering this kind of content since WebRunner in 1997. At least now we have an open, vendor-neutral, consensus-based standards process and a commitment to multiple major open source implementations, which is something we never had with Java or, worse, Flash.

> ... which is something we never had with Java or, worse, Flash.

I liked Java applets a lot. It was fantastic to be able to draw pixels or use a toolkit in a web browser at that time.

I gave up on Java soon after the dispute between Sun and Microsoft about J++. Java was far less interesting for me without that feature.

I think you and I must be looking at different 2016 Webs. In the one I see, content is increasingly locked up in closed, proprietary and/or centralised systems. In the one I see, standards don't matter much any more because there are no usefully stable browsers anyway. In the one I see, one browser is rapidly becoming so dominant that the existence of several others that were formerly influential and competitive is all but irrelevant for many web development projects, just as happened in the days of the IE vs. Netscape browser wars.

We all know how this story ended last time. And yet, here we are watching it all over again and most of us are powerless to do much about it. For all the superficial openness of the modern Web, the reality is that it is now utterly controlled by a small group of browser developers, a small group of centralised content hosts, and a small group of curators, several of which overlap with each other but almost none of which have goals that necessarily align with those of either the average web surfer or the average small content producer.

> ... it is now utterly controlled by a small group of browser developers ...

If the group is small, it's mainly because the job is very difficult. All browsers are a big mess and it takes time to be productive.

The idea of asm.js is Alon Zakai's. He explains on his blog all the experiments he made before arriving at it.

A quote from the oldest post I know about emscripten:

"I want the speed of native code on the web - because I want to run things like game engines there - but I don't want Java, or NaCl, or some plugin. I want to use standard, platform-agnostic web technologies."


Yes, there are few full implementations of the Web. I hope eventually a more modular layout engine comes into being that allows for others to create their own implementations of things without having to wrestle with a large codebase.

> I hope eventually a more modular layout engine comes into being that allows for others to create their own implementations of things without having to wrestle with a large codebase.

Maybe Servo?

The most comprehensible browser codebase I know of is Dillo.



I used to recommend HTML 3.2 and a sandboxed/MAC'd Dillo for safe, formatted documents to people who insisted on web browsers being the medium. Worked out fine, with much less risk. I was grasping at straws when a Chromium member asked me how I'd secure that code with minimal work and no rewrites. "How would I even understand it all?" was my first thought.

I agree. I'm really running dry on respect for Brendan Eich. None of the moves he's making are for the benefit of user privacy - look at Brave, his new browser project. It replaces ads on the web with his own ads, tracks you, and puts money in his pocket instead of the publisher's pockets. I'm struggling to remember why he was respectable in the first place - for making JavaScript, an awful programming language we've spent 20 years trying to fix? I don't think that his word on these issues is worth anything any longer.

Brave does not track anyone remotely, all data in the clear (which browsers all keep in various caches and history lists) stays on device. We will pay publishers 55% directly, and users 15%.

I think you didn't read our FAQ or other site docs, and just assumed the worst. Why is that?

Ignoring the negativity and taking it as an opportunity to improve your product and/or your understanding of a segment of your audience... I'll remember that!

> ... for making JavaScript, an awful programming language we've spent 20 years trying to fix?

First-class functions are not bad for 1995.

Yeah, I have to say that there are many things I love about JavaScript. Even the quirky 'this' pointer is at least interesting (though not particularly useful). Internally, JS is very elegant IMHO, and it is easy to write beautiful code in many different paradigms.

On the downside: Type coercion is always a bad idea. There is an inconsistency between built in types and user constructed types. It has terrible standard class libraries. It is verbose.

But I can name half a dozen other languages that suffer from these problems and more. Judicious use of a transpiler (like CoffeeScript) and choice of third-party libraries will go a long way. Personally, I enjoy working in CoffeeScript more than Ruby, specifically because of the first-class functions.

Not the best programming language in the world, but not the worst either, IMHO. It's just unfortunate that some of the more obvious shortcomings weren't fixed early on.

It's also quite average for languages of that time to have first-class functions.


He created JavaScript in 10 days. It's still in use 20 years later. That's very respectable.

Because it had (and still has) a monopoly on code that runs inside the browser. And it's a pig that's had quite a bit of lipstick put on it. It wouldn't be in widespread use if people had the choice.

That all said, WebAssembly is potentially an avenue to break that monopoly, and I do give him credit for seeking to address the issue.

People did have a choice: Java Applets, ActiveX, Flash, VBScript. Those all lost.

> He created JavaScript in 10 days.

It shows. JavaScript is fucking horrible, to put it mildly.

It's got bad parts and it's got good parts. Once you learn how to ignore the bad parts (and the bad examples), the language really shines. Especially with the newer versions.

Haha, God made the world in 7 days, it's even worse.

I've written DSLs in a day that have seen usage beyond their original intent. JavaScript's a dirty hack that's only had dirtier hacks stacked atop it.

Creating JS in two weeks was an amazing accomplishment. Unfortunately we'd be better off if he had been a worse developer, because then Netscape wouldn't have had the option of shipping it and enshrining its defects as features.

I'm not sure people really ever respected him, or JavaScript for that matter, but sort of just recognized it can pay to pay attention to one of the most popular languages and the man who invented it.

He also founded and built Mozilla, which arguably saved the open web from extinction.

If you don't respect the founder of JavaScript and Mozilla, your standards are very high. I doubt many people commenting here would meet them.

Look, if given a chance, I'd shake his hand. But that doesn't mean I hold him in high regard or agree with his views, standards, or ideals. I respect some of the things he's done, but I also disrespect other things he's done.

Mozilla is pretty cool I guess, JavaScript is pretty cool I guess. It's all really just whatever and I think he was very much in the right place at the right time for a lot of this, not that he isn't a brilliant man who's obviously far more accomplished than me.

The web is an amazing content delivery vehicle, I totally agree! There's a whole class of content I want to be able to just `wget` and be done with it.

The goal of WebAssembly is to open up the reach and easy user experience of the web browser to new types of applications that just aren't possible to build efficiently with current tech.

We're also making sure that wasm is a first-class citizen of the open web: see, for example, our thoughts around ES6 module interop, GC + DOM integration, and view-source to see the textual encoding of wasm [0][1].

[0]: https://github.com/WebAssembly/design/blob/master/Web.md

[1]: https://github.com/WebAssembly/design/blob/master/TextFormat...

(Disclaimer: I work on V8.)

Yes, seeing a text format: much like what has been possible, to the same effect, with a disassembly view of bytecode. Hell, the plaintext s-expression format is IMO /less/ readable than most assembly formats. Regarding DOM integration, wouldn't that have been possible with other formats by giving plugins access to the DOM?

I'd much rather see that time spent on harnessing the tools we have right now better, rather than on new pie-in-the-sky tools that bring a host of disadvantages, such as being yet another JIT language which resembles a processor from twenty years ago. With an MVP that doesn't include multiprocessing, doesn't include SIMD, etc., I fail to see how this is really better than the status quo.
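
For reference, the s-expression text format being discussed looks roughly like this for a trivial module (a sketch based on the WebAssembly design repo's text-format notes; the syntax was still in flux at the time, so details may differ):

```wasm
(module
  ;; a single function taking two i32s and returning their sum
  (func $add (param $a i32) (param $b i32) (result i32)
    (i32.add (get_local $a) (get_local $b)))
  (export "add" $add))
```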

I prefer the term "mindful" to Luddite. Luddites opposed technology; you oppose the direction it's headed.

But I think you're completely right. We took all that made the web unique, and turned it into a black box for abstracting away hardware/OS.

It's hardly surprising though... you can decentralize a network, but power and control over the medium was bound to become centralized in some form.

Luddites actually opposed the "direction technology was headed" as well, not technology for its own sake[0]. Specifically, they were the highly skilled technology workers of their time (factory and textile workers) made obsolete by automation, protesting the way technology made it easier to replace them with unskilled, low-wage labor and to flood the market with low-quality, mass produced goods. Sound like a familiar refrain?


Looking at technology's influences alone strikes me as a shallow viewpoint. You can find other important factors if you look a bit deeper:

"The British weavers known as Luddites, who destroyed looms precisely 200 years ago, thought rising unemployment within their ranks was due to machinery. But there’s a case to be made that inflation, money supply expansion, budget deficits and trade barriers were equally to blame.

... [The] overall picture was of cheap money leading to labor-saving capital investment, while wages were eroded by inflation and economic activity was dampened by restrictions and excessive government deficits.

The Luddites have been mocked for attacking the productivity-enhancing machinery that was to improve living standards unprecedentedly. But given the economic policies of the time, which bear an uncomfortable resemblance to some of our own, the Luddites were right to believe that only higher unemployment, with no discernible improvement in conditions on the horizon, was their fate."


Were Luddites the victims of 2011-style finances? | By Martin Hutchinson | March 11, 2011

huh, I never knew that. Thanks!

I stand by the rest of what I said though... I think we need to be mindful not just of where society is headed, but what kind of society we will live in when we get there.

Technology is important, but it's society that separates utopia from dystopia...

It's not the web that we are losing -- it's the browser.

It's all still HTTP requests and responses, but the web browser itself is becoming something very different from what it was a decade ago.

is it the web without hyperlinks?

Yes, and with a low-level interface to (virtual) hardware, it defines a VM that can be run standalone (thus without a browser).

Don't we already have that with the JVM, the CLR, Smalltalk, many Lisps, Erlang, MIX, MMIX, etc?

Despite all the JavaScript, the content stayed on the web. That's the big win; that's what draws people to use it, and what guarantees its popularity for a long time. The really great, awesome thing about the web is its openness. No company can tie you to their programming language, their "app store" ecosystem, insane policies and authoritative restrictions. We should be eternally grateful to Berners-Lee for that.

The silos exist today (Facebook Platform, etc.). Despite how hard they tried, they did not take over the web.

I think what I worry about the most is how poorly the modern web partners with assistive technology: it feels like we're actually taking steps backwards.

That is a very large part of my point, the viewer is no longer in control. That was one of the beautiful concepts of the web, that the reader could be anything, a person, a screen reader, a computer program and so on.

To me, it seems the trend is to ditch both the app and the browser, and just provide a web api that can be used with anything that supports http.

I upvoted you, but I want to add my voice too.

The web succeeded in part because it was possible for anyone to do "view source" and see what was going on under the hood.

Having that source available is also an important aspect of software freedom.

Losing all that--especially the freedom to see exactly what your browser is executing--for a slight speed increase is ludicrous and I'm very, very sad to see this is being taken so seriously.

We had a huge opportunity here to shape an open and free web. Turning the web into nothing more than a binary distribution platform will undo decades of work, and we may never again find ourselves in the lucky confluence of economic prosperity, technological advancement, and governmental benign neglect that made the open web possible.

Minified JS is hardly "source" though.

The web is, and will always be, a 'content delivery vehicle'. Apps which run on the web are a form of content.

Not all data is open; that is unfortunate, but realistic. At the same time, huge amounts of data are open and available without an app.

What is the use case where we lose something because of native-level performance improvements in JavaScript?

Anybody who wants to build a simple static site can still do that, and I'd suggest the majority of the web is still just that, or very close to it.

I really don't understand your comment about 'executables that only run as long as you are on a page'. You can only read content as long as you are on a page as well. Or are you concerned about our ability to do search and data-mining on large volumes of available data?

I can hit ctrl-s and save a page, and with modern browsers, all the inline content too. Webapps tend not to have that function.

> The further we get away from the web as a content delivery vehicle and more and more a delivery for executables

Interactivity is an increasingly important aspect of media and of our culture. People spend more money and time on games than on movies and TV.

> I just don't think this is what we had in mind when we heralded the birth of the www.

It's never like the framers imagined. It's always stranger and more wonderful than they could have imagined. (And horrible in some ways they couldn't have imagined.)

I agree with quite a lot of your critique and share some of those concerns, but something doesn't quite add up to me. Why are you assuming that the two models of web-as-delivery-system are mutually exclusive? I don't see how web assembly competes with or causes movement away from the web as we've known it.

Check how many pages in a random sampling will work at all without running code on them. It's quite worrisome, and I think WebAssembly will accelerate rather than slow that trend. Ten years ago you could download a page and it would most likely have the relevant content in the downloaded page; the exceptions were the idiots who would make a page that was a frame around some Flash applet containing the actual content.

Now the page is a blank template that fills itself in using a bunch of behind-the-scenes calls to the server. Those calls are the result of running a bunch of code, and that code will sooner or later end up being written in web assembly and will be mostly opaque.

It's just like those flash only pages of old.

Yeah? I don't see the problem. Why does it matter how a web page is constructed? If it is fast, secure, and does what the user needs, what is the problem?

Because people will take it and say "Hey, with WebAssembly I can rip out the entire DOM and roll my own layout and text rendering directly to a canvas like some sort of game engine! By avoiding standard webpage structures, we can make our ads harder to block!"

And really, who cares about copy/paste, accessibility, fair use, user stylesheets, or any of that, when you could trade it for more resilient advertising? Not publishers, that's for sure.

It's going to open the floodgates to a new generation of those crappy bundled-up Flash websites that gave you very limited ability to interact with their content.

There are of course better things to do with it, but from my limited understanding of WebAssembly this is my biggest prediction for what it'll get used for.

We should not judge the potential of a technology only by the worst-case scenario for which it may be used.

If a promising technology has a high likelihood for serious abuse but also a highly desirable upside, instead of tossing the baby with the bathwater, we should preempt the abuse through culture, policy, and possibly engineering.

Oh definitely. People did some great stuff with Flash, and I'm excited to see what people come up with using WebAssembly. I'd just temper our excitement because it's going to make some things better and it's going to make other things shitty.

Somebody's going to write a DOM-replacement page layout and rendering system, other people are going to adopt it, and we'll have to go through another whole phase of

"Guys, we made this shim layer that will make your website compatible with screenreaders and HTML5 semantic tags again!"

"I dunno, that sounds like work. And people could copy/paste from our articles? I'll pass."

<5 years later>

"The new version of PageRenderingFramework natively supports screen readers. Can we maybe make your content accessible to non-visual browsers now?"

"Yeah, I guess. We'll install it next time we rebuild our website."

On the plus side, I think most websites have outgrown the autoplaying music, so maybe we'll skip that part in this cycle.

> We should not judge the potential of a technology only by the worst-case scenario for which it may be used.

That's exactly how we need to judge any important technology. This is a basic part of making things that "fail safely".

The word 'only' means not to exclusively judge it in that manner; it does not exclude risk analysis and mitigation. By your statement's logic, banish or redesign the kitchen knife, and put the genie back in the bottle, since it is a technology that can be used in very bad ways in a 'worst-case scenario', and it is hard to make it 'fail safely'.

>Because people will take it and say "Hey, with WebAssembly I can rip out the entire DOM and roll my own layout and text rendering directly to a canvas like some sort of game engine! By avoiding standard webpage structures, we can make our ads harder to block!"

You say that as if it were a bad thing. There are two kinds of content (relevant to this discussion at least) that get served through the web: documents/media and apps. DOM/HTML is OK for the former; it's a horrible hack for the latter, and most of your objections to custom GUIs also apply to DOM GUI frameworks, plus they are dog slow and annoying to use. There is room in the middle (webapps), but there is definitely a class of apps out there that just don't fit into DOM/HTML and get zero benefit from it outside of delivery, which is why they get done that way.

Couldn't you do that with JavaScript anyway?

Sure, it's just optimizing some bottlenecks at this point. WebAssembly is just what JavaScript was evolving/heading toward - I think it's been clear for almost a decade that it would end up somewhere like this.

It matters because (1) not everybody can see that web page, (2) the ability to inspect the pages is what caused the knowledge about how the web was made to spread and (3) it is a step backwards in terms of being an 'open' vehicle. That it's fast, secure and does what the user needs is not being disputed.

I think you're possibly underestimating the cost and difficulty of developing apps using low level code (or whatever web assembly is, lower level than html+css anyway).

It's technically difficult enough that I think its main use case will be for somewhat large and well-funded companies that have the budget and expertise to deliver a fairly complex client-side app with behavior and design that is not easily done in existing web technologies.

For example, all the stuff that people attempt to do now on HTML canvases seems like a likely candidate to be replaced by web assembly apps. This could lead to an era of high-performance streaming apps with pretty impressive graphics capabilities. Basically just a huge upgrade to Emscripten, right?

I'm just not seeing this as being a real threat/competitor with traditional document based content on the web. I don't think that's going anywhere and I don't think web assembly interferes with that on any significant level.

It might get distributed to the browser as low-level code, but it won't be for the developers - they'll just use compilers and toolkits. In fact, I'd bet an integrated tool for compiling the codebase of an Android app to WebAssembly won't take long to appear. It'll only be technically difficult in the first couple of years, if that.

> Why does it matter how a web page is constructed? If it is fast, secure, and does what the user needs, what is the problem?

Because by definition if it is constructed with JavaScript it is neither fast nor secure.

Because it often doesn't do what a user needs. Try using a screen reader on a lot of "modern" websites some time. There are specifications to make this easy, but people ignore them because accessibility is "too expensive".

There's always an element of "one step forward, two steps back" with major technology shifts. Every time we've had a big change in platforms, we also had to redo existing engineering work.

I don't think this is doom and gloom for accessibility, though. The future is in general-purpose assistance technologies that mediate any application. You can smell it with the new work in ML. It is not here now, but as with everything in technology, by the time it's mature and widely available, it's nearly obsolete.

> "one step forward, two steps back"

This only works if you're facing away from your destination.

> Feel free to call me a digital Luddite

Sure. But don't despair, because this future isn't as bleak as you'd assume. With things like Hoodie[0], GunDB[1] and other amazing bits of technology, we can keep the benefits of web-tech for application development, but allow the user to own their data still. Offline-first, easy sync when needed. And really, anything more complex than delivering static HTML pages has the downsides you're mentioning, so unless you want to live in 1995 I can't really understand it from a practical perspective ;)

[0] http://hood.ie/

[1] http://gun.js.org/

Why can't both styles coexist? Declarative hypertext resources for things that are document like, and dynamic applications for things that are app like? We simply are augmenting the plain old HTML web with the ability to link to resources with new capabilities. We haven't subtracted anything.

I don't see what vertical data silos have to do with the technology the OP is talking about. Vertical data silos would exist without the ability to run programs client-side, they'd just be less pleasant (e.g. lots more reloading), and less accessible for many people (harder to make usable AI).

The tech helps those data silos to be more opaque, to treat the browser as a one way consumption device and to make it harder to link that data in the 'normal' way. It's like audio that can't be re-mixed. That's also a reason why things like RSS and other easy and open standards disappear, they make it harder to lock down the data in a silo.

I expect a redecentralization of the network with wasm.

It could destroy the web and something new could emerge. Something simple and open again.

The web of today is like the X protocol in the 80s. Too sophisticated and very hard to implement. VNC can be seen as a replacement for X: simpler and better in many aspects.

wasm can fail too or have a limited success like webrtc.

In a world full of robots, Markdown is enough to render most content, just as JSON is better than XML.

I hope (and suspect) that Web Assembly will find a lot of use for people who need to use it. But I also think that CSS and HTML will remain the presentation technology of choice for the vast majority of Web sites. That's because, contrary to popular opinion, HTML and CSS are actually technically pretty good. They have a lot of flaws, but every proposed JS- or wasm-based replacement for them that I've seen has been significantly worse.

> They have a lot of flaws, but every proposed JS- or wasm-based replacement for them that I've seen has been significantly worse.

I'm a fan of Gopher and Wikipedia mobile.



HTML and CSS are not bad. The problem (for me) is the lack of diversity in the layout engines. It can be explained by the complexity of the standards. It's near impossible to create a new browser from scratch (that can display modern websites).

BTW people can't choose something else because there is nothing else, and every computer has a web browser. It's more of a monopoly.

Well, most of the time, when audio can't be remixed[1], it's because whichever audio engineer was in the studio and did the mix-down, and/or whoever did the mastering afterwards, lost the tapes. Mixing down to stereo from thirty different takes of a dozen different audio tracks is inherently going to be a little lossy. It's sort of like cutting up the first draft of a novel, where the author presumably has made some editorial decisions.

I'm with you and actively avoid any remote SaaS for this reason, discussed at length here[2]. I don't care about the recurring fees, I'll pay them no problem as long as I can run the binary locally, so I can hedge against the acquire-hire-kill by AppleGoogTwit. When that happens not only do I risk losing data, but I risk losing a tool I depend on within my workflow. If my data has even a remote chance of being locked into a specific platform, i.e., if I can't host it Atlassian style[3], I'm not going to use it. This could rapidly turn into a bunch of mini-App Store instances -- where good apps and/or information disappear on a whim of the developer because they're tired of working on an Angular base. Not only does your data live in a walled garden, but you also risk being the unfortunate victim of the Bored Developer Syndrome.

Side-note: Most of the good engineering blogs I read are just stock WordPress themes with RSS enabled. None of that "RSS only the first paragraph so I get more hits" junk. If someone's intentionally silo'ing their data, more often than not it's one of those "4 things you didn't know about Bash!" blogs. Not much value when you can read the GNUinfo docs on it.

[1] I'm presuming you mean, in the sense of actually re-producing off the master tracks, not just EQ'ing

[2] https://news.ycombinator.com/item?id=11250108

[3] https://news.ycombinator.com/item?id=10753650 - Paragraph 2

I hear you, but what is there to do about it? You can't stop people from wanting these kinds of features, and there are huge, real benefits.

There's . . . already a complete programming language in web pages. I'm not sure what you're trying to accomplish here.

The WWW is currently a bunch of apps. Mediawiki, Wordpress, Node, Django, and thousands of others.

What is the difference between serving HTML/CSS/JS to the browser, and some other stack of UI and algorithms?

They're not pulling JavaScript out of browsers, it's time to give up that dream.

If you're really so passionate about static content why not be part of a project to that aim? You can host a Gopher server and publish to it, it's stunningly easier to do.

Or maybe an alternative that runs on https? I'd be interested in that personally.

> "It's sad that we don't seem to be able to have the one without losing the other, theoretically it should be possible to do that but for some reason the trend is definitely in the direction of a permanent eradication of the 'simple' web where pages rather than programs were the norm."

What does it matter what is the norm? Static HTML/CSS is going nowhere, you can still create static content, as you well know (IIRC you run a static blog). The improvements to the dynamic side of the web do not come at the expense of the document-oriented side, both currently coexist and I see no reason why making the dynamic side faster will change that.

Furthermore, changes to dynamic content can enhance the functionality of the document-focused side of the web. Consider Wikipedia. In some ways a Wiki is a set of documents, but it's a set of documents that grows based on utilising input from those using the service, democratising the accumulation of knowledge. For all its flaws, I can think of no other resource that better embodies the virtues of the web than Wikipedia, and Wikipedia would not have grown to the size it is now without the technology that supports web apps.

That said, I don't agree with the trend for moving everything to the cloud, and I hope we can see that trend reverse with better tools for people to take control of their own data. If more people had cheap home servers that were easy to maintain then the issues surrounding lack of control should be greatly reduced.

I think the solution is to think of code as data that should be freely distributed and hackable as well. It isn't the garden that is the issue, but its walls.

We're heading straight for native code sandboxed in the browser via a bit of a detour. It will be just as hackable as any closed source code that you get from some vendor. Think of it as a slightly more modern version of Java, this time it really is run 'anywhere' as long as 'anywhere' is a browser and the source code stays with the supplier of the web-app.

The source code of my latest web app is over 1 MB of minified and obfuscated JavaScript. Technically you have the source but it's only marginally more useful to you than looking at assembly generated by a C program.

And the benefits of having source are overstated.

I didn't learn web programming by looking at other people's html and javascript. There was a little bit of that but most of the learning is from tutorials and books and experimenting.

And in the old days of C programs delivered as executables, were people unable to learn how to program in C?

Finally, WebAssembly is not a replacement for JavaScript but fills a gap that JavaScript can't (really fast code).

WebAssembly is basically a compiler target for C-like languages, which means that it's dramatically more expensive to "write" WebAssembly code than it is to use JavaScript, so people will only reach for it in cases where there's compelling reason. It'll never be a default choice for writing web apps.

> people will only reach for it in cases where there is a compelling reason.

Why? Won't any javascript (or any other source code) be compiled to webassembly before going to production?

WebAssembly is a low-level statically typed language. JavaScript is a high-level dynamically typed language.

Compiling JavaScript to WebAssembly can't be done as a simple compilation step, at least not with fast results - you really need a JIT or multiple JITs. In other words, you'd need to compile a full JS engine to WebAssembly together with your code.

I had a lot of fun seeing that the other day :-)

> "Binaryen" is pronounced in the same manner as "Targaryen": bi-NAIR-ee-in. Or something like that? Anyhow, however Targaryen is correctly pronounced, they should rhyme. Aside from pronunciation, the Targaryen house words, "Fire and Blood", have also inspired Binaryen's: "Code and Bugs."


Do you know XZ Embedded?


I wonder if it could improve startup (with emterpreter?).

I hadn't heard of XZ embedded. Might be worth measuring it, but it would be competing with native gzip in the browser, which is hard to beat on speed. But maybe better compression would be worth it?

LZHAM is interesting in that area. Speed that competes with gzip and compression that competes with LZMA.


WebAssembly is a special encoding for a subset of JavaScript that's easier to optimize. If all arbitrary JavaScript could be compiled to WebAssembly and get any benefit, then browsers wouldn't bother with asm.js/WebAssembly and would just implement those optimizations in their general JavaScript engines.

WebAssembly is a good compilation target for languages with a flat memory space and no garbage collection.

Do you really feel like optimized JavaScript is source code any more than this is?

It's an incremental process. It started with some innocent fluff (and client side input validation) and it ends with signed binaries shipping from trusted sources.

We're somewhere in the middle.

Yep. The web-heads are slowly reimplementing Unix in the browser. [0] We could have avoided this with better, wider-spread support for sandboxing of untrusted executables.


[0] They don't have crashdumps yet, but I expect that they soon will.

If you ask me, the rise of the Web as app platform is a scathing indictment of the state of operating systems, both in research and in practice. Operating systems are so bad at providing a safe and mostly stateless sandbox for untrusted code that the Web, despite being a crappy app platform in every other way, has won.

Look at WebGL for example. Over the past 20 years there has been practically zero interest in making a safe, sandboxed version of OpenGL. It wasn't until browser vendors got involved that people even seriously considered it. Browsers had to implement most of the safety features themselves, in user space. The operating system level GPU interfaces will likely never be anywhere near as secure, because apparently OS vendors don't care about running untrusted code.

> Operating systems are so bad at providing a safe and mostly stateless sandbox for untrusted code that the Web, despite being a crappy app platform in every other way, has won.

"Worse is better."

> The operating system level GPU interfaces will likely never be anywhere near as secure, because apparently OS vendors don't care about running untrusted code.

It's handled with emulation and virtualization. The future of safety looks like QubesOS.


Stallman was right, just for the wrong reasons. Scary. Offline computing is probably going to die.

You can run web apps fully offline since like 2011. The only difference is that the initial "install" is 100kb instead of 100mb.

What's hilarious is that A) you're pining for something that hasn't existed for years, and B) you're arguing against something (Service Workers) that would bring it back.

On the Rust side, we're working on integrating Emscripten support into the compiler so that we're ready for WebAssembly right out of the gate. Given that the initial release of WebAssembly won't support managed languages, Rust is one of the few languages that is capable of competing with C/C++ in this specific space for the near future. And of course it helps that WebAssembly, Emscripten, and Rust all have strong cross-pollination through Mozilla. :)

If anyone would like to get involved with helping us prepare, please see https://internals.rust-lang.org/t/need-help-with-emscripten-...

EDIT: See also asajeffrey's wasm repo for Rust-native WebAssembly support that will hopefully land in Servo someday: https://github.com/asajeffrey/wasm

Thanks for that update that no one asked for.

As we get closer to having a WebAssembly demo ready in multiple browsers, the group has added a small little website on GitHub [0] that should provide a better overview of the project than browsing the disparate repos (design, spec, etc.).

Since the last time WebAssembly hit HN, we've made a lot of progress designing the binary encoding [1] for WebAssembly.

(Disclaimer: I'm on the V8 team.)

[0]: http://webassembly.github.io/ [1]: https://github.com/WebAssembly/design/blob/master/BinaryEnco...

About the binary encoding... It's a bit easy to armchair these things, and it's too late for WebAsm now... but if you're on the V8 team, you have access to Google's PrefixVarint implementation (originally by Doug Rhode, IIRC from my time as a Google engineer). A 128-bit prefix varint is exactly as big as an LEB128 int in all cases, but is dramatically faster to decode and encode. It's closely related to the encoding used by UTF-8. Doug benchmarked PrefixVarints and found both Protocol Buffer encoding and Protocol Buffer decoding would be significantly faster if they had thought of using a UTF-8-like encoding.

LEB128 requires a mask operation and a branch on every single byte, maybe skipping the final byte, so up to 19 mask operations and 19 branches for a 128-bit value. Using 32-bit or 64-bit native loads gets tricky, and I suspect all of the bit twiddling necessary makes it slower than the naive byte-at-a-time mask-and-branch.

    7 bits -> 0xxxxxxx
    14 bits -> 1xxxxxxx 0xxxxxxx
    35 bits -> 1xxxxxxx 1xxxxxxx 1xxxxxxx 1xxxxxxx 0xxxxxxx
    128 bits -> 1xxxxxxx 1xxxxxxx 1xxxxxxx ... xxxxxxxx
Prefix varints just shift that unary encoding to the front, so you have at most 2 single-byte switch statements, for less branch misprediction, and for larger sizes it's trivial to make use of the processor's native 32-bit and 64-bit load instructions (assuming a processor that supports unaligned loads).

    7 bits -> 0xxxxxxx
    14 bits -> 10xxxxxx xxxxxxxx
    35 bits -> 11110xxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
    128 bits -> 11111111 11111111 xxxxxxxx xxxxxxxx ... xxxxxxxx
There's literally no advantage to LEB128, other than that more people have heard of it. A PrefixVarint is always the same number of bytes; it just puts the length-encoding bits all together so you can more easily branch on them, and doesn't let them get in the way of native loads for your data bits.
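To make the byte-at-a-time vs. length-up-front contrast concrete, here's a minimal sketch (mine, not from the comment above, and the function names are illustrative): a standard unsigned LEB128 decoder, and a length function for the high-bit PrefixVarint layout using the GCC/Clang `__builtin_clz` intrinsic.

```c
#include <stdint.h>

// LEB128: one mask and one branch per byte of the encoding.
const uint8_t *leb128_decode(const uint8_t *p, uint64_t *out) {
  uint64_t result = 0;
  int shift = 0;
  uint8_t b;
  do {
    b = *p++;
    result |= (uint64_t)(b & 0x7f) << shift;  // mask
    shift += 7;
  } while (b & 0x80);                         // branch
  *out = result;
  return p;
}

// High-bit PrefixVarint: the byte count is determined by the leading
// ones of the first byte alone: 0xxxxxxx -> 1, 10xxxxxx -> 2, ...,
// and 0xff -> 9 (the unary run continuing into the next byte).
int prefix_varint_len(uint8_t first) {
  return __builtin_clz(~((unsigned)first << 24)) + 1;
}
```

For instance, the classic LEB128 example bytes `0xE5 0x8E 0x26` decode to 624485, while `prefix_varint_len(0xF0)` returns 5, matching the 35-bit row in the diagram above.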

Also, zigzag encoding and decoding is faster than sign extension, for variable-length integers. Protocol Buffers got that part right.
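Concretely, the ZigZag mapping Protocol Buffers uses interleaves signed values as 0, -1, 1, -2, ... so small magnitudes stay small when varint-encoded. A minimal sketch (assuming arithmetic right shift on signed values, which mainstream compilers provide):

```c
#include <stdint.h>

// ZigZag: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
uint64_t zigzag_encode(int64_t n) {
  // n >> 63 is an arithmetic shift: all-ones for negative n, zero otherwise.
  return ((uint64_t)n << 1) ^ (uint64_t)(n >> 63);
}

int64_t zigzag_decode(uint64_t z) {
  return (int64_t)(z >> 1) ^ -(int64_t)(z & 1);
}
```

Both directions are pure bit operations, which is why they beat sign extension of a variable-length quantity.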

Note that if there are no non-canonical representations, there can't be security bugs caused by developers forgetting to check for them. For this reason, you may want to use a bijective base 256[0] encoding, so that there aren't multiple encodings for a single integer. In the UTF-8 world, there have been several security issues due to UTF-8 decoders not properly checking for non-canonical encodings and programmers doing slightly silly checks against constant byte arrays. A bijective base 256 saves you less than half a percent in space usage, but the cost is only one subtraction at encoding time and one addition at decoding time.
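As a sketch of what a bijective base-256 body could look like (names are hypothetical, not from any shipping codec): every byte string decodes to exactly one integer and vice versa, so there is no non-canonical form to check. In pure bijective numeration zero maps to the empty string; a real varint format would pair this with a length prefix.

```c
#include <stddef.h>
#include <stdint.h>

// Bijective base-256: digits run 1..256 (stored as byte value d-1),
// least-significant digit first. Each integer has exactly one
// representation, so decoders need no canonicalization check.
size_t bij256_encode(uint8_t *out, uint64_t n) {
  size_t len = 0;
  while (n > 0) {
    uint64_t d = (n - 1) % 256 + 1;   // digit in 1..256
    out[len++] = (uint8_t)(d - 1);
    n = (n - d) / 256;
  }
  return len;  // n == 0 encodes as the empty string
}

uint64_t bij256_decode(const uint8_t *in, size_t len) {
  uint64_t n = 0;
  for (size_t i = len; i > 0; i--)    // most-significant digit first
    n = n * 256 + (uint64_t)in[i - 1] + 1;
  return n;
}
```

The subtraction/addition of 1 per digit is the whole cost the parent comment mentions; note that 256 (not 257) still fits in a single digit here.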


It's not too late! The wasm binary encoding is open to change up until the browsers ship a stable MVP implementation (then the plan is to freeze the encoding indefinitely at version 1).

The primary advantage of LEB128 is (as you mentioned) that it's a relatively common encoding. PrefixVarint is not an open source encoding IIUC.

We'll do some experiments in terms of speed. If the gains are significant we may be able to adopt something similar (this [0] looks like a related idea).

Thanks for the suggestion.

[0]: http://www.dlugosz.com/ZIP2/VLI.html

PrefixVarint isn't open-source, but the encoding is trivial.

PrefixVarints are a folk theorem of Computer Science, (re-)invented in many times and places.

I actually coded it up once in Python and once in C before joining Google, and was chatting with an engineer, complaining about the Protocol Buffer varint encoding. The person I was complaining to, said "Yea, Doug Rhode did exactly that, called it PrefixVarint. He benchmarked it much faster."

I have been advocating for the PrefixVarint encoding you mention for a while.

One thing I'd mention though: as you've specified it here, it puts the continuation bits as the high bits of the first byte. I think it may be better to put them in the lower bits of that byte instead. It would allow for a simple loop-based implementation of the encoder/decoder (LEB128 also allows this). With continuation bits in the high bits of the first byte, you pretty much have to unroll everything. You have to give each length its own individual code-path, with hard-coded constants for the shifts and continuation bits.

The downside is one extra shift of latency in the one-byte case, imposed on all encoders/decoders.

Unrolling is probably a good idea for optimization anyway, but it seems better to standardize on something that at least allows a simple implementation.

Here is some sample code for a loop-based implementation that uses low bits for continuation bits:

    // Little-endian only. Untested.
    #include <stdint.h>
    #include <string.h>

    char *encode(char *p, uint64_t val) {
      int len = 1;
      uint64_t encoded = val << 1;
      uint64_t max = 1ULL << 7;
      while (val >= max) {
        if (max == 1ULL << 56) {
          // Special case so 64 bits fits in 9 bytes:
          // a 0xff marker byte followed by the raw 8-byte value.
          *p++ = (char)0xff;
          memcpy(p, &val, 8);
          return p + 8;
        }
        encoded = (encoded << 1) | 1;  // append one more continuation bit
        max <<= 7;
        len++;
      }
      memcpy(p, &encoded, len);
      return p + len;
    }

    const char *decode(const char *p, uint64_t *val) {
      if ((unsigned char)*p == 0xff) {
        // 9-byte special case
        memcpy(val, p + 1, 8);
        return p + 9;
      }
      unsigned char b = *p;
      // Can optimize with something like
      //   int len = __builtin_ctz(~(unsigned)b) + 1;
      int len = 1;
      while (b & 1) {
        b >>= 1;
        len++;
      }
      *val = 0;
      memcpy(val, p, len);
      *val >>= len;
      return p + len;
    }
You can have an equally simple implementation (plus one mask operation) if you put the length encoding in the most significant bits. The advantage of having length in the most significant bit is that in the common case (1 byte integers), the decoding is faster.

Are you sure? It does not seem like it will be as simple. When continuation bits are at the top of the first byte, they come between the value bits in the first byte and value bits in the subsequent bytes. This means you have to manipulate them independently, instead of being able to manipulate them as an atomic group. With low continuation bits, all the value bits get to stay together.

If it would be as simple, you should be able to easily modify my sample encoder/decoder above to illustrate.

I think this is definitely an improvement over the wasm varint implementation. However, wasm bytecode is almost always going to be delivered compressed with gzip or brotli, so measurements of compression and speed should be taken after those. In particular, I'm wondering if a plain non-variable integer encoding would be best, considering how brotli and gzip operate on byte sequences.

This is definitely something I'd really like to see benchmarked: how valuable it is to layer two different compression schemes compared to relying only on the "complicated" one (gzip or brotli).

Can you please explain how you'd use "bijective numeration" specifically? What do you think has to be changed or added to your proposal:

    7 bits -> 0xxxxxxx
    14 bits -> 10xxxxxx xxxxxxxx

Since I started hearing about WebAssembly I cannot stop thinking about the possibilities. For example: NPM compiling C-dependencies together with ECMAScript/JavaScript into a single WebAssembly package that can then run inside the browser.

For people thinking this will close the web even more because the source will not be human-readable: remember that JavaScript already gets minified, and other languages already get compiled into it (using Emscripten), as well. The benefits I see compared to what we have now:

- Better sharing of code between different applications (desktop, mobile apps, server, web etc.)

- People can finally choose their own favorite language for web-development.

- Closer to the way it will be executed which will improve performance.

- Code compiled from different languages can work / link together.

Then for the UI part there are those common languages / vocabularies we can use to communicate with us humans: HTML, SVG, CSS etc.

I only hope this will improve the "running same code on client or server to render user-interface" situation as well.

More importantly, if we want to make "view source" more palatable in a WebAssembly age, we need to have it support source maps from day 1.

Yes, that would be good for development / debugging (like debug symbols) or as an optional way to give people access to the source.

Considering how critical SharedArrayBuffer is for achieving parallelism in WebAssembly, I'm hoping we see major browsers clean up their Worker API implementations, or even just comply with spec in the first place.

Right now things are a mess in Web Worker land, and have been for quite some time.

Absolutely agreed. We should be able to debug Web Workers with development tools (e.g. set breakpoints / examine state / etc), nest Web Workers, use console.log from within a Worker, and construct Web Workers from Blob URLs. It's infuriatingly difficult to work with Web Workers without these features, which are missing from most browsers!

I think there's a chicken-and-egg problem with regards to Web Workers: Developers do not use Web Workers because they are hard to use/debug/develop with, and browser vendors do not improve their Web Worker implementations because they have limited adoption. Someone needs to break the cycle.

There's also a host of nasty bugs and implementation deficiencies, depending on the browser.

My favorite is in Chrome with simple DedicatedWorker instances communicating with each other directly via the MessageChannel API. It works, except when the UI thread is blocked, because messages are routed through the UI thread. Firefox doesn't have this problem, but it has its own issues—namely the UI thread for each tab runs in the same OS process, unlike Chrome where tabs are isolated processes.

That said, Firefox and Chrome are far ahead of every other browser in terms of how they implement workers. Other implementations are borderline destitute by comparison (e.g. no DOMHighResTimeStamp available in worker context, no Transferable support for important items).

>Developers do not use Web Workers because they are hard to use/debug/develop with, and browser vendors do not improve their Web Worker implementations because they have limited adoption.

You hit the nail on the head. I think it's also a dislike for the API as a whole, because really workers are just a convoluted way of enforcing thread safety. Personally I'd prefer a far more simple and traditional shared memory model, with developers being afforded enough rope to hang themselves with if they so desired.

The existing methods of transferring data to and from workers are frankly crap, the only light at the end of the tunnel being SharedArrayBuffer and the Atomics API designed around it. The problem is that both are essentially designed for compiled applications, à la asm.js and WebAssembly. In a compiled [browser] environment, the heap is seamlessly allocated on a SharedArrayBuffer, so writing parallel code is nearly identical to the traditional desktop experience from a developer's point of view. In plain old JavaScript, however, you have to serialize and deserialize native types to and from the buffer, which is expensive. It really makes JavaScript seem like a second-class citizen with regards to parallelism.

> It works, except when the UI thread is blocked, because messages are routed through the UI thread.

This bug actually causes a ~40% slowdown in one of my projects, since I use postMessage() in a Worker context to quickly yield to the browser. I didn't mention it, since I figured it may have been fairly obscure... I'm somewhat glad to hear that others are experiencing pain from it. It gives me some hope that it'll eventually be addressed!

SharedArrayBuffer has been accepted as Stage 2 at TC39, so there is now a good chance you will have this from JavaScript as well.


Right. My point was that the way the API is built is definitely more friendly towards compiled use cases. With JS, you have to manually serialize and deserialize virtually everything that transits the buffer.

Still, it's nice to see the spec move forward.

> We should be able to debug Web Workers with development tools

Being actively worked on, as far as I can tell. I agree that not having this sucks.

> nest Web Workers

Works in Firefox; haven't tested in Edge.

> use console.log from within a Worker

Works in Firefox and Chrome, fails in Safari, haven't tested in Edge.

> construct Web Workers from Blob URLs

Works in every modern browser, I believe.
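For reference, a minimal sketch of the Blob-URL construction being discussed. Worker, Blob, and URL.createObjectURL are standard browser APIs; the actual worker spawn is guarded so the snippet is inert in environments without Worker:

```javascript
// Sketch: spawning a worker from an inline script via a Blob URL, so no
// separate .js file needs to be served.
const workerSource = `
  self.onmessage = (e) => self.postMessage(e.data * 2);
`;
const blob = new Blob([workerSource], { type: 'application/javascript' });
const blobUrl = URL.createObjectURL(blob);

if (typeof Worker === 'function') {
  // Browser-only: construct the worker from the object URL.
  const worker = new Worker(blobUrl);
  worker.onmessage = (e) => console.log('doubled:', e.data);
  worker.postMessage(21);
}
```

One caveat worth knowing: a worker created this way has an opaque origin for its script URL, so relative imports inside the worker source won't resolve the way they would for a file-backed worker.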

If anyone at infoworld.com reads these comments:

On the top of the page, there is a horizontal menu containing "App Dev • Cloud • Data Center • Mobile ..."

When I position my cursor above this menu and then use the scroll wheel to begin scrolling down the page, once this menu becomes aligned with my cursor, the page immediately stops scrolling and the scroll wheel functionality is hijacked and used to scroll this menu horizontally instead.

It took a few seconds to realize what was happening. At first I thought the browser was lagging - why else would scrolling ever abruptly stop like that?

I closed the page without reading a single word.

I still think there is a lot of room for static pages with links, in the style that people seem to be prematurely waxing melancholy about when forecasting where WebAssembly _may_ lead the internet. I was always able to find sites of interest that didn't include Flash, Java applets, and company when I just wanted to read something. I find the scroll-hijacking and other javascript goodies on modern pages to be either a distraction or non-functional on some devices. On the other hand, I am particularly happy about, and working with, Pollen in Racket, a creation by Matthew Butterick. Pollen is a language created with Racket for making digital books (books as code) and bringing some long-needed, real-world publishing aesthetics back to the web [1,2]. I may even buy a font of his to get going and support him at the same time!

   [1]  http://docs.racket-lang.org/pollen/
   [2]  http://practicaltypography.com

To me, it's more about choice of programming language than performance. Though the latter is very important, I think the former is what will open up doors to making the browser a platform of choice (pun intended). Currently, it feels like JavaScript is the Comcast of the web. Everyone uses it, but that's only because there aren't any other options available to them.

Definitely agree! I really hope that WebAssembly will kill JavaScript (and CSS, by the way). I just hate this language.

If you want to see Brendan's keynote from O'Reilly Fluent yesterday, a sample went up https://www.youtube.com/watch?v=9UYoKyuFXrM with the full one at https://www.oreilly.com/ideas/brendan-eich-javascript-fluent...

I think the web may split into two.

1) 'Simple' web pages will stick with jQuery, React, Angular, etc. type code. Where you can still click view source and see what's going on. Where libs are pulled from CDNs etc.

2) 'Complex' saas web apps, where you need native functionality. This will be a huge bonus. I'm in this space. I would love to see my own application as a native app. The UI wins alone make it worth it!

What does 'native functionality' mean for a web app?

Do you mean skipping the DOM and making a Canvas for displaying content? Or do you mean something else?

WebAssembly... Wow, if we keep going, we'll re-invent what Sun achieved 20 years ago with Java. If only they hadn't f-ed it up...

The JVM's problem was that it had applets but did not have the DOM integration of JavaScript. I often wonder what would have happened if, instead of JavaScript in 1995, we had got WebAssembly and WebSockets.

You could actually call into JavaScript from Applets, using something called LiveConnect. See https://docs.oracle.com/javase/tutorial/deployment/applet/in...

Was it simple/easy? No. But you probably wouldn't want to... The DOM is a crappy way to build an application UI. Someday we might figure that out.

If you consider custom elements, how is it different from any other way of building a UI?

Video of the talk?

EDIT: Here is the full-length one - https://www.oreilly.com/ideas/brendan-eich-javascript-fluent...

Here's his Fluent keynote from yesterday: https://www.youtube.com/watch?v=9UYoKyuFXrM .. full at https://www.oreilly.com/ideas/brendan-eich-javascript-fluent... (click X on the popup window, you don't need to sign in)

I want to agree with him; I'd like to see a future where WebAssembly closes the gap between native apps and the web. For better or worse, browsers are the new OSes, and I dream of a future where all vendors come up with the equivalent of a POSIX standard, where any web application can access all (or a wide common subset) of any device's capabilities, from the filesystem to native UI elements.

Your comment reminded me of this highly entertaining talk - https://www.destroyallsoftware.com/talks/the-birth-and-death...

Although tongue in cheek, I think it gives some food for thought. I feel like WebAssembly is to asm.js what the modern JS profession is to the old follow-your-cursor effects on webpages: it becomes something to take seriously and use. Having done a bunch of porting with Emscripten, the idea of a browser within a browser doesn't sound as crazy as it used to!

To be honest, WebAssembly isn't really javascript anymore. asm.js was, albeit only sorta-kinda-just-barely (but in an important way), but WebAssembly isn't. There's a reasonable case to be made that in 20 years "everything" will be WebAssembly, but we won't be calling it Javascript, thinking of it like Javascript, or using it like Javascript.

In the long term, this is the death knell for Javascript-as-the-only-choice. Javascript will live on, but when left to fend for itself on its own merits, it's just another 1990s-style dynamic scripting language with little to particularly recommend it over all the other 1990s-style dynamic scripting languages.

But Javascript programmers need not fear this... it will be a very long, gradual transition. You'll have abundant time to make adjustments if you need to, and should you not want to, there will still be Javascript jobs for a very long time.

You act like JavaScript's only upside is the fact that it's required in the browser.

IME the opposite is true. I'm seeing companies flock to it outside of browser contexts, in areas where "code reuse" or "isomorphic/universal" style programs aren't even possible.

You're asking for the ability to make perfect UI-spoofing attacks (among other types of attack). It is vitally important to maintain a wall between the browser's UI and anything the remote code can touch.

Allowing web sites to access the file system is not a great idea.

Having the capability does not imply having the permissions.

What could possibly go wrong with that.

Permission to read a file's contents in a web app is granted either by dragging the file from your desktop onto the web app or by choosing it with the file picker. To save a file, a web app has to ask the user to go through a Save As dialog.

This permission model has pretty much always existed; it was just extremely wasteful, because you had to first send the file to a server and then send the contents back. The new web file APIs therefore don't add any new security issue, but they do add massively better UX.

To me this is a much smarter model than something like "can I have full access to your fs, yes/no? BTW, this app doesn't work if you say no". I think you are thinking of this stupid model when you say "what could possibly go wrong with that?", but if not, please elaborate.

Not exactly disagreeing with you, but hasn't the web been our primary path to most forms of code execution for a long time now? I mean, if you count HTTP as the web, in addition to browsers.

Hey, I've got an idea. How about we just implement this POSIX like standard at the OS layer.

We can call it POSIX.

It's a shame you got downvoted for that, as you make a very good point. This whole trend of making the web browser a poor man's OS is definitely a bit hinky. I mean, how many layers of abstractions built on top of other (redundant) layers of abstraction really make sense?

This stuff is one reason that, despite the advances associated with Moore's Law, the advent of SSD's, and increasing RAM counts, computers don't feel any faster than they did in 1995. It's ridiculous in a way.

Just to play Devil's Advocate: maybe web browsers should be good at, ya know, browsing and leave the other stuff for something else.

> how many layers of abstractions built on top of other (redundant) layers of abstraction really make sense?

As many as needed. This is a political problem, not a technical one.

OSs don't want to provide one common framework for writing and distributing sandboxed one-click-install write-once-run-anywhere applications. So browsers are solving this problem on their own.

Maybe you don't care about this, but users, developers who need to write universal apps, and their marketing managers definitely do.

> It's a shame you got downvoted for that, as you make a very good point.

I think this is because the comment comes off as flippant and snarky.

> This stuff is one reason that, despite the advances associated with Moore's Law, the advent of SSD's, and increasing RAM counts, computers don't feel any faster than they did in 1995. It's ridiculous in a way.

That statement is ridiculous. I've never heard anyone claim that the computers of today don't "feel" any faster than computers of 20 years ago, but if you feel that way I just don't think you're living in the same universe as those of us who walk around with quad core computers in our pockets.

> maybe web browsers should be good at, ya know, browsing and leave the other stuff for something else.

Please define "other stuff" and where you draw the line between that and simply "browsing"

As someone who has been around since before the Web, I can confirm that computers today do not feel any faster... despite the fact that your phone is faster than the fastest computer in the world from that time.

In fact I gave a speech about this at Berkeley last week. I think it'll be online pretty soon.

So now you have at least heard someone claim this.

> As someone who has been around since before the Web,

While this is an impressive credential, it's one that I can also claim.

> I can confirm that computers today do not feel any faster... despite the fact that your phone is faster than the fastest computer in the world from that time.

I appreciate your confirming a subjective feeling based on anecdote, but as someone who was also around in 1995 and has continued to use computers daily since, I'll respectfully provide my own experiences as counter-anecdote to your own. I don't think there's any point in trying to debate our subjective opinions regarding how fast computers feel, but I'll assert that if you sat someone down with a 133mhz P1 desktop with 32MB of ram and a 2ghz i5 with 2GB of ram, 9/10 they'd agree that the 2ghz computer unequivocally feels faster than the 133mhz one.

Dude my first professional programming experiences were on a 486/33. Compared to that a P1/133 is pretty darn fast!

But as you say, there is not much point debating subjectivity here. It's not like I had the foresight to record benchmarks of how long it took web pages to appear, or to open a window, etc, back in the mid-90s.

Edit: How about if I put it this way:

If you go back in time to the 90s and tell everyone "20 years from now, we will have a much more advanced web where EVERYONE WILL HAVE A SUPERCOMPUTER IN THEIR POCKET", people would imagine the web would be amazing, and responsive and beautiful, and we would be doing some seriously intricate stuff.

Instead ... no, we have a pile of junk that only kind of works, and slowly at that. In terms of potential unreached, the web is kind of a massive failure. (Yes, it is "successful" in the sense that we are able to do a lot with it that we could not 20 years ago, but the mediocre is the enemy of the good, and all that).

> people would imagine the web would be amazing, and responsive and beautiful, and we would be doing some seriously intricate stuff.

I think this is what happened. Everyone can agree that there are many examples of extreme over-engineering on the modern web, but sites like gmail, facebook, youtube, twitch, google docs etc, by the standards of 1995 are pretty damn amazing, responsive, and beautiful. Concerns about privacy and ads have made us wary of these trends on the web, but from a purely functional perspective, the modern web has achieved incredible technical feats compared to what was possible in 1995.

> Instead ... no, we have a pile of junk that only kind of works, and slowly at that.

Yes, there is a lot of junk on the web, and yes, it "kind of works", but this is true of all software on all systems. There are plenty of compatibility issues with native software across operating systems, there's also plenty of junk software on the desktop and in mobile app stores. All software is crap and the web is no different, but it isn't especially crappy, it's just that we see a lot more crap on the web because visiting a URL is a lot easier, safer, and more discoverable than executing arbitrary binaries.

> by the standards of 1995 are pretty damn amazing, responsive, and beautiful

Yeah no. If you had gone back to 1995 and told me that gmail was what you would get when I have a supercomputer in my pocket, a super-super computer on my desk, and all web pages are served by SUPER-super-super computers, I would have quit the industry out of depression.

It is some horrible bullshit when you look at it in perspective.

About the quality issue, no surprise that I also disagree there: the web is especially crappy.

I do not consider any piece of software that I use to be performing acceptably (native or web), but there is a stark difference between the native apps and the web apps, in that the native ones are at least kind of close to performing acceptably, and also tend to be a lot more robust.

Web apps not working is just the way of life for the web. Any time I fill out a new web form I expect to have to fill it out three times because of some random BS or another.

Look at all the engineers employed by Facebook and especially Twitter. WHAT DO MOST OF THOSE PEOPLE EVEN DO? Obviously the average productivity, in terms of software functionality per employee per year, is historically low, devastatingly low. What is going on exactly??

I think it's a shame you got downvoted as well. Have an upvote on me.

As to the rest:

> I've never heard anyone claim that the computers of today don't "feel" any faster than computers of 20 years ago,

Interesting, I find it to be a fairly common refrain. In fact, what I'm saying is basically just a paraphrase of Wirth's Law: software is getting slower more rapidly than hardware becomes faster.

> but if you feel that way I just don't think you're living in the same universe as those of us who walk around with quad core computers in our pockets.

Well, I walk around with a quad core computer in my pocket as well, and I still stand by that assertion.

> Please define "other stuff" and where you draw the line between that and simply "browsing"

I'll allow that there's some subjectivity there, but when you're talking about a "web application" like, say, Microsoft Outlook online or something, or a programming editor or a CAD program or an image editing program, I can't help but wonder if that stuff should really be done purely "in browser" as opposed to being handed off to another program.

OTOH, I understand (some of) the arguments for doing it this way. Having a uniform experience for all clients, the security holes associated with plugins, avoiding the need to deploy software to individual machines, etc. I'd just like to suggest that people spend some time considering if there are other ways to achieve the same end(s) other than continuing to bloat the web browser until it replicates all the functionality offered by the underlying OS.

> I find it to be a fairly common refrain.

I don't object to the idea that some software trends towards sluggishness because of feature creep or lazy developers, but I take issue with the statement that computers of today don't feel any faster than computers of 20 years ago because there is a large cross section of computing tasks that are wrapped up in the notion of what constitutes a "fast" computer.

For example, in 1995, running Paintshop Pro and netscape on the same machine was about the limit of what my computer could handle at once. Today, I can run photoshop, chrome, Visual Studio and 2 VMs simultaneously without skipping a beat. In 1995 just trying to minimize netscape could result in a 30 second wait while the system attempted to redraw the windows beneath it.

I have distinct memories of how my brain was conditioned to avoid certain actions, because they would render the machine practically inoperable if care wasn't taken to ensure that no more than a few programs or operations ran simultaneously. Today, even on Windows, I can leave dozens of programs (including the browser, with a dozen tabs of its own) open for months at a time and experience zero slowdown; compare this with 1995, when restarting a sluggish Windows PC was a daily ritual because it would just become unusable if left with multiple applications running overnight.

Even in 1999, if I decided I wanted to play the original StarCraft, I needed to close all other applications to avoid game-breaking slowdown, and even then, accidentally hitting alt-tab resulted in a 30-second wait while the desktop rendered itself and another 15 seconds for the system to return context to the game. Today, I can leave all my work open in the background, play a few games or seamlessly alt-tab to adjust my playlist, and then continue working afterwards without any impediment. In 1995 it took my computer 20 to 60 seconds to boot; today, thanks to the SSD, it takes 8 seconds maximum from boot to desktop on Windows, and even less on Linux.

Today, you don't even have to think about performance (as a user) because the vast majority of common computing tasks can be performed effortlessly by modern systems.

> Having a uniform experience for all clients

An experience that is uniformly slow and uniformly broken a different way on every browser...

I largely agree, but the argument is "If we rely on plugins, some users will have the plugin and some won't and since users don't install plugins, not everybody will be able to use our $THING".

And it is a somewhat legitimate argument. Whether or not it justifies having the browser subsume everything is, IMO, an open question.

I think if we decide heavily siloing / sandboxing is the right thing for software generally, then what you want to do is build an operating system that works that way (kind of like iOS, but with provisions to enable better data sharing so that you can actually make things with that OS).

This would be TREMENDOUSLY better than trying to make the browser into an OS.

I agree. I used to be anti mobile app until I came to the same realization. Did you see https://www.qubes-os.org/ posted earlier today? It looks like an interesting sandboxing approach.

What do you think about a browser tab that loads a VM running Linux running OpenJDK that runs a full Java application in its own sandboxed OS instead of an applet, with some mechanism for file transfer to the host OS? You could also support any other language, WINE, Mono, whatever. The point is having a sandboxing mechanism that gives existing native code first class status in the browser. Too hacky?

> I've never heard anyone claim that the computers of today don't "feel" any faster than computers of 20 years ago

It's a fairly common observation. What Andy giveth, Bill taketh away, and so on.

Someone else linked to a talk that mentioned removing all the layers in some theoretical architecture called METAL (it's an old talk), basically running asm.js (again, the talk is old) directly through the kernel, and even removing the overhead that kernels need to make native code safe (such as the Memory Management Unit); as a result, it would run faster than normal native code.


The major thing to be gained from all this then is software that can run fast but not have to be recompiled for all the different systems and hardware.

Build some nice sandboxed hypermedia-application APIs into POSIX and get them adopted all over, and then sure, we can talk!

There is an effort in this direction in emscripten (with musl).

For example with pthreads:


Are those boxes in the picture Firefox OS phones?

Is this an old picture?

Good catch. The URL of that image [1] seems to indicate it's from April 2014 (or earlier).

Seems Brendan Eich resigned that same month/year [2].

[1]: http://core0.staticworld.net/images/article/2014/04/brendan-...

[2]: http://recode.net/2014/04/03/mozilla-co-founder-brendan-eich...

What is the upgrade path for Emscripten users? I understand that LLVM will have WebAssembly backend, but how will OpenGL to WebGL translation work, for example?

Emscripten can already compile to both asm.js and WebAssembly, just by flipping a switch between them.

All the JS library support code is unchanged, so Emscripten's OpenGL to WebGL layer is used just like before, and the same for all the other libraries.

The WebAssembly backend in LLVM will eventually be used by Emscripten as another way to emit WebAssembly (right now it translates asm.js to WebAssembly), but the new backend is not ready yet.

See also https://github.com/kripken/emscripten/wiki/WebAssembly

I wonder if, along with these bytecode engines, we'll get capability-grained control systems too. Somehow I doubt it, though.

So in the future, when you visit a website they'll be able to, e.g., open windows, pop up unblockable modals, run WebGL, serve bytecode-loaded spam/ads, etc. The end user's option will be to block everything, or live with it.

I do not like this bold new world we're entering.

This is a common confusion somehow. The programming language has nothing to do with the APIs provided by an environment. JavaScript can do all those things now, as long as you run it in an environment that provides APIs to do those things (node.js, electron etc). The browser is not that environment. When you write a keylogger virus in C, you are relying on the APIs provided by the environment to do it, they don't come from the C language.

If you think WebAssembly (or asm.js) is a good idea, I would very much like you to do the thought experiment of what design decisions something like WebAssembly would have made 15 or 25 years ago, and what consequences those would have today.

Helpful research keywords: Itanium RISC Alpha WAP Power EPIC Java ARM Pentium4 X.25

I can't think of any software development API that ended up being perfect 15 or 25 years later. Javascript certainly isn't. Java applets, ActiveX controls and Flash very much weren't, but, at the time, they did things you couldn't with the standard web stack.

And we're better off for learning the lessons of the failures, creating improved technologies to replace them (HTML5, JIT Javascript engines, etc), and building on the successes to continuously do more things that previously couldn't be done in the browser.

Will WebAssembly be perfect? Of course not. Will there be unanticipated problems? Of course. I would not at all be surprised if it becomes the next Flash. But it's better to move forward and keep innovating with new web technologies instead of letting the platform stagnate.

We've tried feature-freezing the web for a few years; it was called "Internet Explorer 6" and it sucked.

WebAssembly shouldn't be for end users to use directly; it should be used for implementations of other languages, so they can access the same APIs JavaScript can.

Add Lua to the browser, add Perl 6 to the browser, etc. There are plenty of decade old W3C specifications that never made it to the browser properly, like XSLT 2.0, XQuery 1.0, XForms, never mind the latest versions of the specs.

What exactly will be better? One can compile a lot of languages to JavaScript today. JavaScript is fast enough and size doesn't really matter for most use cases. Is WebAssembly going to be much faster than JavaScript?

Compared to asm.js today it'll start-up faster because the format parses quicker. It will also be smaller in size compared to a gzipped asm.js equivalent.

Edit: Also, browser vendors will optimize it more consistently than they do with asm.js currently.

Also, compared to Emscripten the llvm backend for WebAssembly is upstream so you might see more frontend languages like Swift and Rust add support for it.

"Fast enough" still isn't fast enough for mobile devices. I have a high end phone and still get multi second freezes from all the JS parsing and executing on modern websites.

WebAssembly is basically a portable assembly language (as in, lower level than C) that then gets translated, instruction for instruction, into actual assembly language.

It's several layers 'below' JavaScript. It's basically cross-platform, native code.

And moreover, the resulting native code is executed inside the same VM that executes normal Javascript, so it's nothing like starting some Flash file; we can have native-code speed, when needed, inside the Javascript engine. That already existed with asm.js. This step should reduce the overhead of parsing such code, which matters when there is a lot of code, as in games or big programs translated directly from lower-level-language code bases. Less overhead means less battery drain and a faster start for the program, for example.
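To make the "same VM" point concrete, here is a small sketch of loading a WebAssembly module straight from Javascript. The bytes encode the standard minimal example module, a single exported i32 add function, hand-assembled for illustration:

```javascript
// A hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// Magic number and version, then type / function / export / code sections
// describing one (i32, i32) -> i32 function.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,             // "\0asm" magic
  0x01, 0x00, 0x00, 0x00,             // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,             // function section: 1 func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" (func 0)
  0x0a, 0x09, 0x01, 0x07, 0x00,       // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b  // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate; the exports are callable like plain JS functions.
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
console.log(instance.exports.add(2, 40)); // 42
```

There is no plugin boundary here: `instance.exports.add` is just a function living in the same engine as the surrounding Javascript.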

Any application relying heavily on 64 bit integer arithmetic will be vastly better off...
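A quick sketch of why: JS numbers are IEEE-754 doubles, so integers are exact only up to 2^53, and real 64-bit integer math has to be emulated in 32-bit halves. The `add64` helper below is illustrative, not from any particular library:

```javascript
// JS numbers are doubles: integers are exact only up to 2^53.
// Above that, 64-bit integer arithmetic silently loses precision.
const big = Math.pow(2, 53);        // 9007199254740992
console.log(big + 1 === big);       // true: the +1 is lost
console.log(big + 2);               // 9007199254740994: representable again

// Emulating a genuine 64-bit add therefore means two 32-bit halves with
// manual carry handling, asm.js-style:
function add64(aLo, aHi, bLo, bHi) {
  const lo = (aLo + bLo) >>> 0;            // low 32 bits, unsigned wraparound
  const carry = lo < (aLo >>> 0) ? 1 : 0;  // did the low-word add wrap?
  const hi = (aHi + bHi + carry) | 0;      // high 32 bits plus carry
  return [lo, hi];
}
console.log(add64(0xffffffff, 0, 1, 0));   // [ 0, 1 ]: carry into the high word
```

WebAssembly has native i64 operations, so all of this bookkeeping (and its cost) disappears.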

Has anyone tried NativeScript? https://www.nativescript.org

Heard about it on a podcast recently, haven't had a chance to try.

Just pointing this out to prevent future confusion: this comment is very off-topic.

- WebAssembly is a new low level language for client-side scripting in web browsers. Future web browsers will support WebAssembly in the same way they currently support JavaScript. WebAssembly has a number of advantages over JavaScript, including performance and an AST-like syntax that makes it more suitable as a compilation target.

- NativeScript is a framework for developing "cross-platform" mobile apps. It achieves this through a JavaScript/TypeScript API and common UI components that are implemented natively on both iOS and Android.

I have not tried NativeScript and I am skeptical of projects that aim to "bridge the gap" in mobile development. iOS and Android are ever-evolving and so you must rely on the platforms that target them to stay up to date. Further, these platforms have very different design goals and the compromises that frameworks like NativeScript make often come at the expense of user experience.

NativeScript could be great! But please be aware of the shortcomings of eschewing native development.

Is WebAssembly going to be host-URL resource based (like current .js files are), or will it be used as part of some centralized global assembly cache (GAC) solution, where assemblies are only usable from a CDN type of authority?

I wish the browser vendors focused on CSS Grid module support as much as they did WebAssembly.

If we keep this up, the web will be almost as good of an application framework as a '90s era desktop application. Yay, progress!

WebAssembly = SWF with diff name. Come on!

The format of WebAssembly could have been Java bytecode.

It's lower level than that.

Yeah, great. Transform everything into opaque binary blobs, as far as the eye can see. Wonderful.

Thanks for nothing.

From http://webassembly.github.io/ : "Open and debuggable: WebAssembly is designed to be pretty-printed in a textual format for debugging, testing, experimenting, optimizing, learning, teaching, and writing programs by hand. The textual format will be used when viewing the source of wasm modules on the web."

Yeah, nice. This, and so many other formats that people just throw up their hands and give up on when confronted with the raw binaries they work with on a daily basis, are simply open and wonderful all the time.

Except not.

Portable Executables. ELF binaries. Zip Files. Open Image Formats.

All of these are theoretically open, and perfectly accessible to all, in their raw form.

And yet broadly inaccessible to like 90% of the world's lay people, since the concept of an interpreter eludes them, and in some cases is explicitly denied to them. The same will happen with this.

This puts things on a shelf, well out of reach to many more people. And a very small group of people love that.

So, while encrypted smart phones and email "go dark" on mass surveillance, the rest of everything else "goes dark" for ordinary people.

I don't know. I am not sure yet. What the HN folks think about this?
