A world to win: WebAssembly for the rest of us (wingolog.org)
157 points by fulafel on Aug 17, 2023 | 142 comments


In my opinion WebAssembly and all the in-browser applications are a waste of power. Please do a small experiment: make a 30-minute video call on your laptop using both natively compiled Skype and Google Meet. In my case Meet drains almost 2x the power: after an hour-long meeting I have no battery left, but after a one-hour meeting on Skype I still have 56% of battery.

It's because you cannot have native performance on something that is still a virtual machine.

Another reservation I have is the global migrate-to-the-cloud trend, where I can see CPU-heavy programs like Adobe Photoshop moving into the browser, while I feel much more comfortable using natively compiled applications. They're just faster and feel smoother to work with. Also, when I have a locally installed Photoshop or Lightroom, nobody demands any money from me. I can have a dedicated laptop for my photo processing, and nobody will take it from me because I didn't pay for the next month. But that's another story, about making people dependent on subscriptions in every area of our lives...


> Please do a small experiment: make a 30-minute video call on your laptop using both natively compiled Skype and Google Meet.

I don't think this is a Wasm issue. IIRC Google Meet uses WebRTC for encoding, which is the compute-heavy part of a video call. They might use Wasm for background blurring or something like that.

In general I suspect that the issue here is that Skype as a native app can access some video-specific hardware acceleration that the browser can't. CPU-bound workloads should be close to native speed (the OP says within 10% of native, which tracks with my experience).


The user writing:

> In my case Meet drains almost 2x the power: after an hour-long meeting I have no battery left, but after a one-hour meeting on Skype I still have 56% of battery.

is comparing apples to oranges via an uncontrolled comparison.

> WebRTC for encoding, which is the compute-heavy part of a video call

On Windows in Chrome, Google Meet will correctly use the same hardware acceleration for video encoding that Skype does. Both will ultimately use DXVA on modern hardware. On macOS, the story is complicated and I know less about Skype's internals.


> comparing apples to oranges

It's because laymen simply don't know how to account for their own bias. They conceived an experiment to try to show an objective truth, but it was in fact biased - and they probably didn't even realize it.

This is why science isn't done this way; you need to put a lot of effort into designing the experiment so that preconceived biases don't show up - things like double blinding, randomization, etc.


I agree, virtualization layers always add a bit of overhead on principle, but in 2023 we use plenty of them and the hit is small enough to ignore for most uses. The cause here is either some bad coding/library dependencies or the lack of direct access to better hardware, as you mention - but then browsers should allow such access, with a good permission process.


> but then browsers should allow this

Agreed! Between WebCodecs and WebGPU compute shaders, I think we’re getting closer to this ideal.
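
For what it's worth, here's a rough TypeScript sketch of what asking the browser for hardware-accelerated encoding via WebCodecs looks like; the codec string, bitrate and sendOverNetwork() are made-up placeholder values, not anything from the article:

    // A minimal sketch, assuming a browser with WebCodecs support.
    // The codec string, resolution and sendOverNetwork() are placeholders.
    const config: VideoEncoderConfig = {
      codec: 'avc1.42E01E',                     // H.264 Baseline, just an example
      width: 1280,
      height: 720,
      bitrate: 1_500_000,
      hardwareAcceleration: 'prefer-hardware',  // ask for the same encoder a native app would use
    };

    const { supported } = await VideoEncoder.isConfigSupported(config);
    if (supported) {
      const encoder = new VideoEncoder({
        output: (chunk) => sendOverNetwork(chunk),  // hypothetical transport function
        error: (e) => console.error(e),
      });
      encoder.configure(config);
      // encoder.encode(frame) for each VideoFrame captured from the camera...
    }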


WebRTC can use hardware offload for video decoding, at least in Firefox.


Even that is both browser and OS dependent, in my experience.


This is highly likely to be Google Meet using codecs (like VP8/9) where the hardware acceleration is nothing like as well developed (read: available or power-efficient) as the H.26x series, which Google resists for licensing reasons. They also like doing simulcast, where the burden of encoding different stream resolutions for different end clients is carried at the sender end, whereas others do this transcoding in the cloud.

Edit: Meet also apparently uses MediaPipe, which in the browser results in heavy WASM/GPU usage that on a native API could go to an NN accelerator instead. It is fair to say that stuff can melt batteries too.


It is a waste of power (currently).

But you can have native performance on a virtual machine by translating your VM code to machine code and just running the machine code. There are ways to sandbox without interpretation. WASM goes to some extent in that direction, but not all the way, mostly for safety.

So it wastes some power. The question is: so what? WASM is attractive because it's an instant deployment. You open a URL, boom. Nothing to explicitly download, install and launch. Nothing to maintain. And eventually uninstall. It removes the friction for apps you use once in a while, or... just once, even.

In engineering & design, efficiency comes in many kinds. You waste power on the user's machine, but you got your foot in the door by them opening your URL and running your app. It's not how many of us, including me, want the world to be, but it... be how it be.

P.S.: The real flip-side is when sites start figuring out ways en masse to profit from your compute by visiting their sites. Crypto mining as one example. Oh well.


You don't get "native performance" (as in the speed that a native application would run) even with JIT. You still need to implement the semantics of the virtual machine, and generate the native code, add safety guardrails for memory safety, etc. The Hotspot JVM is a world class implementation that does all of that well and still loses to C/C++ outside specialized benchmarks.


The JVM has different semantics than C/C++, so it is not an apt comparison. Asymptotically, performance-wise there is nothing inherent to a JIT compiler that would put it behind an AOT one.


Skype is an Electron app currently. If you think current Skype is the better performer imagine the old Skype that was truly native.


I'm not talking about the GUI but rather the camera/picture processing. Yes, I know that Electron is fast; I have very good experience with VSCode, which is maybe even faster than the "native" IDEA in Java. However, IDEA does much more for me, so I can't really compare.

Maybe I'm too old-fashioned, I don't know. What I do know is that creating a multiplatform native application is really hell (I've done it), and JavaScript is much cheaper; that's why everyone is doing Electron, which literally launches another separate Google Chrome into your memory... Yes, it's cheaper, but memory is much more expensive nowadays than it used to be when I started programming...


> I'm not talking about the GUI but rather the camera/picture processing.

That's usually done by the browser when using WebRTC (except for postprocessing, e.g. background blur), not the application (using WASM or JavaScript).

Some browsers do this in hardware, others don't – just like some native videoconferencing applications are horrible CPU hogs, and others are quite efficient.
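
Roughly, the send path looks like this (a sketch, not any particular app's code); the application hands raw tracks to the browser and never touches the encoder:

    // Sketch: in plain WebRTC the page never sees the encoder.
    // The browser negotiates a codec and encodes (in hardware where available).
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    const pc = new RTCPeerConnection();
    for (const track of stream.getTracks()) {
      pc.addTrack(track, stream);   // encoding happens inside the browser from here on
    }
    // createOffer() / setLocalDescription() / signalling omitted.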


> It's because you cannot have native performance on something that is still a virtual machine

JIT compilers are a thing, so this statement is just plain false.

Also, apples to oranges comparison. I’m not saying that in this particular case it is not true, but that can’t be deduced from this datapoint.


JIT works only after some time (warm-up), and it's still always a memory-vs-CPU tradeoff. But you're right, today Java's JIT gives near-native performance. Btw, I wonder how the memory cost compares between native and in-browser for the same application, e.g. Photoshop. Anybody know?


A JIT can also collect information about your workload at runtime, leading to better specific optimizations in some places. You can supply this data yourself for some native compilers: https://en.wikipedia.org/wiki/Profile-guided_optimization


> JIT compilers are a thing, so this statement is just plain false.

Do you know of any JITed languages that approach SIMD optimized native code in terms of performance and overhead?

Even the best ones (eg. LuaJIT) fail to come close to even moderately optimized C code, with much higher memory usage. On top of that, JIT optimizations don't always trigger and you fall back to interpreted levels, which are orders of magnitude more expensive.


Why would LuaJIT be the best? I'm fairly sure that V8 and HotSpot, which have had a million times more dev time spent on them, are way better on every front.

And autovectorization is definitely done by HotSpot (I would guess also by V8, though not sure), but autovectorization in itself is very finicky in both AOT and JIT compilers, so there is now a stronger move towards user-exposed controls, e.g. Java's new Vector API, and C# has also had something like that for years. Funnily enough, C doesn't have any standard way to control vectorization, only compiler-specific extensions.

Also, memory layout is a completely different problem space, don't judge JIT compilers on that. On numerical code they are very competitive with AOT compiled, low-level languages, both JavaScript and Java. Of course, if a given object cannot be escape-analyzed, then the semantics don't allow further inlining and somewhere that pointer indirection has to be paid. But don't forget that besides benchmarks, most real life code in C/C++/Rust whatever will also happily chase pointers, so the difference is again, not as stark as some benchmark may tell you. Also, this is more related to GC+JITted languages and is not inherent to JIT alone. With that said, I just realized that Julia sits in this exact niche, which also demonstrates my point very well.

> On top of that, JIT optimizations don't always trigger and you fall back to interpreted levels, which are orders of magnitude more expensive.

An optimization "triggering" is exactly analogous to what an AOT compiler would do, only with a smaller time/resource budget. What you are getting at - speculative optimizations - are a different story, and are not a necessity for a JIT. And they get deoptimized when the assumptions they made for the optimization are proven incorrect (e.g. in Java's case virtual calls may have been replaced by static ones because only a single class implementing the given interface was loaded at the time - but upon loading a new implementation this will have to be deoptimized). You are right that deoptimizations can be expensive, when they go wrong, or in the worst case go into some cycles, but in practice with V8/HotSpot these work well enough. The initial warmup is more likely to cause some jitter, if any.


Thank you for your answer, interesting read!

> Why would LuaJIT be the best?

It was one example out of the group of best ones, sorry for not being clear enough.

> Also, memory layout is a completely different problem space, don’t judge JIT compilers on that.

Exactly; you're stuck with the memory floor the language provides you with, JIT or not. This means a fixed minimum cost (resources, heat, electricity etc) that would otherwise not be there.

> On numerical code they are very competitive with AOT compiled, low-level languages

How competitive? I've never seen a benchmark where a JITed language compares to the equivalent program in C, barring the few exceptions where the C implementation was written in such a way as to enable the author to say "we're faster than C".

> most real life code in C/C++/Rust whatever will also happily chase pointers, so the difference is again, not as stark as some benchmark may tell you

Oh, most definitely. On the other side of that same coin, most real-life code in JS/Java/C#/PHP/Ruby consumes orders of magnitude more resources than its lower-level counterparts.

Often, a lot of that is due to writing idiomatic code in languages where idioms are expensive (eg. `[... new Set(arr)]` in JS), which often also means that underlying libraries and frameworks also use these idioms. Even worse when libraries like lodash/ramda become popular and you have another order of magnitude more overhead compared to the equivalent for loops.

You could of course optimize that, just like you could optimize your C/C++/Rust/Zig program.

> You are right that deoptimizations can be expensive, when they go wrong, or in the worst case go into some cycles

If what you're saying is true then that's great news! My own experiences (>decade ago) of observing other teams experiment with JITed languages showed more pessimistic worst-case outcomes (embedded engineers trying out embedded scripting languages).


I've seen quite a few posts like https://users.rust-lang.org/t/a-performance-problem-compared... , https://discourse.julialang.org/t/why-is-julia-faster-than-c... ... And many are from average programmers writing C++/Rust vs Julia; this is really good news for scientists/engineers who need quick experiments.


Julia is a good example of a language with a JIT that doesn't follow the stereotypical OOP layout, so its memory overhead will be only that of the runtime (which is definitely not zero, but nor is it too big).

> Oh, most definitely. On the other side of that same coin, most real-life code in JS/Java/C#/PHP/Ruby consumes orders of magnitude more resources than its lower-level counterparts.

Java is actually within the compiled languages' group of results with regard to energy efficiency. A good GC allows you to be very power-efficient and trade memory for CPU time: it only has to run the GC when memory really is close to filling up. After a collection, it can free the old region in effectively one `nop`, simply by declaring that region unused (at least for the moving kind of GC).


Julia. In fact delaying code generation as late as possible in that language is key to SIMD efficiency because the JIT can re-order loops, etc, with the knowledge of runtime values.


Java and .NET, because you can use SIMD primitives directly.


The clutching of pearls here is not going to change things in the slightest.

I personally have pretty bad battery drain from Zoom. I think there's definitely a lot of futures where WebCodecs (for even better hardware accelerated encoding) & WebTransport & Media-over-QUIC open up technical paths to dominate & crush most native app implementations.

And unlike native apps, which rely on mountains of proprietary systems, the web can be a platform where anyone can put together new ideas quickly & easily, that leverage enormously advanced common platforms.

I am just so tired of all the web whining. None of these people have the faintest clue what the web is moving towards, have the slimmest interest in understanding why their gripes are actively being made irrelevant. It's brutal, enduring such strong & wrong anti-vision, having to forever be handling these very loud persistent minorities whose motto is, to quote the band Kula Shaker, "whatever it is I'm against it".


FWIW, the performance of my emulators [0] (mostly integer bit twiddling code written in 'portable C') is pretty much identical in the browser and natively compiled version (within around 25% of each other).

Of course one could go farther by using cpu- and compiler-specific optimizations (for instance SIMD intrinsics), but some of those are also portable to WASM. In general, memory layout optimizations translate directly to the WASM linear heap, and that's by far the most important performance factor before it makes sense to look into platform-specific optimizations.

[0] https://floooh.github.io/tiny8bit/


... 25% ? I'd sell my soul for a computer that's single-thread 25% faster than what we have right now


Any code in interactive applications needs wiggle room to allow for a much greater variation in performance than 25% anyway; WASM will just "hit the wall" slightly earlier.

If macOS decides to run your code on an efficiency core or a mobile device throttles the clock to avoid overheating, you'll lose a lot more performance than just 25%.

Or if some users have a lower end device, or you use different compilers for different target platforms, or a new hardware exploit hits and the mitigation results in a general performance loss.

The only situation where such 'wiggle room' isn't required is when you completely control the environment your code runs in, but in this case you don't need runtime portability nor safety, and thus WASM wouldn't make sense in the first place.


There is the hope that in time webassembly can and will be implemented directly in silicon.


One might think this would have emerged from some university-level fab already !


Here's a youtube video of this talk: https://www.youtube.com/watch?v=lUCegCa7A08

Really excited about Scheme compiling to WebAssembly. A whole world of possibilities indeed!


Yes. The Spritely Institute is working on this and has some nice blog posts and example apps.

https://spritely.institute/news/scheme-to-wasm-lambdas-recur...


Follow along with Guile Hoot (Scheme -> WASM compiler and general WASM toolchain) here: https://gitlab.com/spritely/guile-hoot


Apparently "the rest of us" is people interested in using garbage-collected languages with WASM, because the post is mostly about the new-ish GC support in WASM.


I don't believe that's the correct way to use WASM. If one has a garbage-collected language, one may be better off compiling to JS, or at least to a combination of JS and WASM, with WASM handling some linear C-like computation whose results are then offloaded back to JS.

People need to look at JS, WASM, WebGL/WebGPU and the other browser APIs as a single platform, and use the right parts for the right job, instead of reimplementing JS functionality in WASM and vice versa etc.


A platform shouldn't dictate details like what programming language to use to create applications for that platform. It's already bad enough with iOS and ObjC/Swift or Android with Java/Kotlin, but at least there are escape hatches on those platforms. WASM is the escape hatch that was needed on the browser platform.


I'm discussing JS and WASM as two compile targets for your preferred language. No one is "dictating" anything. But JS and WASM are complementary in some important ways. GC being one.


Emscripten has shown that JavaScript is a terrible compilation target; that's why WASM exists in the first place (Emscripten started by compiling to vanilla JS, which was then 'standardized' into the asm.js subset, and finally ended up in WASM ... each of those steps improved performance, and a JavaScript/asm.js backend would never have been accepted into upstream clang, while WASM is "just another ISA").


I find it frustrating you casually ignored the primary context of the comment/thread you're replying to, namely: suitable targets for GC (garbage collected) languages, not for C and C++, which you surely know are not garbage collected. WASM is a suitable target for Emscripten. But I never said otherwise, and this was not the discussion.

Also.

WASM is not "just another ISA" at all, as I explained in other comments throughout. For one it has an opaque call stack and no free jumps inside from one code location to another, unlike any other ISA in the world. This means it's terrible for languages which lean heavily into tail calls and coroutines, among other. It's also bad for GC languages as a custom collector can't compete with JS's GC.


Support for tail calls is in Phase 4 of the standardisation process: the last phase before becoming part of the WebAssembly standard.

There is already a WASM back-end for Golang, which implements coroutines without tail calls or sparse stacks.

BTW, ISAs with opaque call stacks and verified branch targets, respectively, do exist, but I dunno if there is hardware that does both. You could use Intel CET for the latter, and enable "Safe Stack" or a flavour of "Control Flow Integrity" instrumentation in LLVM to emulate the former. There are also projects that recompile existing binaries. Google Fuchsia has Safe Stack enabled for all binaries, except for Golang unfortunately.


You can implement coroutines on anything but it’s slow.


I simply don't see that there's a big enough difference between traditional garbage collection, refcounting and manual memory management. Each of those can already be implemented in pure WASM (without the new GC support), just more or less awkwardly.

As for "just another ISA", there have been CPUs which had separate call- and data-stacks, with the call-stack living on the CPU and not directly accessible to code. In that sense WASM isn't much different then those esoteric CPUs, yet those also implemented "just another ISA".

And even though WASM might not allow free control flow, I yet have to see a noticeable performance difference between WASM and native for this type of "worst case code":

https://github.com/floooh/chips/blob/f5b6684ff6e899429544b21...


We can implement everything on an infinite tape read/writer, but the point is performance. I'm trying to find something to say about you calling a CPU "esoteric" due to its ISA, and then calling its ISA specifics "just another ISA". Kind of seems like a 180 in the middle of your sentence. But one way or another, TCO and indirect calls are crucial for modern languages, and I'm glad they'll eventually be usable in WASM, as you pointed out in another comment.


> Each of those can already be implemented in pure WASM (without the new GC support)

I'm not super familiar with WASM, but (according to this article/talk) implementing GC in pure WASM doesn't really work cross-module: in particular, there's no way to trace liveness across modules (e.g. if some JS is holding a reference to one of our WASM values), and the least-privilege/capability approach is lost since we have to grant access to all of our memory at once (e.g. if we're storing the heap as one big buffer).

> just more or less awkwardly

There are hacks and work-arounds, but it's a pretty silly situation when there's already a GC right there for the JS engine. It's important to get the details right when standardising (e.g. to allow future advances, rather than locking-in current techniques), but the current situation of downloading multiple standalone GCs isn't really acceptable.


Are you aware that WASM is going to have a GC soon?

It's in phase 3 (Implementation Phase): https://github.com/WebAssembly/proposals


Despite its name, people are using WebAssembly outside of the browser. And it could be useful to have a GC in those contexts. Hooking in something like V8 is counter-productive in those cases.


JS is still going to be used by those compiling to WASM. WASM modules will need to import host functions from the JS environment. JS is an unappealing compiler target for a number of reasons, but I'll highlight one relevant to Scheme: no tail calls.
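
For example, instantiating a module and handing it its host functions looks roughly like this (a sketch; 'demo.wasm' and the import names are made up):

    // Sketch: the module only sees the host functions you explicitly pass in.
    const imports = {
      host: {
        log: (x: number) => console.log(x),
        now: () => performance.now(),
      },
    };
    const { instance } = await WebAssembly.instantiateStreaming(fetch('demo.wasm'), imports);
    // Anything not listed in 'imports' simply doesn't exist for the module.
    (instance.exports.main as () => void)();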


You may be unpleasantly surprised by this, but WASM also has no tail calls.

No, despite its "asm", it has no jumps ("goto") to addresses and labels except structured (nested-block) ones, like in JS, and with a hidden/implicit call stack. Like in JS.

This means if you want tail calls you need to emulate them with similar techniques you'd also use in JS.
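
For example, the classic trampoline workaround (a sketch in TypeScript; the same shape applies whether the compiler targets JS or pre-tail-call WASM):

    // Sketch: emulate tail calls with a trampoline. Each "tail call" returns a
    // thunk instead of calling directly; a driver loop keeps the stack flat,
    // at the cost of an allocation and an indirect call per step.
    type Thunk = number | (() => Thunk);

    function trampoline(step: Thunk): number {
      while (typeof step === 'function') {
        step = step();
      }
      return step;
    }

    // A tail-recursive countdown that would overflow the stack as plain recursion.
    const countdown = (n: number): Thunk =>
      n === 0 ? 0 : () => countdown(n - 1);

    console.log(trampoline(countdown(1_000_000)));  // 0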

So, what else makes JS a less appealing compiler target? Let's unravel this myth.


I'm just more excited about WASM as a compilation target and I don't really feel like getting into details. The current WASM spec does not have tail calls, but there is a proposal for tail calls that is sitting behind a feature flag in V8 right now. I am currently working with Andy on Guile Hoot, a WASM toolchain for Scheme, and we use the instructions (return_call and friends) in that proposal. Like GC, the tail call proposal has high likelihood of landing across all browsers.


> but WASM also has no tail calls

Is this still true? I don't know if I'm reading this right, but this page says the tail call extension is shipping since Chrome 112:

https://chromestatus.com/feature/5423405012615168

PS: yep, shipped in Chrome 112, but not yet in any other stable browsers version (only in nightly or behind feature flags): https://webassembly.org/roadmap/


I didn't know they're working on it, so thanks for the pointer. But clearly not something you can use yet. Unless somehow you only target bleeding edge Chrome.

But that's an important advancement.


Okay, someone tell me why JVM isn't it? Is there something fundamentally flawed in the JVM that can't be fixed?


Quoting from Stack Overflow (a streaming-compilation sketch follows the list):

- WebAssembly was designed with delivery-over-HTTP and browser-based use in mind. For that reason, it supports streaming compilation - i.e. you can start compiling the code while downloading.

- WebAssembly was designed to have rapid compilation times (resulting in web pages that load quickly); this is supported by having very simple validation rules compared to Java / JVM languages.

- WebAssembly was designed with the concept of a 'host' environment, i.e. the browser.

- WebAssembly was designed to be secure and simple, minimising the overall attack surface.

- WebAssembly was designed to support a great many languages (C, C++, Rust, ...), whereas the JVM was initially design for a single language, Java.

https://stackoverflow.com/questions/58131892/why-the-jvm-can...
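
To make the first point concrete, here's a sketch of the streaming API versus the buffer-it-all-first form ('app.wasm' is a placeholder):

    // Streaming: compilation starts while bytes are still arriving.
    const { instance } = await WebAssembly.instantiateStreaming(fetch('app.wasm'));

    // Non-streaming: the whole binary must be downloaded before compilation begins.
    const bytes = await (await fetch('app.wasm')).arrayBuffer();
    const { instance: instance2 } = await WebAssembly.instantiate(bytes);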


The JVM requires a huge standard library, is tightly tied to implementation details of the host, is not easily compiled to by any language with stack types and pointers, embeds a number of historical mistakes, makes sandboxing basically impossible, and must be very carefully written if performance approaching native is required.


> is tightly tied to implementation details of the host

Heh?

> embeds a number of historical mistakes

Well, you need history for that first. Wasm won’t be any different.

> makes sandboxing basically impossible

Why would it? It just turned out that whitelisting should be the default approach to security over blacklisting. There is nothing inherent in the JVM that would preclude one from doing that.


Safely sandboxing Java is difficult because a lot of the standard library is written in native code with full filesystem access. And Java code can dynamically load other Java code. So it's much easier for a security vulnerability in the standard library or runtime to quickly escalate into an RCE vulnerability. Java applets being secure largely depends on the entire JVM + JRE being secure. Given how massive that is, there have (of course) been many sandbox escapes over the years. Oracle's security vulnerabilities database[1] lists 730 CVEs in the Java runtime environment. (They aren't all sandbox escapes, but wow, that's a lot!)

WASM is far, far easier to reason about from a security perspective because it's orders of magnitude simpler. There is no standard library. All wasm code (including any library code you compile into your software) runs in an entirely encapsulated virtual machine with no ability to interact with anything outside of its little bubble. (Except for any functions the application explicitly passes in to the wasm module.) The wasm module boundary is fixed and simple, and as a result it's much easier to secure.

I'm sure someone will eventually find a sandbox escape for wasm code somehow. But I bet there'll be at least an order of magnitude fewer sandbox escapes in wasm than we've had with Java, because of the differences in the security model.

[1] https://www.cvedetails.com/vulnerability-list/vendor_id-93/p...


We are not talking about applets; those came from a time when the rest of the web technologies were way behind, and the best we could do was to put up a canvas to render into.

Also, why would we expose the full filesystem API to a theoretical JVM for the Web? The core of the JVM is a simple stack machine and a GC; there is nothing demanding all these additional hooks to useful but potentially unsafe APIs. Of course WASM is safe without any outside functionality - so is Brainfuck, in and of itself.


> The core of the JVM is a simple stack machine and a GC; there is nothing demanding all these additional hooks to useful but potentially unsafe APIs.

So you're suggesting we put a subset of the JVM in the browser, with no access to the java standard library?

Why?

You couldn't run any existing Java code, or any code in other languages without a compiler. It would just be a weird sub-language that nobody really wants to use, with no existing code or libraries. The pressure would be on to add all the crap from the JRE - creating tension between security (who knows how secure any of that code is) and usability (who wants a language with no standard library?). This has all played out before, and Java ended up giving up and deprecating SecurityManager.

Even if you succeed, to what end? So we can run java in the browser? So we have 2 average languages instead of 1? Should we add a C# subset next? The beauty of wasm is that any language can target it.

Or would we compile our C, Go, C# and Rust to this new Java subset? But that wouldn't work well - Java is missing many features which would be needed to run C code properly. You could make it work using some horrible hacks on top of ByteBuffers, but then it'd run much slower than native code. Much slower than wasm runs today. What's the point when we could just do the same thing on top of JavaScript? (And we did - it was called asm.js).


I have replied to many of these points in another thread where we talk, so I won't repeat those.

I suggest putting in the browser a subset of the JVM that has access to many parts of the standard library, but where certain APIs need proper browser permissions, the same way it works with JS today.

It could run the vast majority of the very high-quality Java ecosystem, and every JVM guest language's code, out of the box. It is also not a bad compile target -- I would argue that most mainstream languages would actually prefer a managed runtime, and C/C++/Rust are the niche. Aren't there orders of magnitude more Java+Python+C#+Haskell+... devs who would prefer to write in their language over JS? Sure, you might say that the reason for WASM's existence is mostly the performance gain for small algorithms, and there certainly is a new niche made available thanks to it, but one way or another people will use it simply because they prefer their stack to JS. The performance aspect is also an improvement over the JS-only world, by possibly allowing proper shared-memory multithreading.

> java is missing many features which would be needed to run C code properly

Why would ByteBuffers be a horrible hack? Is it a worse hack than what C compilers have to do when targeting WASM, like only having structured control flow? It's just an array read mapped to a single instruction, but in context it can also be optimized efficiently (e.g. autovectorized when it occurs in a loop).

GraalVM actually has support for any LLVM language (though their way of interpretation is different enough from the standard JVM bytecode mode that I only mention it as an interesting data point) with quite great performance, certainly comparable to WASM.

Nonetheless, please don't take my possibly argumentative tone as a negative, I enjoy this discussion and you make great points. I also don't dislike WASM, it is imo a net positive thing. I just believe that a JVM-fueled web failed mostly due to politics, not something inherently technical.


> Why would ByteBuffers be a horrible hack? Is it a worse hack than what C compilers have to do when targeting WASM, like only having structured control flow?

Mmm maybe it is fine in java. I've spent a lot of time benchmarking javascript which has a similar set of primitives (typed arrays). I'm not sure how they perform now, but at the time they were always significantly slower than native javascript arrays for some reason.

> GraalVM actually has support for any LLVM language

Cool - I didn't know that! That's definitely a point in the direction of the JVM being viable.

> Nonetheless, please don't take my possibly argumentative tone as a negative, I enjoy this discussion and you make great points.

I'm enjoying this too. For what it's worth, at a big-picture level I think it would definitely be technically possible to do what you're proposing. It's just a question of tradeoffs. Is it easier to tame a subset of java that could usefully fit in the browser, or just start fresh? The downside of starting fresh is that we'd need new compilers and optimizers. And the downside of using java is that the security story is much more complicated, and you'd need to figure out how much of the standard library to bring across and figure out how to secure the FFI.

Obviously the wasm team chose the latter. But perhaps if there were more java fans amongst browser vendors, or the choice was made decades ago, I could easily see the JVM being chosen instead of inventing wasm.


Static initializers and bytecode verification make it much harder to get fast application startup.

Class loaders, reflection, all methods being virtual, and iteration going through an interface make it much harder to get good performance with the complexity and warm-up time of a JIT.

Type erasure and single polymorphic dispatch make the JVM's object representation a weird fit for some languages.

UTF-16 strings are just awful.

Oracle's business practices rightly scare people away from wanting to build on top of anything they control.

The JVM is a very impressive technology, but it's also almost thirty years old and we've learned a lot since then.


Is there an easy to use C/C++ toolchain which targets the JVM? That may be the answer, because essentially that was what gave us WASM.

Getting C/C++ code running in the browser was the motivation for Emscripten, and eventually this evolutionary branch gave us WASM (via asm.js).

(and at the time there was plenty of competition: Adobe's Alchemy/Flascc which targeted Flash, and Google's (P)NaCl which compiled to native instructions and later (a subset of) LLVM bitcode, but notably, there was no similar attempt (that I know of) of a C/C++ compiler which targeted the JVM)



Also, GraalVM has support for every LLVM language.


I once wrote a Linux process emulator in Java for static x86 binaries. It could run at 80% of native speed on x86 and 240% of native speed on ARM, compared to gcc -O3. (I think gcc wasn't very good on ARM at the time.) Sadly it was proprietary.


Oracle


Wrong answer.

There is hardly a more open platform than the JVM: its reference implementation is open source, it has an open specification for both the language and the VM with plenty of completely independent implementations, and it is so big - running business-critical infrastructure at basically every FAANG - that any single one of them alone would pay for its continued development. Also, Oracle has been a surprisingly good steward of the platform, managing to retain almost the whole Java team from the Sun days (remarkably hard to do in a takeover), increasing the pace of its development and finishing open-sourcing everything (formerly OracleJDK had proprietary extensions not found in OpenJDK; nowadays only some trademarked logos remain as the sole difference).

So they mostly own only the name's trademark, which is no big deal. Remember, Google v. Oracle happened over Sun's license, which explicitly forbade using their software on mobile devices. No such restriction exists anymore in the case of OpenJDK, which has the same license as the Linux kernel.


I don't know why but discussions on WASM seem to be triggers for Java developer echo chambers.


Anything of substance to add?


Actually, it is the correct answer. Oracle and its shenanigans are the reason why I avoid their DB and Java.


Hopefully you're on better terms with Google, given their stance on WebAssembly.


You do you, but that is a thoroughly uninformed emotion-based decision.


My somewhat grumpy answer would be that web people almost never do their homework.

They are infamous for reimplementing, badly, things that were readily available decades ago.


All the problems with the JVM have been written up to death, and many are easily inferable from surface-level background knowledge. If you don't know them, who's not doing their homework?


The only problems with the JVM were political. Safety? Should have been whitelisting; nothing inherent to the JVM. Performance? We had shitty hardware back then, and the improvements there alone would have solved the issue (see: we use JavaScript), but especially today, the JVM is more than fast enough for most workloads.


Fundamentally object-oriented, fundamentally garbage-collected, carries around a massive stdlib that's tightly integrated with precise functioning of the VM. And no, the performance concerns do not reduce to 'hardware'; the point is to eke out more performance than a high-level object-oriented garbage-collected system can muster. V8 is already on par with the JVM; if the JVM's performance was good enough for you, you'd just use JS.


> Fundamentally object-oriented, fundamentally garbage-collected

And?

No one said that the performance concerns reduce to hardware — but in the Java applet days that was most definitely the primary reason for their slowness. Since then, with the improvements in JITs, it has become very fast.

> than a high-level object-oriented garbage-collected system can muster

That’s not a well-defined performance level and the ceiling changes each year.

And V8 has no real parallelism, so for many kind of workloads they are not drop-in replacements.


The JVM can't really be properly sandboxed, though. Even the JDK developers have stopped trying and deprecated SecurityManager. On the other hand, WASM is specifically designed to not really be able to do anything fancy unless you give it functions that actually do something externally. Besides, how would you even properly run C code on the JVM?


> The JVM can't really be properly sandboxed

What happened is that people realized that blacklisting does not work. Whitelisting is the correct approach. There is absolutely zero reason why WASM would be better for that than the JVM — the JVM spec in itself has no visible side effects, not even printing, so it can't do anything nefarious (besides CPU vulnerabilities, but those also apply to WASM).

And you would run C code in a completely trivial way: you have a huge array which is your memory, and you read/write bytes to it.
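
The same pattern, sketched in JS/TS terms since that's what asm.js literally did (on the JVM a ByteBuffer would play the role of the backing array):

    // Sketch: one big array is the C program's entire address space.
    const memory = new ArrayBuffer(16 * 1024 * 1024);
    const view = new DataView(memory);

    // *(int32_t*)(heap + 4) = 42;
    view.setInt32(4, 42, true /* little-endian */);

    // int32_t x = *(int32_t*)(heap + 4);
    const x = view.getInt32(4, true);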


> Whitelisting is the correct approach.

Isn't most of the point of using the JVM thrown out if you have a limited standard library?

> And you would run C code in a completely trivial way: you have a huge array which is your memory, and you read/write bytes to it.

That sounds like it would have terrible performance. Would every read of an int have to manually build it from the 4 bytes it's made of? This just seems like something the JVM won't handle that well.

Besides, this would mean that if you want to run a language like C#, which has references and value types (and you can make a reference to a value type from a pointer into a buffer), you would have to emulate them, which will hamper performance, or just use the same strategy as you used for C.


> That sounds like it would have terrible performance. Would every read of an int have to manually build it from the 4 bytes it's made of?

No, a method can have a native implementation, or a compiler intrinsic. Java has ByteBuffers (https://download.java.net/java/early_access/panama/docs/api/... ), and they have put/get{Long,Short,etc.} methods that will map either to a single native pointer read of the given size, or even be optimized into a more efficient read in the context of a bigger method, say vectorized inside a loop (as the compiler knows about these methods and can handle them specially).

> Besides, this would mean that if you want to run a language like C#, which has references and value types (and you can make a reference to a value type from a pointer into a buffer), you would have to emulate them, which will hamper performance, or just use the same strategy as you used for C

So we are back at where WASM is? Though I think there is nothing inherently impossible about mapping C# to more efficient Java primitives, especially since value types have the more constrained semantics: treating each instance as distinct is semantically the same, just less optimal. A pointer to a region of the aforementioned ByteBuffer, with a special memoryToCSharpObject() backed by compiler intrinsics, wouldn't perform too badly, I believe.


> So we are back at where WASM is?

Exactly. By the time you've done all that, what's the point? You'll have reimplemented wasm on top of the jvm in a way that:

- Can't use any of the java standard library (which all existing java code depends on)

- So you also can't use any existing java code

- You need a compilation step for any existing code in other languages to convert it into your special limited java syntax

- And the result will run slower than wasm because of the extra layer of abstraction going on

The one advantage is that you can output everything as standard .class files, so it'll be easier to pull this code into an existing java application.

And yes, you could absolutely do that if you want and you see value in it. Heck - maybe it'd be nice to have a wasm-to-java-class converter to let you reuse all the wasm infrastructure.

But this all sounds like the java equivalent to asmjs, which we already tried. Asmjs was the precursor to wasm. It was built on top of javascript primitives in much the same way as the proposal here builds on top of java primitives. As I understand it, the reason we ended up with wasm instead of asmjs was that:

- Wasm was easier to optimize, had a smaller file size and was faster to load at runtime (since you don't need a javascript compiler)

- Its much easier to make VMs for wasm in lots of languages and environments. Wasm modules can be loaded into programs written in python, rust, C, java, go, javascript, etc without some weird unnecessary dependency on javascript.


> Can't use any of the java standard library (which all existing java code depends on)

One can surely cherry-pick quite a lot out of it that is safe to use; e.g. anything not using a `native` implementation can, as per the earlier definition of the JVM interpreter, also end up being safe. So much existing code would run without any change.

> Re: compilation step

Why? It has nothing to do with Java, the language, only the JVM class file format. Scala/Kotlin produce bytecode directly, not Java code.

> Run slower

Why would it? Wasm can run fast having grown out from asm.js, yet a JITted runtime that runs half of the internet can't?

> Its much easier to make VMs for wasm in lots of languages and environments

There are plenty of toy JVMs out there as well; the core really is not difficult.

And the point of all this would be that large chunks of the JVM, which already runs on every platform with top-notch performance, could have been reused without all the growing pains that WASM will experience. Also, the JVM's type safe cross-language capabilities are already here, while they are very experimental and rudimentary in the case of WASM. I can just call a Kotlin class from Clojure, or JPython or whatever, and vice versa.


> One can surely cherry-pick quite a lot out of it that is safe to use; e.g. anything not using a `native` implementation can, as per the earlier definition of the JVM interpreter, also end up being safe. So much existing code would run without any change.

Java already tried - many times - to have a "safe subset" of the language. As I understand it, the effort was finally abandoned with the deprecation of SecurityManager. But even if you made a safe subset of java, you've only gone from 1 working browser language (javascript) to 2 (javascript and java). Well, and the compile-to-JS & compile-to-JVM languages.

> Also, the JVM's type safe cross-language capabilities are already here

But that leaves out a lot of languages that people care about. Important languages, like C, C++ and Rust, C#, Go and Python. These languages don't compile to the JVM because of limitations in the java bytecode format and type system.

I mean, how would you make C or Go run on the JVM?

Perhaps java's type system could be extended to support these languages too. But it would probably take 20 years of committee meetings to do it. And the time to start that work was 20 years ago. At this point its much easier to start fresh with something new like wasm.

[Edit: In another thread you mentioned graalvm has been made to support any LLVM code. Thats great, and perhaps addresses this point entirely. I don't know enough about it to know.]

> Why would it [run slower]? Wasm can run fast having grown out from asm.js, yet a JITted runtime that runs half of the internet can't?

The challenge is making existing C code run at near-native speeds. How would you do that with your proposal? You'd need to first compile your C to Java classes and then run the code in the JVM. But Java has no native pointers or direct access to an allocator. Java's GC alone would impose a massive performance penalty on C code.

Asmjs was made fast by making javascript engines contain a second, separate compiler for the asmjs dialect. The compiler detected asmjs explicitly and essentially used asmjs as a weird, expensive syntax for a different language. I guess java could do that too. We could have a special .class file syntax subset and add special compiler logic to the jvm to optimize that particular dialect. But if you're going to go down that road, why bother with the JVM? Javascript is already in the browser. For that matter, why not just compile your java classes to javascript?

> There are plenty toy JVMs out there as well, the core really is not difficult.

I'm confused - are you proposing just porting across the core of java, or java with a lot of its standard library? Given the size of the JRE, those are two very different propositions! The former is possible but pretty useless, and the latter is a massive job that the java community has already tried and as I understand it, eventually given up on.

> Also, the JVM's type safe cross-language capabilities are already here,

Oh no, this keeps getting worse. You want to put java's FFI system in browsers too?

The problem with Java's various FFIs is that (AFAIK) they're not designed to be a security boundary. We'd need to redesign them all. The surface area is massive and given java's historically abysmal security record, I don't trust the java engineers to make any of this stuff safe or secure. And neither should you.

No, it sounds like you want to bring most of the JVM into the browser to solve a problem I don't have (run existing java code in the browser). And this still doesn't address the problem that wasm solves (let us run any language in the browser, safely and without a loss in performance). And I can't see any straightforward way to address the security problems or the performance problems that this would entail.

In comparison, I'm much happier with the design of wasm. By making a small, simple new bytecode format which explicitly supports C and C-like languages, all wasm languages run fast by default. Wasm is dead easy to sandbox and easy to audit. And this gives us the best of both worlds, since we should be able to compile Java to wasm like any other language. Better yet - this way you can bring as much of the JVM as you want into your projects, and there's no need to solve the impossible problem of making the JVM secure.


You beat me to it. I thought we covered this ground 20 years ago.


And rejected it because it was a nightmare of security issues and poor performance. The idea isn't bad. The implementation was.


Admittedly I only scanned the article, but I didn't see any mention of C# or Blazor. I've only fiddled with it, but it seems that (when targeting WASM) Blazor loads a (partial) .NET runtime in WASM and then happily lets you execute C# code - which seems in stark contrast to the article's claims about GC.


That's so far the strategy. There are all sorts of garbage collected and interpreted things that run fine in wasm. But at the price of having to include a heavy runtime; which bloats the binaries. Not a showstopper for some things, like enterprise applications running on a fast network. But a bit of a blocker for normal websites.

Firefox and Chrome actually have support for the new GC and threading features in wasm. That's behind a few feature flags. But e.g. Kotlin's new wasm compiler depends on that and manages to ship binaries that are quite small. That compiler is also being used for a new experimental target for Compose Multiplatform, which is based on Google's Android Jetpack Compose and extends it to other platforms (IOS, Desktop, and Web). Web now comes in two variants: html & kotlin/js, and canvas + wasm. The latter renders using the same graphics libraries used on Android and IOS but compiled to wasm. None of this stuff is ready yet because the tooling and frameworks are pre-alpha quality and it requires feature flags to be set by the user. But this is starting to look like a nice alternative stack to things like Flutter and React Native for cross mobile, desktop, and web development.

The feature flags are likely coming off fairly soon. Safari is a bit behind. 2024 is going to be interesting. I'm guessing there will be a whole lot of activity around this in most commonly used languages and stacks.


> But at the price of having to include a heavy runtime; which bloats the binaries.

The other problem is that the "bring your own GC" model means that if you use WASM in the browser, you end up having 2 garbage collectors for your program. And that makes sharing objects between JavaScript and the wasm bundle much more difficult and uglier. You essentially need to do manual memory management for any object that's shared across the wasm boundary to make sure it's freed at the right time.
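
Something like this, as a sketch; 'alloc', 'process' and 'free' are hypothetical exports of a module that manages its own heap inside linear memory, and wasmInstance is assumed to come from an earlier instantiation:

    // Sketch of manual memory management at the JS/WASM boundary.
    const { alloc, process, free, memory } = wasmInstance.exports as {
      alloc: (n: number) => number;
      process: (ptr: number, n: number) => void;
      free: (ptr: number) => void;
      memory: WebAssembly.Memory;
    };

    const data = new TextEncoder().encode('hello');
    const ptr = alloc(data.length);                  // ask the module-side allocator for space
    new Uint8Array(memory.buffer).set(data, ptr);    // copy the JS data into linear memory
    try {
      process(ptr, data.length);
    } finally {
      free(ptr);  // forget this and the module-side heap leaks; no GC will save you
    }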

Having a unified GC means object references can be passed naturally from javascript to C# and back without any special code at the boundary.


> The latter renders using the same graphics libraries used on Android and IOS

That's annoying. I was hoping WASM would be a way to break out of the js/java/swift/ObjC stranglehold, not another baton to enforce such tyranny. It would be cool if they were targeting toolkits like QT instead.


What does one have to do with the other though? You can use any programming language where the compiler supports WASM output, and any cross-platform graphics library that also has a (Web)GL or WebGPU backend.

(also not sure how Qt fits in here, but it also has WASM support: https://www.qt.io/qt-examples-for-webassembly, but IMHO Qt is way too bloated for this sort of thing)

In the end, WASM running in browsers can only use browser features that are also available to JavaScript (e.g. for rendering that would be canvas-2d, WebGL, WebGPU or the DOM). But at least WebGL and WebGPU are close enough to similar native 3D APIs that you can share some code between native ports and WASM running in browsers - but it still makes sense to use a higher-level graphics library which abstracts over the 3D-API differences, or alternatively use wgpu.rs or libdawn in the native versions (which are the standalone native WebGPU libraries used by Firefox and Chrome).


QT makes more sense with languages like C++ or languages that don't have their own UI frameworks. The whole point of running something like C# and Kotlin is using the frameworks common for those languages.


It doesn't claim DIY GC is impossible:

The article first talks about the difficulties a ship-your-own-GC approach faces, and enumerates workarounds for them (the slide "GC and WebAssembly 1.0 (3)" and a couple of the next ones).

Then it starts talking about the planned GC support extension in WebAssembly and decides to go with that.


Sure, but I thought that a successfully implemented GC'd language running on WASM would have been something worth noting.


Blazor isn't unique in that. At least some of the "non-mainstream" languages that the article mentions have WASM solutions. But, because they have to ship a non-trivial runtime as part of each app, they tend to suffer from the same kind of problems Blazor has, which is what the article was addressing.


That's true.

Besides Blazor, the WebAssembly Go port[1] does this as well.

[1] https://github.com/golang/go/wiki/WebAssembly


The article addressed that in some detail. There's a number of downsides to implementing your own GC atop the linear memory, due to limitations in threading model, modular security requirements, etc. Meanwhile the browser has a very high performance GC sitting right there we could expose to WASM fairly easily. That's the basic idea of current efforts and what the article/talk is about.


Before WASM's GC feature was ready, it was basically "bring your own GC" which had to run entirely within the WASM context. With the new GC support in WASM, it's now possible to delegate garbage collection to the WASM runtime by exposing garbage-collected-references to the outside world (for instance in browsers that would be the JS engine garbage collector).


DIY GC is definitely harder in WebAssembly than on most targets due to, among other problems, the lack of stack walking.


The article references browsers and JavaScript, so that’s the context. Nevertheless I found it quite interesting.


Maybe we read different articles? I thought the context was how it was hard to do GC in WASM currently, which is why we don't have many language choices. With that context, C# running on top of WASM seems pretty relevant?


Garbage collected languages have been running in WASM for a long time already, but they had to "bring their own garbage collector" to run in WASM, which increases the size of the WASM blob.

Garbage collection can now be delegated to the WASM runtime instead (e.g. see the short description here: https://chromestatus.com/feature/6062715726462976)


> So why aren't we there? Where is Clojure-on-WebAssembly? Where are the F#, the Elixir, the Haskell compilers?

There's this and other similar projects: https://github.com/helins/wasm.cljc


I believe the original question concerned compilation of Clojure, et al programs to WebAssembly. Unless I'm misunderstanding, your link is to a project for manipulating (assembling and disassembling) WASM bytecode, which is both the wrong direction and not really a compiler at all--at best a constituent part of a compiler.


I use s7 Scheme compiled to WebAssembly and love it. It uses its own GC and is 100% ANSI C so it was quite easy to get going (it is fairly similar to Guile but liberally licensed). This allows me to reuse my domain code from Scheme for Max projects in a WebAudio/WebMIDI context, and control a WASM synth in C++ from Scheme.

Here's my ultra-simple example page, which I will eventually update with the progress from the private app I am doing.

https://github.com/iainctduncan/s7-wasm

Edit to clarify: I am using s7 specifically because that's the one I use in Scheme for Max, my main OSS project. It is great for computer music.


Spritely Goblins renews my faith in the future of circuit society.


My gut feeling is that the built-in GC will either be complex and difficult to implement, or each garbage-collected language will use the new facilities in incompatible ways, or languages will still end up rolling their own GC for some reason.

However, I hope I'll be proved wrong and the new gc facilities will be a great success.


My gut feeling is that it will just wrap everything inside of JS objects, and then reuse the JS GC. But then performance would be nearly identical to writing the same things in JS.


That is a possibility, but hopefully that isn't what happens either.


How do you develop in wasm? I've been unable to get a decent debugging environment with breakpoints running, as the support for them seems experimental. Do you all just fly blind, debugging with print statements?


Does using the JavaScript GC for other languages pose any risks?

Like, does application code rely on GC implementation details to be performant, details that might be different in JS's implementation?


For Haskell to work we still need:

- stack support of the hs-wasm-cabal

- lots of rewriting to remove C string, etc, from many packages:

https://gitlab.haskell.org/ghc/ghc/-/issues/23354


I thought browser VMs lost (Flash, Applets, etc) or was that just because they weren’t open like WASM?


They lost because they were ahead of their time -- the rest of the web tech was way behind and the best one could do was to put up a canvas and render real interactive content there. In the meantime, web tech has improved considerably, so now you don't have to manually render to a canvas; the DOM is expressive enough for many things, so integrating with JS instead of wholesale replacing it makes much more sense. But besides starting from a clean slate and escaping a few historical warts (e.g. 32-bit types would not get that much support in a newly planned JVM simply due to hardware advancements), there is nothing significantly better or novel about it, in my opinion -- maybe with the exception of its history: it being closely related to asm.js is imo a very interesting tidbit.


Flash and applets were replacements for visual sections of webpages, handling rendering, layout, and interactivity in a frame. WebAssembly is just code. (To put it another way: The JS browser VM hardly lost.)


[flagged]


Given that this article has nothing to do with actually promoting a political/economic view, this reaction may be unhealthy.


It's a joke. The whole presentation is riffing on the Communist Manifesto.


I Ctrl+F'd "spectre" to no results. Very disappointed.

Nice username by the way.


Ok, now I'm curious. How is it riffing on the Communist Manifesto?


There's a few references. 'A world to win' is from the closing remarks of the Manifesto. 'With GC, the material conditions are now in place' is a reference to the fundamental basis of Marxist theory. There's probably others that I'm missing. The general tack is to suggest that WebAssembly is being democratized and made available to the masses. Bit of an odd analogy but there you go.


What propaganda does to your brain.


WebAssembly is the Flash of 2010. Even if companies adapted to it, I would rather block it from running on my machine. I prefer transparency these days because almost every company is abusing users' privacy.


Obfuscated javascript is no more transparent than webassembly. And today the webassembly "state machine" is quite isolated from the web-page and web APIs, making it easy to observe all inputs and outputs.

Also, nothing is stopping a web page author from implementing a simple webassembly interpreter in javascript and using that to execute the code if the browser's webassembly engine has been turned off.
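
To make that observability concrete, here is a minimal TypeScript sketch: because every capability a module gets must be handed over through the import object, you can wrap each import and log every call out of the wasm code. The module name "app.wasm" and its single import are hypothetical.

```typescript
// Every capability a wasm module gets must be passed in through this import
// object, so it is easy to wrap each imported function and observe all calls.
function withLogging(imports: Record<string, Record<string, any>>) {
  const wrapped: Record<string, Record<string, any>> = {};
  for (const [modName, mod] of Object.entries(imports)) {
    wrapped[modName] = {};
    for (const [name, value] of Object.entries(mod)) {
      wrapped[modName][name] =
        typeof value === "function"
          ? (...args: unknown[]) => {
              console.log(`wasm -> js: ${modName}.${name}`, args);
              return value(...args);
            }
          : value;
    }
  }
  return wrapped;
}

// Hypothetical module that imports one host function and nothing else.
const imports = { env: { now: () => performance.now() } };
const { instance } = await WebAssembly.instantiateStreaming(
  fetch("app.wasm"),
  withLogging(imports),
);
console.log("exports:", Object.keys(instance.exports));
```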


TBF, that stance makes sense only if you also disable Javascript, otherwise it's just superstition (look at any complex webpage and try to make sense of the minified Javascript code in there, even when pretty-printed that's about as useful as looking at the WASM disassembly).


There's literally zero external capability given to WASM in the browser. It must call out to JavaScript to interact with Web APIs. There simply is nothing to hide.


Quote: "Firstly it's "close to the metal" -- if you compile for example an image-processing library to WebAssembly and run it, you'll get similar performance when compared to compiling it to x86-64 or ARMv8 or what have you."

Is WebAssembly now able to run it outside a web browser, or is this guy just "inflating" results, to put it mildly and not call him a liar? Because last I checked, there is no world where a browser application performs similarly to a native one.


My home computer emulators (mostly integer bit-twiddling code written in portable C) have similar performance between native and WASM with -O3 (within 10..25% of each other) - so yeah, WASM absolutely is in the same performance ballpark for 'regular' C code that doesn't use platform- and compiler-specific language extensions.

Link: https://floooh.github.io/tiny8bit/


I have not seen the value in WASM just yet. To me WASM has always looked like a Flash replacement, and there is value in that, but that isn't what developers wish it were. Developers wish it were a JavaScript replacement, which it isn't and does not wish to be. As a result I see nothing happening here but the disappointment and lost dreams of sad Java developers over the last 6 years.

Being flamed on Reddit by desperate Java developers wishing for WASM who emotionally cannot figure out JavaScript to save their careers was the straw that broke the camel's back. I deleted my Reddit account and never looked back. Dire indications of far deeper problems not specifically related to Java, JavaScript, or WASM.


The way I see it, most application software is moving to the browser (compare the number of greenfield desktop apps that show up on HN to the number of browser-based apps).

In a world without WebAssembly, this would mean that all application software will be written in JavaScript. This is a bad outcome! JavaScript is a fine DSL for GUIs, but it is not a great programming language for applications, especially when you get into graphically-intensive, data-intensive, scientific, etc. software.

Wasm is an out that allows developers to write the application core in another language like Rust or C++, as they would on the desktop, while still delivering software in the browser.

A bunch of products are quietly doing this already. (One is in the process of a $20bn acquisition.)
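
A minimal TypeScript sketch of that split, assuming a hypothetical core.wasm (built from Rust or C++ with no imports) that exports a pure function score(a, b): the GUI stays in JavaScript, the core does the work.

```typescript
// Thin JS/TS GUI over a compiled application core. "core.wasm" and its
// "score" export are hypothetical stand-ins for a module built with
// e.g. wasm-pack (Rust) or Emscripten (C++); this one takes no imports.
const { instance } = await WebAssembly.instantiateStreaming(
  fetch("core.wasm"),
  {},
);
const score = instance.exports.score as (a: number, b: number) => number;

// The DOM handling stays in JavaScript; the heavy lifting lives in the core.
document.querySelector("button")?.addEventListener("click", () => {
  const out = document.querySelector("#out");
  if (out) out.textContent = String(score(2, 3));
});
```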


JavaScript is bad for graphically intense applications for the same reasons Java, C#, and a host of other language families are bad at it: they are garbage-collected languages. Garbage collection imposes a processing jitter. That is certainly a valid use case for WASM, but I am not seeing this take hold in the wild. Again, valid use cases aside, it's not what most developers want to use it for.


> JavaScript is bad for graphically intense applications

Is it? It's not like C++ is tossing around pixels itself; it commands the GPU to do so. Sure, there is a point where the speed at which the CPU can supply data becomes the bottleneck, but it is so far above typical desktop applications that it is incorrect to call managed languages unfit for graphically intensive applications (as shown by all those indie games).

Also, there are many different kinds of GCs, and you can limit the max pause time to very low values in exchange for slightly lower throughput (note: the same tradeoff exists without a GC as well; the two are very often opposite ends of the same spectrum), to the point that unless you are using a special real-time kernel with all the necessary process configuration, your OS's scheduler will introduce bigger pauses.


You can pause the GC and pre-allocate memory in C#, in addition to stack-allocating anything you want. Just because people don't do it doesn't mean it's not possible.

Of course, in reality almost no one bothers, because of "the user can buy a newer machine anyway" sort of thinking, sadly.
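
For what it's worth, the same discipline carries over to the browser. A small TypeScript sketch of the idea, with a made-up particle buffer: pre-allocate once and mutate in place, so the hot loop feeds the garbage collector nothing.

```typescript
// Pre-allocate everything the animation loop needs up front and mutate it in
// place, so no per-frame allocations feed the garbage collector.
const N = 10_000;
const positions = new Float32Array(N * 2);  // x,y pairs, reused every frame
const velocities = new Float32Array(N * 2); // filled once at startup

function step(dt: number): void {
  for (let i = 0; i < positions.length; i++) {
    positions[i] += velocities[i] * dt; // in-place update, no new objects
  }
}

let last = performance.now();
function frame(now: number): void {
  step((now - last) / 1000);
  last = now;
  // ...draw `positions` to a canvas or WebGL buffer here...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```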


>Java developers wishing for WASM who emotionally cannot figure out JavaScript to save their careers

If you got flamed, I imagine it was more about tone than content :)


If the crowd is an echo chamber and emotions are high, content alone is certainly enough to trigger negative behavior. I would think there are plenty of examples of this in a variety of venues; I can't read the news without being reminded of them.


The benefits I saw were high-performance libraries in the browser and runtimes for multiple languages, just by integrating WASM.

Sure, not all languages, but getting Rust and C/C++ out-of-the-box for a start seems to be worth it.

Also, it's an open source runtime, so if you integrate it, you get improvements from the community, which is a huge gain for small projects.


C++ and Rust are already open source, and there isn't any reason to believe applications executed as WASM are faster than near-equivalent JavaScript.

There is an amazing advantage, though, to just exposing C++/Rust applications without requiring a separate installation or executable, for both security and convenience, but I am not seeing much of this in the wild. I think the sandboxed nature of WASM is a bridge too far for many C++/Rust application developers, who would then need to include a tremendous number of support libraries they take for granted outside the browser.


You really believe people want wasm based langs to replace js just because they cannot figure out js??


The more I "figure out" Javascript, the more desperate I am for something to replace it.


It does not matter what I believe. It only matters what people tell me.


let me guess, you are a full time JS dev? lol


Whether I were or weren't, should that lend greater or lesser validation to the irrational behaviors of people seeking validation? Would this matter make more sense if I were motivated solely by bias?



