mpolun's comments | Hacker News

It's for proper integration of garbage-collected languages -- otherwise you need to embed your GC too, which bloats the wasm. JS host VMs have very good GCs these days, so hooking into them allows for better integration by e.g. Go, Java, C#, etc.

Right now wasm is really designed for C/C++/Rust.


Also, WebAssembly can't support most GC runtimes right now, because you can't scan the stack for roots. You can keep your own shadow stack on the heap, but that has a bunch of pretty bad performance implications. This actually impacts C/C++/Rust codegen as well, since they can't create references/pointers to anything on the stack, and have to build their own shadow stack (I think this is done in binaryen or LLVM itself?). I understand it's a security thing to disallow direct stack access, but most languages and runtimes expect it.

Anyway, that's why there needs to be support for GC in WebAssembly: there needs to be a safe way to walk the stack (aside from the obvious JS interop considerations).
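To make the cost concrete, here is a minimal Rust sketch (all names are illustrative, not any real runtime's API) of the shadow-stack technique: since the collector can't see the Wasm value stack, the compiled code spills every live reference into an explicit structure in ordinary memory, and the collector scans that instead.

```rust
// Minimal sketch of a "shadow stack" for GC roots, as a compiler
// targeting Wasm might emit it. All names here are illustrative.

// The shadow stack lives in ordinary (linear) memory, so a GC can scan it.
struct ShadowStack {
    roots: Vec<usize>, // addresses of heap objects that are live right now
}

impl ShadowStack {
    fn new() -> Self {
        ShadowStack { roots: Vec::new() }
    }

    // Before any call that might trigger a collection, the compiler
    // spills every live reference here...
    fn push_root(&mut self, obj_addr: usize) {
        self.roots.push(obj_addr);
    }

    // ...and pops it again afterwards.
    fn pop_root(&mut self) -> Option<usize> {
        self.roots.pop()
    }

    // The collector finds roots by scanning this structure instead of the
    // (inaccessible) Wasm value stack.
    fn scan_roots(&self) -> &[usize] {
        &self.roots
    }
}

fn main() {
    let mut stack = ShadowStack::new();
    // Pretend 0x1000 and 0x2000 are heap objects held in Wasm locals.
    stack.push_root(0x1000);
    stack.push_root(0x2000);
    assert_eq!(stack.scan_roots().to_vec(), vec![0x1000, 0x2000]);
    stack.pop_root();
    assert_eq!(stack.scan_roots().to_vec(), vec![0x1000]);
}
```

The extra stores and loads around every call that might collect are exactly where the performance hit comes from.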


> JS host VMs have very good GCs these days

Aren't all of them single-threaded, though? That is, they only work in a VM with a single thread, which the GC shares? Since WASM is supposedly finally bringing real multithreading to the web, how would that work? It seems like you'd need a new GC for WASM rather than just repurposing the JS GC.


Many JS GCs are internally multi-threaded and can handle multiple allocators. Even though JS isn't itself multi-threaded, the JIT and runtime systems now are, and they can concurrently allocate onto the heap. V8 has had to move towards this ability very gradually because of assumptions from the single-threaded world, but it's now much better positioned to support multi-threaded Wasm GC in the future, should that be necessary.


Is bringing your own GC that hard? I think putting one in the spec would bloat Wasm runtimes, which would be more concerning.


I've done it for Virgil. It's a major pain because Wasm, by design, does not give access to the value stack. So to find roots you need to spill them into memory, into what is called a "shadow stack".

The problem with bringing your own GC isn't just that, though. With everything in linear memory, you're forced to put external references in a table, which roots them. Tables aren't weak... and even if they were, it's possible to set up a cycle between the linear-memory GC references and external references so that leaks happen. This was a problem in other contexts too, for example with V8 embedded in Chromium, and the ultimate result is that you need cooperative garbage collection across two heaps, with queues. While V8 and Chromium trust each other, both not to screw up and to be fast, it's hard to see how to make that cooperative garbage-collection contract work with untrusted Wasm code.


Go ships its own GC as part of the binary for non-wasm targets. Why should wasm be special?


If you share objects between wasm and JS then you want to share garbage collectors so you don’t have to pin them.


Flash, Silverlight, Java, TinyGo and .NET are doing just fine on WebAssembly.


In isolation, yes, but with no cross-language invocations, reference sharing, etc. yet.

It'll work as promised when I can import a C# class in my JS code, instantiate it and then pass one of its methods to a Python module and invoke it from there.

Great times are ahead of us.


You mean like using JScript alongside C# and IronPython on the CLR? Ah, the circles of rediscovery.


Exactly like that, but also in a secure sandbox and in the browser. Can CLR do that without Wasm?


In the browser, that is what Silverlight was all about.

Outside of the browser, WebAssembly is still a shadow of CLR capabilities, including sandboxing.

As for the "secure" in WebAssembly, I advise a good read of the security section, especially the part about memory corruption:

https://webassembly.org/docs/security

https://www.unibw.de/patch/papers/usenixsecurity20-wasm.pdf


Silverlight never had even the slightest hint of security, and no interop with JS or non-CLR (e.g. JVM) languages.

It was also comically slow and worked only in IE.

I'm well versed in the security of Wasm. It's much better than anything else available today or in the past, though obviously still not perfect.

One of the best features of Wasm is entirely social: it seems like everyone has agreed on it, finally. That's enough for me even if it were a 1:1 copy of the JVM or CLR.


The new NoSQL and big-data hype of bytecode-based runtimes.


Not really, just finally a real common language runtime.


So says the marketing, usually pushed by those with a WebAssembly agenda, forgetting all the systems that trace back to the early 1960s and mainframe language environments with capabilities.

Let's sell old stuff as something new, never done before, rewriting history.

A more recent chapter: application servers with WebAssembly, what a great idea!


I really don't think anybody is forgetting anything since people like you keep writing about it in every Wasm discussion thread since at least 2015. At this point, literally everybody involved with Wasm knows.

And people have seen what was in the past and created a modern, well-composed solution that is accepted by all major players. Excellent, if you ask me. Yeah, nobody has invented a new wheel here, but that's not necessary; it might actually be counterproductive to the goals of Wasm. Wasm wants to take stable, well-known ideas, improve on the warts of previous tech like the CLR and JVM, and put them on 100 billion devices.


Supporting GC on WebAssembly has been a _major_ pain for TinyGo because WebAssembly doesn't allow access to the stack. It's really slow and I'm still not confident it's bug-free. Having a GC integrated into WebAssembly itself should solve these issues.

Unfortunately, the current GC design for WebAssembly doesn't support interior pointers, which is going to be difficult to work around (though I don't think it's impossible). Interior pointers are normally required in Go.
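For anyone unfamiliar with the term, here's a small Rust sketch of what an interior pointer is (the struct is made up for illustration). Go produces pointers like this routinely, e.g. a pointer to a struct field or slice element, so its GC must be able to map any such pointer back to the start of the containing allocation.

```rust
use std::mem::offset_of;

// An "interior pointer" points into the middle of an allocation rather
// than at its start. Go creates these all the time (&s.field, &slice[i]).
struct Point {
    x: i64,
    y: i64,
}

fn main() {
    let p = Box::new(Point { x: 1, y: 2 });
    let base = &*p as *const Point as usize;
    // This addresses `y`, not the start of the heap object:
    let interior = &p.y as *const i64 as usize;
    // The interior pointer sits at y's field offset within the allocation.
    assert_eq!(interior - base, offset_of!(Point, y));
    // A GC that supports interior pointers has to map `interior` back to
    // `base` to find (and keep alive) the containing object; a design that
    // only tracks pointers to object starts can't do this directly.
    let _ = p.x;
}
```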


I don't see how it's any different from using any hosting provider. It's probably worth encrypting your databases, but if you don't trust your hosting provider you're hosed -- managed service or not.

If your paranoia is justified (which it may be, depending on your needs), you need to host the machines in your own datacenter.


> If your paranoia is justified (which it may be, depending on your needs), you need to host the machines in your own datacenter

Indeed! That's exactly why the author shouldn't write "((use)) a managed database service", à la "whatever you have in hand, screws or nails, use a hammer!"


swc (https://github.com/swc-project/swc) is a Rust equivalent of Babel and has a bundler under development. Not sure how far along it is.

That would be an interesting comparison of performance and features.


SWC looks cool, but esbuild is more mature, I think. The bugs in SWC's changelog make me pretty wary of using it in production just yet.


swc is more ambitious: it wants to reimplement all of tsc's type checking, not just transpiling: https://github.com/swc-project/swc/issues/571


Same with Postgres. I never use specific lengths for text columns on Postgres.


It's not part of rustup. At least with the VS Code plugin, it'll prompt you to download it the first time it runs. As long as you've got a working Rust installation, it works pretty seamlessly in my experience.


I don't think this is accurate. Other people may know better, but my understanding is that the maintainers of ifconfig realized that updating it to the new behavior would mean changing its output. The output of ifconfig is parsed by a lot of scripts, so changing it would break a million users; they decided to write a new tool rather than add a flag or something to opt into the new behavior.

So it's userspace compatibility, not Linux kernel compatibility, that led to a new tool.

(And Linux will drop old interfaces if it can be proven that no one uses them, or if they're insecure.)


Rust actually has that; the two just can't be substituted automatically because they're different types:

    fn compile_time<T: MyTrait>(thing: &T) {}
    fn run_time(thing: &dyn MyTrait) {}

The reason you can't just substitute them is that you can do more with compile-time generics than runtime ones -- runtime generics have unknown size (different concrete types have different sizes, so what size is the runtime-generic version?), so they always have to be used through a pointer (and are usually heap allocated). There are a variety of other restrictions on trait objects (runtime generics) for type-system reasons.
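To spell out the difference with a runnable sketch (the trait and type here are made up for illustration): the generic version is monomorphized into a separate copy per concrete type, while the `dyn` version is a single function dispatching through a vtable, and the unsized `dyn MyTrait` must always sit behind a pointer.

```rust
trait MyTrait {
    fn describe(&self) -> &'static str;
}

struct Foo;

impl MyTrait for Foo {
    fn describe(&self) -> &'static str {
        "Foo"
    }
}

// Compile-time polymorphism: the compiler generates a separate copy of
// this function for each concrete T it's called with.
fn compile_time<T: MyTrait>(thing: &T) -> &'static str {
    thing.describe()
}

// Runtime polymorphism: one copy, dispatching through a vtable.
fn run_time(thing: &dyn MyTrait) -> &'static str {
    thing.describe()
}

fn main() {
    let foo = Foo;
    assert_eq!(compile_time(&foo), "Foo");
    // &Foo coerces to &dyn MyTrait automatically...
    assert_eq!(run_time(&foo), "Foo");
    // ...but dyn MyTrait itself is unsized, so it must live behind a
    // pointer such as a Box:
    let boxed: Box<dyn MyTrait> = Box::new(Foo);
    assert_eq!(run_time(boxed.as_ref()), "Foo");
}
```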

It would be possible to find cases where compile time polymorphism is replaced with runtime polymorphism, but I'm not sure it would really gain much given the restrictions.

Now if you start changing the language semantics a lot more becomes possible, but I don't even know what changes you would have to make to the language to let that happen.


That's where Jai (if it ever becomes real) holds promise: fast compilation is a first-rate goal, not a second-rate one where you decide language features first and then try to make the compiler "not too slow".


There's a permanent mirror of it: https://github.com/mirrors/linux


It's not official, though; anybody can throw up their own kernel mirror.


    > Now time for a concrete example: PaaS – Platform-as-a-
    > Service.  These modern cloud-based technologies enable an
    > extremely lean and agile dynamic response to delivering
    > software for a problem domain.  For a broad class of
    > business applications, this new era of development
    > represents a game changer.  Akin to 4GL for Web 2.0/Rich-
    > Internet-Applications, a team of 1-2 can easily run
    > circles around larger teams using antequated tools like
    > Eclipse or Visual Studio.
What the hell is this guy talking about? PaaS has nothing to do with what IDE you use. Guess what: two well-known PaaS systems (Google App Engine and Windows Azure) work with the two tools he calls "antiquated"!

To the larger point: great people can do great work with crappy tools; crappy people will never put out great work, even with the best tools in the world.


Well, there's at least one difference: Android is open source, so anyone (even Microsoft, if they wanted to) could take Android and fork it (and make everything but the kernel changes proprietary).

The problem with predatory pricing isn't that they're giving it away; it's that once they've established a monopoly, they can charge more for it, restrict access, etc. With Android that's just not possible.

Also, technically Android is the property of the Open Handset Alliance; it's just that Google does most of the development, especially of big features.

