Ah microbenchmarks, we meet again. (This refers to Bun vs Node only)
Basically my understanding is:
- Bun uses JavaScriptCore and Node uses V8. They’re a bit different, no clear winner.
- Node is bloated for historical reasons, or has an extensive stdlib, depending on your perspective
- Bun uses uWebSockets, a purpose-built, low-memory, event-loop-based HTTP/1.1 web server written in C++ that is extremely fast (and reliable). Credit for the speed should go to them, for HTTP at least
- You can just use uWebSockets with Node if you want, it will be roughly equivalent
- Bun otoh does not support HTTP/2 yet (“they’re working on it”), which is a huge feature blocker, and depending on the implementation you may see major performance regressions (uWebSockets is not currently planning to implement HTTP/2 afaik, which means Bun will have to use an HTTP/2 stack implemented by mere mortals instead)
Overall, I think Bun should stop promoting microbenchmarks (but whatever, marketing is what it is) and focus on their real innovation value: a single-tool, simplified and modern server-side JS environment. This is already compelling. If they insist on benchmarking, why not build a standard CRUD app with Postgres/SQLite, put it behind nginx with SSL, and compare there instead? The difference won’t be that big.
Yes. I don't know where this obsession over making JS fast comes from, considering most web applications are HTTP glue code to a database server and spend most of their time waiting on I/O
Biggest selling point of Bun is the potential it has to empty my node_modules directory and free my mind of worries about supply-chain attacks.
Not scoffing, performance is nice to have. I really like how Bun replaces all development tools and is much faster for package management and bundling. The fast startup is also a bonus and means it could be used for writing command-line tools.
However, as a server-side runtime it doesn't matter much, because the time spent in those libraries is minimal compared to the time spent waiting for I/O
Fastify is not fast in Bun. Bun.serve() is 3x - 4x faster.
Our node:http code currently just wraps Bun.serve(). node:http has to go through many extra string -> buffer conversions and microticks and does a lot of unnecessary work. We have a branch that makes it about 2x faster, but we need to prioritize our 1700+ open issues over performance optimizations right now.
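A rough illustration of the kind of string -> buffer conversion cost being described (a hedged sketch, not Bun's actual internals): encoding the same response body on every request allocates and re-encodes each time, while an optimized path can hand the socket the same pre-encoded bytes.

```javascript
// Hedged sketch of string -> buffer overhead (not Bun's actual internals).
const body = 'Hello, world!';

// Naive path: encode the JS string to bytes on every single request.
function perRequest() {
  return Buffer.from(body, 'utf8');
}

// Optimized path: encode once up front and reuse the same bytes each time.
const preEncoded = Buffer.from(body, 'utf8');
function reused() {
  return preEncoded;
}
```

The bytes that hit the wire are identical either way; the difference is an allocation plus a UTF-8 encode per call on the naive path, which adds up in a hot request loop.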
> We have a branch that makes it about 2x faster, but we need to prioritize our 1700+ open issues over performance optimizations right now.
Recently I even tried to port one of my Next.js projects to it, but to my surprise it does not work on Linux ARM64 due to a broken syscall.
To me, the biggest selling point of Bun is not the runtime performance, but the fact that it can replace all the tools for installing packages and bundling apps. (No need for a node_modules directory with hundreds of packages for a simple hello-world web app.)
Runtime performance is not that important for JS servers, considering that most of the applications are just http glue code to a database server. (Maybe it is just me, but I would never use JS to write performance critical code).
So focusing on making things stable would be a really good move right now.
In any case I'm super enthusiastic about Bun and it has a lot of potential. I hope you can make it work!
They just released 1.0 with massive speed increases over node in many scenarios and still have to fix compatibility issues (like the lack of dgram). Let them prioritise coverage.
I mean, they can do what they want, but if they market themselves as much faster than Node.js and, when people try it, it's not really faster (and it's not the first time I've seen such a benchmark), then it's gonna leave a bad impression. Why bother with all the instability that comes with the project's young age[1] if the project fails to deliver on its biggest selling point?
[1] and I'm not blaming them for that, their goal of being node compliant is a big undertaking
My company's frontend team (comprised mostly of people who believe anything written) jumped on the Bun bandwagon.
After a few days of working with it, it turned out that Bun isn't really faster in real world scenarios.
This brings up the question - why market it as so much faster then?
Many frameworks and tools market themselves primarily as faster than $otherThing, but speed won't fix suboptimal code, wrong choices of data structures, or silly things like "let's select * and sort in the browser". When you remove those mistakes that bad devs make, it turns out node/deno are more than fast enough. What we need is less bloat, more engineering, and less lazy development by "packagemanager install lib-that-could-do-my-work"
>This brings up the question - why market it as so much faster then?
Speed is somewhat the main selling point. It loses to node or deno in most other metrics. I think speed is a dumb thing to have as a main selling point too; nodejs/V8 are already quite fast, and if I needed more speed I'd try to optimise first, before rewriting in Rust. I want stability, soundness, features, a good toolchain etc.
What parts of Bun were you using? All the features I've used (that work*) are faster, the hard part is the compatibility is still lacking so turning on the Bun runtime over just using Node is tricky.
The thing with these minimalist benchmarks is that they aren't comparing a real workload, or a real server/client pair that can leverage persistent connections (the SYN/ACK rounds in setting up a TCP connection have a lot of impact on this kind of thing), internal async/threading mechanisms in the server (different language runtimes and different threading models make a lot of difference as well), and $DIVINITY knows what other kinds of request parsing variations.
It's a fun exercise (I do it myself on occasion), but requires understanding what is actually happening (I see that Jarred has already remarked that Bun.serve() would be the right thing to do here, for instance.)
> Uber moved from Node.js to Go for better performance.
Technically true but very misleading. The linked article states that Uber switched to Go for 1 microservice that was doing a heavy amount of specific mathematical processing that Go was better suited for.
A while back I personally tested the performance of C (-O3, compiled to WASM and to x64) vs JS for basic math and string operations. Overall results:
- almost no difference for string manipulation
- little difference for floating-point math
- a big perf difference for integer math
WASM vs JS also showed little difference in performance (except for integer math); in some cases WASM was slightly slower.
There are several reasons to use a lower-level language, but raw speed is hardly ever one of note. It usually comes down to how good the libs you are using are. As Bun is showing, even the stdlib of your runtime can be lackluster
I wrote a few test cases in both JS and C doing some computations inside a loop. The code between the two versions was equivalent and not micro-optimized. In JS I was using normal variables, so they were stored in memory as IEEE 754 doubles (unless the engine was doing some optimisations I was not aware of). So no manual binary math with buffers in JS
I was specifically looking at how big an impact using normal JS numbers has on integer-heavy math. It turns out it is quite big, but for float64 math, not so much.
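The kind of loop being compared might look like the following (a hypothetical sketch, not the original test code): integer work where C's native ints pull ahead of JS's float64 numbers, versus float work where both sides are IEEE 754 doubles and the gap shrinks.

```javascript
// Hypothetical sketch of the loops being compared (not the original test code).

// Integer-heavy work: JS numbers are float64 unless the engine can prove
// int32 ranges, which is where C's native integers tend to win big.
function intLoop(n) {
  let acc = 0;
  for (let i = 1; i <= n; i++) {
    acc = (acc + i * 7) % 1000003; // modular arithmetic keeps values integral
  }
  return acc;
}

// Float-heavy work: C doubles and JS numbers are both IEEE 754 float64,
// so the difference is much smaller here.
function floatLoop(n) {
  let acc = 0.0;
  for (let i = 1; i <= n; i++) {
    acc += Math.sqrt(i) * 0.5;
  }
  return acc;
}
```

Timing `intLoop` vs an equivalent C loop over large `n` is where the big gap shows up; `floatLoop` stays close between the two languages.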
I've stopped relying on those benchmarks for anything other than anecdotal data (if you check some of the server source code and how they finagle things to go faster for that specific benchmark you'll see why)
It's cool (and a bit surprising) to see that Nim was so much faster than Go, even on a toy benchmark. I wonder how Nim+httpbeast would compare against Go+fasthttp or Go+fiber on the same hardware, or also Rust and Haskell.
nim+httpbeast is a multi-threaded library by default. Node and Bun are effectively single-threaded. A cursory search indicates that the Go library is single-threaded by default.
So the nim comparison is a bit of apples and oranges.
Also, the usual caveats of running micro benchmarks against localhost on an auto scaling CPU core...
This kind of benchmark will always be unfair, from implementation to libraries to compilation, and in order to know why certain ones perform the way they do, one has to know the core of the language
That's probably why there are so many, and so many terrible options out there.
It's relatively simple to write a trivial web server in C. To make a fast and usable web server in C is most likely anything but trivial.
You'll need:
- a way to register URL patterns and assign handler functions to them.
- a way to set shared mutable process-wide things such as connections to external services, and make sure one thread doesn't trample on - or needlessly block - other threads' changes.
- a transparent way to make handlers work inside a single thread using asynchronous IO functions wherever possible. Fair enough - we are no longer in the trivial web server anymore.
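The first requirement is what every JS framework's routing table provides. A toy sketch of what "register URL patterns and assign handler functions" amounts to (all names here are hypothetical, not any framework's API):

```javascript
// Toy route table: register URL patterns, dispatch to handlers.
const routes = [];

function register(pattern, handler) {
  // '/users/:id' -> /^\/users\/([^/]+)$/ with the param names captured
  const names = [];
  const regex = new RegExp(
    '^' + pattern.replace(/:([^/]+)/g, (_, name) => {
      names.push(name);
      return '([^/]+)';
    }) + '$'
  );
  routes.push({ regex, names, handler });
}

function dispatch(path) {
  for (const { regex, names, handler } of routes) {
    const m = regex.exec(path);
    if (m) {
      const params = Object.fromEntries(names.map((n, i) => [n, m[i + 1]]));
      return handler(params);
    }
  }
  return '404';
}

register('/hello', () => 'hello world');
register('/users/:id', ({ id }) => `user ${id}`);
```

Easy in a scripting language; doing the same ergonomically and safely in C is a big part of why "fast and usable" is the hard combination.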
Writing 'hello world', or a fixed set of dispatch calls, is still trivial with epoll now. In the late '90s, when the web APIs (before REST) started to become a thing, writing an HTTP server in C with a few specific calls was quite the thing - there was no epoll yet.
Likely, I could write one (HTTP/1.1) in Java just using java.nio + DirectBuffer, without converting anything to String (similar to using void* for everything)... in several hours. It'd even scale with the number of cores the machine has. It wouldn't do much, but as a microbenchmark (hello world) it would be up there.
That says nothing about the usefulness of the code, or skill - it's just that the benchmark is not useful for anything outside toying around.
Please note that “hello world” or echo server benchmarks are typically unrealistic and misleading as comparisons between different performance categories.
A PHP, JS etc. runtime is always going to perform worse, the larger the codebase grows, than Go, Java, Rust etc.
A Bun/Node/Deno/Swoole/Workerman server that doesn’t actually run any substantial amount of JS/PHP (as is typical for benchmarks) doesn’t tell you anything useful about the performance category your server can optimize towards.
These runtimes are, of course, written (almost exclusively) in Zig, C++, Rust, C etc., which are languages that actually can express fast and lightweight code.
With a little bit of scripting on top of a server/runtime, this consideration doesn’t matter much. But any substantial amount of code is going to diverge very quickly if it’s not written in a compiled language.
> A PHP, JS etc. runtime is always going to perform worse, the larger the codebase grows, than Go, Java, Rust etc.
I don't see how the size of your codebase could affect performance, maybe between GC languages and non-GC languages there could be a correlation. But between PHP/JS and Go/Java the size of your application shouldn't meaningfully affect performance.
When you're doing a setup with dedicated servers and trying to avoid microservices, it's not exactly uncommon to end up with a beefy 32-core/64-thread server, and it can be had for relatively cheap. Hetzner rents them out for ~150-200 EUR/month. I'm sure AWS has something similar, albeit a bit pricier.
Bun is not yet ready. Sure, for small applications or a hello-world server it might work well, but for production, Node.js is still on top between these two.
I know this is only gonna add to the fire of the 'microbench' misdirection statement, but I noticed userver just reached 1.0. Would the author be so kind as to throw that into the mix at some point (given time... and heck, I might check it myself later today, given the code for the others is provided)?
I saw a post here [0] about a more realistic benchmark where Bun got worse performance than Node.js. Just testing the http implementation seems a bit useless.
As I understand it, the Node code here is specific to Node, so running it on Bun is slow. If they'd used the "native" http server it would be better, I think
ThePrimeagen did some benchmarks on stream recently - he found Bun to be slightly better in terms of average response time, but the biggest gain was P99 / P99.9 values under heavy load.
NodeJS seems to drop the ball on a handful of requests when it's getting slammed, i.e. upwards of 10s vs an average of 100ms. Bun had a much smaller P50/P99 spread.
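The P50/P99 spread being described is easy to compute from raw latency samples; a quick sketch of the percentile math (nearest-rank method, sample numbers made up):

```javascript
// Nearest-rank percentile over latency samples (sketch; samples in ms).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based nearest rank
  return sorted[Math.max(0, rank - 1)];
}

// Made-up example: mostly-fast responses with one very slow outlier under load.
const latencies = [100, 102, 98, 101, 99, 103, 100, 97, 10000, 100];
```

With samples like these, P50 still looks healthy while P99 captures the dropped-ball outlier, which is exactly why averages alone hide this behaviour.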