Web server 'hello world' benchmark: Go vs. Node.js vs. Nim vs. Bun (lemire.me)
56 points by msndr on Oct 7, 2023 | 66 comments


Ah microbenchmarks, we meet again. (This refers to Bun vs Node only)

Basically my understanding is:

- Bun uses JavaScriptCore and Node uses V8. They're a bit different; no clear winner.

- Node is bloated for historical reasons, or has an extensive stdlib, depending on your perspective

- Bun uses uWebSockets, a purpose-built, low-memory, event-loop-based HTTP/1.1 web server written in C++ that is extremely fast (and reliable). Credit for the speed should go to them, for HTTP at least

- You can just use uWebSockets with Node if you want, it will be roughly equivalent

- Bun, on the other hand, does not support HTTP/2 ("they're working on it"). That's a huge feature blocker, and depending on the implementation you may see major performance regressions (uWebSockets is not currently planning to implement HTTP/2, AFAIK, which means Bun will have to use an HTTP/2 implementation written by a mortal instead)

Overall, I think Bun should stop promoting microbenchmarks (but whatever, marketing is what it is) and focus on their real innovation value: a simplified, modern, single-tool server-side JS environment. This is already compelling. If they insist on benchmarking, why not build a standard CRUD app with Postgres/SQLite, put it behind nginx with SSL, and compare there instead? The difference won't be that big.


Yes. I don't know where this obsession with making JS fast comes from, considering most web applications are HTTP glue code to a database server and spend most of their time waiting on I/O.

The biggest selling point of Bun is the potential it has to empty my node_modules directory and free my mind of worries about supply chain attacks.


True, these benchmarks are mostly worthless because any non-trivial application will bottleneck at things like databases, not the http framework.

In my experience, even with a fast database like redis, one is unlikely to achieve more than like 5k req/sec per process in node.


The perf improvements in the basic libraries help reduce latency, which shouldn't be scoffed at.


Not scoffing; performance is nice to have. I really like how Bun replaces all the development tools and is much faster for package management and bundling. The fast startup is also a bonus and means it could be used for writing command-line tools.

However, as a server-side runtime it doesn't matter much, because the time spent in those libraries is minimal compared to the time spent waiting for I/O.


I work on Bun

Fastify is not fast in Bun. Bun.serve() is 3x - 4x faster.

Our node:http code currently just wraps Bun.serve(). node:http has to go through many extra string -> buffer conversions and microticks, and does a lot of unnecessary work. We have a branch that makes it about 2x faster, but we need to prioritize our 1700+ open issues over performance optimizations right now.


> We have a branch that makes it about 2x faster, but we need to prioritize our 1700+ open issues over performance optimizations right now.

Recently I even tried to port one of my Next.js projects to it, but to my surprise it does not work on Linux ARM64 due to a broken syscall.

To me, the biggest selling point of Bun is not the runtime performance but the fact that it can replace all the tools for installing packages and bundling apps. (No need for a node_modules folder with hundreds of packages for a simple hello-world web app.)

Runtime performance is not that important for JS servers, considering that most applications are just HTTP glue code to a database server. (Maybe it is just me, but I would never use JS to write performance-critical code.)

So focusing on making things stable would be a really good move right now.

In any case I'm super enthusiastic about Bun and it has a lot of potential. I hope you can make it work!


> We have a branch that makes it about 2x faster, but we need to prioritize our 1700+ open issues over performance optimizations right now.

Maybe “has incredibly high performance” should not be the first selling point in your marketing pitch then.


They just released 1.0 with massive speed increases over node in many scenarios and still have to fix compatibility issues (like the lack of dgram). Let them prioritise coverage.


I mean, they can do what they want, but if they market themselves as much faster than Node.js and, when people try it, it's not really faster (and it's not the first time I've seen such a benchmark), then it's going to leave a bad impression. Why bother with all the instability that comes from the project's young age[1] if the project fails to deliver on its biggest selling point?

[1] and I'm not blaming them for that, their goal of being node compliant is a big undertaking


> then it's gonna leave a bad impression

My company's frontend team (comprised mostly of people who believe anything they read) jumped on the Bun bandwagon.

After a few days of working with it, it turned out that Bun isn't really faster in real world scenarios.

This brings up the question - why market it as so much faster then?

Many frameworks and tools market themselves primarily as faster than $otherThing, but speed won't fix suboptimal code, wrong choices of data structures, or silly things like "let's select * and sort in the browser". When you remove those mistakes that bad devs make, it turns out Node/Deno are more than fast enough. What we need is less bloat, more engineering and no lazy development by "packagemanager install lib-that-could-do-my-work"


>This brings up the question - why market it as so much faster then?

Speed is pretty much the main selling point; it loses to Node or Deno on most other metrics. I think speed is a dumb main selling point too: Node.js/V8 are already quite fast, and if I needed more speed I'd try to optimise first, and only then rewrite in Rust. I want stability, soundness, features, a good toolchain, etc.


> After a few days of working with it, it turned out that Bun isn't really faster in real world scenarios.

Which scenarios? Are you sure it was using Bun and not Node? Is it using bun’s tools for things or the same tools as it did in node?


What parts of Bun were you using? All the features I've used (that work*) are faster; the hard part is that compatibility is still lacking, so turning on the Bun runtime over just using Node is tricky.


> What we need is less bloat, more engineering and no lazy development by "packagemanager install lib-that-could-do-my-work"

Millions of voices suddenly cried out in terror, and were suddenly silenced.


The thing with these minimalist benchmarks is that they aren't comparing a real workload, or a real server/client pair that can leverage persistent connections (the SYN/ACK rounds in setting up a TCP connection have a lot of impact on this kind of thing), internal async/threading mechanisms in the server (different language runtimes and different threading models make a lot of difference as well), and $DIVINITY knows what other kinds of request parsing variations.

It's a fun exercise (I do it myself on occasion), but requires understanding what is actually happening (I see that Jarred has already remarked that Bun.serve() would be the right thing to do here, for instance.)


> Uber moved from Node.js to Go for better performance.

Technically true but very misleading. The linked article states that Uber switched to Go for one microservice that was doing heavy, specific mathematical processing that Go was better suited for.


A while back I personally tested performance of C (-O3 compiled to WASM and x64) vs JS for basic math and string operations. Overall results:

- almost no difference for string manipulation

- little difference for floating-point math

- a big perf difference for integer math

WASM vs. JS also showed little difference in performance (except for integer math); in some cases WASM was slightly slower.

There are several reasons to use a lower-level language, but raw speed is hardly ever one of note. It usually comes down to how good the libs you are using are. As Bun is showing, even the stdlib of your runtime can be lackluster.


Just curious, how did you run integer math in JS for the comparison, since JS doesn't support integers natively?


I wrote a few test cases in both JS and C doing some computations inside a loop. The code between the versions was equivalent and not micro-optimized. In JS I was using normal variables, so they were stored in memory as IEEE 754 doubles (unless the engine was doing some optimisation I was not aware of). So no manual binary math with buffers in JS.

I was specifically looking at how big an impact it is to use normal JS numbers for integer-heavy math. It turns out it is quite big, but for float64 math, not so much.
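A toy version of that kind of comparison (my own sketch, not the parent's actual test code): plain JS numbers are float64, while coercing with `| 0` keeps values in int32 range, which engines can often keep in integer registers:

```javascript
// Toy integer-vs-float loop comparison (illustrative sketch only).
// Plain JS numbers are IEEE 754 float64; `| 0` coerces to int32.
function sumFloat(n) {
  let s = 0;
  for (let i = 0; i < n; i++) s += i * 1.5;
  return s;
}

function sumInt32(n) {
  let s = 0;
  for (let i = 0; i < n; i++) s = (s + ((i * 3) | 0)) | 0;
  return s;
}

// Crude timing -- a real comparison needs warmup and many repetitions.
const t0 = process.hrtime.bigint();
sumInt32(1e7);
const t1 = process.hrtime.bigint();
sumFloat(1e7);
const t2 = process.hrtime.bigint();
console.log(`int32: ${t1 - t0} ns, float64: ${t2 - t1} ns`);
```

The single-shot timing here is only indicative; JIT warmup and loop-invariant optimisations can dominate short runs, which is exactly the kind of caveat the parent is pointing at.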



I've stopped relying on those benchmarks for anything other than anecdotal data (if you check some of the server source code and how they finagle things to go faster for that specific benchmark, you'll see why).


The idea is great but how gamed it is really ruins it.


It's cool (and a bit surprising) to see that Nim was so much faster than Go, even on a toy benchmark. I wonder how Nim+httpbeast would compare against Go+fasthttp or Go+fiber on the same hardware, or also Rust and Haskell.


nim+httpbeast is a multi-threaded library by default. Node and Bun are effectively single-threaded. A cursory search indicates that the Go library is single-threaded by default.

So the nim comparison is a bit of apples and oranges.

Also, the usual caveats of running micro benchmarks against localhost on an auto scaling CPU core...


In Go's net/http library, each request is served in a new goroutine. So it uses all CPU resources by default.

I'm not sure why its number is quite low though.


This kind of benchmark will always be unfair, from implementation to libraries to compilation, and to understand why each one performs the way it does you have to know the internals of the language.


"I tried building the equivalent in C++, but it was so painful that I eventually gave up."

Unsurprisingly there's a good number of C and C++ webserver repos. They're not all terrible - although some certainly are.

Would be interesting to see how the more usable ones compare.


ACME.com to the rescue:

https://acme.com/software/mini_httpd (slow)

https://acme.com/software/thttpd (pretty decent)


It takes virtually no time with something like cpp-httplib, but I guess one has to discover it first.


it should be quite trivial to write a basic webserver in C.


That's probably why there are so many, and so many terrible options out there.

It's relatively simple to write a trivial web server in C. To make a fast and usable web server in C is most likely anything but trivial.

You'll need:

- a way to register URL patterns and assign handler functions to them.

- a way to set up shared mutable process-wide things, such as connections to external services, and make sure one thread doesn't trample on - or needlessly block - other threads' changes.

- a transparent way to make handlers work inside a single thread using asynchronous IO functions wherever possible. Fair enough - at that point we are no longer in trivial-web-server territory.
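The first of those requirements, pattern registration and dispatch, can be sketched in a few lines (shown in JS for brevity; all names here are made up, and a C version needs the same table plus manual memory management):

```javascript
// Hypothetical sketch of a URL-pattern routing table.
const routes = [];

// Register a pattern like '/hello/:name' with a handler function.
function route(pattern, handler) {
  const re = new RegExp('^' + pattern.replace(/:\w+/g, '([^/]+)') + '$');
  routes.push({ re, handler });
}

// Find the first matching pattern and call its handler with the captures.
function dispatch(path) {
  for (const { re, handler } of routes) {
    const m = re.exec(path);
    if (m) return handler(...m.slice(1));
  }
  return '404 Not Found';
}

route('/hello/:name', (name) => `Hello, ${name}!`);
```

Even this tiny table glosses over the hard parts in C: who owns the compiled patterns, how handlers are looked up safely from multiple threads, and so on.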


... but for such a microbenchmark, you need none of this.


But is a microbenchmark meaningful if not against a real web server?


Then please show us how it is done.


Writing 'hello world', or a fixed set of dispatch calls, is still trivial with epoll now. In the late '90s, when the web APIs (before REST) started to become a thing, writing an HTTP server in C for a few specific calls was quite the undertaking - there was no epoll yet.

Likely I could write one (HTTP/1.1) in Java just using java.nio + DirectBuffer, never converting anything to String (similar to using void* for everything), in several hours. It'd even scale with the number of cores the machine has. It wouldn't do much, but as a micro-benchmark (hello world) it would be up there.

That says nothing about the usefulness of the code, or skill - it's just that the benchmark is not useful for anything beyond toying around.


A basic web server, for example https://github.com/lionkor/http.

Anything with a real workload is kind of irrelevant here, as this benchmark only focuses on hello world.


And by basic you mean about 1000 lines of code that don't even support TLS.


Yep, basic: by definition not advanced, and it even says not to use it in production. If you need TLS, put it behind traefik/nginx/whatever.


Is it not unfair to use Fastify in Node rather than the http module from the stdlib, like in Go?


I agree; they should also use Bun.serve instead of the same Fastify code.


And also "node:http" from node.


Please note that “hello world” or echo server benchmarks are typically unrealistic and misleading as comparisons between different performance categories.

A PHP, JS, etc. runtime is always going to perform worse than Go, Java, Rust, etc., the larger the codebase grows.

A Bun/Node/Deno/Swoole/Workerman server that doesn't actually run any substantial amount of JS/PHP (as is typical for benchmarks) doesn't tell you anything useful about the performance category your server can optimize towards.

These runtimes are, obviously, written (almost exclusively) in Zig, C++, Rust, C, etc., which are languages that actually can express fast and lightweight code.

With a little bit of scripting on top of a server/runtime, this consideration doesn’t matter much. But any substantial amount of code is going to diverge very quickly if it’s not written in a compiled language.


> A PHP, JS, etc. runtime is always going to perform worse than Go, Java, Rust, etc., the larger the codebase grows.

I don't see how the size of your codebase could affect performance. Maybe between GC languages and non-GC languages there could be a correlation, but between PHP/JS and Go/Java the size of your application shouldn't meaningfully affect performance.


Because the percentage of request time spent in the slow part grows.
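A toy model of the argument (all numbers hypothetical): a request's time is fixed I/O wait plus application-code work, and only the second term scales with codebase size and runtime speed:

```javascript
// Toy model (numbers hypothetical): total request time is I/O wait plus
// application-code work; only the second term depends on the runtime.
function requestMs(ioWaitMs, appOps, nsPerOp) {
  return ioWaitMs + (appOps * nsPerOp) / 1e6;
}

// Small app, 1e5 ops: interpreted (50 ns/op) vs compiled (5 ns/op)
// is 55 ms vs 50.5 ms -- barely visible behind the 50 ms I/O wait.
// Large app, 1e7 ops: 550 ms vs 100 ms -- the runtime now dominates.
```

This is Amdahl's-law reasoning: as the scripted fraction of each request grows, the runtime's per-operation cost stops hiding behind the I/O wait.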


Benchmarks like this are misleading. Unless you're already web-scale, you won't need to manage this level of concurrency at high throughput.

Instead, focus on tech decisions that enable you to ship what your paying customers want faster, be it PHP, JavaScript, Node, Bun, Go…


Customers may want to collect high-throughput data, e.g. telemetry.


I usually enjoy reading posts about benchmarks, but this one is nonsense: it's literally apples vs oranges vs aircrafts.

The only part that makes some sense is Bun vs. Node.js, even if some important metrics are missing (response time, error rate).


> ran it on a powerful IceLake-based server with 64 cores.

I assume this is not consumer hardware. Is this something you'd normally find in a data center, e.g. would you rent it from AWS?


When you're doing a setup with dedicated servers and trying to avoid microservices, it's not exactly uncommon to end up with a beefy 32 core/64 thread server and it can be had for relatively cheap. Hetzner rents them out for ~150/200 EUR/month. I'm sure AWS has something similar albeit a bit pricier.


Related: https://news.ycombinator.com/item?id=37800753 (Moving Marginalia to a New Server)

> The machine has 512 GB of RAM and 10x8 plus 2x4 TB SSDs, and the aforementioned dual Epyc 7543s for a whopping 128 logical cores.


Bun is not ready yet. Sure, for small applications or a hello-world server it might work well, but for production, Node.js is still on top of the two.


I know this is only gonna add fuel to the 'microbench' misdirection fire, but I noticed userver just reached 1.0. Would the author be so kind as to throw that into the mix at some point? (Given time... and heck, I might check myself later today, given that the code for the others is provided.)


I’m skeptical about the numbers reported for Go; they seem too low. Maybe it’s caused by using Fprintf?


I saw a post here [0] about a more realistic benchmark where Bun got worse performance than Node.js. Just testing the http implementation seems a bit useless.

[0] https://news.ycombinator.com/item?id=37735029


The author needs to read all the comments in here and try again; there are a lot of caveats and gotchas.

I'd like to see them run against Microsoft's Kestrel, as Microsoft has done a ton of work on it to make it quick as hell.


Doesn't bun advertise itself as 10 times faster than node or something?

Edit: they “only” claim to be 5 times faster [1].

[1]: https://bun.sh/


They do, but I think the OP didn’t use Bun's native serve methods and instead used the same code for both Node and Bun, hence the subpar performance.


> and instead used the same code for both Node and Bun

Wait, wasn't that the point of bun in the first place?!


As I understand it, the Node code here is specific to Node, so running it on Bun is slow. If they'd used the "native" Bun HTTP server, it would be better, I think.


I ended up choosing node so that I could run the same function on the client and server.

I'd be most interested to compare Node, Bun and Deno.


In my testing I found no difference in response time for a web app running in Node.js vs Bun.


ThePrimeagen did some benchmarks on stream recently - he found Bun to be slightly better in terms of average response time, but the biggest gain was P99 / P99.9 values under heavy load.

NodeJS seems to drop the ball on a handful of requests when it's getting slammed, i.e. upwards of 10 s against an average of 100 ms. Bun had a much smaller P50/P99 spread.
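For reference, a tail-latency spread like that is computed from the recorded per-request latencies. A sketch with made-up numbers, using the nearest-rank method:

```javascript
// Nearest-rank percentile over recorded latencies (illustrative sketch).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// Made-up latencies (ms): nine requests near 100 ms plus one 10 s outlier.
const latencies = [90, 95, 97, 98, 99, 100, 101, 102, 103, 10000];

console.log(percentile(latencies, 50)); // median stays near 100 ms
console.log(percentile(latencies, 99)); // the tail exposes the outlier
```

This is why averages hide dropped-ball requests: one 10 s response barely moves the mean over thousands of requests, but it shows up immediately in P99/P99.9.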


Obviously Crystal is missing here. Another benchmark: https://github.com/costajob/app-servers


You could have posted this more recent benchmark instead; it has nearly everything on one page.

https://web-frameworks-benchmark.netlify.app/result

Community support is much more important than speed.


Deno is missing!



