
That makes me all the more curious: why is it faster? I cannot imagine Node doing something obviously wrong that would explain the difference. Evented I/O is pretty well understood, I/O loops aren't that hard and it's all just accept()/read()/write()/close() at the end of the day. Where does the 5x difference come from?
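For reference, the whole accept()/read()/write()/close() story really does fit in a few dozen lines. Below is a minimal sketch of a single-threaded epoll loop serving a fixed "Hello World" reply, one request per connection. It is not what G-WAN or Node actually do internally; the port, buffer sizes and the canned reply are placeholders for illustration:

  /* minimal_loop.c - minimal single-threaded event loop (Linux, epoll) */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/epoll.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static const char reply[] =
      "HTTP/1.1 200 OK\r\n"
      "Content-Type: text/plain\r\n"
      "Content-Length: 11\r\n"
      "Connection: close\r\n\r\n"
      "Hello World";

  int main(void)
  {
     int listener = socket(AF_INET, SOCK_STREAM, 0);
     int one = 1;
     setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

     struct sockaddr_in addr = {0};
     addr.sin_family      = AF_INET;
     addr.sin_addr.s_addr = htonl(INADDR_ANY);
     addr.sin_port        = htons(8080);          // placeholder port
     if (bind(listener, (struct sockaddr *)&addr, sizeof addr)
      || listen(listener, 511))
     {
        perror("bind/listen");
        return 1;
     }

     int ep = epoll_create1(0);
     struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
     epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

     for (;;)   // the event loop: accept(), read(), write(), close()
     {
        struct epoll_event events[64];
        int n = epoll_wait(ep, events, 64, -1);
        for (int i = 0; i < n; i++)
        {
           int fd = events[i].data.fd;
           if (fd == listener)                    // new connection
           {
              int client = accept(listener, NULL, NULL);
              if (client < 0) continue;
              struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
              epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
           }
           else                                   // request data is ready
           {
              char buf[4096];
              ssize_t r = read(fd, buf, sizeof buf);
              if (r > 0)
                 write(fd, reply, sizeof reply - 1);
              close(fd);   // close() also removes the fd from epoll
           }
        }
     }
  }

So the mechanics really are simple, which is exactly why a 5x gap is surprising: the difference has to come from somewhere above this layer (the JS runtime, allocations, parsing, or caching).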


Because it caches (http://gwan.com/faq#cache) responses under high concurrency. Try benchmarking this code:

  #include "gwan.h" // G-WAN exported functions
  int count=0;
  int main(int argc, char *argv[])
  {
   if (!count) printf("New Counter!\n");
   xbuf_xcat(get_reply(argv), "Hello World %d",count++);
   return 200; // return an HTTP code (200:'OK')
  }


Wow, I didn't know that. I certainly did not expect to find that information in the FAQ. In fact this is the first time I've heard of a web server doing that.

So was the benchmark the author posted run with microcaching turned on? While it's a legitimate way to reduce load in real-world situations, it feels a little like cheating when used in a benchmark.
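For what it's worth, microcaching boils down to remembering the last generated reply for a URI for a very short time and serving the stored copy instead of re-running the handler, so under a burst of identical requests only one request per time window does real work. Here is a rough sketch of the idea, not G-WAN's actual code; the 200 ms TTL, the single cache slot and all the names are made up for illustration:

  /* microcache.c - sketch of short-TTL response caching */
  #include <stdio.h>
  #include <string.h>
  #include <time.h>

  #define CACHE_TTL_MS 200          /* how long a generated reply stays valid */

  struct cache_entry {
     char      uri[256];
     char      body[1024];
     long long stamp_ms;            /* when the body was generated */
     int       valid;
  };

  static struct cache_entry cache;  /* single slot, for brevity */

  static long long now_ms(void)
  {
     struct timespec ts;
     clock_gettime(CLOCK_MONOTONIC, &ts);
     return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000LL;
  }

  /* build_reply() stands in for the expensive per-request work */
  static void build_reply(const char *uri, char *out, size_t len)
  {
     static int count = 0;
     snprintf(out, len, "Hello World %d (for %s)", count++, uri);
  }

  /* return the reply for 'uri', regenerating it only when the cached
     copy is missing, for another URI, or older than CACHE_TTL_MS */
  const char *get_cached_reply(const char *uri)
  {
     long long now = now_ms();
     if (cache.valid && strcmp(cache.uri, uri) == 0
      && now - cache.stamp_ms < CACHE_TTL_MS)
        return cache.body;          /* cache hit: handler not run */

     build_reply(uri, cache.body, sizeof cache.body);
     snprintf(cache.uri, sizeof cache.uri, "%s", uri);
     cache.stamp_ms = now;
     cache.valid    = 1;
     return cache.body;             /* cache miss: reply regenerated */
  }

  int main(void)
  {
     /* only the first call per TTL window does real work */
     for (int i = 0; i < 3; i++)
        puts(get_cached_reply("/hello"));
     return 0;
  }

In a Hello-World benchmark where every request is identical, a scheme like this collapses almost all of the per-request work, which is why it flatters the numbers so much.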


It is plausible though. node.js has its fair share of "slowness" compared to other async frameworks on the basic "Hello World" (shameless plug): http://www.saltwaterc.eu/async-frameworks-hello-world-showdo...

vert.x used the JavaScript bindings via Mozilla Rhino. I'd guess a pure Java test could be a little faster than that.



