I can't even begin to explain how wrong this benchmark and the author's conclusions are. I'd rather see a benchmark of Apache vs. a Fibonacci function than read this nonsense.
In fact, since the Fibonacci sequence grows as Theta(phi^n), it takes Theta(n) digits to hold the nth Fibonacci number, so any algorithm that writes out every digit of the answer must run in time Omega(n).
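To make the digit count concrete, here's a quick back-of-the-envelope from Binet's formula (my own arithmetic, not something from the article):

```latex
% Binet's formula; |\psi| < 1, so the \varphi^n term dominates:
F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}},
\qquad \varphi = \tfrac{1+\sqrt{5}}{2} \approx 1.618,
\quad \psi = \tfrac{1-\sqrt{5}}{2}.
% Hence the decimal digit count, for n \ge 2:
\operatorname{digits}(F_n)
  = \big\lfloor n \log_{10}\varphi - \log_{10}\sqrt{5} \big\rfloor + 1
  \approx 0.209\, n .
```

So F(1000) already has around 209 decimal digits, and just writing it down is linear in n.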
Not necessarily, if you can parallelize computing the digits. Your average Pentium processor can operate on a floating-point number whose mantissa holds 16 decimal (52 binary) digits in one clock cycle, not in 16 (or 52) cycles. It does so by using enough silicon to compute all the digits in parallel.
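To make that concrete: in the regime where the answer fits in a machine word, so that one 64-bit operation counts as one unit of work, the standard fast-doubling identities compute F(n) in O(log n) operations, which is exactly the sense in which the Omega(n) bound stops applying. A minimal sketch (exact up to F(93), the largest Fibonacci number that fits in a uint64_t):

```c
#include <inttypes.h>
#include <stdio.h>

/* Fast-doubling Fibonacci, using the identities
     F(2k)   = F(k) * (2*F(k+1) - F(k))
     F(2k+1) = F(k)^2 + F(k+1)^2
   O(log n) word operations; exact in uint64_t up to F(93). */
static void fib_pair(uint64_t n, uint64_t *f, uint64_t *g)
{
    if (n == 0) { *f = 0; *g = 1; return; }   /* F(0), F(1) */
    uint64_t a, b;
    fib_pair(n >> 1, &a, &b);                 /* a = F(k), b = F(k+1), k = n/2 */
    uint64_t c = a * (2 * b - a);             /* F(2k)   */
    uint64_t d = a * a + b * b;               /* F(2k+1) */
    if (n & 1) { *f = d; *g = c + d; }        /* n = 2k+1 */
    else       { *f = c; *g = d; }            /* n = 2k   */
}

int main(void)
{
    uint64_t f, g;
    fib_pair(93, &f, &g);                     /* largest F(n) that fits in 64 bits */
    printf("F(93) = %" PRIu64 "\n", f);
    return 0;
}
```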
This is a banal point. Yes, separate compilation hurts performance. But that's what link-time optimization is for, and both GCC and LLVM support it.
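For anyone who hasn't tried it, here's a minimal sketch of what LTO buys you; the file names and the trivial add_one function are made up for illustration:

```c
/* util.c (hypothetical file) */
int add_one(int x) { return x + 1; }

/* main.c (hypothetical file) */
int add_one(int x);   /* defined in util.c, opaque to this translation unit */

int main(void)
{
    int s = 0;
    for (int i = 0; i < 1000000; i++)
        s = add_one(s);   /* cross-file call: inlinable only with LTO */
    return s == 1000000 ? 0 : 1;
}

/* Compiled separately, neither compiler can inline add_one into the loop.
   With link-time optimization, either one can:
       gcc   -O2 -flto util.c main.c -o demo
       clang -O2 -flto util.c main.c -o demo */
```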
You might as well say "gcc with -O0 is slower than V8, therefore V8 is faster than gcc".
Obscure? It's interesting, but I can't see a case where I'd recompile C code every time I wanted to run something. On the other hand, we can't just AOT-compile with V8 (AFAIK), so should we call V8 always slower?
Fair enough, though I'm rarely obsessed with the absolute speed of my code while developing. Tools like Clang have brought the edit-compile-run experience a long way as well.
In my mind, the value of this benchmark isn't to prove that V8 is absolutely faster than C, but that there is intrinsic value in doing runtime compilation instead of, or on top of, compile-time optimization.
It's a demonstration of why runtime-optimizing compilers and runtimes may one day become faster than compilers that optimize only at compile time.
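As a toy illustration of that point (the names and the stride example are mine, not the article's): a value known only at run time forces an AOT compiler to keep the general code path, while a runtime compiler that has observed the value can emit a specialized body. Mimicking that by hand in C:

```c
#include <stddef.h>

/* General version: `stride` is a runtime value (assumed >= 1), so an AOT
   compiler must emit code that works for any stride. */
long sum_general(const long *a, size_t n, size_t stride)
{
    long s = 0;
    for (size_t i = 0; i < n; i += stride)
        s += a[i];
    return s;
}

/* The body a runtime compiler could emit after observing stride == 1:
   a contiguous loop the backend can vectorize aggressively. */
long sum_stride1(const long *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Hand-written stand-in for what a JIT does automatically: dispatch to a
   specialized body once the runtime value is known. */
long sum(const long *a, size_t n, size_t stride)
{
    return (stride == 1) ? sum_stride1(a, n)
                         : sum_general(a, n, stride);
}
```

The JIT gets this specialization for free from its value and type feedback, for every hot call site, without the programmer writing the variants by hand.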