Hacker News

If they are using http caching and ESI properly, the average request time would be significantly lower, and the requests per second would probably be > 500/s on that box.
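To make the caching claim concrete, here is a minimal sketch (not GitHub's actual stack) of the application side of HTTP caching: a Rack-style app that marks its responses publicly cacheable, so a reverse proxy or ESI-capable edge cache such as Varnish can answer repeat requests without touching Rails at all.

```ruby
# Hypothetical Rack-style app: the Cache-Control header tells any shared
# cache in front of it that this response may be reused for 60 seconds.
app = lambda do |env|
  headers = {
    "Content-Type"  => "text/html",
    # "public" allows shared (proxy/CDN) caches, not just the browser.
    "Cache-Control" => "public, max-age=60",
  }
  [200, headers, ["<html>cached page body</html>"]]
end

status, headers, body = app.call({})
```

With headers like this, only the first request per minute per page reaches the app server; everything else is served from the cache, which is what would push average request time down and requests per second up.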

500 req/s is pretty abysmal for an 8-core, 16 GB server. I realize that some of that has to do with Ruby performance, but yikes -- that is frighteningly bad scaling, regardless of the cause.
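For scale, the arithmetic behind that number can be sketched as follows (assuming requests spread evenly across cores and ignoring queueing):

```ruby
# Back-of-envelope: what 500 req/s on 8 cores implies per request.
req_per_sec = 500.0
cores       = 8
per_core    = req_per_sec / cores   # requests per second per core
ms_per_req  = 1000.0 / per_core     # milliseconds of work per request per core
```

That works out to roughly 62.5 req/s per core, or about 16 ms per request -- whether that is "abysmal" depends entirely on how much work each request actually does, which is the crux of the disagreement below.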



A number like 500 req/s is irrelevant without an understanding of what the site has to do. For a simple site that does simple database lookups, I'd expect more than that. For a site like GitHub that has to do database lookups and pull large amounts of data from Git repositories, it's an entirely different story. Without knowing the split between cache hits and misses, the number is even more irrelevant. You're arguing in abstracts, whereas I am constrained to actually working with real life. I'd love to see some examples of the type of sites you're running and the solutions you employ wherein 500 req/s of dynamic requests on an 8 core machine are considered "frighteningly bad scaling."


How can you say it's bad without knowing what each request is doing?

Github isn't a microbenchmark.

A concrete example:

I can say that one of my Rails sites handles 850 dynamic requests per second running on a single small 1-CPU server. That's because all that particular request does is look up 4 KB of data from memcached and return it. (i.e. http://www.tanga.com/feeds/current_deal.xml)
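The pattern behind an endpoint like that can be sketched as below (hypothetical names; a real deployment would use a memcached client such as the dalli gem, but a plain Hash stands in here so the example is self-contained):

```ruby
# Stand-in for memcached: in production this would be a memcached client.
CACHE = {}

def current_deal_xml
  # Serve the prebuilt XML straight from the cache; no database work,
  # no template rendering, just a key lookup per request.
  CACHE.fetch("current_deal") do
    # On a cache miss, build the payload once and store it (stubbed here).
    CACHE["current_deal"] = "<deal><title>Example</title></deal>"
  end
end

current_deal_xml
```

Because the hot path is a single small cache read, throughput is bounded by the server's ability to shuffle bytes, not by application logic -- which is why this one action sustains 850 req/s while ordinary pages on the same stack do far less.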

However, as a general rule, I know that each small server can handle about 30-50 pages per second, because each page takes a lot of data crunching to generate (and because I haven't bothered to make it as efficient as possible; it's fast enough as is).

If all I were doing was returning a small bit of text that didn't require many lookups or much calculation, then sure, an 8-core CPU with Rails could probably do 4,000-5,000 requests per second easily.



