
Generating and caching can be intertwined. For example, HN seems to generate an inline script at the top of every page. The script doesn't change nearly as often as the posts do, yet it's sent redundantly with every page. If that script lived at a separate URL, set to never expire, every visitor would load it once and never again. When you need to change the script, you change the linked URL. That not only spares the server an expensive calculation, it also spares the client a network request, which can be very, very expensive.
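One common way to "change the linked URL" only when the script actually changes is to derive the URL from a hash of the script's contents. A minimal sketch in Python (the helper name and URL layout are made up for illustration):

```python
import hashlib

def versioned_url(script_source: str, base: str = "/static/hn") -> str:
    """Return a URL that changes only when the script's content changes.

    The response served at this URL can then carry a far-future
    Cache-Control header (e.g. "public, max-age=31536000, immutable"),
    because any edit to the script produces a new URL.
    """
    digest = hashlib.sha256(script_source.encode("utf-8")).hexdigest()[:12]
    return f"{base}.{digest}.js"

# The page's <script src="..."> would reference the versioned URL:
v1 = versioned_url("function vote(id) { /* v1 */ }")
v2 = versioned_url("function vote(id) { /* v2 */ }")
assert v1 != v2                                          # edited script -> new URL
assert v1 == versioned_url("function vote(id) { /* v1 */ }")  # unchanged -> same URL
```

Repeat visitors then hit their local cache for the script on every page, and a deploy invalidates it automatically because the URL itself changes.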

To me, it sounds like the caching you're talking about is on the server side. I think you mean something like memoization, so you can avoid repeating an expensive query or calculation. That is worth doing, but it's still possible to organize the output of those caches inefficiently and incur unnecessary network overhead. The "semantically correct" part of the markup being advocated here isn't that interesting, if you ask me. What is interesting is the claim that you can generate less markup per page and get the same display.

The balance between repeat-visitor cache behavior and the number of HTTP requests and latency for a first-time visitor can be hard to judge. Without a lot of time to measure the various alternatives and mitigation tactics, it's best to generate as little markup as possible. That's where so-called "semantic markup" comes in: usually it's just less markup, and does better.
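As a rough illustration of "less markup, same display": here's a list rendered with table rows versus a semantic list (both fragments are invented, not HN's actual markup):

```python
# Table-based layout: structure is carried by wrapper elements and classes.
table_markup = (
    "<table><tr><td class='item'>first</td></tr>"
    "<tr><td class='item'>second</td></tr></table>"
)

# Semantic markup: the element itself carries the meaning, styled via CSS.
semantic_markup = "<ul><li>first</li><li>second</li></ul>"

# The semantic version is a fraction of the size, and the gap widens
# with every additional item, on every page, for every visitor.
assert len(semantic_markup) < len(table_markup)
```

The byte savings compound across a whole thread of comments, which is where the network-overhead argument actually bites.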

Another thing to take into account is the layout behavior your markup triggers in various browsers. HN is pretty simple and should render instantly, but it doesn't in anything I've tested (Firefox, Chrome, Safari). Each of them redraws the scrollbar one or more times. That could be due to the tables.


