I can't even watch anything anymore. It tells me to sign in, but I don't want to sign in. The great YouTube, ubiquitous video host of the web, appears to be dying.
I never learned to touch type and had a successful career, now retired. Not driving would have been a big problem - in fact I got an IT job early on where they gave me 6 months to get a licence or I was out!
Oh, and yes I did start on punched cards, but was soon keying in my own code online 50 years ago.
> This is like the early days when people didn’t trust buying things over the internet
If you like bad analogies, why not do car analogies? At least this one's accurate:
I wouldn't trust Sam Altman any more than a used car salesman. The only difference is, Sam Altman's persuaded me to pay him to sell me as the product.
You weren't available all the time then. It was perfectly natural to assume someone wasn't at their computer and that was perfectly ok. It wasn't necessary to send status updates confirming you would in fact be on time to meet like you said you'd be; things were a little more planned, and you'd call or SMS if really needed, but mostly you didn't want to intrude.
> You weren't available all the time then. It was perfectly natural to assume someone wasn't at their computer and that was perfectly ok.
Someone made the observation back then that 'the less you talk about "being online", the more important it will be'. Nowadays, because of IP-over-radio (smartphones), we're all basically online all the time (which has been true to a certain extent for a while with (dumb) phones and SMS/texting).
But it goes further now with many more ways of interaction.
1000 requests / min @ 10ms limit / request. That's 16 requests per second. Any reasonable CMS, wiki or blogging tool should be able to do one request in 62.5ms. Add on caching for non-logged-in users and nginx serving anything static, and that's less than the power a $5 VPS provides.
At these rates, the case for Cloudflare is a lot weaker than it was.
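To make the arithmetic explicit, here's a back-of-the-envelope sketch of the numbers above (just the request rate and the resulting wall-time budget on a single core, nothing specific to any plan's billing):

```typescript
// Back-of-the-envelope numbers from the comment above.
const requestsPerMinute = 1000;
const requestsPerSecond = requestsPerMinute / 60;                // ~16.7, rounded down to 16 above
const budgetMsPerRequest = 1000 / Math.floor(requestsPerSecond); // 1000 / 16 = 62.5 ms per request on one core
console.log(`${Math.floor(requestsPerSecond)} req/s, ${budgetMsPerRequest} ms budget per request`);
```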
Obviously a $5 VPS would give you more raw compute than the Cloudflare Workers free tier.
However:
1. It would run in a single location in the world, whereas Workers (even on the free tier) will run in Cloudflare locations all around the world, close to the end user, reducing latency.
2. If you're going to compare against a $5 VPS, the $5 Workers paid tier is probably a better comparison? It can instantly scale to millions of requests per second.
(Disclosure: I'm the tech lead for Cloudflare Workers.)
10 million requests per month for $5 is about 4 requests per second, correct? Which is about 3 orders of magnitude less than nginx serving static html.
As a comparison, that's akin to a person walking 4 km/day vs. flying 4,000 km/day.
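For completeness, the sustained-rate arithmetic behind the "about 4 requests per second" figure (a rough sketch assuming a 30-day month and evenly spread traffic):

```typescript
// 10 million requests spread evenly over a 30-day month.
const requestsPerMonth = 10_000_000;
const secondsPerMonth = 30 * 24 * 3600;                           // 2,592,000 s
const sustainedReqPerSecond = requestsPerMonth / secondsPerMonth; // ~3.9 req/s if spread evenly
console.log(`${sustainedReqPerSecond.toFixed(1)} req/s sustained`);
```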
It's still not a great comparison as the $5 VPS is already paid for. But to take it as you suggest, I agree it's up to the website owner whether they prefer to have 4 km/day workers who can sometimes clone themselves in different parts of the world but only for a limited time until they cost more, or a 4,000 km/day flying suit.
That is reasonably fast. We wrote entire games in PHP where we aimed for wall time under 100ms. That is a challenge, but often doable. Some routes managed to respond in under 50ms.
You probably haven't heard of them. We were a German studio, and most of our games only reached a few tens of thousands of players. The biggest hit was Xhodon — it had a bit of a following among World of Warcraft fans. It was a fun time.
Blog posts don’t change much. Even if your rendering code is horrendously slow (though, why?), you can just cache the resulting html and serve it up with each request. Or slap nginx in front of your web server locally and let that deal with the load. ’Course you’ll need your http headers set correctly, but you needed that anyway for cloudflare.
Your server has to be pretty badly configured for a personal blog to run out of CPU handling requests.
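As a concrete illustration of the "cache the resulting html and set your headers" point, here's a minimal Node.js/TypeScript sketch. The ./public directory of pre-rendered posts and the one-hour TTL are assumptions for illustration; the only real point is the Cache-Control header, which is what lets nginx or Cloudflare in front actually hold on to the page.

```typescript
// Minimal sketch: serve pre-rendered blog HTML with headers an upstream cache can use.
// No input sanitization here, so don't use this as-is.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

createServer(async (req, res) => {
  try {
    const path = req.url === "/" ? "/index" : (req.url ?? "/index");
    const page = await readFile(`./public${path}.html`);
    res.writeHead(200, {
      "Content-Type": "text/html; charset=utf-8",
      // Lets nginx or Cloudflare cache the page for an hour for anonymous visitors.
      "Cache-Control": "public, max-age=3600",
    });
    res.end(page);
  } catch {
    res.writeHead(404).end("not found");
  }
}).listen(8080);
```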
Everything you wrote is true, but this is not how it works in practice. Usually, the person running the blog uses WordPress, and doesn't know about caching. They add a few plugins that significantly increase response time and make the response dynamic (for example, CSRF nonces). Add to that some "static" AJAX requests (which usually are POSTs and not cacheable), and it all adds up.
I wouldn't bet on an average dev being able to set up and configure nginx + Cloudflare correctly.
> ’Course you’ll need your http headers set correctly, but you needed that anyway for cloudflare
Not if you don't use CF to cache "dynamic" content.
mklepaczewski was probably talking about end-to-end time, i.e. the number you see in the network tab for request duration, whereas the pricing only cares about the time the application is actually doing something.
That basically means the billed time starts after the connection is established by the proxy (Cloudflare) and ends before the response is delivered to the client.
Doing the whole round trip within 65ms is actually pretty challenging, even if you are requesting over the wire. It would mean you have maybe 10-20ms to query data from the database and process it into HTML or JSON. Any kind of delay while querying the database is going to ruin that.
If you had 65ms in the application, you would probably get a round-trip average of something above 90ms, likely closer to 150 than 90.
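A sketch of the distinction being drawn here, with hypothetical helpers (queryDatabase and renderHtml are stand-ins, not any real API): the application time is only what the handler itself spends, while the number in the network tab adds connection setup and transfer on top.

```typescript
// Hypothetical stubs so the sketch is self-contained; real code would hit a DB and a template engine.
async function queryDatabase(): Promise<string[]> { return ["post body"]; }
function renderHtml(rows: string[]): string { return `<html><body>${rows.join("")}</body></html>`; }

// The application time is what the ~62.5ms budget has to cover; the browser's
// network tab shows this plus connection setup and transfer time.
async function handle(): Promise<Response> {
  const start = performance.now();
  const rows = await queryDatabase();        // the 10-20ms the comment above leaves for the DB
  const html = renderHtml(rows);
  const appTimeMs = performance.now() - start;
  return new Response(html, {
    headers: {
      "Content-Type": "text/html; charset=utf-8",
      // Server-Timing is a standard header for exposing server-side durations to the browser.
      "Server-Timing": `app;dur=${appTimeMs.toFixed(1)}`,
    },
  });
}

handle().then((res) => console.log(res.headers.get("Server-Timing")));
```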
Sure, but this particular case clearly wasn't using a cache; that's why the free tier limit for the application was reached. Hence it's highly likely that each request hit a database.
The message would've been different if it was cached.
Cloudflare Workers run in front of the cache, which is generally useful: it allows you to serve personalized pages while still pulling the content from cache, and since Workers can easily run in <1ms on a machine you were already going to pass through anyway (the CDN), it doesn't hurt performance. But it also means that the free tier limit of 100,000 requests per day includes requests that hit cache.
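To make that pattern concrete, here is a rough sketch using the Workers Cache API (caches.default). The parent may be describing the ordinary CDN cache behind fetch() rather than this explicit API, so treat this as one way to get "personalized page, cached body", not as the canonical example.

```typescript
// Rough sketch: pull the page body from cache, personalize per request in the Worker.
// Requires @cloudflare/workers-types for the ExecutionContext and caches.default typings.
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;
    let response = await cache.match(request);
    if (!response) {
      response = await fetch(request);                      // cache miss: go to origin
      ctx.waitUntil(cache.put(request, response.clone()));  // store a copy without blocking the reply
    }
    // Copy the response so headers are mutable, then add a per-user touch.
    const personalized = new Response(response.body, response);
    personalized.headers.set("X-Greeting", request.headers.get("Cookie") ? "welcome back" : "hello");
    return personalized;
  },
};
```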