
Here's what the bare metal server didn't come with:

API access for managing configuration, version updates/rollbacks, and ACL.

A solution for unlimited scheduled snapshots without affecting performance.

Close to immediate replacement of identical setup within seconds of failure.

API-managed VPC/VPN built in.

No underlying OS management.

(Probably forgot a few...) I get that going bare metal is a good solution for some, but comparing costs this way without a lot of caveats is meaningless.



> Here's what the bare metal server didn't come with:

[bunch of stuff I don't need]

Exactly. Imagine paying for all that when all you need is bare metal.

Now imagine paying for all that just because you've read on the Internet that it's best practice and that's what the big guys do.

Way back the best practice was what Microsoft, Oracle or Cisco wanted you to buy. Now it's what Amazon wants you to buy.

Buy what you need.


Your point is well made... The other thing you need to think about is that all those extra services can come with a reliability cost if your provider is not 100%. Most outages we've encountered have been because the HA infrastructure has decided to have problems rather than the underlying hardware or actual service we need to run.

Having all that "best practice" service is great if it works well, but when it becomes a checkbox on the purchase order then it can cause far more problems than it solves.

I have found that a lot of the push to outsource hosting is simply an attempt to deflect responsibility for problems rather than in expectation of actually providing more reliability.


It’s great if you need that reliability in practice - and not just that you think you need it.

For every company or startup that thinks they need 100% uptime, the reality is that they can not only get away with much less, but will in practice end up with much less anyway: the effort and moving parts (load balancing, distributed databases, etc.) will typically result in something failing even if the underlying hardware is indeed 100% up. And, somewhat surprisingly to quite a few people here, they manage to survive and thrive despite that (the recent AWS outages took out a lot of services and products, and they still seem to be around somehow).


best practice

Ugh. I hate that phrase. The translation into plain English is almost always "What I read in some blog" or "Because I want to" or "It's what our sales rep told us." Even from C-levels who should know better.


I generally use it as shorthand for, “when and not if we call the vendors with this configuration, they cannot pass the buck and say it isn’t a configuration they support”. “Best practice” as an endorsed configuration blessed by the vendor support engineering organization is my go-to sign-off to get quick cooperation from the vendors.


It should mean something. There are things that are just good ideas you probably shouldn’t stray from. But the moment a concept like this materializes it gets hijacked by marketers and celebrity developers and distorted into oblivion.


> Exactly. Imagine paying for all that when all you need is bare metal.

Yes.

The opposite is also true though: Imagine not wanting to pay for that and needing it!

There's a reason why most homes connect to the power utility companies. Yes, we can run generators ourselves. Does it make sense to do that? Not usually.

Same thing with this server. If it makes sense for your use-case, outstanding. In many cases, people are better off offloading this to another company and focusing on their strengths.


>"Yes, we can run generators ourselves. Does it make sense to do that? Not usually"

The minute a gens/solar/wind/batteries combo becomes less expensive than the public utility, I'll switch. For now it makes no financial sense.

With the clouds it is the other way around. My dedicated servers running my software kick the shit out of AWS performance-wise, for a fraction of the price. And no, I do not spend my days "managing" them. I can order a new dedicated server and the same shell script will reinstall all prerequisites, restore data from backup and have it running in a few minutes (plus however long it takes to import the database from backup). Where needed I also have up-to-date standby servers.

Other than running this script to test restoration once a month my management overhead is zero.
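For anyone curious, here's a rough sketch of what that kind of rebuild script can look like. It's purely an illustration (written in Python rather than shell, for readability); the package list, backup host, paths and service name are made-up placeholders, not what the parent actually runs:

    #!/usr/bin/env python3
    # Rebuild a fresh dedicated server: install prerequisites,
    # restore data from backup, then start the service.
    import subprocess

    def run(cmd):
        # Fail loudly if any step breaks, so a bad restore is obvious.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Install prerequisites (placeholder package list).
    run(["apt-get", "update"])
    run(["apt-get", "install", "-y", "postgresql", "nginx"])

    # 2. Pull the latest backup and import the database (placeholder host/paths).
    run(["rsync", "-a", "backup-host:/backups/latest/", "/var/restore/"])
    run(["sudo", "-u", "postgres", "psql", "-f", "/var/restore/db.sql"])

    # 3. Start the application (placeholder unit name).
    run(["systemctl", "enable", "--now", "myapp.service"])

The interesting part isn't the script itself; it's that, as the parent says, you actually run it regularly, so you know the restore path works.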


Exactly! Who needs a kitchen in their home, it's better to just order in all the time. All that complexity of cooking daily, preparing and storing food, cleaning dishes, etc, etc. I mean, who's going to even do the cooking!

See... anyone can pick <random thing>, and describe how people don't do it. Of course, you cite a generator, others use solar cells, right?


Yeah imagine cutting your own hair and diagnosing your health problems. Growing your own food and making your own chips.

It looks sensible or nonsensical depending on who you are and what your needs are. Not everybody needs the cloud, and not everybody can self-host.


> bunch of stuff I don't need

And that's fine. People should make choices like that. I was objecting to saying people paid X for RDS and others paid Y for one bare metal server. Those are not in the same category, there's no point comparing those prices without a lot more information.


I am buying IaaS, and it is so nice to use a VPS with snapshots every 4 hours that I don't have to think about.

I don't care where those snapshots are stored or how much space they take. In case I need to restore, my IaaS provider gives me a 2-click option: one click to restore and a 2nd to confirm. I sit and watch progress. I also don't care about hardware replacement and anything connected to that. I have to do VPS OS updates, but that is it.

I do my own data backups on a different VPS of course, just in case my provider has some issue, but from a convenience perspective that IaaS solution delivers more than I would ask for.


Snapshots every 4 hours? That doesn't sound impressive at all. In 2022 that's laptop tier capability.


Well my dev laptop could run 1000x more traffic than what I need on my servers.

But of course I am running build tools and a bunch of other dev tools locally that are not even installed on our servers.


I'm not talking about traffic, I'm just talking about expectations as far as software capabilities go.


What on earth are you running on your laptop?


Windows 11 with Edge. Nothing else /s

But in all reality, laptops are really, really powerful these days.


I use customer cast offs and even employee cast offs. My desktop was cast off by a customer because Win 10 ground to a halt. I slapped an ancient nvidia card in to get a couple of screens and popped in a cheap SSD. According to dmidecode it is a Lenovo from 01/06/2012.

My laptop at the moment is a HPE something ... HP 250 G6 Notebook PC according to dmidecode. I used to have a five-year-old Dell 17" i7-based beast but it ... broke. I whipped out the SSD, shrank the root fs a bit (with gparted) and used a Clonezilla disc and an external USB link to get my system from the SSD onto a Samsung EVO M.2 thingie. This laptop was an employee cast-off.

As is probably apparent from the above, I use Linux. I have some decent apps at my disposal but in general I don't need much hardware. Decent RAM (>= 8GB) and SSD storage are key. I don't play games much. I've always specified 17" screens for my laptops in the past, but now that I have to use a 15" jobbie, that's becoming less of a hard requirement.

I am a Managing Director (small business - IT consultancy) but I do spend rather a lot of my time doing sysadmin and network admin stuff. I also do business apps. I once wrote a Finite Capacity Plan for a factory in Excel with rather a lot of VBA. Before you take the piss, bear in mind I used the term finite and not infinite. That meant that quite a few people had employment in Plymouth (Devon not MA) in the 1990s. I won't bore you with my more modern failures 8)

Anyway as you say, modern laptops are phenomenally powerful. Mine have wobbly windows 8) I can't be arsed with MS Windows anymore for my own gear - it gets in the way. Me and the wife said goodbye at Windows 7.


I think I found a simpatico friend!


> Buy what you need


> [bunch of stuff I don't need]

They tend to be the things that you don't need until you absolutely need them right now (or yesterday).


I'm too busy to tend to the care and feeding of a database server. I'll gladly pay amazon to do it for me.


Typical statement from someone who doesn't actually pay the bill (a.k.a. a software engineer in most companies).

It’s easy to trivialize $20-30k a month when it’s someone else’s money and it’s less work for you.


When considering the cost of having a secure server room, a commercial internet line, electricity, cooling and hardware - plus the cost of my time (or someone else's on the team), AWS gets more and more attractive. The business side paying the bill agrees fully.

People on my team would rather be developing software than doing sysadmin stuff


There is a continuum between "I run servers in my basement" and "I pay mid 5 figures monthly to have an autoscaled, auto-failover, hot-standby, 1s-interval backed-up cluster across three AZs".

I use AWS, DO and Hetzner (bare metal) for (different) cases where each makes sense.

In the past few years, the maintenance burden and uptime between AWS and Hetzner hosted stuff were comparable, with the cost being an order of magnitude less for Hetzner (as an added benefit, that machine is beefier, but I wasn't even comparing that).

> People on my team would rather be developing software than doing sysadmin stuff.

I am not surprised software developers would prefer doing software development :-)


My point is that $20-30k/month for an AWS database can actually be a bargain.


> The business side paying the bill agrees fully.

No, they often don’t understand the alternatives available because the wolves are guarding the henhouse (software devs who don’t want to work with the pace of on-prem IT).

> People on my team would rather be developing software than doing sysadmin stuff

No shit, that’s why software devs aren’t sysadmins.


Sure, and you could buy a DigitalOcean droplet or an Amazon Lightsail instance for $5/month. Both include 1TB of free data transfer.


Of course there are a lot of benefits of using hosted databases. I like hosted databases and use them for both work and personal projects.

What I have a problem with is:

- the premium over bare metal is just silly

- maximum vertical scaling being a rather small fraction of what you could get with bare metal

- when you pay for a hot standby you can't use it as a read only replica (true for AWS and GCP, idk about Azure and others)


Readable standby instances are actually something AWS just recently added: https://aws.amazon.com/blogs/database/readable-standby-insta...

Though it seems to require you to have 3 instances rather than just letting you read from the standby... I don't quite get the rationale for that.


Even though a hot standby is technically the exact same thing as a read-only replica, the two solve very different problems (uptime vs. scaling). Personally, I never want read-only queries going to my hot standby; otherwise I risk the secondary not being able to handle the load in case of failover. Probably a similar reason at AWS, as they are selling a "hot standby" product to ensure the uptime of your RDS.


> when you pay for a hot standby you can't use it as a read only replica (true for AWS

I'm not sure what you mean here. At least for MySQL you can have an instance configured as replica + read-only and used for reads. Aurora makes that automatic / transparent too with a separate read endpoint.


A hot standby is not a read replica. It's a set of servers running in a different Availability Zone, mirroring current prod, that the system is configured to automatically fail over to if the primary goes offline. It's been a few years since I personally set this up in AWS, but at the time, those servers were completely unavailable to me, and basically doubled the cost of my production servers.

The fact that a hot standby is usually in some sort of read-replica state prior to failing over is a technical detail that AWS sort of tries to abstract away I think.


They recently announced readable standby instances.

https://aws.amazon.com/blogs/database/readable-standby-insta...


I'm not sure, I guess it depends on what you're looking for. I'm a hobbyist with no real professional experience in server admin, so I am probably missing some important things. But most of this can be replicated with ZFS and FreeBSD jails (or on Linux, Btrfs and LXC containers).

>> 1. A solution for unlimited scheduled snapshots without affecting performance.

You can very comfortably have instant and virtually unlimited snapshots with zfs/jails (they only occupy space when files change). It's very easy to automate with cron and a shell script; there's a rough sketch of what that can look like at the end of this comment.

>> 2. API access for managing configuration, version updates/rollbacks, and ACL.

>> 3. Close to immediate replacement of identical setup within seconds of failure.

There are a lot of choices for configuration management (Saltstack, Chef, Ansible, ...). I run a shell script in a cron job that takes temporary snapshots of the jail's filesystem, copies them to a directory, and makes an off-site backup. A rollback is as simple as stopping the server, renaming a directory, and restarting it. It's probably more than a couple of seconds, but not by much. I think I'm uncomfortable exposing an API with root access to my systems to the internet, but I'm not sure how these systems work. I don't think it would be hard to set one up with Flask if you wanted it, though.

>> 4. No underlying OS management.

I don't know what this is, but I'm curious and looking it up :D.

In most of the posts I'm reading here, people have really beefy rigs. But you could do this on the cheap with a 2000s era laptop if you wanted (that was my first server).
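As a concrete illustration of the snapshot automation mentioned above, here's a minimal sketch of the kind of script you could call from cron. It's in Python only for readability, and the dataset name and retention count are assumptions, not anything from my actual setup:

    #!/usr/bin/env python3
    # Take a timestamped ZFS snapshot of a dataset and prune old auto-snapshots.
    # Meant to be run periodically from cron.
    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/jails/webjail"   # assumed dataset name
    KEEP = 48                        # number of auto-snapshots to retain

    # Create the snapshot (instant, copy-on-write, only grows as files change).
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{stamp}"], check=True)

    # List this dataset's snapshots oldest-first and destroy the surplus.
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-d", "1",
         "-o", "name", "-s", "creation", DATASET],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    for snap in [s for s in out if "@auto-" in s][:-KEEP]:
        subprocess.run(["zfs", "destroy", snap], check=True)

A single crontab line pointing at a script like this is all the scheduling there is to it.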


> You can very comfortably have instant and virtually unlimited snapshots with zfs/jails

Yes, but that both requires manually scripting it and remains local to the server. Compare that to scheduled RDS backups, which go to S3 with all its consistency guarantees.

> There is a lot of choices for configuration management (saltstack, chef, ansible, ..)

Sure, those are an improvement over doing things manually. But for recovery they can only do so much. Basically, think about how fast you can restore service if your rack goes up in flames.

> I don't know what this is, but I'm curious and looking it up :D

It means: who deals with kernel, SSL, storage, etc. updates, who updates the firmware, who monitors SMART alerts. How much time do you spend on that machine that is not 100% related to the database's behaviour?

I wasn't recommending everyone use RDS. If your use case is OK with laptop-level reliability, go for it! You simply can't compare the cost of RDS to the monthly cost of a colo server - they're massively different things.


Thank you very much for this reply! Those are all very good points. You’re right, this is a service, and not having the worry about hardware or scripts at all is valuable when you have a million other things to manage.


Everything has a cost. The cost of the cloud is that you’re out of control, the cloud provider owns your stuff, and a single vulnerability can make half the Earth get pwned. Whoops.

EDIT: I do use cloud services for some stuff. My point isn’t being anti-cloud, just that nothing is perfect.


True, and also it costs more money.


Some of these strike me as things where the software _could_ exist, but it's either not FOSS, or I haven't heard of it yet.

For instance, there are LVM2 snapshots. Maybe those do affect performance. If the cost difference is big enough though, couldn't you just account for that in the budget?

I agree that literal "bare metal" sucks, but self-hosted with cloud characteristics (containers, virtualization) is not totally obsolete.


That's the point, duh.

When you're using the cloud you're paying someone else a very good margin to do all of those things for you.

If you do them yourself you can save quite a lot. Hardware is surprisingly cheap these days, even with the chip shortage factored in, if compared to cloud offerings.


A lot of these aren’t as important for something that’s fairly static and can’t even be autoscaled or resized live (excluding Aurora - I’m talking about standard RDS).

No access to the underlying OS can actually be a problem. I had a situation where, after a DB ran out of disk space, it ended up stuck in "modifying" for 12 hours, presumably until an AWS operator manually fixed it. Being able to SSH in to fix it ourselves would have been much quicker.


Don’t forget all of the outages associated with all of that crap you don’t need.


Most self-hosted distros have some or all of these features baked in: YunoHost, FreedomBox or LibreServer come to mind (note that the latter is maintained by only a single person).

"Identical setup within seconds of failure" is the exception, as you need to deploy from backup, which can take minutes or hours (depending on backup size) even if you have the spare hardware. Fortunately, that's the least-needed feature in your list.


> Here's what the bare metal server didn't come with:

It's called software, you put that on top of a server, it's not some kind of magic.


You can get nearly all of that on your bare metal if you use something like OpenStack. I know a couple of people who run VMware on a home lab as well.


That should cost more but not 80x more.

Sure, you're paying the cost of some DBAs and SREs with that price. Still, it seems way above what it costs.


Also physical security, power, physical rack cost, networking, salaries to manage everything, and all those other hidden costs


There are no "hidden costs". They're well known.

Factor them all in, and "the cloud" is 1000s to millions of times more expensive than bare metal. I've seen people pay $100k/month, for services which could run on an $140/month bare metal server failover pair.

Meanwhile, people still spend enormous resources managing "the cloud", writing code to deploy to the cloud, dealing with edge cases in the cloud.

There are no savings, time wise, management wise, or money wise, with the cloud.

You are paying for ignorance.


I was watching a video by Neil Patel and he pays $130k a month to AWS for one website, UberSuggest. Now that is a guy with money to burn. He could put that on bare metal and use the savings to buy a hotel and food for the homeless.


Hidden costs? I've been in companies with their own datacenters from tiny to enormous.

The costs were very clearly spelled out and always lower than SaaS.

Why do people here think that cloud companies are charities? The cost of their hardware leasing, personnel, electricity, building management, insurance... it is all paid by the customer.

Plus a beefy profit margin. All paid by you.


Using cloud-managed backups can lull you into a false sense of security. I've heard enough horror stories of the snapshot/restore mechanism failing for some reason, especially in certain edge cases when you use huge amounts of data. What then? All your cloud marketing magic becomes a liability.


Fully agree with this. In my free time I would absolutely go for bare metal, but if I have to save a company money by taking on all the headaches that AWS solves, then I just say goodbye.


It also comes with 100x bandwidth pricing.



