
I didn't downvote you. Postgres as a database server is operationally more complex than Sqlite. Since Postgres is a network service, you have to deal with networking and security. Upgrading Postgres is a big task in and of itself, and backups have to happen over the network.

The number of network connections is another sore point. One of Postgres' biggest pain points is the low number of connections it supports. It is not uncommon to have to run a proxy in front of Postgres to increase the number of connections.
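For concreteness, here is what such a proxy typically looks like. This is a sketch of a PgBouncer config (PgBouncer is one common choice; the database name, port, and pool sizes below are made-up examples): thousands of client connections are funneled into a small pool of real Postgres connections.

```ini
; Hypothetical pgbouncer.ini sketch -- names and sizes are examples.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction   ; server connections are shared per-transaction
max_client_conn = 2000    ; clients the proxy will accept
default_pool_size = 20    ; actual Postgres connections used
```

The app then connects to port 6432 instead of 5432. The point stands either way: this is one more service to deploy, monitor, and debug.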

Sqlite gives you so much for free as long as you can work within its constraints, the main one being a single writer (for the most part.)
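The single-writer constraint is easy to demonstrate. A minimal sketch using Python's stdlib sqlite3 module (the file path is a throwaway temp file): while one connection holds the write lock, a second connection's write attempt fails immediately with "database is locked".

```python
# Sketch of SQLite's single-writer constraint: a second connection
# that tries to write while a transaction holds the write lock is
# rejected. timeout=0 means "fail instead of waiting".
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

w1 = sqlite3.connect(path, timeout=0)
w2 = sqlite3.connect(path, timeout=0)
w1.execute("CREATE TABLE t (x)")
w1.commit()

w1.execute("BEGIN IMMEDIATE")           # w1 takes the write lock
w1.execute("INSERT INTO t VALUES (1)")

try:
    w2.execute("BEGIN IMMEDIATE")       # second writer is rejected
    locked = False
except sqlite3.OperationalError:
    locked = True

w1.commit()                             # lock released...
w2.execute("INSERT INTO t VALUES (2)")  # ...so w2 can write now
w2.commit()
print(locked)  # True
```

In practice apps set a busy timeout so the second writer queues for a while instead of erroring, but the writes are still serialized one at a time.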



You actually have a more complex network setup. You have north-south traffic between client -> LB -> servers, and east-west traffic between your servers for sqlite replication. Both happen on the same nodes with no isolation whatsoever. More things can go wrong, and you will need more tooling to disambiguate between the different potential failures.

W.r.t. security, you have the same challenges securing east/west vs north/south traffic. W.r.t. the number of connections, Postgres has a limit on the number of connections for a reason – if you are running a multi-process or multi-threaded app framework that talks to sqlite, you have just traded a connection limit for a concurrent process/thread access limit on sqlite. I don't know if one is better than the other – it all depends on your tooling for debugging when things inevitably fail at redline stress conditions.


You're technically right, which is to say: Not at all.

You don't replicate east-west with Litestream; you replicate north -> south, where the "north" is your writer and the "south" is some S3-like service. Basically every application has "some S3-like service" as a dependency now, so that's not a new network connection.
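To make the topology concrete, here is a sketch of a Litestream config (the database path and bucket name are made-up examples): the single writer node continuously streams its WAL to an S3 bucket, and there is no server-to-server replication traffic at all.

```yaml
# Hypothetical litestream.yml sketch -- path and bucket are examples.
# The writer node replicates "north -> south" to object storage.
dbs:
  - path: /var/lib/app/app.db
    replicas:
      - type: s3
        bucket: my-backup-bucket
        path: app.db
```

Recovery is the reverse direction: `litestream restore` pulls the latest snapshot plus WAL segments back down from the bucket.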


You make a really good point about trading Sqlite's write limits for Postgres' limited number of connections. My comment was mostly about having to run a proxy, which is another service that ends up being a liability. Regardless, if you need multiple writers, then the obvious solution is Postgres.

> Both happening on the same nodes and no isolation whatsoever. More things can go wrong and will require more tooling to disambiguate between different potential failures.

As a Postgres fan, the extra Postgres service is, to me, a far bigger liability than Sqlite. The east-west networking applies to Postgres replication as well, so it's a wash to me.


> Upgrading Postgres is a big task in and of itself.

Learnt this the hard way the first time I upgraded a major version, only to realize that the data needs to be migrated first. pg_upgrade requires the binaries of the older version, so you need a copy of the old data as well as the binaries of both the old and new versions of Postgres[1] – that is, unless you dump and restore manually. Fortunately it was just my home server.

[1] https://wiki.archlinux.org/title/PostgreSQL#Upgrading_Postgr...
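For anyone who hasn't been through it, the dance looks roughly like this. A sketch only – the version numbers and Debian-style paths below are examples, and the new cluster must already be initdb'ed before running it (as the postgres user, with both servers stopped):

```shell
# Hypothetical paths/versions -- adjust for your distro and release.
OLD_BIN=/usr/lib/postgresql/15/bin
NEW_BIN=/usr/lib/postgresql/16/bin
OLD_DATA=/var/lib/postgresql/15/main
NEW_DATA=/var/lib/postgresql/16/main

# Dry run first to catch incompatibilities, then the real upgrade.
"$NEW_BIN/pg_upgrade" -b "$OLD_BIN" -B "$NEW_BIN" \
                      -d "$OLD_DATA" -D "$NEW_DATA" --check
"$NEW_BIN/pg_upgrade" -b "$OLD_BIN" -B "$NEW_BIN" \
                      -d "$OLD_DATA" -D "$NEW_DATA"
```

Needing both versions' binaries installed side by side is exactly the part that catches people out, since most package managers want to replace the old version on upgrade.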




