It is worth noting that both products have had "student" tiers or similar that had fixed credit limits with a cliff.
So they've already implemented hard limits, which means not offering hard limits is a business decision, NOT a technical one. They're essentially hiding functionality they already have.
Make of that what you will. Anyone justifying it should be met with skepticism.
Soft limits would be ideal (x/day with a maximum peak of y/minute), but hey, that's literally negative value to them (work to code, CPU time to implement, less income from "mistakes").
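The soft-limit idea above (a daily budget plus a per-minute burst cap) is just two sliding windows; here's a minimal sketch, with all names hypothetical:

```python
import time

class SoftLimiter:
    """Allow up to `per_day` credits per day, with bursts capped at `per_minute`."""

    def __init__(self, per_day, per_minute, clock=time.time):
        self.per_day = per_day
        self.per_minute = per_minute
        self.clock = clock
        self.day_used = 0.0
        self.minute_used = 0.0
        self.day_start = clock()
        self.minute_start = self.day_start

    def allow(self, cost=1.0):
        now = self.clock()
        if now - self.day_start >= 86400:      # roll the daily window
            self.day_start, self.day_used = now, 0.0
        if now - self.minute_start >= 60:      # roll the per-minute window
            self.minute_start, self.minute_used = now, 0.0
        if self.day_used + cost > self.per_day:
            return False                       # daily budget exhausted
        if self.minute_used + cost > self.per_minute:
            return False                       # burst cap hit; retry next minute
        self.day_used += cost
        self.minute_used += cost
        return True
```

A real billing system would persist the counters and degrade service gracefully rather than return a bare boolean, but the point stands: this is not hard to build.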
I've heard that Google keeps Google Drive data around for up to two years if your subscription expired and your account is over quota. They could certainly do the same with other cloud storage.
If I reduce my gdrive subscription they don’t simply delete what I have over the new (lower) limit. There is a grace period and it’s standard practice. Why should it be any different in this case?
There is, and it would cause an outage while still not achieving the supposed goal of staying under budget. You don't want to kill a customer's production environment over a potential misconfiguration or a forgotten budget. Especially when you'd continue to bill them for the storage and other static things like IPs.
It's so much easier for them to have support wave accidental overuses.
There’s not much difference, really. I stupidly didn’t bother looking at prior art when I started reverse engineering and the ghidra-cli was born (along with several others like ilspy-cli and debugger-cli)
That said, it should be easier for a human to use when following along with the agent, and Claude Code seems to have an easier time with discovery rather than having all the tool definitions stuffed into the context.
That is pretty funny. But you probably learned something in implementing it! This is such a new field, I think small projects like this are really worthwhile :)
> But the implementation of Gerrit seems rather unloved
There are lots of people who are very fond of Gerrit, and if anything, upstream development has picked up recently. Google has an increasing amount of production code that lives outside of their monorepo, and all of those teams use Gerrit. They have a few internal plugins, but most improvements are released as part of the upstream project.
My company has been using it for years, it's a big and sustained productivity win once everyone is past the learning curve.
Gerritforge[0] offers commercial support and runs a very stable public instance, GerritHub. I'm not affiliated with them and not a customer, but I talk to them a lot on the Gerrit Discord server and they're awesome.
It's complex, but Ceph's storage and consensus layer is battle-tested and a much more solid foundation for serious use. Just make sure that your nodes don't run full!
Make sure you have solid Linux system monitoring in general. About 50% of running Ceph successfully at scale is just basic, solid system monitoring and alerting.
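A trivial version of the "don't run full" check above, assuming plain data mount points (the paths and thresholds here are hypothetical; a real deployment would watch `ceph df` and wire this into proper alerting):

```python
import shutil

def fullness_alerts(paths, warn_pct=75.0, crit_pct=85.0):
    """Return (path, pct_used, level) for any mount past the given thresholds.

    Ceph starts refusing writes well before 100% (the full ratio defaults to
    around 0.95), so alert far earlier than you would for an ordinary disk.
    """
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        pct = 100.0 * usage.used / usage.total
        if pct >= crit_pct:
            alerts.append((path, pct, "critical"))
        elif pct >= warn_pct:
            alerts.append((path, pct, "warning"))
    return alerts
```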
This line of advice basically comes down to: have a competent infrastructure team. Sometimes you gotta move fast, but this is where having someone on infrastructure who knows what they're doing pays dividends. No competent infra person is going to skip setting up Linux monitoring. But you see some companies hit 100 people and real revenue, and then this type of thing blows up in their faces.
Postgres replication, even in synchronous mode, does not maintain its consistency guarantees during network partitions. It's not a CP system - I don't think it would actually pass a Jepsen test suite in a multi-node setup[1]. No amount of tooling can fix this without a consensus mechanism for transactions.
Same with MySQL and many other "traditional" databases. It tends to work out because these failures are rare and you can get pretty close with external leader election and fencing, but Postgres is NOT easy (likely impossible) to operate as a CP system according to the CAP theorem.
There are various attempts at fixing this (Yugabyte, Neon, Cockroach, TiDB, ...) which all come with various downsides.
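The "external leader election and fencing" workaround mentioned above usually relies on fencing tokens: each newly elected leader gets a strictly higher epoch, and the storage layer rejects writes carrying an older one. A toy illustration (names hypothetical):

```python
class FencedStore:
    """Reject writes from a demoted leader using monotonically increasing epochs."""

    def __init__(self):
        self.highest_epoch = 0
        self.data = {}

    def write(self, epoch, key, value):
        if epoch < self.highest_epoch:
            # A newer leader has already written here; this writer was fenced off.
            raise PermissionError(f"stale epoch {epoch} < {self.highest_epoch}")
        self.highest_epoch = epoch
        self.data[key] = value
```

This only closes the split-brain window if every write path enforces the check, which is exactly what stock Postgres replication can't guarantee on its own.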