Hacker News | lima's comments

It's true, neither AWS nor GCP support spending limits. Only alerting.

It is worth noting that both products have had "student" tiers or similar that had fixed credit limits with a hard cliff.

In other words, they've already implemented hard limits. So not offering hard limits is a business decision, NOT a technical one. They're essentially hiding functionality they already have.

Make of that what you will. Anyone justifying it should be met with skepticism.


I have never heard of nor seen AWS student accounts.

There is a free tier, but it varies per service and doesn't limit anything anyway. It effectively just gives you some credit to offset the costs.


AWS Educate "Starter" Accounts were exactly that[0]. They didn't ask for, or need, a credit card, and there was functionally no way to exceed the limit.

[0] https://www.geeksforgeeks.org/cloud-computing/aws-educate-st...

They also offered (may still offer) the same thing with AWS Academy.


Soft limits would be ideal (x/day with a maximum peak of x/minute), but hey, that's literally negative value to them (work to design, CPU time to implement, less income from "mistakes").
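To the parent's point that this is cheap to build: a soft limit like that is just two budgets checked together. A minimal sketch, with all names, numbers, and the enforcement point being hypothetical:

```python
import time

class SoftLimit:
    """Hypothetical soft spend limit: a daily cap plus a per-minute peak cap."""

    def __init__(self, per_day, per_minute, clock=time.time):
        self.per_day = per_day
        self.per_minute = per_minute
        self.clock = clock
        self.day_start = self.minute_start = clock()
        self.day_spent = self.minute_spent = 0.0

    def allow(self, cost):
        """Charge `cost` against both budgets; False means reject/throttle."""
        now = self.clock()
        if now - self.day_start >= 86400:      # roll over the daily window
            self.day_start, self.day_spent = now, 0.0
        if now - self.minute_start >= 60:      # roll over the peak window
            self.minute_start, self.minute_spent = now, 0.0
        if self.day_spent + cost > self.per_day:
            return False                       # daily budget exhausted
        if self.minute_spent + cost > self.per_minute:
            return False                       # burst exceeds peak rate
        self.day_spent += cost
        self.minute_spent += cost
        return True
```

A provider would enforce something like this at the metering layer; the sketch is just to show the logic is trivial, not that this is how any cloud actually does it.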

That's because you pay for stuff like storage. If you had a spending limit, they'd have to delete your data to stop your spend.

Or do what every other industry does and trigger a conversation. Or just stop letting you store more, or restrict access. Why the need to delete?

'By the way old chap, you have gone over your storage limit. Do you want to buy more or delete some stuff?'


>By the way old chap, you have gone over your storage limit. Do you want to buy more or delete some stuff?

Why does my AWS counselor sound British? Am I in eu-west-2?


Why shouldn't it, it's just a machine? Wouldn't the world be better if these messages varied a bit!

That's what alarms that you set up are for.

I've heard that Google keeps Google Drive data around for up to two years if your subscription expires and your account is over quota. They could certainly do the same with other cloud storage.

If I reduce my gdrive subscription they don’t simply delete what I have over the new (lower) limit. There is a grace period and it’s standard practice. Why should it be any different in this case?

If only there was a way to pause all the other stuff and only let storage keep costing you ...

There is, and it would cause an outage while still not achieving the supposed goal of not going over budget. You don't want to be killing your customer's production over potential misconfigurations/forgotten budgets. Especially when you'd continue to bill them for the storage and other static things like IPs.

It's so much easier for them to have support waive accidental overages.


If only we had the technology to exempt storage from spending limits.

As if that would solve anything? Depending on use, storage could be the largest line item (storage across databases, VMs, object storage).

Everything gets more expensive?

How does this approach compare to the various Ghidra MCP servers?

There’s not much difference, really. I stupidly didn’t bother looking at prior art when I started reverse engineering and the ghidra-cli was born (along with several others like ilspy-cli and debugger-cli)

That said, it should be easier to use as a human to follow along with the agent and Claude Code seems to have an easier time with discovery rather than stuffing all the tool definitions into the context.


That is pretty funny. But you probably learned something in implementing it! This is such a new field, I think small projects like this are really worthwhile :)

I also did this approach (scripts + home-brew cli)...because I didn't know Ghidra MCP servers existed when I got started.

So I don't have a clear idea of what the comparison would be but it worked pretty well for me!


> But the implementation of Gerrit seems rather unloved

There are lots of people who are very fond of Gerrit, and if anything, upstream development has picked up recently. Google has an increasing amount of production code that lives outside of their monorepo, and all of those teams use Gerrit. They have a few internal plugins, but most improvements are released as part of the upstream project.

My company has been using it for years, it's a big and sustained productivity win once everyone is past the learning curve.

Gerritforge[0] offers commercial support and runs a very stable public instance, GerritHub. I'm not affiliated with them and not a customer, but I talk to them a lot on the Gerrit Discord server and they're awesome.

[0]: https://www.gerritforge.com/


Clearly, convincing it otherwise is part of the challenge.


This is changing, Ghidra is increasingly replacing IDA for commercial work.


LLMs don't really know why they got something wrong, so unless it had access to the original chain of thought, it's just guessing.


They don’t have access to themselves at the network level. But I assume they actually do have access to their chain of thought.


It's complex, but Ceph's storage and consensus layer is battle-tested and a much more solid foundation for serious use. Just make sure that your nodes don't run full!


Make sure you have solid Linux system monitoring in general. About 50% of running Ceph successfully at scale is just basic, solid system monitoring and alerting.
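To make the "don't run full" point concrete: Ceph marks OSDs nearfull at 85% and blocks writes at 95% of capacity by default, so even a dumb local disk check with headroom below those ratios catches most of it. A sketch only, with thresholds and paths being illustrative, not a substitute for real Ceph monitoring:

```python
import shutil

NEARFULL = 0.85  # Ceph's default nearfull ratio; alert well before this

def usage_ratio(used, total):
    """Fraction of capacity in use (0.0 if the filesystem reports no size)."""
    return used / total if total else 0.0

def check_path(path, threshold=0.80):
    """Return (ratio, ok) for a mounted data path, e.g. an OSD's data dir."""
    du = shutil.disk_usage(path)
    ratio = usage_ratio(du.used, du.total)
    return ratio, ratio < threshold

if __name__ == "__main__":
    ratio, ok = check_path("/")  # in practice: loop over each OSD's mount
    print(f"{ratio:.0%} used, {'ok' if ok else 'ALERT'}")
```

In a real deployment you'd pull this from `ceph df` / the OSD full ratios rather than the local filesystem, but the parent's point stands: most of it is this kind of boring check, wired into alerting.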


This line of advice basically comes down to: have a competent infrastructure team. Sometimes you gotta move fast, but this is where having someone on infrastructure who knows what they're doing pays dividends. No competent infra person is going to skip setting up Linux monitoring. But you see some companies hit 100 people, get some revenue, and then this type of thing blows up in their face.


> AI first websites will get so popular that browsing sites manually will look like trying to use a text only browser in the JavaScript world.

That might be great for accessibility, though.


Postgres replication, even in synchronous mode, does not maintain its consistency guarantees during network partitions. It's not a CP system - I don't think it would actually pass a Jepsen test suite in a multi-node setup[1]. No amount of tooling can fix this without a consensus mechanism for transactions.

Same with MySQL and many other "traditional" databases. It tends to work out because these failures are rare and you can get pretty close with external leader election and fencing, but Postgres is NOT easy (likely impossible) to operate as a CP system according to the CAP theorem.

There are various attempts at fixing this (Yugabyte, Neon, Cockroach, TiDB, ...) which all come with various downsides.

[1]: Someone tried it with Patroni and failed miserably, https://www.binwang.me/2024-12-02-PostgreSQL-High-Availabili...
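For context, "synchronous mode" here means a primary-side configuration along these lines (standby names hypothetical). Even with quorum commit, it only guarantees a commit is flushed to a standby before acknowledgement; nothing stops a partitioned old primary from continuing to accept writes, which is why external leader election and fencing can only approximate, not provide, CP behavior:

```
# postgresql.conf on the primary -- minimal synchronous replication sketch
synchronous_commit = on
synchronous_standby_names = 'ANY 1 (standby1, standby2)'
```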

