HN has been a wonderful source of both news and community - it connected me to my industry in ways I could otherwise only have achieved by moving to SF or NYC.
In a way, it feels like a great continuation of my Slashdot years.
I get pulled into a fair number of "why did my AWS bill explode?" situations, and this exact pattern (NAT + S3 + "I thought same-region EC2→S3 was free") comes up more often than you’d expect.
The mental model that seems to stick is: S3 transfer pricing and "how you reach S3" pricing are two different line items. NAT Gateways charge per GB processed regardless of where the traffic is going, so you can be right that same-region EC2→S3 transfer is free and still pay a lot because every byte goes through a NAT Gateway.
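To put rough numbers on it (us-east-1 list prices from memory, so verify before quoting: about $0.045 per GB of NAT data processing): pushing 10 TB of same-region S3 traffic through a NAT Gateway is roughly 10,240 GB × $0.045 ≈ $460 in processing charges, even though the S3 transfer leg itself is $0. The same traffic through a Gateway Endpoint costs nothing on either leg.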
The small checklist I give people:
1. If a private subnet talks a lot to S3 or DynamoDB, start by assuming you want a Gateway Endpoint, not the NAT, unless a strong security requirement says otherwise (first sketch after this list).
2. Put NAT on its own Cost Explorer view / dashboard. If that line moves in a way you didn't expect, treat it as a bug and go find the job or service that changed (second sketch below).
3. Before you turn on a new sync or batch job that moves a lot of data, sketch (I tend to do this with Mermaid) "from where to where, through what, and who charges me for each leg?" It takes a few minutes and usually catches this kind of trap.
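Since item 1 is the one that comes up most, here's a minimal boto3 sketch of what "use a Gateway Endpoint" means in practice. The region, VPC ID, and route table IDs are placeholders for your own environment:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Gateway Endpoints for S3 are free and route same-region S3 traffic
# off the NAT path entirely.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 in the same region
    RouteTableIds=["rtb-0123456789abcdef0"],   # private subnets' route tables
)
print(resp["VpcEndpoint"]["VpcEndpointId"])
```

Once the route table entry exists, instances in those subnets reach S3 without touching the NAT, with no application changes needed.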
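For item 2, a rough Cost Explorer query that isolates NAT spend so you can chart or alert on it. The usage-type strings are region-prefixed and written from memory here, so check them against your own bill before relying on this filter:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer is a global endpoint

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # End is exclusive
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    # Assumed usage types for us-east-1; other regions use a different prefix.
    Filter={"Dimensions": {
        "Key": "USAGE_TYPE",
        "Values": ["USE1-NatGateway-Bytes", "USE1-NatGateway-Hours"],
    }},
)
for day in resp["ResultsByTime"]:
    print(day["TimePeriod"]["Start"], day["Total"]["UnblendedCost"]["Amount"])
```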
Cost Anomaly Detection doing its job is the underrated part of this story. A $1k lesson is painful, but finding it at $20k would have been much worse.
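If you haven't set it up, it's two API calls. A hedged sketch (the monitor/subscription names, email address, and $100 threshold are placeholders; double-check the ThresholdExpression shape against the current Cost Explorer docs):

```python
import boto3

ce = boto3.client("ce")

# Monitor spend anomalies per AWS service.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "per-service",  # placeholder
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Daily email digest for anomalies with >= $100 total impact.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "daily-digest",  # placeholder
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "you@example.com"}],  # placeholder
        "Frequency": "DAILY",
        "ThresholdExpression": {
            "Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                "Values": ["100"],
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
            }
        },
    }
)
```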
Hey, just a friendly reminder that this is HackerNews, not The Lancet.
I have seen past comments here debating fairly basic medical concepts. Please don't take medical advice from engineers. Drink water, exercise, eat well. Otherwise, seek medical advice from a doctor.
You just opened my eyes to IMSLP and I would like to thank you for that. For years I have been chasing music scores that were just the right level for my kids.
Hard to believe it's been 27 years. I remember when it was still in beta, and how exciting it was to have an open source alternative to Internet Explorer.
A model's "source code" (its training data) runs to hundreds of gigabytes and is much harder to distribute, and the "compilation" step (training) is enormously expensive. That makes this very different from the Linux case: only big tech companies have the resources to make it happen.
I like looneysquash's viewpoint on the definition of open source AI: for a model to count as "open", every part involved needs to be open-sourced, not just the weights:
> The trained model is object code. Think of it as Java byte code. You have some sort of engine that runs the model. That's like the JVM, and the JIT. And you have the program that takes the training data and trains the model. That's your compiler, your javac, your Makefile and your make. And you have the training data itself, that's your source code.
> Each of the above pieces has its own source code. And the training set is also source code. All those pieces have to be open to have a fully open system. If only the training data is open, that's like having the source, but the compiler is proprietary. If everything but the training set is open, well, that's like giving me gcc and calling it Microsoft Word.
We just shipped a release aimed at bringing FinOps into PRs/CI (instead of another dashboard).
New in this release:
- Public REST API for cost + carbon summaries (JSON)
- /v1/report endpoint that can return Markdown for PR comments
- GitHub Action to generate the report + optionally post/gate on thresholds (rough sketch of that flow below)
- AWS Reserved Instance / Savings Plans (RI/SP) recommendations, plus All Accounts rollups and per-account data-quality indicators
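To make the PR-comment flow concrete, here's roughly what the CI step boils down to: fetch the Markdown report, then post it via GitHub's REST API. BASE_URL, the auth scheme, and the query parameter below are simplified placeholders (the docs have the exact names); the GitHub call is the real issues-comments endpoint:

```python
import os
import requests

BASE_URL = "https://finops.example.com"  # placeholder

# Pull the Markdown report from the /v1/report endpoint.
report = requests.get(
    f"{BASE_URL}/v1/report",
    headers={"Authorization": f"Bearer {os.environ['FINOPS_TOKEN']}"},  # placeholder auth
    params={"format": "markdown"},  # placeholder parameter name
    timeout=30,
)
report.raise_for_status()

# Post it as a PR comment (GitHub: POST /repos/{owner}/{repo}/issues/{n}/comments).
owner_repo = os.environ["GITHUB_REPOSITORY"]  # "owner/repo" in Actions
pr_number = os.environ["PR_NUMBER"]           # exported from the workflow context
resp = requests.post(
    f"https://api.github.com/repos/{owner_repo}/issues/{pr_number}/comments",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={"body": report.text},
    timeout=30,
)
resp.raise_for_status()
```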
Would love feedback:
1) What signals/thresholds would make CI cost checks useful without becoming PR spam?
2) What’s your bar for auth/security + least-privilege for a tool like this?