
It's a good idea. There are many studied benefits to (intermittent) fasting, for example: https://pmc.ncbi.nlm.nih.gov/articles/PMC11262566/

I don’t agree.

She has posted publicly about her condition.

He is 25 years old and trying to cope with a hard life event. Let’s not act like it doesn’t affect him. It affects everyone around her and the strong reaction from him is really a positive reflection on her, isn’t it?

His post is written and edited to garner sympathy and support. I don’t mind that for a naive but noble cause. And there is always a slim chance of success.


Supposedly there is no data shared with Google when using Gemini-powered Siri:

> Google’s model will reportedly run on Apple’s own servers, which in practice means that no user data will be shared with Google. Instead, they won’t leave Apple’s Private Cloud Compute structure.[1]

1: https://9to5mac.com/2025/11/05/google-gemini-1-billion-deal-...


Supposedly and reportedly that is true. For now.

We still end up with Google models running on hardware that people paid thousands of dollars for under the impression it wasn't a Google device.

Imagine the temptation of the gigantic wads of cash Google would pay Apple to let Gemini index and produce analytics on the data on your machine.

Now Google has a foot in the door.


This needs more detailed data that normalizes for the amount of food (price per calorie or price per weight or something like that).

Yes, a bowl at Chipotle in the US might be 2x the price of a Japanese bowl (more, probably), but it matters whether I'm also getting 2x the calories.

And there are foods in the US that are technically as cost-effective, although maybe not as nutritious, like the pizza they mention, which can be around $1-$3 per slice. (Not my first choice for lunch, but I could pick up a large 3-topping Domino's pizza for $10 and make 3-4 lunches out of it, for example.)


> In Japan, workers rely on healthy lunch bowls for under $4

The title doesn't capture that, but the issue is not that the US can't produce $4 lunches. It's that it can't enable cheap(er) healthy lunches.


I'm not sure what your point is. Is it about the lunches being specifically healthy?

A rice bowl at Chipotle, for example, is not unhealthy (rice, beans, meat, vegetables). Plenty of restaurant food in the US is perfectly healthy (or, you can look at the nutrition facts to find out). And if I can take a single US portion and split it into two Japanese-sized lunches, then maybe we're getting the same amount of food per dollar.

And on the "healthy" point: The article doesn't discuss nutrition facts at all or refer to any specific meals or dishes.

They link to an article about the price of Japanese bowls, which mentions "a regular-sized bowl of rice with beef from Japanese fast food chain Yoshinoya, which costs around 468 yen (S$4.25)." I don't know Japanese, so it's hard for me to find nutrition information for that particular dish, but I suspect a beef bowl is high in saturated fat, cholesterol, and sodium (most stir-fried beef is). Is that healthy? Japan as a country has a higher sodium intake than the US. Is that healthy? And so on. I suspect a big factor in the "health" of these lunches is simply that portion sizes are smaller than in the US (but I have no data).


I think statistics about how many people are overweight or obese in both countries already paint a picture that Japanese food is probably healthier. And optimizing for how many calories you can get for $1 is probably not the best metric to aim for, either.


Sort of for the sake of argument: national obesity statistics don't necessarily imply anything about the healthiness of the food, nor specifically about the healthiness of the $4 lunches the article discusses. If the Japanese eat smaller portions and are less sedentary, they could still be less obese regardless of differences in the nutritional content of these $4 lunches. (And I think they ARE less sedentary and DO eat smaller portions.)

I’m not advocating for anything (certainly not optimizing for calories per dollar).

My point is just that the article has no data. It says a Japanese lunch is cheap and a US lunch is expensive and doesn’t consider what you actually get for the money. It assumes the US lunch is a worse deal, but I suspect it’s really not if you adjust the price for the amount of food.


Using git history as documentation is hacky. A majority of feature branch commit messages aren't useful ("fix test case X", "fix typo", etc), especially when you are accepting external contributions. IF I wanted to use git history as a form of documentation (I don't. I want real documentation pages), I'd want the history curated into meaningful commits with descriptive commit messages, and squash merging is a great way to achieve that. Git bisect is not the only thing I do with git history after all.
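
Roughly what that looks like in plain git, for anyone unfamiliar (branch name and commit message are made up):

    # collapse an entire feature branch into a single commit on main
    git checkout main
    git merge --squash my-feature-branch
    # nothing is committed yet; write one curated message for the whole branch
    git commit -m "Add CSV export to reports"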

And if I'm using GitHub/GitLab, I have pull requests that I can look back on, which retain basically everything I want from a feature branch and more (peer review discussion, links to passing CI runs, etc). With the GitHub squash merge approach, every commit on the main branch refers back to a pull request, which makes this super nice.


As far as I'm concerned, the git history of a project is the root source of truth for why a change was made, at that point in time. External documentation is mostly broad strokes, API references, or out of date. Code comments need to be git blamed anyway to figure out when they were added, and probably don't exist for every little change. The pull request associated with a given commit gives the broad description of what feature was being implemented or what bug was being fixed, but a commit message tells me what, specifically, during that work, triggered this particular change.

I want to know, for example, that the reason this URL gets a random value appended to it is that, while implementing a new page on the site, it was found that the caching service would serve out-of-date versions of some iframe. It never made it out of dev testing, so it never became a full-blown bug; it wasn't the purpose of the feature branch, so it wasn't discussed in the PR. But a commit message of "Add some cache-busting to iframe" (even something that brief) can do wonders to explain why some oddity exists.
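
Concretely, that lookup is just a blame plus a show (file path and line range here are made up):

    # find the commit that last touched the suspicious line(s)
    git blame -L 120,125 web/iframe.js
    # then read that commit's message and diff
    git show <commit-sha>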


Agree to disagree, I guess, but IME git history is good for low-level detail, not for high-level information. Git history is a poor source for understanding architecture, code organization, and other aspects of the codebase. More often, git commit messages tell me what changed - not why the change was made, or who it impacted, etc.

Reading through git history should be my last resort for figuring something out about the codebase. Important knowledge should be written somewhere current (comments, dev docs, etc). If there is a random value being appended to a URL, I want at least a code comment explaining why, so I don't even have to git blame it. Yes, these sources of knowledge take some effort to maintain, and sure, if I have a close-knit team on a smaller codebase, then git history could suffice.

But larger, long-lived codebases with hundreds of contributors over time? There's just no possible way git history is good enough. I can't ask new team members to read through thousands of commits to onboard and become proficient in the codebase (and certainly not 5x-10x that number of commits if we aren't squashing/rebasing feature branches into main - although maybe now an LLM can explain everything). So I need good internal/dev documentation anyway, and I want useful git history, but I don't care so much about preserving every tiny typo, formatting, or other commit from every past feature branch.

Also, IIRC, with GitHub, when I squash merge via the UI, I get a single squashed commit on main and can rewrite the commit message with all the detail I like. The PR forever retains the commit history of the feature branch from before the squash, so I still have that history if I need it later (I rarely do), and I see no reason to clutter up history on main with the yucky feature-branch commits. And if I tend toward smaller PRs, which is so much nicer for dev velocity anyway, even squashed commits are granular enough for things like bisect, blame, and so on.


Right, but they are referring to configuration on a GitHub repository that can make squash merge automatic for all pull request merges.

e.g. When clicking the big green "Merge pull request" button, it will automatically squash and merge the PR branch in.

So then I don't need to remind contributors, or wait for them, to do a squash merge before merging in their changes. (Or worse, forget to squash merge and then have to fix up main.)
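
If I remember right, the gh CLI can flip the same repo settings, something along these lines (exact flag names may vary by gh version):

    # allow only squash merges for pull requests on this repo
    gh repo edit --enable-squash-merge=true --enable-merge-commit=false --enable-rebase-merge=false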


> Rebasing replays your commits on top of the current main branch, as if you’d just created your branch today. The result is a clean, linear history that’s easier to review and bisect when tracking down bugs.

The article discusses why contributors should rebase their feature branches (pull request branches).

The reason they give is for clean git history on main.

The more important reason is to ensure the PR branch actually works when merged into current main. If I add my change onto main, does it then build, pass all tests, etc? What if my PR branch is old, and new commits have been added to main that I don't have in my PR branch? Then I can merge and break main. That's why you need to update your PR branch to include the newer commits from main (and the "update" could be a rebase, or a merge from main, or possibly something else).
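
Either flavor of "update" looks roughly like this, assuming the default origin/main naming:

    git fetch origin
    # option 1: rebase my PR branch onto current main (linear history)
    git rebase origin/main
    git push --force-with-lease
    # option 2: or merge current main into my PR branch instead
    git merge origin/main
    git push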

The downsides of requiring contributors to rebase their PR branch are that (1) people are confused about rebase and (2) if your repository has many contributors and frequent merges into main, contributors will need to rebase their PR branch frequently, and after each rebase their PR checks need to re-run, which can be time-consuming.

My preference with GitHub is to squash merge into main[1] to keep clean git history on main, and to use a merge queue[2], which effectively creates a temp branch of main+PR, runs your CI checks, and merges the PR into main only if the checks pass on that temp branch. This approach keeps super clean history on main, where every commit references a specific PR number, and, more importantly, it minimizes friction for contributors by reducing frequent PR rebases on large/busy repos. And it ensures main is never broken (as far as your CI checks can catch issues). There's basically no downside for very small repos either.

1. https://docs.github.com/en/repositories/configuring-branches...

2. https://docs.github.com/en/repositories/configuring-branches...


You can still do vendoring with Go modules, using the `go mod vendor` command.
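
Roughly, the workflow is:

    # copy every dependency needed for the build into ./vendor
    go mod vendor
    # Go 1.14+ picks up ./vendor automatically when it exists;
    # -mod=vendor makes that explicit (and is required on older toolchains)
    go build -mod=vendor ./...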


I'm a bit out of date now, but 'go mod vendor' did not in fact vendor all dependencies when I last used it. It seemed more like a CI caching mechanism than actual complete dependency vendoring.


This is not the experience I have. I use vendoring heavily because it's nice to be able to review all the relevant code changes when Renovatebot updates a dependency [1], and to get a feel for how heavy or lightweight certain dependencies are. If vendoring were incomplete, I would see Go trying to download missing dependencies at compile time, but the "go: downloading $MODULE $VERSION" lines only show up when I expect them to, e.g. during "go get -u" or "go mod tidy/vendor".

[1] Before you ask, I'm not reading the full diff on something like x/sys. Mostly on third-party dependencies where I find it harder to judge the reliability of the maintainers.


I'll throw in a +1, afaik vendoring is a complete and reliable solution here.

Go's `mod`-related commands have had quite a large number of breaking changes in behavior over the years, so I won't say it's a stable solution (aside from completely disabling modules, which has always worked fine in my experience). I've had to fix build scripts at least once or twice a year because of that, e.g. when they started validating that the vendor folder was unmodified (a very reasonable thing to do, but still breaking). But once that is dealt with, it has always worked fine.


Yes it does. They don’t have to pay $200k/yr + $50k immediately, and don’t have to spend the time, effort, and money on self-hosting and migrating away.


It solves it for this one client. It doesn't provide any transparency into exactly how this occurred (like you would for, say, a data breach), and it provides no guarantee (only words) that this won't happen again, or that this isn't happening to any other client right at this moment!


This entire article is a strawman. It fails to even understand the basic problem lock files try to address.

Sure, I can deterministically resolve the same X.Y.Z versions of packages according to whatever algorithm I like.

But literally everything is mutable (read: can be compromised). Package managers, bytes flying over the network, build servers - all mutable. A published version of a package can be overwritten. Some package managers allow authors to overwrite existing versions.

That means no guarantee of reproducibility. I build again the next day, and I may not download the same code for the same X.Y.Z version of a package. Or my local cached packages may have changed.

So I need checksums on packages. Then I can prove I have the exact same code in those packages. And if I have checksums on packages, I have a lock file.
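
Which is basically all a lock entry is. A hypothetical sketch (package name, version, and digest are made up):

    # what a package manager effectively records at install time
    sha256sum somepkg-1.2.3.tgz
    # -> <64-hex-digest>  somepkg-1.2.3.tgz
    # a lock file is, at its core, just that mapping written down:
    #   somepkg 1.2.3 sha256:<64-hex-digest>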

> lockfiles are an absolutely unnecessary concept that complicates things without a good reason

What is even the complication? The lock file is autogenerated on a successful build and I commit it. It's trivial to use. It's literally all benefit and no downside.

