
In real life, can you choose an experiment perfectly randomly?

You can ask many people to propose hypotheses and choose one at random, and perhaps with a good sample you get better experiments. You can query a Markov chain until it produces an interpretable hypothesis. But the people, and the Markov chain (because of English itself), carry significant bias.

Also, some experiments have wider-reaching implications than others (this is probably more relevant for the Markov chain, because I expect the hypotheses it forms to be like "frogs can learn to skate").
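To make the Markov chain idea concrete, here's a toy sketch (the corpus and the "interpretable" filter are invented for illustration):

    import random

    # Toy corpus; a real chain inherits whatever biases its training
    # text (and English itself) carries.
    corpus = ("frogs can learn to skate . mice can learn to swim . "
              "frogs can learn to swim .").split()
    chain = {}
    for a, b in zip(corpus, corpus[1:]):
        chain.setdefault(a, []).append(b)

    def sample(max_len=8):
        word = random.choice([t for t in chain if t != "."])
        words = [word]
        while len(words) < max_len:
            word = random.choice(chain[word])
            if word == ".":
                break
            words.append(word)
        return " ".join(words)

    def interpretable(s):
        # Stand-in for the human judging the output; a second bias source.
        return "can learn" in s

    hypothesis = sample()
    while not interpretable(hypothesis):  # query until it produces one
        hypothesis = sample()
    print(hypothesis)  # e.g. "frogs can learn to skate"

Both bias sources show up here: the chain can only recombine what's in its corpus, and the filter encodes the judge's expectations.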


Even Go is meeting the same fate, albeit more slowly. It’s nearly impossible to remove a feature once it has seen adoption, especially without an alternative; whereas there are always patterns that are greatly simplified by some new feature, and once a language becomes large, these patterns become common enough that the absence of said feature becomes annoying.

Yes, agreed: we are definitely past peak Go.

Suspiciously, after Rob Pike retired from the project, the amount of language and standard library changes skyrocketed. Many people are now trying to get their thing into the language so they can add it to their list of accomplishments.

Clear evidence that you need someone saying "no" often.


Make the journal app store its data in plain-text Markdown files in an encrypted folder (or ZIP).

If necessary for things like search, add a cache file to the folder.
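For example, roughly (this sketch uses the third-party pyzipper library for AES-encrypted ZIPs; the file names and cache layout are just illustrative):

    import pyzipper  # third-party: pip install pyzipper

    # Each entry is an ordinary Markdown file; the search cache is just
    # another file in the same archive that the app can rebuild anytime.
    entries = {
        "2024-01-15.md": "# Monday\n\nWent for a run.\n",
        ".cache/search-index.json": '{"2024-01-15.md": ["monday", "run"]}',
    }

    with pyzipper.AESZipFile("journal.zip", "w",
                             compression=pyzipper.ZIP_LZMA,
                             encryption=pyzipper.WZ_AES) as zf:
        zf.setpassword(b"hunter2")  # placeholder passphrase
        for name, text in entries.items():
            zf.writestr(name, text)

Any other tool can then open the archive and read the entries as plain Markdown, so losing the app doesn't mean losing the journal.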


Here’s a dumb idea:

Give people the ability to submit a “Show HN” one year in advance. Specifically, the user specifies the title and a short summary, then has to wait at least a year until they can write the remaining description and submit the post. The user can wait more than a year or not submit at all; the delay (and specifying the title/summary beforehand) is so that only projects that have been worked on for over a year are submittable.

Alternatively, this can be a special category of “Show HN” instead of replacing the main thing.
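Mechanically, the two-step flow could be as simple as this sketch (the store and field names are hypothetical):

    from datetime import datetime, timedelta, timezone

    DELAY = timedelta(days=365)
    reservations = {}  # hypothetical store: (user, title) -> record

    def reserve(user, title, summary):
        # Step 1: lock in the title and summary; the clock starts now.
        reservations[(user, title)] = {
            "summary": summary,
            "reserved_at": datetime.now(timezone.utc),
        }

    def submit(user, title, description):
        # Step 2: a year or more later, add the description and post.
        record = reservations[(user, title)]
        if datetime.now(timezone.utc) - record["reserved_at"] < DELAY:
            raise ValueError("reserved less than a year ago")
        return {"title": title, "summary": record["summary"],
                "text": description}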


Makes me think of Taleb's Lindy effect: https://en.wikipedia.org/wiki/Lindy_effect

It's like books. Old but still relevant books are the best books to read.

The tech industry is changing so fast, though. Maybe a year is too much?


If a year is too much, then that implies the project was never interesting enough to be posted anyway. The interesting projects are interesting because of the projects themselves, not the tools used to build them.

If the software is not as exciting because the tools have changed, then it wasn't exciting in itself in the first place.


I'd push back on this and say that the #1 problem with the discourse about AI now (e.g. why I'd almost never upvote a blog post about AI coding) is that it is too focused on 2026-02-17. That is, I couldn't care less about optimizing to pick the best model or agentic workflow, because it's all going to be obsolete in a year.

I am wary of blogs by celebrity software managers such as DHH, Jeff Atwood, Joel Spolsky, and Paul Graham, because they talk as if there were something special about their experience in software development and marketing, except... there isn't.

The same is true for the slop posts about "How I vibe coded X", "How I deal with my anxiety about Y", and "Should I develop my own agentic workflow to do Z?" These aren't really interesting because there isn't anything I can take away from them. Doomscrolling X, you might actually do better: a little aphorism like "once your agent starts going in circles and you find yourself arguing with it, start a new conversation" is much more valuable than an "evaluation" of agents where the author didn't run enough prompts to keep statistics, or a log of a very path-dependent experience they had. At least those celebrity managers developed a product that worked and managed to sell it; the average vibe coder thinks it is sufficient that it almost worked.


If you can get rid of the bad aftertaste (maybe in fried foods it’s better), tilapia is very sustainable and nutritious. It also has very low mercury, lower than cod (https://www.fda.gov/food/environmental-contaminants-food/mer...)

Most tilapia is farmed in ponds in China.

I would suggest exploring all the implications that this brings to the table. Personally, I won't eat what is commonly being sold in the USA.

https://www.lifehack.org/314139/3-alarming-reasons-you-shoul...


Cross-reference. When a site is archived by one client (one that visited it directly), request that a couple of other clients archive it too (clients that didn’t visit it directly, chosen at random, to ensure the same user isn’t controlling all of the copies).
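A minimal sketch of the cross-check (the fetcher names and the sample size k are hypothetical):

    import hashlib
    import random

    def digest(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def cross_check(url, primary_fetch, other_clients, k=2):
        # primary_fetch: the client that visited the site directly.
        # other_clients: fetchers run by unrelated users; k of them are
        # picked at random so one user can't control every copy.
        reference = digest(primary_fetch(url))
        for fetch in random.sample(other_clients, k):
            if digest(fetch(url)) != reference:
                return False  # archives disagree; flag for review
        return True

    # Toy usage: every "client" returns the same bytes, so this verifies.
    clients = [lambda u: b"<html>hello</html>" for _ in range(5)]
    assert cross_check("https://example.com", clients[0], clients[1:])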

I think if a Web of Trust becomes common, it will create a culture shift, and most people won’t be excluded (compared to invite-only spaces today). If you have a public presence, are patient enough, or are a friend or colleague of someone trusted, you can become trusted. With solid provenance, trust doesn’t have to be carefully guarded, because it can be revoked, and the offender’s reputation can be damaged badly enough that it’s hard to regain.

Also, small sites could form webs of trust with each other, trusting and revoking other sites within the larger network in the same manner that people are vouched for or revoked within each site (similar to the town -> state -> country -> world hierarchy); then you only need to gain the trust of an easy group (e.g. one that’s physically local, or a niche hobby you’re an expert in) to gain trust in faraway groups who trust that entire group.

There’s a lot of debate under your linked comment.

My understanding is that people tend to cooperate in smaller numbers or when reputation is persistent (the larger the group, the more reliable reputation has to be); otherwise, the (uncommon) low-trust actors ruin everything.

Most humans are altruistic and trusting by default, but a large enough group will have a few sociopaths and misunderstood interactions, which creates distrust across the entire group, because people hate being taken advantage of.


> Most humans are altruistic and trusting by default ...

... towards an in-group, yes. Not towards out-groups, as far as I can tell.

Though for some reason this tends not to apply to solo travellers in many, many parts of the world.

Lots of debate, yes, but very little about the basic fact that Hardin's formulation of "the tragedy of the commons" doesn't describe actual historical events in pretty much any well-documented case.


Wikipedia (https://en.wikipedia.org/wiki/Tragedy_of_the_commons#Example...) does have global examples where the tragedy of the commons has applied, like mass extinctions and climate change. These are ongoing but have already caused permanent damage.

That said, there are other large-scale examples where the tragedy of the commons has been (practically) avoided: ozone depletion and polio eradication. Wikipedia (https://en.wikipedia.org/wiki/Tragedy_of_the_commons#Non-gov...) also mentions Elinor Ostrom, but her examples involve "smaller numbers".


- There’s a difference. Users don’t see code, only its output. Writing is “the output”.

- A rough equivalent here would be Windows shipping an update that bricks your PC or one of its basic features, which draws plenty of outrage. In both cases, the vendor shipped a critical flaw to production: factual correctness is crucial in journalism, and a quote is one of the worst things to get factually incorrect because it’s so unambiguous (inexcusable) and misrepresents who’s quoted (personal).

I’m 100% ok with journalists using AI as long as their articles are good, which at minimum means being factually correct and not vacuous. Likewise, I’m 100% ok with developers using AI as long as their programs are good, which at minimum means decent UX and no major bugs.


> - There’s a difference. Users don’t see code, only its output. Writing is “the output”.

So how is the "output" checked, then? Part of the case for code review in the first place is that we can't actually empirically test everything we need to. If the software will programmatically delete the entire database next Wednesday, there is no way to test for that in advance. You would have to see it in the code.


Tbf, I'm fine with it only one way around: if a journalist has tonnes of notes and data on a subject and wants help condensing those down into an article, with assistance prioritising which bits of information to present to the reader, then that's totally fine.

If a journalist has little information and uses an llm to make "something from nothing" that's when I take issue because like, what's the point?

Same thing as when I see managers dumping giant "Let's go team!!!11" messages splattered with AI emoji diarrhea like sprinkles on brown frosting. I ain't reading that shit; it could've been a one-liner.


Another good use of an LLM is to find primary sources.

Even an (unreliable) LLM overview can be useful, as long as you check all facts with real sources, because it can give the framing necessary to understand the subject. For example, asking an LLM to explain some terminology that a source is using.


I’d be happy if people stopped linking to paywalled sites in the first place. There’s usually a small blog on the same topic, and ironically the small blogs posted here are better quality.

But otherwise, without an alternative, the entire thread becomes useless. We’d see even more comments from people who never read the article, degrading the site even for people who pay for the articles. I much prefer keeping archive.today to that.

