> So there’s a little live reporting on the situation in the streets.
> I offered no aid.
I just want to say I find this writing style refreshing as it’s a bit out of distribution for typical HN comments. Anyway, thanks for sharing your experience.
I can, though I don't have a specific example on hand to give you right now, and trying to share an exact one would read like a double negative.
The general rule of thumb is to only put what you want in context. If you put instructions about what not to do in context, those tokens can be misunderstood and create unintended/unwanted steering of the model.
A fair example would be testing for positive sentiment. Consider the weight of the tokens appended to context, and phrase instructions or questions to be neutral or positive (a rough sketch follows the list).
e.g. Some phrases and their impact:
- "Is the tone of the user message positive?" will be biased for a false positive.
- "Analyze the tone of the user message?" will be more neutral and less biased.
- "Is the tone of the message negative?" will be biased for false positives when evaluating for negative tone.
> The term references an Internet meme depicting the fallacy using Goombas, which was first posted to Twitter by @supersylvie_ on January 29, 2024.
The history of this term goes back… one year? (from a rather unpopular meme) I’m all for introducing new vocab in English, but it feels like there should already be a term for this.
Funny enough, searching "goomba fallacy" in wikipedia's search yields [association fallacy](https://en.wikipedia.org/wiki/Association_fallacy) and it appears to be more accurate. (Also, what I assume to be semantic search hitting that article from that search is amusing and more than a little telling.)
The population fallacy is when one infers information about an individual from the group, which wasn't done here since there is no specific individual in question. The population fallacy is seeing that some demographic likes to do a thing more than other demographics do and concluding that any given member of that demographic therefore likes to do that thing.
> […] if the Sun were a ping-pong ball, […] the average distance between stars […] is analogous to one ping-pong ball every 3.2 km (2 mi).
Intuitively, this visualization actually makes it seem like stars are pretty close? Usually with galactic dimensions it’s hard for our mere monkey minds to grasp the scales, but this one is actually pretty easy to imagine.
I’m going to try it out. My first feedback is that links in text posts (like this very post) aren’t clickable and the <a href… tags show up as plaintext (iOS). Also, it’s not possible to make new lines in comments on iOS? And I can only see the comment I’m currently typing on a single line. Otherwise, the app looks nice!
I’m surprised this is the only comment mentioning devicePixelRatio. Most of the roasts from DeepSeek seem to involve roasting the window size, but that’s misleading without the DPR.
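To make that concrete, here’s a minimal sketch (assuming a browser environment) of what a page would need to log to avoid reporting only the misleading CSS-pixel window size:

```typescript
// CSS pixels: what innerWidth/innerHeight report, and what a "roast my
// setup" page would otherwise judge you on.
const cssViewport = {
  width: window.innerWidth,
  height: window.innerHeight,
};

// Ratio of physical device pixels to CSS pixels (e.g. 2 on a HiDPI display).
const dpr = window.devicePixelRatio ?? 1;

// Physical pixels actually being rendered.
const physicalViewport = {
  width: Math.round(cssViewport.width * dpr),
  height: Math.round(cssViewport.height * dpr),
};

console.log(`CSS viewport: ${cssViewport.width}x${cssViewport.height}`);
console.log(`devicePixelRatio: ${dpr}`);
console.log(`Physical viewport: ${physicalViewport.width}x${physicalViewport.height}`);
```

Multiplying by the DPR is what separates "small window" from "small screen": a CSS viewport on a 2x display is really twice as many physical pixels in each dimension.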
I was wondering why it was showing mine wrong and why many people's resolution here was surprisingly low. TIL about DPR. Too bad that was the only good roast in my first result.
Refreshing the page gives a new one though, and it's done a pretty good job now!
I ended up looking for the link, then clicking what appeared to be the first link, “Via Jason Snell”. On that page the link to the tournament is also the header (which I did not notice). The last paragraph on that page had a link to the tournament, and that’s what I ended up clicking. I’m glad I’m not the only one.