We need you to stop posting shallow dismissals and cynical, curmudgeonly, and snarky comments.
We asked you about this just recently, but it's still most of what you're posting. You're making the site worse by doing this, right at a time when it's particularly vulnerable.
Your comment here is a shallow dismissal of exactly the type the HN guidelines ask users to avoid:
Predictably, it led to by far the worst subthread on this article. That's not cool. I don't want to ban you because you're also occasionally posting good comments that don't fit these negative categories, but we need you to fix this and stop degrading the threads.
I respect your awareness of that, which I'm sure is much broader than mine is. HN consumes my attention; I'm all depth and no breadth.
What I'm interested in is how well HN does at fulfilling its own mandate in its own terms. On that scale, it's getting worse—in this respect, at least, which is a big one. We're going to do something about it, the same way we've always tried to stave off the decline of this place (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
I think it's both external and internal, but the internal factors are more important, because a compromised immune system is more vulnerable to external pathogens.
I'm not sure it's about HN getting larger, though. It's a bit hard to tell, but at least some of the upswing in cynical and curmudgeonly comments is coming from established users.
I'd rather HN become a much worse place than the world suffer through massive AI wealth theft, the BIG LIE that will convince elites to kill millions of people.
Obviously it's our job to ban accounts that make HN a much worse place, but I'm more curious to understand your thinking here.
What's the connection between these two things? They don't seem related to me. How would making HN worse contribute to alleviating world suffering or saving millions of people?
Whether it's done by a human or a computer, it is usually much easier (and requires far fewer resources) to verify a specific proof than to search for a proof in the first place.
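To make the asymmetry concrete, here's a minimal Lean 4 sketch (comm_example is just an illustrative name; Nat.add_comm is in core):

    -- Verification: the kernel mechanically type-checks the supplied
    -- proof term, which is cheap.
    -- Search: finding a term of this type at all is the expensive part.
    theorem comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b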
Professors elsewhere can verify the proof, but not how it was obtained. My assumption was that the focus here is on how "AI" obtains the proof and not on whether it is correct. There is no way to reproduce this experiment in an unbiased, non-corporate, academic setting.
It seems to me that, in your view, mere openness to evaluating LLM use, anecdotally or otherwise, is already a bias.
I don't see how that's sensible, given that to evaluate the utility of something, it's necessary to accept the possibility of that utility existing in the first place.
On the other hand, if this is not just me strawmanning you, your rejection of such a possibility is absolutely a bias, and it inhibits exploration.
Willfully conflating "I find such exploration illegitimate" with "the findings of someone who thinks otherwise are illegitimate" strikes me as extremely deceptive. I don't much appreciate having someone else's opinion covertly laundered into my thinking. And no, Tao's comments do not meet this same criterion, as his position is not covert but explicit.
> ... Also, I would not put it past OpenAI to drag up a similar proof using ChatGPT, refine it and pretend that ChatGPT found it. ...
That's the best part! They don't even need to, because ChatGPT will happily do its own private "literature search" and then not tell you about it - even Terence Tao has freely admitted as much in his previous comments on the topic. So we can at least afford to be a bit less curmudgeonly and cynical about that specific dynamic: we've literally seen it happen.
> ChatGPT will happily do its own private "literature search" and then not tell you about it
Also known as model inference. This is not something "private" or secret [*]. AI models are lossily compressed data stores and always will be. The model doesn't report on such "searches" because they are not actual searches driven by model output, but just the regular operation of the model under the inference engine used.
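To illustrate the distinction, a toy sketch (all names and data below are made up, not any real model's API):

    # Implicit recall during inference vs. an explicit, loggable search.
    TRAINING_DATA = {"erdos problem": "a 1960s proof sketch"}  # stands in for the weights

    def plain_inference(prompt: str) -> str:
        # "Recall" happens inside generation itself; there is no search
        # event for the model to report, even in principle.
        return TRAINING_DATA.get(prompt, "novel-looking output")

    search_log: list[str] = []

    def tool_driven_search(prompt: str) -> str:
        # An agent that explicitly searches performs a discrete action
        # that can be logged and disclosed.
        search_log.append(f"searched literature for: {prompt}")
        return TRAINING_DATA.get(prompt, "not found")

    print(plain_inference("erdos problem"))             # no trace of provenance
    print(tool_driven_search("erdos problem"), search_log)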
> even Terence Tao has freely admitted as much
Bit of a (willfully?) misleading way of saying they actively looked for it on a best-effort basis, isn't it?
[*] A valid point of criticism would be that the training data is kept private for the proprietary models Tao and co. are using, so source-finding becomes a wild goose chase with no definitive end to it.
A counterpoint I think is valid, however, is that if locating such literature is so difficult for subject-matter experts, then the model being able to "do so" is in itself a demonstration of value, even if the model is unable to venture a backreference, by virtue of that not being an actual search.
This is reflected in many other walks of life too. One of my long-held ideas about UX, for example, is that features users are not able to find "do not exist".
It genuinely seemed to me that they were looking for empirical reproductions of a formal proof, which is a nonsensical demand and objection given what formal proofs are. My question was spurred by this, and it was genuine.
It may be that there wasn't enough information in your comment for me to read its intent correctly. I thought you were taking a snarky swipe at the other commenter—especially because most people on HN can be presumed to know what a formal proof is.
Thank you, I'll try to keep it in mind. I'll admit that the curtness of my original question wasn't just you misreading it, but it did (also) come from a place of genuine confusion.
For what it's worth, it's not even that I don't see merit in their points. I'm just unable to trust that they're being genuine, not least because of how they conduct themselves (which I only fault them for so much). This also impairs my ability to reason clearly about their points.
Sadly, I'm not able to pitch any systematic solutions.
If you don't stop, we're going to ban you. As I said, I don't want to—but when you respond to requests to stop breaking the site guidelines by breaking them again, that's not good.
I get how it's activating and annoying when moderators show up and start fault-finding, so I can appreciate the irritation here. But really, we're just trying to have an internet forum that doesn't destroy itself. I can't imagine why you wouldn't want to contribute positively to that.
What scientific field do you reckon the regular usage of LLMs falls under? Do you genuinely think Tao was making scientific claims, or just providing evidence that may eventually feed into some? It reads to me like a plain recollection of events, an anecdotal experience.