Hacker News | zakkor1's comments

> Fix curly brace tokens to the beginning

Regular LLMs can already do this, by prefilling the start of the assistant's response.
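A minimal sketch of why prefilling works: the decoder's output buffer is seeded with the prefill, so every generated token can only continue it. The `generate` loop and `toy_step` "model" below are stand-ins for illustration, not any real API.

```python
def generate(model_step, prompt, prefill="", max_tokens=32):
    """Decode loop where the response buffer starts as `prefill`.

    The model can only append to the prefill; it can never "un-say" it,
    which is why prefilling the assistant turn (e.g. with "{") forces
    the reply to begin that way.
    """
    out = prefill
    for _ in range(max_tokens):
        tok = model_step(prompt, out)
        if tok is None:  # model signals end of response
            break
        out += tok
    return out

# Stand-in "model" that continues toward a fixed JSON object,
# one character per step. (Purely illustrative.)
def toy_step(prompt, so_far):
    target = '{"answer": 42}'
    if target.startswith(so_far) and len(so_far) < len(target):
        return target[len(so_far)]
    return None

print(generate(toy_step, "Reply with JSON.", prefill="{"))  # → {"answer": 42}
```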

But there is actually something even better: you can constrain the LLM's output to a specific grammar (like JSON), so it'll only be able to answer with syntactically valid JSON.


Yes. And you can have a grammar parser only select from valid tokens in a randomized distribution. But, this feels much more sophisticated to me, especially if you can mix specific token-based grammar requirements with other instructions during the token selection phase.
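A toy sketch of that token-selection phase: mask out every token the grammar forbids, renormalize, and sample among what's left. The "grammar" here is a hard-coded token sequence purely for illustration; real implementations (e.g. llama.cpp's GBNF grammars or the Outlines library) derive the allowed-token set from a parser state at each step.

```python
import math
import random

VOCAB = ["{", '"k"', ": ", "1", "2", "}", "oops"]

def valid_next(n):
    # Tiny hand-written "grammar": the skeleton of {"k": <digit>}.
    # At step 3 the grammar allows a choice, so sampling still happens.
    steps = [["{"], ['"k"'], [": "], ["1", "2"], ["}"]]
    return steps[n] if n < len(steps) else []

def constrained_sample(logits, allowed):
    # Mask forbidden tokens, renormalize, sample among the survivors.
    weights = [math.exp(logits[VOCAB.index(t)]) for t in allowed]
    return random.choices(allowed, weights=weights, k=1)[0]

def decode(fake_logits):
    out = []
    while True:
        allowed = valid_next(len(out))
        if not allowed:
            break
        out.append(constrained_sample(fake_logits, allowed))
    return "".join(out)

# Even if the model loves "oops" (by far the highest logit),
# the grammar mask wins: output is always syntactically valid.
fake = [0.0] * 6 + [10.0]
print(decode(fake))  # prints {"k": 1} or {"k": 2}, never "oops"
```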


That's a bug in the JS interpreter they're running, not anything related to Clojure. Also, that's not a syntax error.


Good thing Go has a race detector!


and C++ doesn't?


The point is, you got the phrase you Googled ("low platelet count increase wbc") from GPT-4. I had no idea what the test results meant or how to interpret them. I was trying to do SOMETHING to improve my dog's chances, and ChatGPT was the most accessible tool I could use for this.

Sure, if you know exactly what you want to google for, you can google it. But that requires you to interpret all the information and piece together a theory yourself, whereas with GPT I just stated all the facts and received accurate (in this case) info back.


That’s not the case. After going to the first vet, you can search for causes of anemia in dogs without uploading test results or giving any further details and the first hit is as good as GPT’s.

I have seen some good use cases for GPT but this one in particular (the Tweet) is not a good example.


Sorry, but this is just not true.

1) She was already correctly diagnosed with the first issue (babesiosis), which causes anemia.

2) As a result of the babesiosis, she developed a secondary complication (immune disorder), which worsened the anemia. Note that this can also occur as a standalone disease, which is actually the google result you're getting.

3) You would have had to google "secondary complications causing anemia as a result of canine babesiosis", and at this point, google stops helping you. Not that I would have known to google that anyway.


I searched "dog anemia" and got this as the first result:

https://vcahospitals.com/know-your-pet/anemia-in-dogs

Right there on the page: IMHA. If you add just a couple keywords from the large input, you likewise get more direct IMHA results.


It also lists cancer, parasites and a whole bunch of other stuff.

As a guy who knows nothing about dogs, I found ChatGPT zeroed in on this 10x better than that very long VCA page.


ChatGPT lists multiple things too.

Also this ignores my original comment where I just typed the keywords from the vet notes and got IMHA right on the first result.

That'd be about 100x faster using Google.


I’m afraid what’s happening here is what’s called “confirmation bias”.

Here [1] it’s actually the author who rules that out himself based on prior knowledge and the fact that the situation didn’t get better after the first diagnosis and subsequent treatment. Looking for a secondary cause/explanation is what drives him to ask the question in the first place. GPT says here are some “general information on…”.

[1] https://twitter.com/peakcooper/status/1639716836911489025?s=...


[flagged]


> Sorry but "dog anemia" also suggests it on the first result, but enjoy gaining the cultish following you seek.

Please don't be an asshole on HN, regardless of how right you are or feel you are. Being right (assuming you are) actually makes it worse.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Tweet author here. First vet was incompetent. Second vet was highly skilled and very professional; I am 100% confident they would have made the correct diagnosis regardless.

If we had continued staying with the first vet without getting a second opinion, she definitely would have died. On the other hand, if the first vet had had access to specialized AI medical diagnostics tools, she wouldn't have died. I think that's the key takeaway here.

I think it's also worth pointing out this was less about GPT making a particular diagnosis, and more about it swaying me in the right direction (i.e. away from the bad doctor's advice, and towards getting a second opinion / trying something else). When the first vet said "welp, don't know what else to do, we gave her the treatment and there's nothing else it can possibly be, so we're just going to monitor her and see how things go", I stayed up all night asking GPT-4 various things, and no matter which way I sliced it, it was obvious something didn't add up.


Question is, would an incompetent, overworked person take the time, and have the humility, to actually ask the AI? You did, because you cared. With incompetent or overworked people, the problem is often that they just don’t care, or have no capacity to care. Being able to ask the proper question is almost always 80% of the solution. In your case, the first vet, it seems to me, wasn’t even asking the question.


Indeed, this is a call to fix the process by requiring or strongly encouraging consulting the big electronic brain.

Historically (1800s), doctors did not feel the need to wash their hands between patients, leading to poor outcomes and death. It had to be mandated, leading to improved outcomes. Same thing.

https://www.nationalgeographic.com/history/article/handwashi...


People should be getting a 2nd opinion regardless. Too many people don’t shop around with their health care providers enough and then say things like “GPs are useless”, “physiotherapists are useless”.

Some are bad, some burn out, some are overworked, just like in all professions.

