echollama's comments | Hacker News

Agreed, it's like saying that jerking off makes you healthier.


Wait, it doesn't? /s


It's actually healthy to jerk off at least 20 times per month, or so I read.


I fixed this.


Reasoning models know when they're close to hallucinating because they lack context or understanding, and they know they could resolve this with a question.

This is a streamlined implementation of a tool I scraped together internally and decided to open-source for people to either use or build off of.


> Reasoning models know when they're close to hallucinating because they lack context or understanding, and they know they could resolve this with a question.

I’m interested. Where can I read more about this?


> Reasoning models know when they're close to hallucinating because they lack context or understanding, and they know they could resolve this with a question

You've just described AGI.

If this were possible, you could create an MCP server with a continually updated FAQ of everything the model doesn't know.

Over time it would learn everything.


Unless there is as yet insufficient data for meaningful answer.



This is mainly meant as a way to conversate with the model while you are programming with it. It's not meant to push questions out to a team, but more to pair program. A markdown file is best for syntax in an LLM prompt and is also just the easiest thing to have open to answer questions in. If I had more time I would build an extension for Cursor.
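Roughly, the mechanism boils down to something like this minimal sketch (assuming the official `mcp` Python SDK; the tool and file names here are hypothetical, and this is not the actual ask-human-mcp source): the tool appends the model's question to a markdown file and blocks until a human types an answer under it.

    import time
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

    QUESTIONS = Path("questions.md")  # hypothetical file the human keeps open
    mcp = FastMCP("ask-human")        # hypothetical server name

    @mcp.tool()
    def ask_human(question: str) -> str:
        """Append a question to the markdown file and wait for a human answer."""
        existing = QUESTIONS.read_text() if QUESTIONS.exists() else ""
        QUESTIONS.write_text(existing + f"\n## Q: {question}\n\n**A:** \n")
        while True:
            # Poll for whatever the human typed after the last answer marker.
            # The tool call simply blocks until something non-empty shows up.
            answer = QUESTIONS.read_text().rsplit("**A:** ", 1)[-1].strip()
            if answer:
                return answer
            time.sleep(2)

    if __name__ == "__main__":
        mcp.run()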


Why not have the model ask in the chat? It's a lot easier to just talk to it than to open a file. The article mentions Cursor, so it sounds like you're already using Cursor?


Because I only have 500 requests in my Cursor usage plan, so if there's a way for Claude to ask me questions (e.g. for missing context) without it using up an entire new request, I'll take it. Haven't tried it yet but I'm looking forward to it.


That would probably work better; this is just how I threw it together as an internal tool a while ago. I just cleaned it up and shipped it to open-source it.


Conversate is not a word.


Yes, it is.


It is not.


The word "conversate" is in the dictionary [1], labelled as "non-standard". That doesn't mean it's not a word. Most people would be able to easily infer its meaning.

English is a living language, words come and go.

I don't understand the objection.

[1] https://www.merriam-webster.com/dictionary/conversate


Because it is either laziness, ignorance, or AI slop.


I wouldn't agree that using an understandable word that's in the Merriam-Webster dictionary is lazy or ignorant. Nor would I call something AI slop because of a single word, without otherwise engaging with the content.

I do genuinely wonder why some people can be so derailed by odd or unfamiliar words and grammar. Are they stressed? Not wanting to engage with a conversation? Trying to assert status or intellectual superiority? Being aggressive and domineering to assuage their self-worth? Perhaps they feel threatened by cultural change? I assume it has something to do with emotional regulation, given that I can't recall bumping into too many mature people who do such things.

Might you have any insight to offer?


The reasoning aspect of most LLMs these days knows when it's unsure or stuck; you can see that in its thinking tokens. The model will see this MCP and call it when it's in that state. It could still benefit from a rules file telling it to use the tool, although Cursor doesn't reliably follow ask-for-help rules, which is why I built this.
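For example, a rules file along these lines might nudge the model toward the tool (the wording is hypothetical, not something shipped with the tool, and it assumes an ask_human tool name on the server):

    # .cursorrules (hypothetical example)
    When you are unsure, missing context, or about to guess at
    project-specific details, do not guess. Call the ask_human tool
    from the ask-human MCP server and wait for the human's answer
    before continuing.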


Does all thinking end up getting replaced by calls to Ask-human-mcp then? Or only thinking that exceeds some limit (and how do you express that limit)?


Garbled/half-hallucinated is probably what you would've gotten 8-12 months ago, but nowadays I'm sure that with good prompting you can pull value from any book.



I usually just have Cursor chat open: I code some function, get Cursor to do a code review of it, look for errors, and do some chain-of-thought reasoning with o1-mini on it and its related functions. Then I run the tests and put the log output through Cursor to identify bug fixes. I use LLMs almost as a pair programmer, as well as something to check my work as I'm working.

