If you ask it about things that require deduction, like math, it confidently gets even simple questions wrong (multiplying binomials, solving a quadratic), and even if you correct it, it often still gets them wrong.
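To make that concrete, here is the kind of problem at issue, worked by hand (my own example, not one from the thread):

    (x + 2)(x + 3) = x^2 + 5x + 6
    x^2 + 5x + 6 = 0  =>  x = (-5 ± sqrt(25 - 24))/2, i.e. x = -2 or x = -3

A CAS gets this right every time; the complaint is that the chatbot often doesn't.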

It’s not even close to something like Wolfram Alpha.

I think we’re blown away more by its command of language and prose than by its reasoning ability. It’s fantastic at generation, but as with Stable Diffusion, things can fit together and look beautiful yet still not be what you asked for.



Sure. But if you combine the understanding that this chatbot has with a Wolfram Alpha backend, you could build an even more amazing system. I'm sure someone is working on hooking up language models to math backends (anywhere from a simple calculator to Wolfram Alpha).
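The hookup doesn’t have to be fancy to help. A minimal sketch of the idea, using sympy as a stand-in for the Wolfram Alpha backend; call_llm and the parse-or-fall-back routing heuristic are invented here for illustration, not anyone’s actual system:

    import sympy as sp

    def call_llm(prompt: str) -> str:
        # Placeholder for a real chat-model API call.
        return f"(model's free-text answer to {prompt!r})"

    def answer(query: str) -> str:
        # If the query parses as a symbolic expression, compute it
        # exactly with the CAS; otherwise fall back to the model.
        # (Note: sympify eval's its input, so don't feed it untrusted text.)
        try:
            expr = sp.sympify(query)
        except (sp.SympifyError, SyntaxError, TypeError):
            return call_llm(query)
        return str(sp.expand(expr))

    print(answer("(x + 2)*(x + 3)"))    # -> x**2 + 5*x + 6, exact every time
    print(answer("Who wrote Hamlet?"))  # -> falls through to the model

A real system would presumably have the model decide when to emit a tool call rather than keying off whether the string parses, but the division of labor is the same: the model handles language, the backend handles exact computation.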


DeepMind published a system that does something like this with a backend theorem prover a year ago. My point is, I don’t think transformer-based text-prediction systems are the right model here. I could be wrong, but if you think about how formal systems work, they seem a far cry from what decoder architectures are doing.

https://www.nature.com/articles/s41586-021-04086-x



