Hacker News

I feel like that blog post was almost just ragebait for AI researchers. It alternates between calling the missing +1 an error (which to me implies it would improve training loss, which it doesn't really: https://news.ycombinator.com/item?id=36854613) and saying it could possibly help with some types of quantization (which may well be true but is a much weaker claim), and the author provides basically no evidence for either.
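For context, the post's proposal (sometimes called "softmax1" or "quiet softmax") just adds 1 to the softmax denominator so an attention head can assign near-zero weight to every position instead of being forced to sum to 1. A minimal NumPy sketch of the idea, as I understand it:

```python
import numpy as np

def softmax(x):
    # standard softmax: weights always sum to exactly 1
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax1(x):
    # the proposed variant: exp(x_i) / (1 + sum_j exp(x_j)),
    # written with max-subtraction for numerical stability;
    # when every logit is very negative, all weights -> 0
    m = np.max(x)
    e = np.exp(x - m)
    return e / (e.sum() + np.exp(-m))

logits = np.array([-10.0, -10.0, -10.0])
print(softmax(logits).sum())   # exactly 1 -- must attend to something
print(softmax1(logits).sum())  # tiny -- can "attend to nothing"
```

The claimed benefit is about the statistics of the outputs (fewer extreme activations, hence friendlier to quantization), not about training loss, which is why calling the standard version an "error" is the contested part.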


It's the stereotypical computer scientist who thinks they know something others don't and doesn't feel the need to prove the claim, especially when it disagrees with experts. And unsurprisingly, it's something others have already investigated and even written about. Definitely not all CS people, but it's a stereotype many other fields hold.

I know he's an economist, btw. I was also surprised he got a job at Anthropic a few months later; I wonder if the two are related.



