
The point is that if people estimate their abilities at random, with no information, the data will still look as though the people who performed worse over-estimated their performance. But that isn't because people who are bad at a thing are any worse at estimating their performance than people who are good at it: both groups may be equally bad at estimating their performance, and one group simply got lucky while the other didn't.
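
To make that concrete, here is a quick simulation sketch (my own, not from the study or the blog post): if self-estimates are drawn completely at random, independent of actual scores, grouping people by actual score still produces the classic "unskilled and unaware" picture.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    actual = rng.uniform(0, 100, n)    # true percentile on the task
    estimate = rng.uniform(0, 100, n)  # self-estimate: pure noise, no self-knowledge

    quartile = np.digitize(actual, [25, 50, 75])  # 0 = bottom quartile, 3 = top
    for q in range(4):
        mask = quartile == q
        print(f"quartile {q + 1}: mean actual {actual[mask].mean():5.1f}, "
              f"mean estimate {estimate[mask].mean():5.1f}")

    # The bottom quartile appears to "over-estimate" by roughly 37 points and
    # the top quartile to "under-estimate" by roughly 37, even though every
    # estimate was generated with zero information about actual performance.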

It would require them to be _even worse than random_ for them to be genuinely worse at estimating their abilities, rather than simply being judged for being bad at the task. It is only human attribution bias that leads us to assume people should already know whether they are good or bad at a task without needing to be told.

The study assumed that results on the task are non-random, that performance is objective, and that people could reasonably have been expected to update their uniform Bayesian priors before the study began.

If any of those are not true, we would still see the same correlation, but it wouldn't mean anything except that people shared a reasonable prior about their likely performance on the task.

People will nevertheless attribute "accurate" estimates to some kind of skill or ability, when the only thing that happened is that someone lucked into an average score, which is exactly where an uninformed guess sits. You could ask people how well they would do at predicting a coin flip, and after the fact it would look like whoever guessed wrong had over-estimated their "ability" and whoever guessed right had under-estimated theirs, even though both estimates were exactly as accurate.

This comment section clearly demonstrates the attribution bias that makes this myth appealing, though. And this blog post shows how difficult it is to explain the implications of Bayesian reasoning without invoking the concept directly.



Consider the original study: they used 45 Cornell undergraduate students and asked them about grammar. Grammar isn't objective. Everyone there had performed well on the verbal portion of the SAT, but they weren't studying grammar and hadn't received instruction in the particular grammar reference they were being judged against. It is very likely that what the "better" or "worse" scores were actually capturing is differences in local dialect.

They then judged people whose beliefs about grammar differed from that one book's as having over-estimated their performance. They took people out of one context, asked them how they would perform in a novel context, and everyone made an educated guess. The people who guessed correctly were judged to accurately know their own abilities, when they may actually have just gotten lucky.

Thus what Dunning and Kruger's paper actually says is that if you want people to know how you would like them to perform a task, you can't assume they will read your mind: you have to give them actual feedback on their performance.



