Suppose you make 1000 people take a test. Suppose all 1000 of these people are utterly incapable of evaluating themselves, so they just estimate their grade as a uniform random variable between 0 and 100, with an average of 50.
You plot the actual grades of each of the four quartiles, and they show a linear increase, as expected. Say the bottom quartile averaged 20 and the top averaged 80. But the average estimated grade for each quartile is 50. Therefore, people who didn't do well ended up overestimating their score, while people who did well underestimated it.
In reality, nobody had any clue how to estimate their own success. Yet we see the Dunning-Kruger effect in the plot.
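The thought experiment above is easy to run directly. A minimal sketch (the seed, sample size, and exact quartile averages are arbitrary choices, not from the original argument): actual grades and self-estimates are drawn independently, yet grouping by actual-grade quartile still produces the classic DK plot, because every quartile's estimates average around 50.

```python
import numpy as np

# 1000 test-takers whose self-estimates are pure noise,
# statistically independent of their actual scores.
rng = np.random.default_rng(0)
n = 1000

actual = rng.uniform(0, 100, n)    # actual test grades
estimate = rng.uniform(0, 100, n)  # self-estimates: uniform noise, mean ~50

# Split into quartiles by actual grade and average each group.
order = np.argsort(actual)
for q in range(4):
    idx = order[q * n // 4 : (q + 1) * n // 4]
    print(f"Quartile {q + 1}: "
          f"actual ~ {actual[idx].mean():5.1f}, "
          f"estimated ~ {estimate[idx].mean():5.1f}")
```

The actual averages climb across quartiles while the estimated averages all hover near 50, so the bottom quartile "overestimates" and the top quartile "underestimates" despite nobody having any self-insight at all.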
That's the way I understand the statistical analysis, and in my view this exactly supports (not contradicts) DK:
> In reality, nobody had any clue how to estimate their own success.
Wouldn't that mean unskilled people tend to overestimate their skill, and experts tend to underestimate it? Why is there a contradiction with DK's conclusions?
> Wouldn't that mean unskilled people tend to overestimate their skill, and experts tend to underestimate it?
I think it's because the original paper speculates far beyond that observation:
> The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.
The argument about autocorrelation says this "dual burden" doesn't need to be there to observe the effect.
Again, not in my reading. In the random-data thought experiment, everybody (experts and unskilled alike) suffers from the burden of not having the skill to estimate their own performance. The author is even surprised that the DK effect in the random data is bigger than the one observed in the DK experiment ("In fact, as Figure 9 shows, our effect is even bigger than the original") - but that's because in reality, people do have some ability to estimate their own skill. So the claim that lack of skill is related to lack of ability to self-evaluate does make sense, or at least isn't contradicted by the experiment.
Someone whose skill is 0 cannot underestimate it, and someone whose skill is 100 cannot overestimate it. So the framing of the question alone nudges participants to estimate their own skill closer to the average.
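That boundary effect can be illustrated with a quick simulation (a hypothetical sketch; the noise level of 20 points is an arbitrary assumption): even when self-estimates do track true skill, clamping them to the 0-100 scale forces the lowest-skill group upward and the highest-skill group downward on average.

```python
import numpy as np

# Self-estimates track true skill with Gaussian noise, but the answer
# scale itself forbids values below 0 or above 100.
rng = np.random.default_rng(1)
skill = rng.uniform(0, 100, 100_000)
noise = rng.normal(0, 20, skill.size)
estimate = np.clip(skill + noise, 0, 100)

low = skill < 10   # near the floor, the clamp cuts off underestimates
high = skill > 90  # near the ceiling, the clamp cuts off overestimates
print((estimate[low] - skill[low]).mean())    # positive: forced overestimation
print((estimate[high] - skill[high]).mean())  # negative: forced underestimation
```

The mean error is positive at the bottom and negative at the top purely because of the bounded scale, even though the underlying noise is symmetric.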