> Some scholars observe that Fig. 5.2 looks like a regression effect, and then claim that this constitutes a complete explanation for the Dunning–Kruger phenomenon. What these critics miss, however, is that just dismissing the Dunning–Kruger effect as a regression effect is not so much explaining the phenomenon as it is merely relabeling it. What one has to do is to go further to elucidate why perception and reality of performance are associated so imperfectly. Why is the relation so regressive? What drives such a disconnect for top and bottom performers between what they think they have achieved and what they actually have? [...] As can be seen in the figure, correcting for measurement unreliability has only a negligible impact on the degree to which bottom performers overestimate their performance (see also Kruger & Dunning, 2002). The phenomenon remains largely intact.
The DK effect says roughly, "low performers tend to overestimate their abilities." Yet when researchers reanalyzed the data, they found that high and low performers overestimate and underestimate with the same frequency. [0] It's just that high performers are more accurate than low performers (note how this statement differs from the DK effect). Since the "X graph" can be fully reproduced by random noise combined with a ceiling effect, and since beginners' self-evaluations are noisier than experts', you don't even need regression to the mean to explain why the "X graph" appears.
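To make that concrete, here is a minimal simulation sketch of the noise-plus-ceiling argument. The specific numbers (uniform true skill on 0–100, the noise standard deviations, the clipping to a 0–100 scale) are illustrative assumptions, not parameters from Nuhfer et al.; the only ingredients are independent noise in both the test score and the self-estimate, more self-estimate noise at low skill, and a bounded scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed setup: true skill uniform on 0-100 (illustrative, not from the paper's data).
true_skill = rng.uniform(0, 100, n)

# Self-estimates are assumed noisier for low performers; both measures are
# clipped to the 0-100 scale, which produces floor/ceiling compression.
self_sd = 25 - 0.15 * true_skill           # SD ~25 at skill 0, ~10 at skill 100
test_sd = 10.0
self_est   = np.clip(true_skill + rng.normal(0, self_sd, n), 0, 100)
test_score = np.clip(true_skill + rng.normal(0, test_sd, n), 0, 100)

# Bin by measured test-score quartile, as in the classic "X graph".
quartile = np.digitize(test_score, np.quantile(test_score, [0.25, 0.5, 0.75]))
for q in range(4):
    mask = quartile == q
    print(f"Q{q + 1}: mean test = {test_score[mask].mean():5.1f}, "
          f"mean self-estimate = {self_est[mask].mean():5.1f}")
```

Even though no systematic self-deception is built in, the bottom quartile's mean self-estimate comes out well above its mean test score and the top quartile's comes out below it, simply because binning on a noisy measure pulls the extreme bins toward the mean and the bounded scale compresses estimates at the ends.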
0. Nuhfer, Edward, Steven Fleisher, Christopher Cogan, Karl Wirth, and Eric Gaze. "How Random Noise and a Graphical Convention Subverted Behavioral Scientists' Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives." Numeracy 10, Iss. 1 (2017): Article 4. DOI: http://dx.doi.org/10.5038/1936-4660.10.1.4