The author of this piece assumes his conclusion in order to decide how to analyze his data.
He cannot reasonably say both:
> we have a decision to make: what are we going to assume? How are we going to quantify our surprise from the results?
> The first option is, as in the case of the state census, to assume dependence between X and Y. I.e. to assume that, generally, people are capable of self-assessing their performance.
> The second option conforms with the Research Methods 101 rule-of-thumb “always assume independence.” Until proven otherwise, we should assume people have no ability to self-assess their performance.
> It seems to me glaringly obvious that the first option is much, much more reasonable than the second.
and
> most notably the claim that the more skilled people are, the better they are at self-assessing their performance. This result is supported by their plot, but in any case, my issue is not with objections to this claim
and then expect to carry any credibility.
The author of this piece both suggests that a key variable is fixed and later admits it varies within the same dataset.
I guess at least they admit it, but this lacks basic self-consistency.
I'm utterly confused. The latter statement is just the author explaining which parts they didn't discuss in their article; it has no bearing whatsoever on the section before it.
It reveals the cognitive dissonance in his position. He seems to be saying “skill at assessing ability is random and only mathematically bounded” while also admitting “skill at assessing ability changes with ability.”
> The author of this piece both suggests that a key variable is fixed and later admits it varies within the same dataset.
I don't see how that variable changes. Here is an example of how the error variable can be exactly the same for everyone and still reproduce the results:
Let's say the overconfidence is always that you feel 50% of those better than you are actually worse than you. So everyone is equally overconfident; it's just that the top won't move their own placings as much as the bottom, since there are far fewer people they can mistake for being worse than them. Then apply noise to this and you get the graph Dunning-Kruger got.
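Here is a minimal simulation sketch of that model. The parameters are my own assumptions, not from the original paper: uniform skill, the "half of those above you count as below you" bias, and Gaussian noise with an arbitrary 0.3 standard deviation clipped into the valid percentile range.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True percentile of each person (0 = worst, 1 = best), drawn uniformly.
true_pct = rng.uniform(0.0, 1.0, n)

# Same bias for everyone: believe that half of the people who are actually
# better than you are worse than you.
perceived = true_pct + 0.5 * (1.0 - true_pct)

# Add noise and clip back into the valid percentile range.
perceived = np.clip(perceived + rng.normal(0.0, 0.3, n), 0.0, 1.0)

# Average actual vs. perceived percentile per true-skill quartile,
# mirroring the Dunning-Kruger style plot.
quartile = np.digitize(true_pct, [0.25, 0.5, 0.75])
for q in range(4):
    mask = quartile == q
    print(f"quartile {q + 1}: actual {true_pct[mask].mean():.2f}, "
          f"perceived {perceived[mask].mean():.2f}")
```

In runs of this sketch the bottom quartile reports a much higher average percentile than it actually holds, while the top quartile barely moves and, because the noise near the ceiling can only be clipped downward, can even land slightly below its true rank. Everyone applies exactly the same rule.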
You could say, "But they are better at estimating their rank!", but that is just a mathematical artefact, not a psychological result. Even if everyone always guessed that they were number 1, the better you are, the better your guess would be; but in that case it is easy to see that everyone overestimates their skill in the same way, rather than the better people having a fundamentally different way of evaluating themselves.
Both analyses seem to agree on one finding: people’s skill at estimating their own ability increases with their actual ability. It can’t be a purely mathematical artifact, because then you would see a tapering at either end, or a narrowing distribution of errors at the bottom end as well, not just a narrowing toward the top end.
This should be unsurprising to anyone who has become sufficiently skilled at something. Beginners can’t even discern the differences the experts are discussing, and frequently make whole classes of errors they don’t even understand.
Beginners, by definition, are guessing 100%. Some will guess high, others low, and the rest in between. But they are all guessing. Perhaps there's a cultural bias to over-estimate their skill? Perhaps there's a nudge in the process of the study that led them to overestimate?
The lede isn't that people over-estimate their skill level. The lede is: why would that be, when they have nothing else to go on? What is the trigger, or triggers? And to say the more experienced estimate better? Well, duh.
> Let's say the overconfidence is always that you feel 50% of those better than you are actually worse than you. So everyone is equally overconfident; it's just that the top won't move their own placings as much as the bottom, since there are far fewer people they can mistake for being worse than them. Then apply noise to this and you get the graph Dunning-Kruger got.
But the data of the original D-K paper shows that the top 25% of people underestimate their placings. So this whole paragraph, while logically true, has little to do with the original D-K effect.
> You could say, "But they are better at estimating their rank!", but that is just a mathematical artefact, not a psychological result. Even if everyone always guessed that they were number 1...
If everyone always guessed that they were number 1, that would be a huge psychological result: it would mean people are extremely irrational when it comes to self-evaluation.
> But the data of the original D-K paper shows that the top 25% of people underestimate their placings. So this whole paragraph, while logically true, has little to do with the original D-K effect.
That is what you would expect under my model, because for the high placings the randomness is limited upwards but can still go downwards. That is the effect the article we are talking about refers to when it says "Autocorrelation".
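A quick numeric check of that ceiling argument, again assuming an arbitrary 0.3-SD Gaussian noise term clipped into the [0, 1] rank range:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.3, 1_000_000)

# Near the top of the scale the clipping is one-sided: noise can push the
# perceived rank down, but never above 1, so the average lands below the
# true rank. In the middle of the scale the clipping is roughly symmetric.
for true_rank in (0.5, 0.9, 0.95):
    perceived = np.clip(true_rank + noise, 0.0, 1.0)
    print(f"true rank {true_rank:.2f} -> mean perceived {perceived.mean():.3f}")
```

The mid-scale rank comes out essentially unchanged, while the ranks near the ceiling come out below their true value, which is consistent with the underestimation the top 25% show in the original data.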