> The best are not that much more productive than merely good (on routine tasks).
> There just aren't that many "best performers" available.
> The real world, of course, is far more complex than the limited, controlled study that we conducted.
I think this pretty much sums up why I don't like this article.
They had some real-world data, but didn't really draw any interesting conclusions from it. Obviously nobody is going to be good at everything, and the people who come close are rare. Individual programmer productivity is a very complex issue, and studies like this are too abstract to support any meaningful conclusions.
This is one of those cases where it's easier to reason with common sense, because the subject is easy for a human mind to break down but hard to turn into a scientific theory. We know certain people are better at their jobs, maybe even 10x the average programmer in specific situations. But it might also just be that the comparison is unfair, or that the person is good at one specific thing and bad at another; these things have to be evaluated on a case-by-case basis.
People have gotten way too obsessed with the term as a concept, and talking about 10xers doesn't help because the discussion happens at such a high level that it's vague and meaningless. It always devolves into arguments about what a 10xer even is, whether someone could be a 0.1xer, and so on...
The weird thing about the article is that figure 3 clearly shows some programmers with consistently excellent performance, yet it only gives the top performers' ranks, not their relative performance. Presumably we're meant to infer that from table 1, but that only shows relative performance for the top 5%.