> What I'm trying to say is the current coding interview is a really poor mechanism to gauge a software engineer.
People like to say this, but in my experience it's not true. It's just that people misunderstand the goal of technical interviews, and they're often poor at evaluating their own skills.
First off, these giant tech companies have enormous economic incentives to improve their interview processes as much as possible. They also do a pretty rigorous assessment of the effectiveness of their interview process (Google, for example, has publicized some of their data). I'm not saying these tech companies' interview processes are perfect, but I have a hard time believing they're so fundamentally flawed that these companies can't figure out how to fix them, given the giant economic returns they get for optimizing their hiring processes.
Moreover, as some other comments mentioned, many companies (and individuals, myself included) believe it is much worse to hire someone who ends up not cutting it than to miss out on a potentially good hire. I could list all the reasons why, but Joel Spolsky has a pretty famous essay from a couple decades ago on the topic that explains it well [1].
Thus, it's not surprising to hear a lot of people complain that they can do the job but aren't good at interviews. Because, from Google's/Microsoft's/etc. perspective, they're fine with a somewhat higher false negative rate if they can greatly reduce their false positive rate. And my experience matches that: I have never seen a candidate who did awesome on "whiteboard-style programming questions" who couldn't cut it programming-wise (they may have had other issues, but "coding productivity" wasn't one of them). Now, I certainly believe, and have seen, that there are some people who aren't good at these questions who can do the job well, but there are also far more people who can't do the job if they can't pass a technical screen, so hiring any of these folks means much more risk.
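The asymmetric-cost argument above can be sketched with a toy expected-cost calculation. Every number here is an illustrative assumption for the sake of the arithmetic, not data from any company:

```python
# Toy model of the false-positive vs. false-negative trade-off in hiring.
# All dollar figures and rates are made-up assumptions for illustration.

COST_BAD_HIRE = 500_000    # assumed: salary, lost team output, re-hiring
COST_MISSED_HIRE = 50_000  # assumed: recruiting cost to source a replacement

def expected_cost(false_positive_rate: float, false_negative_rate: float) -> float:
    """Expected cost per hiring decision under the assumed costs."""
    return (false_positive_rate * COST_BAD_HIRE
            + false_negative_rate * COST_MISSED_HIRE)

# A lenient screen lets more bad hires through; a strict screen rejects
# more good candidates but lets far fewer bad hires slip in.
lenient = expected_cost(false_positive_rate=0.10, false_negative_rate=0.05)
strict = expected_cost(false_positive_rate=0.02, false_negative_rate=0.20)

# With a 10:1 cost ratio, the strict screen comes out cheaper even though
# it rejects four times as many good candidates.
assert strict < lenient
```

Under these assumed numbers the lenient screen costs $52,500 per decision versus $20,000 for the strict one, which is why a company holding these beliefs will happily trade false negatives for fewer false positives.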
I also think whiteboard-style coding questions surface a quality that is very important to businesses, even if the questions themselves don't represent "real world" work. There are basically two types of people who do well at these questions: people who are naturally smart and experienced enough that they wouldn't even need to study, and people of more "normal" intelligence/ability who can do well if they study a ton. Either group would likely do well in a programming role. So often I hear the complaint "I'm a busy person, I've got outside responsibilities, you can't expect me to spend all this time studying." That may be true, but you'll be competing against people who are willing to study, so I don't think you can fault Google et al. for favoring people who show a willingness to do more preparation.
1. https://www.joelonsoftware.com/2006/10/25/the-guerrilla-guid... "And in the middle, you have a large number of “maybes” who seem like they might just be able to contribute something. The trick is telling the difference between the superstars and the maybes, because the secret is that you don’t want to hire any of the maybes. Ever."
> They also do a pretty rigorous assessment of the effectiveness of their interview process
Companies do review their hiring processes, but actual experiments and data seem fairly rare. It's harder than you think. What experiment would you run? Hire a group of people entirely randomly, and compare their performance reviews after 2 years?
That's pretty much exactly my point. In the 90s, wide-scale hiring for software engineers was a relatively new thing - many companies were just figuring it out. And so they did some shit pretty early on that didn't make sense. But for all the times I hear folks pulling out the "Why are manhole covers round?" and "How many cars are there in Manhattan?" examples, I haven't heard these types of brainteaser questions being used for nearly 2 decades.
I'm not arguing that the FAANGs have some perfect, unassailable interview process that can never be improved. I am arguing that I often hear grumbling discontent from people who don't like the interview process, but rarely do I see much examination around why those particular hiring processes appear to work fairly well for the likes of Google, Apple, etc.
Those puzzle questions weren't just used to hire software engineers. Hiring fads sweep corporate America, despite no evidence of effectiveness, and despite economic incentives to hire well.
Yes, they did move away from it, but that doesn't mean we aren't now in the grip of equally bad fads.
> but rarely do I see much examination around why those particular hiring processes appear to work fairly well for the likes of Google, Apple, etc.
You assume they work well, but you don't have any data to support that. That sort of assumption is basically where these hiring fads come from.