Tbh I’m very happy these tools exist. If your company wants to ask dumb formulaic leetcode questions and doesn’t care about the candidate’s actual ability then this is what you deserve. If they can automate the interview so well then they should also be able to automate the job right? Or are your interview questions not representative of what the job actually entails?
I understand this sentiment for experienced developers. It is an imperfect signal. But what, in your opinion, is a better signal for juniors or new grads?
Every alternative I can think of is either worse, or sounds nice but is impractical to implement at scale.
I don’t know about you, but most interviewers out there don’t have the ability to judge the technical merit of a bullshitter’s contribution to a class or internship project in half an hour, especially if it’s in a domain the interviewer has no familiarity with. And by the way, these bullshitters aren’t all completely dumb; they do know computer science, just perhaps not as well as an honest competitor.
>But what is in your opinion a better signal for junior or new grads?
They are juniors, I don't expect them to be experts; I expect eagerness and passion. They spent four or more years focusing on schooling, so show me the results of those projects. Let them talk and see how well they understand what they did. Side projects are even better for standing out.
And you know... apparently people can still fail FizzBuzz in 2025. If you really question their ability to code, ask the basics, not whether they can write a Sudoku verifier on the spot. If you aren't a sudoku game studio, I don't see the application beyond "can they work with arrays?"
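For reference, "the basics" can be as small as FizzBuzz itself. A minimal sketch in Python:

```python
# FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```

If a candidate can also explain why the `% 15` check has to come first, that tells you more than a memorized hard problem would.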
>I don’t know about you, but most interviewers out there don’t have the ability to judge the technical merit of a bullshitter’s contribution to a class or internship project in half an hour, especially if it’s in a domain the interviewer has no familiarity with.
Everyone has a different style. Personally I care a lot less about programming proficiency and a lot more about technical communication. If they only wrote 10 lines of code for a group project but can explain every aspect of the project as if they had written it themselves, what am I really missing? In my experience, the odds of that sort of technical reasoning being accompanied by poor coding are a lot lower than the alternative: a leetcode wizard who can't grasp architectural concepts or adjust to software tooling.
Yeah, I totally agree. During one of my interviews, the interviewers asked me to write a snake game in React. I had spent the last week studying their open source project and learning how things were structured, and then the two-part interview consisted of parsing JSON and outputting it as markdown, and writing a snake game. They weren't a game shop, so it really didn't make any sense for them to ask about that... It was really lame.
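At least the JSON-to-markdown half resembles real work. A rough sketch of what such a task might look like (the input shape here is purely my assumption, not what they actually asked for):

```python
import json

# Hypothetical input: a JSON list of {"title": ..., "body": ...} entries.
def json_to_markdown(raw: str) -> str:
    entries = json.loads(raw)
    lines = []
    for entry in entries:
        lines.append(f"## {entry['title']}")  # each entry becomes a section
        lines.append("")
        lines.append(entry["body"])
        lines.append("")
    return "\n".join(lines)

print(json_to_markdown('[{"title": "Hello", "body": "World"}]'))
```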
>But what is in your opinion a better signal for junior or new grads?
For students specifically, the strongest signal is if they've done research with a past collaborator of mine and that collaborator vouches for them. It's a great signal because it has a very high barrier to entry and absolutely does not scale.
Realistically, being able to speak confidently about something they did/built during their education is a decent proxy. If they can handle open-ended follow-up questions like "what did you learn?" and "what trade-offs did you make?" and "how would you tweak it under X different requirements?" then that's a great signal too.
These aren't "gotcha" questions, but they insist on the candidate being reasonably competent and (most importantly) an actual human who can think on their feet.
> Or are your interview questions not representative of what the job actually entails?
100% of all job interviews are a proxy. It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.
A leetcode interview either is or isn't a meaningful proxy, and AI tools either do or don't invalidate that proxy.
Personally I think leetcode interviews are an imperfect but relatively decent proxy, and that AI tools render that proxy invalid.
Hopefully someday someone invents a better interview scheme that can reliably and consistently scale to thousands of interviews, hundreds of hires, and dozens of interviewers. That’d be great. Alas it’s a really hard and unsolved problem!
>> It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.
>> ... leetcode interviews are an imperfect but relatively decent proxy.
I think all this is just the status quo that should be challenged instead of being justified.
When I conduct interviews (environment: a FAANG company), I focus on (a) fundamental understanding and (b) practical problems. None of the coding problems I pose are more than O(N) in complexity. Yet, my hiring decisions have never gone wrong.
>> How do you know? Every interview you conduct is for your team?
My apologies. I should have been more careful in making the claim. You are right in challenging me**.
>> What do you ask? If you want to challenge the status quo you have to offer a replicable alternative
I cannot share the specific problems publicly, as those would then become available for candidates to practice on, whereas the idea is to give them a fresh problem to think about. Further, the list of interviewers is often made available to candidates ahead of time, which can create a loophole, as interviewers tend to have a relatively small set of problems they keep repeating for different candidates***.
In general, however, I always use real-life problems. Similar examples from textbooks might be: (a) find the day of the week for a given date, (b) output a given number in textual form (e.g., 14630 -> Fourteen thousand, six hundred and thirty), and so on. Most of the problems I pose are motivated by actual use cases, while the rest are taken from my own prior interviews as a candidate. As I noted before, each one is at most O(N) in time complexity, just like the FizzBuzz problem. I have brought in more complex ones only when testing for specialized skills for specialized positions.
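To give a sense of the difficulty band, here is a sketch of one possible answer to (b) for numbers up to 999,999 (extending to millions follows the same pattern); this is illustrative, not a grading rubric:

```python
# Sketch of problem (b): convert a number to English words (0..999,999).
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def under_thousand(n: int) -> str:
    # Render 1..999 as words, e.g. 630 -> "six hundred and thirty".
    parts = []
    if n >= 100:
        parts.append(ONES[n // 100] + " hundred")
        n %= 100
        if n:
            parts.append("and")
    if n >= 20:
        parts.append(TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else ""))
    elif n:
        parts.append(ONES[n])
    return " ".join(parts)

def to_words(n: int) -> str:
    if n == 0:
        return "zero"
    words = ""
    if n >= 1000:
        words = under_thousand(n // 1000) + " thousand"
        n %= 1000
        if n:
            words += ", " if n >= 100 else " and "
    if n:
        words += under_thousand(n)
    return words

print(to_words(14630))  # fourteen thousand, six hundred and thirty
```

It is linear in the number of digits, requires no tricks, and the follow-up questions (negatives? millions? hyphenation rules?) write themselves.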
I also focus a lot on fundamentals. There have been candidates hired for my own teams who could not fully solve the problem I posed, but who were unambiguously going in the right direction and did not say anything wrong or stupid while reasoning towards it. I chose to go ahead with them with some risk in my mind, and these people proved to be stellar performers. When fundamentals are understood well, gaps can be picked up on the job.
I still stand by my original assertion that proxy problems can be avoided in interviewing. In general, the best method to solve a problem is to solve that problem itself and not another.
** Appendix: here are the actual data points:
- For interviews within my own team where I have been an interviewer, my claim likely holds. However, this is a small number of hires (around six to eight), so the error margins would be large. Also, I would not have information about false rejects, hence '100% accuracy' cannot be claimed even in theory.
- I have been an interview 'Bar Raiser' at Amazon. Bar Raisers at Amazon are people highly trusted with interview outcome decisions and process control.
*** I have even seen cases where (a) someone interviewed a candidate a second time after a gap and posed exactly the same problem, and (b) a recruiter leaked the questions an interviewer frequently asked to a candidate.