The problem is that a good interview is only vaguely related to getting a good employee. Anyone can ace an interview and then slack off once they have the job.
If someone aces the interview using an LLM and then does good work using that same LLM, why should the employer or other employees care? The work is getting done, so what's the problem?
Compare a shitty worker to a deceptive one using an LLM. They both passed the interview and in both cases the work isn't being done. How are those two cases different?
Your hypotheticals are all extremely unlikely. People who ace interviews are usually good, and people who lean on stuff like ChatGPT aren't. I'd also rather not have someone dumping massive amounts of ChatGPT output into a good codebase.
>what's the problem?
Using an LLM is akin to copy/pasting code from random places. Sure, copy/paste can be done productively, except ChatGPT output comes completely untested and unseen by intelligent eyes. There are also unsolved copyright infringement issues via training data, and a question as to whether the generated code is even copyrightable, as it is the output of a machine.
People who ace interviews are people with practice. That means either you are the last in a long line of unsuccessful interviews, or you've got someone who is constantly interviewing and will leave you as fast as they came in.
Find someone with a great resume and horrible interview skills. Chances are they have been working for years and are entering the job market for the first time, and you are one of the first stops in their interview process. Grab them right away, because once they get even slightly good at interviewing, someone else will snap them up and realize they've got a 10x (whatever that means to that company).
You'll never find that 10x if you are looking at interview performance unless you can compete on price and reputation.
You don't have to guess if someone is entering the job market for the first time. You can just look at their resume.
Interview skill is not some monotonically increasing quantity. It very much depends on how the question hits you and what kind of a day you've had. Also, it somewhat depends on the interviewers' subjective interpretation of what you do. If you're more clever than them, your answer may go over their head and be considered wrong. They might also ask a faulty question and insist it is correct.
I'm not great at interviews myself. My resume is decent, but the big jobs usually boil down to some BS interviews that seem unnecessarily difficult to pass. I don't practice much for them, because I feel like success mostly depends on whether I've answered a similar question before and how I feel that day. I also often get off to a good start and just run out of time. I've found that interviews can be super hard when the interviewers have already written you off, as in you presented poorly in an earlier session and they are done with you. Or when there was never any intention of hiring you at all, like someone else already got the job in their minds.
> does good work using that same LLM then why should the employer or other employees care?
Maybe I'm wrong, but I find it very hard to believe that anyone thinks the "good work" part here is actually a practical possibility today. Boilerplate generation is fine and certainly possible, and I'm not saying the future won't bring more possibilities. But realistically, anyone who is leaning on an LLM more than a little bit for real work today is probably going to commit garbage code that someone else has to find and fix. It's good enough to look like legitimate effort/solutions at first glance, but in the best case it ties up actual good-faith effort in long code reviews, and turns previously productive and creative individual contributors into full-time teachers or proof-readers. Worst case, it slips by and crashes production, or the "peers" of these juniors-in-disguise get disgusted with all the hand-holding and just let them break stuff. Or the real contributors quit, and now you have more interviews where you're hoping not to let more fakers slide by.
It's not hard to understand that this is all basically just lies (misrepresented expertise) followed by theft. Theft of both time & cash from coworkers and employers.
It's also theft of confidence and goodwill that affects everyone. If we double the number of engineers because expectations of engineer quality are getting pushed way down, the LLM-fakers won't get to keep enjoying the same salary they scammed their way into for very long. And if they actually learn to code better, their improved skills will be drowned out by other fakers! If we as an industry don't want homework, 15 interviews per job, strong insistence on FOSS portfolios, lowered wages, and lowered quality of life at work, then low-effort DDoS, whether in interviews or in code reviews, should concern everyone.
The premise of my comment was: if a person passes an interview using some tool and then uses that same tool to do the job, then didn't the interview work?
You found a person (+ tool combo) that can do the job. If that person (+ tool combo) then proceeds to do the job adequately, is there a problem?
If you present a scenario in which a person passes the interview and then doesn't do the job, then you are answering a question I didn't ask.
To your scenario I would respond: the interview wasn't good enough to do its job. The whole point of the interview process is to find people (+ tool combos, if you allow) that can do the job.
That's not the point I was making. The full quote is:
>Anyone can ace an interview and then slack off once they have the job.
That is, a person can pass an interview, get hired, and then not do the job. No interview will tell you with 100% accuracy whether you'll get poor job performance.
I don't think you are getting my point. You can totally ace an interview and then slack off. That's it, that's my point. Not the opposite, not something else, just that.
Ok. I see. This is theoretically possible. But in practice, I haven't seen it. That's not something I really care about spending effort filtering for in an interview.