
My startup got acquired last year, so I haven't interviewed anyone in a while, but my technical interview has always been:

- share your screen

- download/open the coding challenge

- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare

My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.



I recently interviewed for my team and tried this same approach. I thought it made sense because I want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job.

It proved to be awkward and clumsy very quickly. Some candidates resisted it, since they clearly thought it would get them judged more harshly. Some candidates were on the other extreme and basically tried asking ChatGPT the problem straight up, even though I clarified up front: "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."

After just the initial batch of candidates it became clear it was muddying things too much, so I simply forbade using it for the rest of the candidates, and those interviews went much smoother.


Over the years, I've walked out of several "live coding" interviews. Arguably though, if you're looking for "social coders," maybe the interview is working as intended?

But for me, it's just not how my brain works. If someone is watching me, I'll be so self-conscious the entire time that you'll get a stream of absolute nonsense that makes me look like I learned programming from YouTube last night. So it's not worth the time.

You want some good programming done? I need headphones, loud music, a closed door and a supply of Diet Coke. I'll see you in a few hours.


Yep, if I'm forced to talk through the problem, I'll force myself to go through various things that you might want to hear but that I wouldn't actually do.

Whereas my natural approach would be to take a long shower, work out, etc., and let my brain wander a bit before digging into it. But that wouldn't fly during an interview...


Ironically this is exactly how I am too. Even at work, if I'm talking through a problem on a presentation or with my boss, I'm much more scatterbrained, and I'll try to dodge those kinds of calls with "Just give me 30 minutes and I'll figure it out." which always goes better for me.

That said, now we're just talking about take-home challenges for interviews, and you always hear complaints about those too. And shorter, async timed challenges (something like "Here's a few hours to solve this problem, I'll check back in later") are going to be way more difficult to judge now that AI is ubiquitous.

So I really don't think there's any perfect methodology out there right now. The best I can come up with is to get the candidate in front of you and talk through problems with them. The best barometer I found so far was to set up a small collection of files making up a tiny app and then have candidates debug it with me.
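To give a concrete picture, the planted bug can be tiny. Here's a hypothetical Python sketch of the kind of file I'd drop into that mini app (the bug and the scenario are made up purely for illustration):

    # inventory.py -- one file of the tiny app, with a planted bug
    # for the candidate to find while we talk through it.

    def total_price(cart):
        """Sum the price of every item in the cart."""
        total = 0
        for i in range(len(cart) - 1):  # bug: off-by-one, skips the last item
            total += cart[i]["price"]
        return total

    if __name__ == "__main__":
        cart = [{"price": 5}, {"price": 3}, {"price": 2}]
        print(total_price(cart))  # prints 8; the expected total is 10

Watching someone reproduce the wrong output, form a hypothesis, and narrow it down tells you a lot more than watching them write an algorithm from scratch.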


> The best barometer I found so far was to set up a small collection of files making up a tiny app and then have candidates debug it with me.

This is a great one! I wish more companies tried that.


I need my default mode network to produce good code, and I don't talk while it's active.


The interview works as intended because the main priority is to avoid hiring people who will be a negative for the company. Discarding a small number of good candidates is an acceptable tradeoff.


What do you do if a junior asks for help and it's easiest to code through with them?


Well, that's not really the same thing though?

In an interview, the coding challenge is often to produce something new from scratch while being closely monitored by people you don't know, who control your financial future.

When working with a "junior," you'd already be fairly familiar with the code base, build system, and best practices. And with a junior, you're not likely to be solving things that require deep concentration, like never-before-seen problems or architectural work (or screwball interview-tests). And, unlike an interview, if something does require all my focus, it's very easy to defer. Take a break and think about it alone.


What are you supposed to ask ChatGPT if you can't just ask it for the answer? That'd confuse me too.


Some part of the problem statement you want help with (rather than a complete answer)?


I mean, that’s obvious, but also incredibly silly if I know it can give me both the answer and the reasoning behind it.

The challenge should be in determining if ChatGPT is correct.


One example would be looking up syntax and common functions. In a high-pressure situation it's much tougher to bumble around Google and Stack Overflow, so this would be a way of solving for "I totally know how to do this thing but it's just not coming to mind at this moment," which is fair. Usually we interviewers can obviously just tell them ourselves, but that's what I was going for.

But yeah, the point is that once I applied it in practice it did quickly become confusing, so now I know from experience not to use it.

I think the other suggestions in this thread about how to use it are good ones, but they would present their own meta-challenges for an interview too. It's just about finding whatever balance works for you, I guess.


Just another interview methodology pulled out of someone's ass. They don't know.


As opposed to all the other interviewing methodologies, which are rigorously tested?

Unfortunately in our industry it's pretty much all personal anecdotes on what works better and what doesn't.


Did you tell them that you “want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job”? Just curious.


Yup, we told them exactly that.


> "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."

No, it's not "obvious" whatsoever. Actually, it's obviously confusing: why are you allowing them to use ChatGPT but forbidding them from asking it the questions directly? Do you want an employee who is productive at solving problems, or someone who guesses your intentions better?

If AI is an issue for you then just ban it. Don't try to make the interview a game of who can outsmart whom.


See my answer to the other comment on this question. We figured there were some good use cases for AI in an interview that weren't just copy/pasting code; it's not about guessing intentions. It seemed most helpful for unsticking candidates from specific parts of the problem if they were drawing a blank under pressure, basically just an easier "You can look it up on Google" in a way that would burn less time for them. However, we quickly found it was just easier for us to unstick them ourselves.

> If AI is an issue for you then just ban it.

Yes, that was the conclusion I just said we rapidly came to.


I've had a few people chuck the entire problem into ChatGPT, and it was still very useful in a few ways:

- You get to see how they then review the generated code: do they spot potential edge cases which the AI missed? (A hypothetical example of that kind of miss is sketched at the end of this comment.)

- When I ask them to make a change not in the original spec, a lot of them completely shut down, because they either didn't understand the generated code well enough, or they themselves didn't really know how to code.

And you still get to see people who _do_ know how to use AI well, which at this point is a must for its overall productivity benefits.
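As a hypothetical example of the kind of miss I mean, generated code often nails the happy path and quietly ignores the degenerate input:

    # What the AI typically produces (hypothetical example):
    def average(numbers):
        return sum(numbers) / len(numbers)  # ZeroDivisionError on an empty list

    # What a candidate reviewing it well should arrive at:
    def average_safe(numbers):
        if not numbers:
            raise ValueError("average of an empty list is undefined")
        return sum(numbers) / len(numbers)

Candidates who catch that on their own, without prompting, are the ones who can actually review AI output.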


Maybe come up with a problem that isn't so simple you can just feed it to ChatGPT. Create some context that would be difficult/tedious to convey.


If you really don't penalize them for this, you should clearly state it. Some people may still think they'll be penalized as that is the norm.


I'd be fine with the GPT side of things, as long as I could somehow inject poor answers, and see if the interviewee notices and corrects.


That's actually a horribly awesome idea.


The trick is to phrase the problem in a way that GPT-4 will always give the incorrect answer (due to the vagueness of your problem), so that multiple rounds of guiding/correcting are needed to solve it.


That's pretty good because it can exhaust the context window quickly and then it starts spiraling out of control, which would require the candidate to act.


If you only use ChatGPT to code, you're only able to copy/paste the LLM-emitted code; then you ask for changes to the code (to reflect, for example, the evolution of the product).


There's more than one possible AI on the other end, so crafting something that will not annoy a typical candidate, but will lead every AI astray seems pretty difficult.


Maybe you could allow using AI, but only through an interviewer-provided interface. That interface would allow using any model the candidate likes, but before sending the response back it would inject errors into the code (either randomly or through another AI prompt).
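A minimal sketch of what that middleman could look like, assuming a call_model() stub standing in for whichever model the candidate picks (all names and the corruption strategy here are hypothetical):

    import random

    def call_model(prompt: str) -> str:
        """Stand-in for the candidate's model of choice (stubbed here)."""
        raise NotImplementedError

    def inject_error(code: str) -> str:
        """Flip one comparison operator so the returned code is subtly wrong."""
        swaps = [("<=", "<"), (">=", ">"), ("==", "!=")]
        random.shuffle(swaps)
        for old, new in swaps:
            if old in code:
                return code.replace(old, new, 1)
        return code  # nothing to corrupt; pass the answer through as-is

    def proxied_completion(prompt: str) -> str:
        # The candidate only ever sees this, never the raw model output.
        return inject_error(call_model(prompt))

The interesting signal is then whether the candidate actually tests the answer before trusting it.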


I did this while hiring last year and the number of candidates who got stuff wrong because they were too proud to just look up the answer was shocking.


Is it pride, or is it just hard to shake the (reasonable, I'd say) fear that the interviewer will judge them regardless of their claims?


Exactly. You never know. Some interviewers will penalize you for not having something memorized and having to look it up, some will penalize you for guessing, some will penalize you for simply not knowing and asking for help. Some interviewers will penalize you for coming up with something quick and dirty and then refining it, some will penalize you for jumping right to the final product. There's no consistency.


I do what I can to allay that fear. The rest is up to them.


I love these kinds of interviews. This would very closely simulate real-world on-the-job performance.


If I had to do real-world on-the-job coding while someone looked over my shoulder at all times (i.e. screen sharing), I'd be flipping burgers.


I don't care how you do it, so long as I can watch.


> I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.

Too many people are the opposite of that, to the point that I would literally never tell you.

And this works.

What can we do to help that?

I've had interviews where AI use was encouraged as well.

But so many casual tirades against it don't make me want to ever try being forthcoming. Most organizations are realistically going to be 10 years behind the curve on this.


Screen share or in person are what I think the best ways are; the alternatives are not the best options.

I do not want AI. The human is the value add.

I understand that people won't feel super comfortable with this, and I try not to roast the candidate with leetcode. It should be a conversation where I surface technical reality and understanding.


I'm not doing any coding challenges that aren't real-world.

If I see anything remotely challenging, I dip out. Interviewing is just a numbers game nowadays, so I don't waste time on interviews if they seem like they're going to burn me out for the rest of the day. Granted, I have 11 years of experience.


The difficulty of your questions has to change drastically if candidates are using good tooling. Many a problem that would take a reasonable candidate half an hour to figure out is 'free' for Claude, so your question might not show any signal. And if you tweak your questions to make sure they can't be auto-solved by a strong enough AI, then you'd better say AI is semi-required, because the difficulty level of the question you need to ask goes up quite a bit.

Some of the questions in our interview loop have been posted on GitHub... which means every AI has trained on them specifically. They are, therefore, useless if the candidate has AI turned on. And if you interview enough people, someone will post your question on GitHub, so it will have a pretty short shelf life before it's in the training data and instantly solved.




It's pretty obvious when someone's input focus changes to nothing or when their mouse leaves the screen entirely, or you could just ask to see the display settings to begin with. That doesn't solve for multiple computers, but it's pretty obvious in real time when someone's actual attention drifts or they suddenly have abilities they didn't have before.

Either way, screen sharing beats whiteboards. Even if we throw our hands up and give up, we'll be firing frauds before the probationary period ends.


There is nothing fraudulent about using LLMs. If people can use them on the job, it's okay to use them in the interview. They're the calculators of tomorrow, if not of today.

Interviewing just needs to adapt, such as by assessing one's open source projects and contributions. Not much more is needed. And if the candidate completely misrepresents their open source profile, that can be handled by an initial contract-to-hire period.


I agree that there's nothing fraudulent with using a tool you would use on the job when you are interviewing. But in no way are LLMs equivalent to calculators. Calculators actually give the correct answer reliably, unlike LLMs. A sporadically reliable tool is worse than no tool at all.


LLMs have come a long way. If you give o3-mini the same interview question five times, chances are good that it will get it right all five times. Yes, it's not a calculator, but it's approaching one.


> There is nothing fraudulent about using LLMs.

There is if you’re asked not to.


Negative. They are not the law.


What are you talking about? If you lie to get a job, you are committing fraud. The company is not making any law here.


Using AI secretly in an interview setting where you were told the constraints excluded it, or where the interview required everything to be on the screen share even if AI was permitted, is fraudulent behavior. It's not much different from having a surrogate interviewee at that point. You'd only be doing it to deceive the interviewer.

Open source contributions are a bad metric for interviewing too. People have lives outside a computer; if they aren't doing open source contributions in their free time outside of work, I wouldn't hold that against them. If someone has them, that's great and I'd take a look, but I'm not disqualifying someone else for not working for free. Someone doing OSS as an interviewing badge of honor is a chump in my book. At least do it for principled reasons.



