It's an artifact of the camera. The camera's exposure is long enough that it averages the scene over 33 ms.
At some point in the video you can see that a high-speed camera does capture the display correctly.
At the 7-minute mark the industrial 14k FPS camera shows essentially zero rollover. The earlier rollover does appear to be an artifact of the cheap consumer-grade high-speed camera used.
when it comes to real people, they get sued into oblivion for downloading copyrighted content, even for the purpose of learning.
but when facebook & openai do it, at a much larger scale, suddenly the laws must be changed.
Swartz wasn’t “downloading copyrighted content…for the purpose of learning,” he was downloading with the intent to distribute. That doesn’t justify how he was treated. But it’s not analogous to the limited argument for LLMs that don’t regurgitate the copyrighted content.
This is not about memory or training. The LLM training process is not being run on books streamed directly off the internet or from real-time footage of a book.
What these companies are doing is:
1. Obtain a free copy of a work in some way.
2. Store this copy in a format that's amenable to training.
3. Train their models on the stored copy, months or years after step 1 happened.
The illegal part happens in steps 1 and/or 2. Step 3 is perhaps debatable - maybe it's fair to argue that the model is learning in the same sense as a human reading a book, so the model is perhaps not illegally created.
But the training set that the company is storing is full of illegally obtained or at least illegally copied works.
What they're doing before the training step is exactly like building a library by going with a portable copier into bookshops and creating copies of every book in that bookshop.
But making copies for yourself, without distributing them, is different than making copies for others. Google is downloading copyrighted content from everywhere online, but they don't redistribute their scraped content.
Even web browsing implies making copies of copyrighted pages; we can't tell the copyright status of a page without loading it, at which point a copy has already been made in memory.
Making copies of an original you don't own/didn't obtain legally is not fair use. Also, this type of personal copying doesn't apply to corporations making copies to be distributed among their employees (it might apply to a company making a copy for archival, though).
> when it comes to real people, they get sued into oblivion for downloading copyrighted content, even for the purpose of learning.
Really? Or do they get sued for sharing, as in republishing without transformation? Arguably, a URL providing copyrighted content is you offering a xerox machine.
It seems most "sued into oblivion" cases are about the resharing problem, not the get-one-for-myself problem.
From my observations: cold start, ease of patching.
If you're running a lot of different JS code or restarting the code frequently, it's faster than node.
Where it's useful: fuzzing. If you have a library/codebase you want to fuzz, you need to restart the code from a snapshot, and other engines seem to do it slower.
It's also really easy to patch the code, because of the codebase size. If you need to trace/observe some behavior, just do it.
Salesforce sandboxing is too easy to escape. Last time I needed to implement some feature for Salesforce, I encountered 4 different escapes. It was also a horrible dev experience.
It's not about being poor.
First, the climate didn't require AC in most of Europe until ~10 years ago. You had a few hot days, and that was it.
Second, thermal insulation in the US is of extremely poor quality. I think people could cut their AC usage in half if they had proper thermal insulation in their houses.
Third, northern European countries still don't have a climate that justifies buying an AC.
Specifically, American houses lack thermal mass due to being constructed mainly from wood. Concrete and brick will buffer over a week or so of heat before it warms up too much.
In Florida, most of the homes are built from concrete brick with wood trusses. There are apartments made from wood and concrete.
It’s not just the heat - it is also the humidity. You can bear up to 80 F before it starts to feel uncomfortable. Humidity will make even 75 F uncomfortable.
Relative humidity isn't a great indicator of comfort. It's better to look at dew point. The Netherlands is not only cooler on average but also has a lower dew point. This shouldn't be surprising given the two regions' latitudes.
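To make that concrete, here's a rough sketch using the Magnus approximation for dew point (the constants are the commonly quoted ones; the example temperatures and the 75% humidity figure are just illustrative):

    import math

    def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
        """Approximate dew point in deg C via the Magnus formula."""
        a, b = 17.27, 237.7  # common Magnus constants, valid roughly 0-60 deg C
        gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
        return (b * gamma) / (a - gamma)

    # The same 75% relative humidity feels very different at different temperatures:
    print(round(dew_point_c(32, 75), 1))  # hot Florida afternoon -> ~27 deg C dew point (oppressive)
    print(round(dew_point_c(22, 75), 1))  # warm day in NL        -> ~17 deg C dew point (noticeable, but fine)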
Both regions have high humidity, but Florida tends to have higher average humidity levels, particularly in the summer months. Florida has a subtropical to tropical climate and experiences high humidity throughout the year, often ranging from 70% to 90%; the summer months are particularly humid, with frequent afternoon thunderstorms.
The Netherlands has a temperate maritime climate, influenced by the North Sea.
Florida and the Netherlands are not close in comparison.
I’m sorry, but it is just mind boggling to suggest that Netherlands and Florida have comparable weather in any sense. You wouldn’t suggest that the weather in Netherlands is as hot as in, say, Italy or Greece, and Florida is even hotter than these two.
I'm not saying it's as hot here as it is in Florida. But we've been breaking records left and right, to the point where I purchased an AC (a crappy mobile one, for lack of better options here for rented apartments) because we now go through months every summer where I can barely sleep without one.
My point was that people often don't realize how humid it is here. You apparently also can't believe it. And how our buildings are not made to keep heat out, but rather in. So I expect many more ACs to be sold here as well in the coming years.
It might just be a month or two each year. And it might be worse for you. But it's also getting pretty bad here already thanks to climate change. And that's not going to improve anytime soon thanks to all of us.
Yes, mostly by using insulating (double) glass that lets warmth in, in the form of light that then warms up the interior. Think greenhouses. Surround that with poorly insulated walls and limited ventilation, and in cold weather they'll leak out heat, while in warm weather they'll also heat up in the sun and radiate that in.
Any home with an ACH_nat (natural air changes per hour) of 1 that's attempting to condition the air (heating or cooling) is wasting a mind-boggling percentage of the energy. Surely that's not the natural ventilation rate of the _typical_ home? That would imply that 50% of homes are worse.
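For a rough sense of the scale, here's a back-of-the-envelope sketch (the house volume and temperatures are assumed example numbers; 0.34 Wh per cubic metre per kelvin is the usual volumetric heat capacity of air):

    # Rough infiltration heat loss: Q = ACH * volume * 0.34 * delta_T
    ach       = 1.0    # natural air changes per hour (the figure in question)
    volume_m3 = 300.0  # assumed: a modest house, ~120 m2 with 2.5 m ceilings
    delta_t   = 20.0   # assumed: 20 C inside vs 0 C outside

    q_watts = ach * volume_m3 * 0.34 * delta_t
    print(f"~{q_watts:.0f} W lost continuously just to air exchange")  # ~2000 W

That's roughly a space heater running nonstop just to cover leakage, which is why the "mind boggling percentage" framing is plausible.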
Artificial neural networks work the following way: you have a bunch of “neurons” which have inputs and an output. A neuron’s inputs have weights associated with them; the larger the weight, the more influence that input has on the neuron. These weights need to be represented in our computers somehow, and usually people use IEEE 754 floating point numbers. But these numbers take a lot of space (32 or 16 bits each).
So one approach people have invented is to use a more compact representation of these weights (10, 8, down to 2 bits). This process is called quantisation. Having a smaller representation makes running the model faster because models are currently limited by memory bandwidth (how long it takes to read the weights from memory), so going from 32 bits to 2 bits potentially leads to a 16x speed-up. The surprising part is that the models still produce decent results, even when a lot of information from the weights was “thrown away”.
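As a rough sketch of the idea (toy numbers, and a very simple symmetric 8-bit scheme; real quantisation methods are more sophisticated):

    import numpy as np

    # A "neuron" is just a weighted sum of its inputs.
    weights = np.array([0.12, -1.7, 0.003, 0.95], dtype=np.float32)  # 32 bits per weight
    inputs  = np.array([1.0,  0.5, -2.0,  0.25], dtype=np.float32)
    print("full precision:", float(weights @ inputs))

    # Symmetric int8 quantisation: store each weight as an 8-bit integer
    # plus one shared scale factor -> roughly 4x less memory than float32.
    scale = np.abs(weights).max() / 127.0
    q_weights = np.round(weights / scale).astype(np.int8)

    # At inference time, dequantise (or fold the scale into the accumulation).
    approx = (q_weights.astype(np.float32) * scale) @ inputs
    print("after int8 quantisation:", float(approx))  # close, but not identical

The quantised output is slightly off from the full-precision result, which is exactly the trade-off described above: less memory and bandwidth in exchange for a little accuracy.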
Not a browser, but a PWA. It's a web page, which you can "install" as an "app". Features like storage, background tasks and notifications are important for many applications, for example a messenger. These were available, and there is a market for those, but Apple has decided to kill that market.
I've had the displeasure of interviewing someone who used ChatGPT in a live setting.
It was pretty obvious: I ask a short question, and I say that I expect a short answer on which I will expand further. The interviewee sits there in awkward silence for a few seconds, and starts answering in a monotone voice, with sentence structure only seen on Wikipedia. This repeats for each consecutive question.
Of course this will change in the future, with more interactive models, but people who use ChatGPT in interviews do a disservice to themselves and to the interviewer.
Maybe in the future everybody is going to use LLMs to externalize their thinking. But then why do I interview you? Why would I recommend you as a candidate for a position?
The idea that spotting cheating is obvious is a case of selection bias. You only notice when it's obvious.
Clearly, the person put 0 effort towards cheating (as most cheaters would, to be fair). But slightly adjusting the prompt, or just paraphrasing what ChatGPT is saying, would make the issue much harder to spot.
Maybe I’m a slow reader, but reading, understanding, and paraphrasing the response seems like it would take enough time to be awkward and obvious as well.
I’m not sure why anyone would want a job they clearly aren’t qualified for.
As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.
It's a tool, and if they can master it to make it useful, then credit to them.
Alas, ChatGPT seems to be a jack of all trades, but master of none, which is gonna make it hard to pass my interviews which test very specific technical skills.
> As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.
Tool usage is what separates us from animals and is generally ok where tools are available/expected, but in this case I think you misunderstand which tool we're talking about. The tool involved isn't actually chatGPT, it's more like strategic deception. Consider the structurally similar remark "as a voter, if a candidate can use lies to represent themselves as better than other candidates, I'm not gonna mark them down for use of dishonesty".
The rest of this comment is not directed at you personally at all, but the number of folks in this thread who are extremely eager to make various excuses for dishonesty surprised me. The best one is "if dishonesty works, blame the interviewer". I get the superficial justification here like "should have asked better questions", but OTOH we all want fairly short interview processes, no homework, job-related questions without weird data-structures-and-algorithms pop quizzes, etc, so what's with the double standards? Hiring/firing is expensive, time-consuming, and tedious, and interviewing is also tedious. No one likes picking up the slack for fake coworkers. No one likes being lied to.
> the number of folks in this thread who are extremely eager to make various excuses for dishonesty surprised me
Not me. I see it all the time, online and offline. I suspect they think it confers status on themselves, but what actually happens is honest people wind up shunning them.
I assume you know this is just an expression, and you know that I know that animals indeed use tools. So I'll refer you to community guidelines https://news.ycombinator.com/newsguidelines.html
Apologies, I had no idea that you used this as an "expression". I have only heard this from people who believed it. Also, I don't think this is a good "expression"; at the very least it's misguided, but mainly it's scientifically constraining.
As for the guidelines, I think I could quote it back at you.
The problem is that a good interview is only vaguely related to getting a good employee. Anyone can ace an interview and then slack off once they have the job.
If someone aces the interview using an LLM and then does good work using that same LLM then what should the employer or other employees care? The work is getting done, so what's the problem?
Compare a shitty worker to a deceptive one using an LLM. They both passed the interview and in both cases the work isn't being done. How are those two cases different?
Your hypotheticals are all extremely unlikely. People who ace interviews are usually good, and people who lean on stuff like ChatGPT aren't. I'd also rather not have someone dumping massive amounts of ChatGPT output into a good codebase.
>what's the problem?
Using a LLM is akin to copy/pasting code from random places. Sure, copy/paste can be done productively, except ChatGPT output comes completely untested and unseen by intelligent eyes. There are also unsolved copyright infringement issues via training data, and a question as to whether the generated code is even copyrightable as it is the output of a machine.
People who ace interviews are people with practice. That means either you are the last stop in a long line of unsuccessful interviews, or they are constantly interviewing and will leave you as fast as they came in.
Find someone with a great resume and horrible interview skills. Chances are they have been working for years and are entering the job market for the first time. You are one of the first in their interview process. Grab them right away, because once they start getting even slightly good at the interview process someone will snap them up and realize they got a 10x (whatever that means to that company).
You'll never find that 10x if you are looking at interview performance unless you can compete on price and reputation.
You don't have to guess if someone is entering the job market for the first time. You can just look at their resume.
Interview skill is not some monotonically increasing quantity. It very much depends on how the question hits you and what kind of a day you've had. Also, it somewhat depends on the interviewers' subjective interpretation of what you do. If you're more clever than them, your answer may go over their head and be considered wrong. They might also ask a faulty question and insist it is correct.
I'm not great at interviews myself. My resume is decent, but the big jobs usually boil down to some bs interviews that seem unnecessarily difficult to pass. I don't practice much for them, because I feel like it mostly depends on whether I've answered a similar question before and how I feel that day. I also often get a good start and just run out of time. I've found that sometimes interviews are super hard when the interviewers have written you off, as in you presented poorly in an earlier session and they are done with you. Also, when there is zero intention of hiring you generally, like someone else already got the job in their minds.
> does good work using that same LLM then what should the employer or other employees care?
Maybe I'm wrong, but I find it very hard to believe that anyone thinks the "good work" part here is actually a practical possibility today. Boilerplate generation is fine and certainly possible, and I'm not saying the future won't bring more possibilities. But realistically anyone that is leaning on an LLM more than a little bit for real work today is probably going to commit garbage code that someone else has to find and fix. It's good enough to look like legitimate effort/solutions at first glance, but in the best case it has the effect of tying up actual good faith effort in long code reviews, and turns previously productive and creative individual contributors into full-time teachers or proof-readers. Worst case it slips by and crashes production, or the "peers" of juniors-in-disguise get disgusted with all the hand-holding and just let them break stuff. Or the real contributors quit, and now you have more interviews where you're hoping to not let more fakers slide by.
It's not hard to understand that this is all basically just lies (misrepresented expertise) followed by theft. Theft of both time & cash from coworkers and employers.
It's also theft of confidence and goodwill that affects everyone. If we double the number of engineers because expectations of engineer quality are getting pushed way down, the LLM-fakers won't get to keep enjoying the same salary they scammed their way into for very long. And if they actually learn to code better, their improved skills will be drowned out by other fakers! If we as an industry don't want homework, 15 interviews per job, strong insistence on a FOSS portfolio, lowered wages, and lowered quality of life at work... low-effort DDoS, whether in interviews or in code reviews, should concern everyone.
The premise of my comment was: if a person passes an interview using some tool and then uses that same tool to do the job, then didn't the interview work?
You found a person (+ tool combo) that can do the job. If that person (+ tool combo) then proceeds to do the job adequately, is there a problem?
If you present a scenario in which a person passes the interview and then doesn't do the job, then you are answering a question I didn't ask.
To your scenario I would respond: the interview wasn't good enough to do its job; the whole point of the interview process is to find people (+ tool combos, if you allow) that can do the job.
That's not the point I was making. The full quote is:
>Anyone can ace an interview and then slack off once they have the job.
In that a person can pass an interview, get hired, and then not do the job. An interview will never tell you if you will get poor job performance with 100% accuracy.
I don't think you are getting my point. You can totally ace an interview and then slack off. That's it, that's my point. Not the opposite, not something else, just that.
Ok. I see. This is theoretically possible. But in practice, I haven't seen it. That's not something I really care about spending effort filtering for in an interview.
>As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.
I think that makes you an incompetent interviewer, unless your questions are too hard for ChatGPT. In any case, solving the question without ChatGPT is more impressive than using it. Just like most other tools, like search engines or IDEs.
Would you also say that, "as an interviewer, if a candidate can use their buddy to give a better answer than other candidates, I'm not going to mark them down for using their buddy"?
Even if you don't mind that situation, shouldn't you get the buddy's contact information and offer him the job?
That's not a great analogy: you can't do the job with your buddy; whereas some interviewers are ok with, and even expect you to, use GenAI on the job daily. Depends on the interviewer and job expectations.
A better analogy is an interview where you can use a calculator (and not be detected). If the interviewer were only to ask you simple arithmetic questions with numeric answers then sure you'd seem to do well. So interviewers adjust to not doing that.
Sure, and also split the dental and other benefits, vacations, and share one building fob, parking pass, cubicle and computer. Also, split the food at the company dinner. :)
To use a slightly more extreme example... if you were hiring someone to maintain a nuclear power plant, and when you asked them a question about what actions to take to avoid a meltdown they had to ask ChatGPT to figure it out, would you really be OK with hiring that person to maintain your nuclear plant? When they don't actually have the knowledge they need to succeed, but instead have to rely on external tools to decide things? If they need to ask ChatGPT for the answer, how do they know the answer is right? You really think that person, who relies on tools, is just as good of a hire as someone that fully internally knows what they need to know?
Yeah, hiring someone to code a website isn't the same as maintaining a nuclear plant, but it's the same concept of someone that knows their craft vs. someone that needs to rely on tools. There's a major difference in my mind.
I hope your statement is hyperbolic, because we're all doomed if you expect a person to just know how to operate a nuclear power plant. Normally, you're testing whether they can follow operational procedures that were created by the people who designed the power plant in the first place.
Similarly, it is unreasonable, bordering on negligence, to assume a person has the skill set unique to your situation.
If the job at your nuclear power plant were so simple you only needed the employee to follow operational procedures, then you'd be better off scripting it instead, or training a monkey.
Consider e.g. being a pilot, or a surgeon - two other occupations known for their extensive use of operational procedures today. People in those jobs are not being hired for their ability to stick to a checklist, but rather for their ability to understand reasons behind it, and function without it. I.e. the procedures are an important operational aid, not the driver.
Contrast with stereotypical bureaucrats who only follow procedures and get confused if asked something not covered by them.
Now, IMHO, the problem here is that, if you're hiring someone who relies on an LLM to function, you're effectively employing that LLM, with its limitations and patterns of behavior. As an employer, you're entitled to at least being made aware of that, as it's you who bears responsibility and liability for fuckups of your hires.
Like a university diploma is a signal of being able to learn or at least comply, use of a chatbot is a signal of not bothering enough to learn or comply.
I can see how an applicant who cheats interview with chatbot would later not bother to internalize operation instructions for the job.
I’d like to believe the common line that ChatGPT is “just a tool” and that it can actually be used to learn/comply, just as much as a university degree can be obtained by mere compliance or demonstration of learning (or merely giving the appearance of such).
My experience with ChatGPT ranges from “it’s really good for rapidly getting a bearing with a certain topic” to “it’s a woeful substitute for independently developing a nuanced understanding of a given topic.” It tends to do an OK job with programming and a very poor job with critical theory.
> a university degree can be obtained by mere compliance or demonstration of learning
Exactly. It “only” shows you are able & willing to at least understand the requirements, internalize them well enough, and comply with them. It shows your capability of understanding & working together with other humans.
Which is key.
In my impression, almost always the knowledge you receive at the uni is not really pertinent to any actual job, and anyone can have PhD level understanding of a subject without having finished high school.
It is the capability of understanding and working in a system that matters.
Similarly with a chatbot. Using it to game interviews in ways described does not mean candidate is stupid, or something like that. It is, though, a negative signal of one’s willingness and intrinsic motivation to do things like internalizing job responsibilities & procedures, or just simply behave in good faith.
Mental capacity to do mundane things is often important when it comes to, say, maintaining a nuclear reactor.
> just a tool
> it’s really good for rapidly getting a bearing with a certain topic
Perhaps. Personally I prefer using Google, so that I at least know who wrote what and why rather than completely outsourcing this to an anonymous team of data engineers at ClosedAI or whatnot, but if it is efficient to get some knowledge then why not?
It’s using it to blatantly cheat and do the key part for you where it becomes questionable.
ChatGPT, like all transformers (language models), depends on how well you prime the model, as it can only predict the next series of tokens over a finite probability space (the dimensions it was trained on). It is up to you as the prompt creator to prime the model so it can be used as a foundation for further reasoning.
Normally, people who get bad results from it would also get similar results if they asked a domain expert. Similarly, different knowledge domains use a different corpus of text for their core axioms/premises, so if you don't know the domain area or those keywords you're not going to be able to prime the model to get anything meaningful from it.
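A toy illustration of the "next token conditioned on the prompt" point (entirely made-up numbers and vocabulary, nothing like the real model internals): the same kind of question yields a different distribution depending on how the context primes it.

    import random

    # Hypothetical toy "model": each context maps to a next-token distribution.
    toy_model = {
        "The patient presented with acute": {"myocardial": 0.6, "pain": 0.3, "banana": 0.1},
        "For lunch I would really like a":  {"banana": 0.5, "sandwich": 0.4, "myocardial": 0.1},
    }

    def sample_next(prompt: str) -> str:
        dist = toy_model[prompt]
        return random.choices(list(dist), weights=list(dist.values()))[0]

    print(sample_next("The patient presented with acute"))
    print(sample_next("For lunch I would really like a"))

Domain keywords shift probability mass toward the domain you care about, which is what "priming" buys you.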
In terms of tools, I absolutely want the nuclear power plant engineer to use a wrench and pliers and tongs and a forklift and a machine while wearing a lead-lined safety suit, instead of wandering over to the reactor in a t-shirt to pull out the control rods with their bare hands. You could be Edward Teller and know everything there is to know about nuclear physics, but you're not getting anywhere without tools.
To your point though, a person needs both. All of one and none of the other is useless. You don't want someone who doesn't know what they're doing to play around disabling safety systems so you don't get Chernobyl, but for the everyday crud website you can just hire the coding monkey at a reduced cost.
That's like being okay with a candidate Googling the answer during an interview. Not unheard of, but unusual. It seems hard to test someone's knowledge that way.
At my company we tell people that they should feel free to google or consult references at practical coding challenges.
> It seems hard to test someone's knowledge that way.
I don’t really want to test knowledge but skill. Can you do the thing? At work you will have access to these references so why not during the interview?
Now that doesn’t mean that we are not taking note when you go searching and what you go searching for.
If you told us that you spent the last 8 years of your life working with python and you totally blank on the syntax of how to write a class that is suspicious. If you don’t remember the argument order of some obscure method? Who cares. If you worked in so many languages that you don’t remember if the Lock class in this particular one is reentrant or not and have to look it up? You might even get “bonus points” for saying something like that because it demonstrates a broad interest and attention to detail. (Assuming that using a Lock is reasonable in the situation and so on of course :))
I do want to understand their knowledge. I'll preface questions with the disclaimer that I am not looking for the book definition of a concept, but to understand whether the candidate understands the topic and to what depth. I'll often tell them that if they don't know, just say so. I'll start with a simple question and keep digging deeper until either they bottom out or I do.
I'm okay with them googling too. And I tell them that at the start. But if they take ages to lookup the answer when others just know the answer, it's gonna hurt their chances.
Sure, they can search it live but you have to assess if they understand what they found. Usually, if they really know their stuff, whatever they find is just gently pushing their working memory to connect the dots and give a decent answer. Otherwise it's pretty easy to ask a follow up question and see a candidate struggle.
It's like in college when you're allowed to take textbooks to an exam. You can bet the professor spent more time crafting questions that you can't answer blindly.
That being said, I think both types of questions have their place in an interview process. You can start with the no searching allowed questions in the beginning to assess real basic knowledge and, once you determine the candidate has some knowledge, you start probing more to see if they can connect the dots, maybe it's architecture decisions and their consequences, maybe it's an unexpected requirement and how they would react, etc.
The knowledge we're testing is related to how well you can do your job. Work isn't closed book - if you can quickly formulate a good query to grab any missing information off the internet then more power to you. I've worked with extremely stubborn people who were very smart and would spend a week trying to sort out a problem before googling it, there are some limited situations (highly experimental work) where this is valuable but... I no longer work with these people.
I remember the days when Greybeards would look down on me for using Google in my first IT job, they would harp on about how real Sysadmins use man pages and O’Reilly books to solve problems, and if you tried to Google something you were incompetent. I had college professors that told me you can’t use the Internet for research because the Internet is a not a legitimate source of information, only libraries can have real information.
What happened to all those folks? They retired, and turned into Boomers who are now unable to function in society at a basic level and do things like online banking or operate a smartphone.
On the other hand, they knew how their hardware worked. And if LLMs keep improving, we're going to reach the last generation that knew how software worked.
We’re pretty close. I’m not sure that 51% of the people I work with understand what DNS is, what a call stack is, what the difference between inheritance and polymorphism is, or what a mutex is
When I'm retired, sitting on the beach with my beer and a good book, please don't come bothering me that your smartphone banking and GPT arse-wiping assistant has gone berserk.
It will take 3 to 6 months to determine that a new hire is incompetent, especially if you're required to document their incompetence before firing them.
I've never had a job without a probation period where you can let someone go without cause within the first 90 days with nothing more than two weeks pay in lieu of notice. It definitely doesn't take 6 months to identify someone who only got their job because they used AI in the interview.
> I’m not sure why anyone would want a job they clearly aren’t qualified for.
Well, I suck at interviewing and/or leetcode questions, but have so far done perfectly fine in any actual position.
I can totally see how you’d resort to ChatGPT to give the interviewers their desired robotic answers after 3 months of failing to pass an interview the conventional way.
> give the interviewers their desired robotic answers
As someone who has interviewed a lot of people – robotic answers are specifically not what I (we?) look for. The difference between hands-on experience and book knowledge is exactly what we're trying to tease out.
It's very obvious when someone is reciting answers from a book or google or youtube or whatever vs. when they have actually done the thing before.
For the record: ChatGPT is very good and the answers it gives are exactly the kind of answers that people with book knowledge would give. High level, directionally correct, soft on specifics.
I mostly interview seniors, you obviously wouldn't expect experience from an entry-level candidate. Those interviews are different.
I understand that you have no control over who you're interviewing with but... if you're a good fit and the interviewer leaves thinking you're a terrible fit, that's a sign of a bad interviewer. Obviously there are non-proficiency things you can do to skew that perception (bad hygiene, lateness, obvious disinterest), but a good interviewer (especially one used to working with developers) should be good at getting past all the social awkwardness to evaluate your problem solving.
And yes, most large companies have terrible interviewers.
I refuse to believe that all the interviewers I had over the course of 6 months were all terrible. It must be something about the process that is pathologically broken (especially when getting hired at larger companies)
I mean... if the interview process is even a little broken then doesn't that mean that over time worse and worse interviewers will get hired, making for worse and worse interviews meaning that worse and worse interviewers get hired...
"Algorithms are taking over much of the human work of hiring humans. And, unless they are programmed to seek out currently undervalued and difficult-to-track factors, they may tend to find that the more robot-like a human is the best she or he will be at doing most jobs. So, it could be that the robots are most likely to hire the most robotic humans."
I find the whole gamified system to be bizarre and disheartening no matter which side of the table you're on.
To me, looking at modern tech interviewing is like comparing the gold standard OCEAN and the emergent HEXACO in personality surveys. Take the former on a bad day and it may leave the test taker feeling bad about themselves. The latter, much kinder and gentler in messaging around strengths and weaknesses.
That "by design" quality strikes me as missing from the entire tech interview system. If it weren't broken, this would not be a 7-year conversation updated yesterday:
> I’m not sure why anyone would want a job they clearly aren’t qualified for.
Money, obviously.
Software jobs in particular are magic in this way - the pay is way above the average, and performance metrics are so poorly defined that one can coast for months doing nothing before anyone starts suspecting anything. Years, even, in a large company, if one's lucky. 80% of the trick is landing the first gig, 15% is lasting long enough to be able to use it as a foundation of your CV, and then 5% is to keep sailing on.
No, really. There's nothing surprising about unqualified people applying for software companies. If one's fine with freeloading, then I can't think of easier money.
(And to be fair, I'd say it's 10% of freeloaders, 10% of hard workers, and in between, there's a whole spectrum of varying skills and time and mental makeups, the lower half of that is kind of unqualified but not really dishonest.)
Just because I can’t recite rabin-karp off the top of my head or some suffix tree with LCA shit for some leetcode question about palindromes doesn’t mean I’m unqualified to do the work of an engineer.
I’ve gone public, been acquired by Google, and scaled solutions to tens of millions of users. I’m probably overqualified for your CRUD app.
Consider a situation where you’re applying for a job that you’re 50% qualified for and then using chatgpt to cheat on the interview. Would be much more difficult to catch is my guess.
If you slide from 50% to 99%, how do people feel about using ChatGPT? Which is more honest? Many people here were hired when they were less than 100% qualified, and did very well in their new role. It has happened to me more than once.
>I’m not sure why anyone would want a job they clearly aren’t qualified for.
Easy. They have nothing to lose because the jobs they are qualified for don't even pay enough to survive. You probably could have figured this out yourself.
We have a small team of developers and you cannot hack the metrics. Either you build and deliver what is in the requirements or you don't.
If you don’t deliver we don’t even have to have a discussion because team reviews code, tests features and gives feedback quickly if someone is slacking.
Well you did say that your metrics "cannot be hacked."
The XKCD one can actually be easily hacked. Just spam the system and rate every comment as helpful. The classifier learns to accept everything. There are dozens of ways to hack this one and undermine the actual goal. But it is a comic, and it is funny. It doesn't need to be realistic.
Center Stage is a feature where a device uses an ultra-wide camera and is supposed to track your _face_ as you move and shift around in its field of view.
I find it most useful for FaceTime calls on Apple TV, where you can leave your phone near the TV, and it will automatically frame you sitting on the couch and will follow you as you shift around, etc.
There is a similar feature to what you're describing for FaceTime, but I don't think it has any cutesy name.
Just shrink the width of the text area being read from. It's really easy to not look like you are sitting in the front row of the theater reading the opening text in Star Wars. If an actor on live TV can do it, you can too
I predict that that will be followed shortly by a mysterious sharp increase in applicants claiming to have nystagmus (https://en.wikipedia.org/wiki/Nystagmus), which causes random involuntary eye movements, but without any medical documentation.
What's interesting is this wouldn't necessarily imply cheating. That doesn't sound like an issue I'd necessarily draw attention to under normal circumstances, but if I knew interviewers were likely to be paying close attention to my eye movements I certainly would.
Yes, exactly. I have nystagmus myself because of an underlying medical condition that causes other vision problems and it's depressing that interviewers might think it's reason for suspicion.
I've been a pain in the ass in quite a few feedback sessions when people brought up a candidate not making "enough" eye contact. Usually I mention that they could be treading into infringing on a protected class and they shut up.
I mean, at some point if they go through so much effort to hide their cheating they probably have attained some mastery in the process. Kinda like how some friends in high school would try and sneak in note cards on a test but they probably spent so much time prepping them that they coulda gotten an A or B regardless.
It's also why it's kinda annoying to do live interviewing trivia questions. Can I immediately answer what a partial template specialization is? Probably not, I never used them. Can I google it in 2 minutes and summarize it as a way for (often C++) template classes to bind some of the template arguments to values or pointers? Well, I just did. Should that cost me the interview? That's pretty much what I do on the job.
I am a polyglot: Perl, Python, C, C++, Java, C#, etc. Not an expert at all, but I can do fine with an existing code base. What is it about C++-heavy interviews that always regress to trivia? And asking about rarely used features? It is a bother. And rarely does the person asking the trivia have any depth whatsoever in other languages. It is my biggest gripe with "C++ people". For many, they have a hammer and everything looks like a nail. Yes, "Java enterprise people" were the same in 2005-ish.
Yes of course! I'd be happy to answer your short question with a short answer. I look forward to expanding further on the answer, as you previously stated that you expect me to.
Jokes aside, something about LLM responses is very uncanny valley and obvious.
The peppy, upbeat, ultra-American tone that the LLMs produce can be somewhat toned down with good prompting but ultimately, it does stink of the refinement test set.
To be honest, I think in the future we will interview people on their ability to work with an LLM. This would be a separate skill from the other ones we are looking for. Maybe even have them do some fact checks on a given prompt and response as well as suggest new prompts that would give better results. There might even be an entire AI based section of an interview.
In the end, it's just a new way to "Google" the answer. After all, there isn't much difference between reading off an LLM response and just reading the Wikipedia page after a quick Google search, except for fewer advertisements.
I’ve already been allowed to use it in programming interviews where they’ve said it’s explicitly allowed to use ChatGPT. It’s led to some fun interactions because I use it a lot and as such I’m quite good with it and interviewers are often taken aback by how quickly I’m able to just destroy the question they put out with a good prompt
I will say there are still some programming questions you can give that will stump the hell out of ChatGPT. In particular I took one online coding assessment where I used it and there was a question about plotting on a graph with code and calculating areas based on the points plotted that ChatGPT failed miserably at, but someone pretty good with math and geometry would find pretty tractable.
There are ChatGPT-resistant questions you can ask. ChatGPT recognizes the question but doesn't actually think about it, so if you give it the river crossing problem (farmer, fox, sheep, and grain need to cross a river) but tell it the boat can take all the items, it won't actually read those details and will blithely solve the problem the expected way. Give candidates a problem that's trivially solvable if you actually read the question and see if they try to solve it the ChatGPT way.
It's a fun problem to explore, and GPT-4 doesn't do any better. Swapping in other things doesn't help because it internally recognizes it as the river crossing problem and proceeds to solve it normally. I was able to get it to two-shot it with a lot of coaching but yeah, it's a trip.
The downside is that you're now wasting interview time on the river crossing problem instead of actually relevant questions for the job you're hiring for.
Don't literally use the river crossing problem, but its existence implies there is a form of question that someone actually reading the prompt can solve trivially, but that someone using ChatGPT will get stuck on.
You're already asking Leetcode questions that are irrelevant for the job you're hiring for. What's the problem with asking one more to test for cheaters?
We didn't start testing people on Google usage when Googling became useful, so I don't see why LLMs would be different.
Instead, there would be tasks that can be completed using any tools available - Google, LLM, whatever. And candidates are rated on how well the task is done, and maybe asked a few questions to make sure they made decisions knowingly and not just copied the first answer off the internet.
This already exists and is called "take home programming assignment"
I agree that this is the likely long term outcome. But for now folks want to think that everyone needs to have memorized every individual screw, nail, nut and bolt in the edifice of computer science.
Me and several friends have used ChatGPT in live interviews to supplement answers to topics we were only learning in order to bridge the gap on checkboxes the interviewer may have been looking for.
We’ve all got promotions by changing jobs in the last 6 months using this method.
You can be subtle about it if it’s already an area you kind of know.
I like when a person admits they don’t know something in an interview. It shows they aren’t afraid to admit when they don’t have the answer instead of trying to lie their way through it and hoping they don’t get caught. Extra bonus points if they look the thing up later to show they are curious and want to close knowledge gaps when they become aware of them.
People who are unwilling to say, “I don’t know, let me look into that,” are not fun to work with. After a while it’s hard to know what is fact vs fiction, so everything is assumed to be a fabrication.
When I was 11 I took a live assessment to get into the gifted program at school. I thought I didn't do very well because about 20% of the questions I answered "I don't know".
At the end the assessor told me that I passed specifically because I said "I don't know". They purposely put questions on the test they didn't expect you to answer to see what you do when faced with an unanswerable question.
I've used that in my own life since -- I much prefer working with (and have a much more positive view of) people who are willing to say "I don't know".
I couldn't agree more. When I am interviewing candidates, one of the things that I'm looking for is that the applicant is willing to say "I don't know" when they don't know. That's a positive sign. If they follow that up with a question about it, that's even better.
If a candidate is trying to tap-dance or be vague around something to avoid admitting ignorance of it, that's a pretty large red flag.
For every one person hiring with your mentality there are a hundred other managers looking to cut down the stack of a thousand resumes in any trivially easy way they can. That starts with saying sorry we are looking for someone else when you say you don’t know x or lack z on your resume. You are literally incentivized to lie and fake it on the job.
I'm going to be pedantic and challenge your use of the word 'learn' here. I tend to agree with the notion that being able to say 'I don't know, let me find out' and then find out quickly with a correct answer is in general a Good Thing™, but I wouldn't equate that with learning the thing they just looked up.
Yeah, the disclosure is very important. It’s the difference between an open book test and notes written on their thigh.
During some interviews I’d give people access to a computer. If they could quickly find answers and solve problems, that is a skill in itself, but I could see what they were looking up. Sometimes that part would make or break the interview. Some people didn’t have a deep base of knowledge in the area we were hiring for, but they were really good at finding answers, following directions, and implementing them successfully. They would be easy to train on the specifics of the job. Other people couldn’t Google their way out of a paper bag; I was shocked at how bad some people were at looking up basic things. Others simply quit without even attempting to look things up.
While I agree with you 100% in the context of a work environment, in an interview there is a series of checkboxes interviewers need to hear, and if you say "I don't know, I will look into that" you can really screw yourself.
I disclosed my use of chatgpt throughout the process and the hiring manager was excited that I was on the cutting edge. I used it for the project they gave me as well. :)
I don't think my friends disclosed that they were using it.
It's a fair point that I am making this assumption. At any rate, my comment could instead read:
> [If one assumes that the candidate] would have been able to perform the job duties I'm not sure why [they] should care.
This is what I mean; I can see why an interviewer thinks they've been cheated or that a candidate was dishonest but that doesn't mean that the interviewer even has a successful system for determining if a candidate can perform the job duties. A candidate who cheated -- from the perspective of the interviewer, I guess -- but still manages to adequately perform in their role very plainly did not cheat from a less biased perspective. What is that interviewer even thinking? How could that person have cheated?
> determining if a candidate can perform the job duties. A candidate who cheated -- from the perspective of the interviewer, I guess -- but still manages to adequately perform in their role very plainly did not cheat
That's not what anyone means when they say "cheating". Cheating means to violate the conditions and assumptions of an examination or contest.
For example, if a chess grandmaster uses an AI implant to win a game and gets caught, it doesn't make it OK if they could consistently win against the same opponent even without the AI.
Okay, that does make the position more understandable but I still don’t quite get it. Perhaps more accurately, I see these assumptions which others don’t necessarily share. The people claiming cheater have different opinions from the supposed cheaters.
I recall a Starcraft 2 match[0] involving a person with an apparently psychosomatic wrist injury that was only painful while they’re playing on stage. Their opponent was seeming to draw out a game they were losing in an attempt to trigger the pain; it was a viable strategy given the “best of” series they were playing. That’s certainly not going to be accounted for in the rules and one might believe that it’s an underhanded way to win. But both players are in the top echelons of game knowledge, experience, and skill; that’s the only reason either player made it to this particular match-up. The player with the wrist injury ultimately had it act up and lost the series.
Did the winner deserve to win? Should the other player be considered the better player? The assumptions of the game rules and what’s “fair” might be different per player; who’s right, who’s wrong, and why? What about when prize money is involved; that guy who won by the written rules just doesn’t deserve it because of unspoken rules? These questions don’t seem to have obvious answers, so of course I challenge assumptions.
You're completely ignoring the fact that honesty (& willingness to follow rules you might otherwise disagree with, etc.) themselves might be traits the employer is looking for in that role. Traits that (by your willingness to break the rules) you're obviously lacking. They just don't happen to be technical skills, but that doesn't mean they don't matter to the employer. What do you think you're doing by cheating? You're deceiving them into hiring someone with traits they explicitly don't want. You don't see a problem with that?
There are nicer ways to express your meaning. I haven’t ignored anything.
These traits are often not offered by the employer. Why do I keep hearing people talk about the underhanded ways that companies try to obfuscate salary budgets if not because they’re dishonest? I certainly see that as dishonesty; where are they coming from to demand such honesty from their candidates?
They get honesty anyway but that doesn’t mean I can convince them of it. If a person wants to assume guilt in someone, that is often what happens. You may not have experienced a person power-tripping over you but that’s been a good portion of my life and it’s hard to miss the patterns in a modern job interview.
To be clear, I’m not advocating for one to be dishonest. The person using ChatGPT to supplement their knowledge is not being dishonest; that’s my claim. The interviewer feels like the candidate “cheated”. Oh well. Too bad the interviewer isn’t above pejoratives. Gotta call it “cheating” so they can dismiss the candidate as dishonest. How dishonest!
People care because such a person isn't terribly trustworthy. There's more to being a valuable employee than just being able to perform the job duties.
Curious outlook. I, for one, avoid lying. The closest I get is omission. I'm not interested in remembering false realities depending on the person I'm talking with. The last lie I recall was a number of years ago where I said to a store clerk that I had recently been somewhere when in fact it was not very recent. Immediately after I felt bad. I value honesty.
Lying is such a fundamental part of human psychology that we lie to ourselves without even knowing we're doing it, and children learn to lie without any instructions. I wouldn't go so far as to say that it's instinctual, but it's very close to it.
Taking it even a step further - your memory is imperfect, the degree to which you can accurately recall events is significantly poorer than most people believe, which leads to incidental lies. We call them mistakes, but from the outside perspective that's just a question of intent.
That being said, despite my pessimism towards human nature, I too value honesty. But, like everyone, I lie occasionally - and I note that you don't claim to not lie, nor to have never lied. I'd call it honesty on a best efforts basis.
That job could have gone to someone who like actually knew what they were doing and was honest lol not sure why you want to defend professional and intellectual dishonesty?
This suggestion that a person who can adequately perform job duties could have even possibly cheated in their job interview is intellectually dishonest. If they had to cheat to get the job we should be looking at the interviewer. Why did the qualified candidate have to cheat? Why is whatever-they-did even considered cheating?
If they're qualified, they didn't have to cheat. If they're not, then they did. Either way, they're dishonest and that means they're not a desirable hire.
> If they're qualified, they didn't have to cheat.
(Just rewriting to specify my understanding: If the candidate was qualified, they didn't have to cheat even if they did cheat. They could have simply not cheated and been selected by the merits of their qualifications.)
This argument relies on the false premise that an interviewer will always accurately determine a candidate's qualifications. That a candidate is not qualified to pass an interview is not the same that a candidate is not qualified for the job for which they're being interviewed.
True, most interviewing processes are very imperfect by necessity and some qualified people will be mistakenly filtered out.
But also, there are usually several-to-many applicants for a position that are all qualified, and by necessity most of them won't get the position.
Additionally, technical qualifications is only a part of what an employer is looking for. There are other things that are at least equally important -- how well the applicant would fit into the team, how trustworthy they are, etc. It's about a lot more than just technical skillset.
> True, most interviewing processes are very imperfect by necessity and some qualified people will be mistakenly filtered out.
This is ultimately something I see as dishonest given the context of job applications. Employers generally expect a certain kind of perfection from job candidates, which they can’t manage to show of themselves. I understand that this isn’t an easy thing to solve -- nor even something that’s ever been solved -- but that should at least make it more understandable when an otherwise qualified candidate uses disallowed tools in their interview.
Perhaps the candidate’s real best option is to find a different company to work for but they may not be so privileged as to have a choice if their on-paper qualifications are lacking. Assuming their practicable qualifications are adequate, they may have good reason to bullshit through a bad interview. Additionally, finding a different company is pretty likely to be “same shit, different day”.
> But also, there are usually several-to-many applicants for a position that are all qualified, and by necessity most of them won't get the position.
Assuming they’ve qualified via an interview and there are particularly close candidates, pick the one who applied first. They’re admittedly qualified and further interviewing is just a means of discriminating in error-prone and possibly unlawful or immoral ways.
> Additionally, technical qualifications is only a part of what an employer is looking for. There are other things that are at least equally important -- how well the applicant would fit into the team, how trustworthy they are, etc. It's about a lot more than just technical skillset.
Fair enough. I would caution interviewers against judging too harshly or quickly. One can imagine many reasons an interviewee might choose or seem to lie during an interview while they are otherwise an honest person, ranging from stress to disillusionment to [cultural differences](https://news.ycombinator.com/item?id=39209794).
At the end of the day, filtering for liars and cheaters actually filters for bad liars and cheaters in addition to people who are a bit nervous or tired or stressed or cynical or just having a slightly off day; dishonest people who genuinely see nothing wrong with dishonesty get through just fine.
Is this junior/intermediate software engineer, or what? What sort of questions? CS exam-type, definitions, whiteboarding, programming, LeetCode, numerical problems, algorithm, data structures...? Programming-language certifications? Riddles?
To be fair, you were likely already getting out-competed by people with better connections or social skills anyway. Years in corporate leadership have cleansed me of the notion that merit is required to be a major factor in hiring decisions.
I have a really annoying habit of constantly double-clicking to highlight whatever I'm reading or looking at.
I've actually been called out for it in a systems design interview, under the presumption I was copying my notes into another window, but was glad they called me out so that I could explain myself.
Same. I sometimes use Edge when a site is broken on Firefox and I get into trouble there because it has super weird behavior when you highlight text. Very annoying.
I compulsively left and right click random shit all day. It helps me encounter bugs like steam locking up for a few seconds on Linux if you right click the steam windows or overlay too quickly.
I'm a compulsive highlighter too, but it's generally in the vein as xkcd (https://xkcd.com/1271/) and not a select all. Frequently, highlighting ends up starting in the middle of a word!
On some modern websites I end up having to select all because the selection mechanism is broken (I'll highlight, remove my finger from the mouse button, highlight another piece of text, and yet the original highlighted text is still totally, or even worse partly, highlighted). Ctrl-A to select all and then clicking anywhere is the only way to clear all of the highlighting in these instances.
Thanks for the XKCD. I didn't realize how common this is. Now I'm even more annoyed that so many websites and reader apps force context menus or 'gestures' when you highlight, without a way to disable those context menus or gestures.
Oh, I do too. But then comes the monotone typing out of the answer, eyes darting back and forth between two screens; it's kind of obvious. The select-all just starts me looking.
I just selected your reply here while reading. Some people use mouse selection as a visual aid to keep track of where they're reading. It's there, and it's handy!
(I also just select randomly sometimes. Not even quite sure why.)
> Maybe in the future everybody is going to use LLMs to externalize their thinking. But then why do I interview you?
It will become a skill. In 1900 you'd interview a computer (a person who does math) by asking them to do math on paper. Now you'd let them write some code or use software to do it. If the applicant didn't know how to use a (digital) computer, you'd negatively rate them.
I don't love it, but we may reach the point where your skill at coaxing an LLM to do the right thing becomes a desirable skill and you'd negatively rank LLM-illiterate applicants.
Looking at LLM quality, we're not at that point for most fields.
You're not asking the correct questions as an interviewer. You should be asking specific questions about projects they've worked on, or about them personally to get to know them -- questions ChatGPT shouldn't be able to answer. Pretend you're Harrison Ford in Blade Runner.
A candidate can do very well on questions about their personal and web project experience, and then suddenly blank when you ask them how an HTTP request is structured, or what CORS is (sketched below).
Then you dig further and discover a lot more things about them that wouldn't have surfaced otherwise, because you assumed they knew all of that.
My best advice would be to never skip the "dumb" and easy technical questions. You can do it very quickly, and warn ahead of time that they're dumb questions you ask everyone.
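To make the "dumb question" above concrete, here is a minimal sketch (my own illustration, not anything from the thread) of what asking about the structure of an HTTP request is probing: a request line, header lines, a blank line, and an optional body. The host and path are placeholders, and the CORS note in the comments is only a reminder of the mechanism the question refers to.

```python
# Hypothetical illustration: the raw shape of an HTTP/1.1 request,
# the kind of "dumb" baseline knowledge the comment above is about.
import socket

# An HTTP request is: a request line, header lines, a blank line, optional body.
raw_request = (
    "GET / HTTP/1.1\r\n"        # request line: method, path, protocol version
    "Host: example.com\r\n"     # headers, one per line
    "Accept: text/html\r\n"
    "Connection: close\r\n"
    "\r\n"                      # blank line ends the headers; no body for a GET
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(raw_request.encode("ascii"))
    status_line = sock.recv(4096).decode("ascii", errors="replace").splitlines()[0]

# CORS, the other topic mentioned above, is a browser-side policy driven by
# response headers such as Access-Control-Allow-Origin; it never shows up in a
# plain socket exchange like this one.
print(status_line)  # e.g. "HTTP/1.1 200 OK"
```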
Knowing the structure of an HTTP request and what CORS is checks for a common technical vocabulary, but I would draw a blank when asked directly. It feels a bit like "I had to learn it," even though these are just googleable labels for simple topics. I've heard of interviewees being dropped for not knowing the difference between 402 and 401.
I think blanking is actually OK. From there you could probably explain what you know about it, how it was set in your project, or any peripheral story that comes to mind at that time.
I see it as a different angle to get more information.
I agree with the "drill down" technique. Example: How does a dynamic array class (Vector, List, etc.) work? The very best interview questions have "fractal complexity" that allow you to drill deep.
You can't only ask those questions, because some people are extremely good at bullshitting.
I always start interviews by asking them to explain their own projects. However, sometimes I'll find someone who's great at explaining projects they supposedly worked on in great detail, but then when given a simple coding problem they can't even write a for loop in their own top language.
As an experiment I gave ChatGPT my resume and background information and then pretended to interview it, just to see how well it would be able to conduct a mock interview. It did exceptionally well.
I'm not sure what specific questions you have in mind, but ChatGPT is almost certainly trained on a vast array of resumes and a diverse range of profiles, possibly even all of LinkedIn itself as well as other job boards. There is little to no reason why it wouldn't be able to make up an entire persona who is capable of passing most job interviews.
One red flag for me is when the interviewee gives "cork" answers -- the metaphor is that of a cork bobbing in the water. If you ask superficial questions about work they've done, they answer convincingly. But the further down you go into the details, the more resistance you get, and the cork keeps bobbing back up to the surface level.
Using LLMs isn't externalizing or outsourcing thinking; LLMs aren't doing any. People doing this are in fact substituting thinking with a process whose output masquerades as thought after a fashion, but is basically word-cloud, probability-based pattern matching.
Sure, the point that superior tool use is a valid job skill makes some sense, but conceding your agency and higher reasoning to a machine that possesses neither is, to my mind, not going to be beneficial to a business in the long run.
Perhaps interviews need to assume the person being interviewed is using an LLM and can be evaluated on how effective they are with it. Presumably this is what employers want. The challenge is interviewers are busy, would prefer to be doing other things and want to stick to their old playbook ("tell me how to invert a binary tree").
If it is out in the open, with the chat/prompts available, you can ask other questions. You're not on your toes trying to catch a cheater. You're not assuming that the interviewee is lying or trying to scam you.
No, it's not what they want. If they wanted you to use an LLM, they would tell you that up front. It's also too new a technology to be required anywhere; hardly anyone I know is even trying LLMs to begin with. Then, what do you do if the interviewee gets garbage code out of the LLM and misses an error? An error that might be forgiven in a normal interview can't be excused when you didn't even have to write the code. Technically, if the LLM did the coding for you, you might pass without even being able to read code. This is all for the same reason you can't use a laptop on an algebra exam: the tool might do 100% of the work and leave you having shown nothing of your own ability.
It may not be what the interviewer wants, because they would like to keep using their old interviewing strategies. However, it is what the business wants (or should want, assuming they want the most effective employees).
It's not about being attached to interview strategies. It's about the fact that some people only copy/paste and aren't effective even by basic standards. I bet you'd consider an answer pasted from Stack Overflow and misrepresented as original to be 100% OK too, but both are unacceptable.
It is important to focus on the intersection of human and machine intelligence. If you listen to the AI luminaries, the role of the human will be more like a manager so perhaps understanding the code may eventually become unnecessary. However, my own experience with LLMs so far is they do seem to have trouble getting fine details correct. Presumably it will change over time.
You should just openly let them use chatgpt (assuming they can use it on the job too). When I interview people I try to create the same environment as the one they’ll be working in. They can use chatgpt, google, stack overflow, etc. I don’t care how many tools they have to use, as long as the work output is good and done in a reasonable time. I really don’t understand the obsession with coding on whiteboards or other situations that will literally never come up on the job. There will never be a time my employees can’t use google or chatgpt. In any case, you can tell pretty quickly how much someone knows about a topic just based on the questions they’re asking chatgpt.
Whoah, hold up: Why should we believe that success using an LLM to (possibly blindly) look up the answer to interview-questions will strongly correlate to success using an LLM to craft good code, properly tested, and their ability to debug it and fit it into an existing framework?
Heck, at that point you aren't even measuring whether the candidate understood the question, nor their ability to communicate about it with prospective coworkers.
If there are any questions where "repeat whatever ChatGPT says" seems like a fair and reasonable answer, that probably means it's a bad question that should be removed instead. Just like how "I'd just check the API docs" indicates you shouldn't be asking trivia about the order of parameters in a standard library method or whatever.
Nothing I hire for requires someone to do the World’s Most Challenging ™ life or death problems under pressure from memory. I think that’s true for the vast majority of tech companies. If I need someone to wire up a database to a react interface, or write some cron scripts, or refactor an old nodejs codebase, that is all stuff that chatgpt would be a great tool to use. I don’t care whether they’re doing it from memory or not.
> Nothing I hire for requires someone to do the World’s Most Challenging [...] from memory
That's a bit of a strawman: I didn't say anything about the ease/difficulty of the role being filled, and I implied rote memorization was not meaningful.
To reiterate, interviews should produce good data for choosing between candidates.
That's not happening when the given problem is solvable by an LLM using a human as a proxy; everybody's just burning man-hours of company and applicant time on interview theater that isn't useful for making a decision. (Well, not unless the hiring goals include "willingness to jump through hoops".)
If they're just putting the question straight into GPT, then what benefit is the candidate bringing? I can use GPT myself, and for a lot cheaper than the cheapest candidate.
If the interview is for a position in which the candidate will be tasked with solving problems that ChatGPT is able to help with significantly, then they have just proven they are capable of doing the job. (If you have time to do this work yourself, why are you interviewing anybody at all?)
Assuming the interview is to determine somebody's programming chops, without the benefit of ChatGPT, you'll have to ask questions where ChatGPT is little to no help. This was the conclusion of the article.
Why does it matter to you if they can do it from memory, if they can find the answer easily from ChatGPT? It's like asking "how can I tell if somebody knows the exact definition of a word without having to look it up?" If it's really important that they have that ability (e.g. because you will be asking them to perform other tasks which are not so easily solved by ChatGPT, or you simply don't want them using ChatGPT at their job for whatever reason), then you will have to devise an interview scenario where the candidate is incapable of using ChatGPT clandestinely, e.g. by bringing them into your office.
At Caltech, exams were typically open book, open note. The time limit on the test, however, prevented attempts to learn the material in the time allotted. Calculators were also allowed (though were useless on Caltech exams, as course material didn't care about your arithmetic skillz).
I suspect the way to deal with ChatGPT is to allow it. Expect the interviewee to use ChatGPT as a tool. Try out the interview questions beforehand with ChatGPT. Ask questions that ChatGPT won't be good at answering, just as a calculator is useless on a physics exam.
Using ChatGPT as a tool makes as much sense as allowing a human assistant to take the exam with you.
In an open-book test, you have to know what you're looking for and roughly where to find it in the book. That implies some knowledge. With ChatGPT you could type the question verbatim and get a potentially right answer, without even understanding the answer at all. It is therefore unacceptable for use on any exam.
As a former tertiary educator (for a brief moment, before I decided academia wasn't my thing), that's how open book exams are set; the assumption is you have knowledge of the subject, and the books are there for you to verify and quote examples of/from.
NOT to browse through looking for a solution from step 0.
> But then why do I interview you? Why would I recommend you as a candidate for a position?
Presumably you have tasks that you want performed in exchange for money? (Or want to improve your position in the company hierarchy by having more people under you or whatever).
It sounds like the problem is really that this is the most obvious cheater. Someone better at manipulation and deception might do a better job cheating the interviewer such that they're hired but then be entirely inadequate in their new position.
That works in a world where everyone is technically competent, but oddly many people applying to software positions are, optimistically, planning to learn on the job. Work with enough folks like that at once and the motivation for the coding interview becomes clear.
Lame? It’s a bare minimum demonstration of ability.
The number of experienced candidates I’ve interviewed just in the past few months who have trouble writing a for-loop in the language they’re “experienced” in might astound you.
Welders sometimes (always?) have to go to a certification center to demonstrate that they can actually perform the types of welds the job they’re applying for requires.
You're 100% right, but I think your experience is different from that of recent job seekers. I think this is mostly semantics. You're asking simple problems and are amazed at the number of people who can't do them.
In the current job market, however, lots of places are asking ridiculously hard verbatim leetcode questions in an attempt to filter out "bad candidates." Job seekers feel that too many places ask unfair questions (which is true) and employers feel that there are too many candidates that can't write genuinely simple programs (also true).
> Interviewing as a process sucks enough as it is.
It truly does, and it sucks just as much for the employer as for the applicants. That's why I suspect that more interviews will be required to be in person: if it's too easy for someone to cheat, that makes everything suck even more for the employer and the employer is likely to adjust the process to minimize that suckage.
> Technical interviews are lame and filter for people that are good at technical interviews, not people that are good at the job.
Not automatically, but yes, bad technical interviews filter for people who are good at technical interviews. And too many interviews (technical or otherwise) are bad.
I've had this happen too, with almost the same responses. It was even more obvious because I could see the reflection of their LCD backlight glowing across their face as they switched back and forth to answer the questions. I just directly asked if they were using an external resource to answer my questions. They said yes, as if it were normal. I thanked them for their time, as that was my last question.
> The interviewee sits there in awkward silence for a few seconds, and starts answering in a monotone voice, with sentence structure only seen on Wikipedia. This repeats for each consecutive question.
That's a bit better than proxy interviews and people lip syncing, but not by much.
How much can you mitigate this by interviewing them remotely but on video? Then you can see if they're typing and reading the answer (unless they have a friend doing that and feeding them it in an earphone, as I hear happens).
> but people who use ChatGPT on the interviews make a disservice to themselves and
I think most people have been thinking that the interviews are mostly BS with little relationship to the job, which you simply have to get through.
Many, many people will cheat to the extent that they think they can get away with it.
It's a bit like how many people cheat in school. (On classes they consider irrelevant, they might justify it that way. On classes that are relevant, they might justify it by saying that passing, or their GPA, matters more to their goals than learning that material at that time.)
I think people generally don't believe a "you're doing a disservice to yourself" argument. They choose the tradeoff or the gamble.
Personally, I don't tolerate cheating, and I have a low tolerance for interview BS. Neither is the dominant strategy for the current field.
Considering how much time is spent on manufacturing BS for consumption by bosses, professors, teachers, and advertising? I think this is going to automate at least half of the work office workers and students are doing now...
https://github.com/m1el/samaras/blob/master/src/xorshift128....