I find it entirely appropriate that recruiting managers wear their biases on their sleeves in this manner. As representatives of the organization, they put its core values and culture at the forefront; they represent what the owners are really all about.
Given their rarity, it is important that those with the ability and inclination to be part of "an association for superior IQ people" not be placed in an unsuitable role at an unsuitable company. Better for all involved, I would say.
> From a practical standpoint, this research may ultimately lead to insights about how to improve people’s psychological and physical well-being. If overexcitabilities turn out to be the mechanism underlying the IQ-health relationship, then interventions aimed at curbing these sometimes maladaptive responses may help people lead happier, healthier lives.
Reading between the lines, in these cooling months, sent a chill up my spine.
I think you've touched on a crucial point here about making the comparison "fair". In my view, the "trivia questions" approach is an abdication of responsibility for the difficult, nebulous task of assessing a candidate's abilities. It gives you an objective measure by which to compare candidates... even though the measure itself is usually arbitrary and irrelevant.
> What if you consider part of someone's job to be communicating concepts to people, possibly with the help of visual aids and diagrams?
Perhaps it would be more effective to have the candidate whiteboard a concept that they are already familiar with, be it a high-level engineering principle or a system/solution they have built in the past. Attempting to solve a problem you have just been presented with AND communicating the solution effectively is a big ask.
That's an excellent idea. I'm in no way saying the existing method is perfect. Just that some of the things it tests, like communicating, handling being put on the spot, and analyzing a problem in a way the rest of the room can follow, are genuinely good engineering skills. There can be other great ways to measure those skills.
I know my opinion is unpopular, but I sometimes do think I've figured out something other people miss. When it comes to whiteboard interviews, candidates are often playing the wrong game. They think it's about gotcha questions and they think they fail because they didn't leetcode hard enough. I don't think that's true; they're just trying to game the thing that's easy to measure.
In my experience with the "terrible" FANG companies, it's not about gotcha questions. I get the offers even though I don't often find a non-naive solution. The people I'm in the room with really do want to see my thought process and they really do want me to communicate the tradeoffs with them. People don't fail the gotcha questions because they don't know trivia or because they forgot a detail from their CS classes. They fail the whiteboard interview because they see an unfamiliar question and say: "I don't know that trivia" instead of drawing out possible solutions and having a conversation with their interviewers.
> They think it's about gotcha questions and they think they fail because they didn't leetcode hard enough.
But ... if you read some of the feedback from interviewers at "those companies" that rely on these interviews, they say the reason they failed someone is exactly because they "didn't leetcode hard enough". It is manifested as:
"Well other candidates got the same solution as you, they just did it 10 minutes faster"
or
"You missed an edge case, even though your core algorithm was correct".
This is a huge issue with these interviews. It's all too easy for interviewers to evaluate candidates based on how fast, correct, neat, or "complete" their answer was. It's easy and takes no time.
Agreed, that's a disingenuous point. "It's not about getting the right answer but the way you think"? I've never found that to be true. If you don't get to the right answer, you're gone. If they planned to ask two questions and you only got through one, you're gone no matter how you "thought" about it. A huge part of this is Leetcode practice. If you can't solve most algorithm questions on a whiteboard in less than an hour (because you haven't practiced), then you won't pass any interviews.
> "You missed an edge case, even though your core algorithm was correct".
That may have been a reason I failed some interviews, but I feel like most interviewers are actually good about this: they'll say something along the lines of "what about this input?" for a case where my code doesn't work, I have that "oh shit, that won't work" moment, and then I fix it.
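To make that concrete, here's a made-up illustration of the kind of exchange I mean (the function and the "spec" are hypothetical, not from any real interview):

    # Hypothetical whiteboard answer: average of a list of numbers.
    def average(nums):
        return sum(nums) / len(nums)

    # Interviewer: "What about an empty list?"
    # average([]) raises ZeroDivisionError, since len([]) == 0.

    # The on-the-spot fix:
    def average_fixed(nums):
        if not nums:
            raise ValueError("average of an empty list is undefined")
        return sum(nums) / len(nums)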
Thanks for elaborating so well on your initial premise; I think you bring up some good points about how this kind of interactive problem-solving can be an effective tool for the interviewer. For better and/or worse, there's a reason it's now so prevalent, and so controversial.
I'm inclined to describe it as a sort of interrogation technique: you offer a stimulus (the problem), aggregate a number of reactions to form an opinion, and open up new lines of questioning. Very delicate work.
I think problems arise when unskilled interviewers present an excessively obtuse problem to the candidate, then read far too much into their responses (this approach is also easily "hacked" by those who've memorised common problems of this ilk). To stick with my interrogation analogy, it's the equivalent of screaming in the suspect's face, noticing that they gulp before shifting in their seat and scratching their face, then deciding that "they must be lying".
That's an awesome idea -- as someone who rabidly hates whiteboard interviews, to the extent that I'm looking at the responses in this article as a note of who to consider applying with next, I would love the challenge of "explain a complex concept you're already familiar with". I've never gotten that in an interview before.
I try to start there, but most candidates act shockingly uninterested in going into the details of their past work. I've asked directly what the interesting parts of a project were to them and gotten things like "I had to learn ES6 and I hadn't used much JS recently", with little further elaboration to be teased out.
If you can't give me good examples from your past, I'm going to throw my own questions at you. I tend not to do coding on the whiteboard, though. Lots more boxes and lines and schema and system interaction-y stuff.
One aspect of the protomolecule that I quite enjoyed was the manner in which Earth, Mars, and the Belt attempt to control and leverage it to assert their dominance or, at least, maintain a balance of power.
It speaks to the fact that once "Pandora's Box" is opened, technologically speaking, it becomes a fact of life that must be adapted to. Drone technology, for example, is a relatively new form of tech with potential to be used for productive AND destructive purposes, and there are no obvious answers as to how it should be handled.
There are also innumerable CG videos on YouTube, targeting children, which appear to be algorithmically generated. Using a combination of popular kids' characters (Elsa, Spider-Man, etc.) and ostensibly canned animations, the results are often strange, bizarre, and even entirely inappropriate for the intended audience.
The phenomenon is referred to as "Elsagate"[0]; be forewarned, it's pure nightmare-fuel.
There is also an entire genre of children's nursery rhyme videos and toy videos which are not so obviously inappropriate, but which also have an uncanny algorithmic quality to them - I remember seeing those before I even heard of Elsagate.
Sorry for self-replying, but I now remember what I was referring to: the "Finger Family" type videos[0], particularly Toys in Japan[1], who is apparently now streaming Fortnite.
Isn't "-man" an abbreviation of "human"? Justin Trudeau made a gaffe recently when an audience member referred to "mankind" and he erroneously insisted that she instead use the term "peoplekind". Surely "humankind" would be the term requested, if one felt the need to be pedantic.