Caltech's CS interview prep course (caltech.edu)
320 points by b8 on Feb 27, 2022 | hide | past | favorite | 320 comments



My career path in tech so far has been junior engineer -> mid level -> senior -> tech lead.

As I've progressed through that path, I have had literally no opportunities to apply the stuff I've learnt on HackerRank, save for a couple of interviews.

At this point I'm convinced that the focus of those interviews was entirely wrong. Once you pass those tests and get hired, you will be dealing with tons of legacy code riddled with dumb queries, questionable code quality, and very often the wrong stack for the problem at hand.

I'd be OK with modern interviews if they reflected the job you will be doing to a reasonable degree. But knowing the state of the industry I feel that they fall somewhere between gatekeeping and a bait & switch scheme.

If you haven't spent enough time practising those problems, you will look like an idiot. If you have spent enough time practising them, your expectations will be high, and you will feel scammed when you read the steaming shit your new employer wants you to improve.


Bingo.

It should be telling that there are a number of, usually for-profit, resources for prepping for just the interview process:

* leetcode

* Cracking the Coding Interview

* Elements of Programming Interviews

* Interview Kickstart, which costs like $5500, is a bootcamp just for passing interviews

* educative.io

* algoexpert.io

That being said, I might just play along with the DS / Algo interview game. Having an interview process that is artificially arduous and way outside the expectations of the job duties means that the job market will stay artificially inflated on the demand side. Hiring leetcode monkeys who can barely deploy and manage a k8s cluster on AWS means my skills will be in more demand than if the interview process actually worked. I can complain all day to my bosses about how hard it is to hire (it's not that hard) and meanwhile rip their HR budget to shreds when I threaten to leave unless I get paid more (I've done that before; I recently got a promotion and a 50% raise on base alone).

So yeah, I'll gatekeep as long as others play too. I hate the game, but I will play it to my advantage.


I know you think you’re making a “cynical but rational” kind of argument here, but man, this comment is a huge fucking bummer.

You’re acknowledging that you have both the ability and the knowledge to change (your small part of) a shitty system, and instead you’re pulling up the ladder behind you. That sucks.

You should not let yourself feel good about that, even if it’s “economically rational” or whatever.

You’re making things just a little worse for everyone but yourself — actually, even for yourself, too — and trying to excuse it with “well, that’s just the way things work!”


You'll feel better when you see some of my other comments. I'll paste one here for you:

> What’s your suggested alternative?

In my experience, any alternative to leetcode provides a better signal-to-noise ratio. You are better off asking trivia questions.

Additionally, this is the common narrative among leetcode advocates - "If no leetcode, then what?" As a hiring manager, you think harder and do better, that's what.

> Please remember that while some of us would do well and prefer some live pair programming or debugging sessions, it stresses the hell out of some folks and penalizes them unfairly.

So does leetcode. And a candidate that is truly a good fit for the role would be more at ease with the interview than if they saw a leetcode question they haven't seen before.

> Take home tests penalizes people with families and other responsibilities...

So does studying for leetcode for hours.

> ...and probably has a racial bias as well in countries where different races have different societal loads.

Share some sources before making a claim this ridiculous.

> In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job, as much as any other method can be when we look for unbiased markers.

In my 6-ish years of experience, this is false. I have actually tested this theory in practice, and through my anecdotal experience, leetcode is a poor signal for job performance.

I can usually accurately screen a candidate for success in about 45 mins. And, unlike leetcode, you can't pretend you haven't seen the question before, because I can push the candidate to their limit, which is where you will find hire / no-hire signals. I love talking about this stuff, but I find that my engineering peers do not give a shit and would prefer to stay in leetcode hell.


What do you suggest instead?


I mean, I don't even have to suggest; I can let the OP suggest instead.

> Having an interview process that is artificially arduous

Make your interviews less artificially arduous.

> way outside the expectations of the job duties

Re-design your interview questions to be more in line with the actual job.

> leetcode monkeys who can barely deploy and manage k8s cluster on AWS

See previous suggestion.

If you want my personal suggestions on interviewing better:

- Interview for problem-solving process, not just for correctness. A candidate's ability to solve an algo problem in 45 minutes is much less important than their ability to calmly, thoroughly, and clearly explain the way that they think.

- Tune interviews to the role at hand. This should be obvious, but for some reason full-stack web devs still have to pretend like dynamic programming is a big part of their daily work.

- More systems interviews. I find that it's much harder to bullshit or lean on memorization here, at least for very long.

- Interviews based on actually building stuff. "Hey, here's a super simplified version of a feature someone in your role built last month. Can you walk us through how you would build the same thing?"

Interviewing is hard. It's not that fucking hard. When someone is out here complaining about what they see as obvious flaws in their process, and then decides that the best course of action is to "gatekeep as long as others play too," it just reeks of laziness and self-interest.

This does not get better unless individuals decide to start making it better. That's my biggest suggestion, really; don't become a cog in the machine. Every single interview that you, personally, conduct has a small but meaningful impact. It's up to you to have that impact be positive or negative.


> leetcode monkeys

I hate this from someone who manages interviews. They are leetcode monkeys, because they are not dumb and this is what gets them rewards and jobs. If he changes the criteria, they will change what they spend time on.

But, instead, he chooses to insult them ... for doing exactly what he rewards.


> Tune interviews to the role at hand. This should be obvious, but for some reason full-stack web devs still have to pretend like dynamic programming is a big part of their daily work.

This personally hurt :)

Completely agree with you on everything you've written.


This means all salaries go back to 100k. The salaries in Silicon Valley normalize with India.


I don't hate leetcode, but I'd really rather do interviews in one of these ways

1. Rely on referrals from people I trust. If somebody I've worked closely with and respect highly says that Person X is awesome, then just hire Person X. This obviously has massive problems with creating an insular culture and only works for people with a sufficient professional network, but it is incredibly high signal.

2. Have a long conversation with the candidate. Discuss their prior experience and war stories. This can be gamed, but I suspect is harder to game than leetcode.

Both of these options have a big problem as they scale, which is that interviews do not transfer. You need the hiring manager to do this interviewing. Making every hiring manager at a 100,000 person company do their own sourcing and interviewing is going to be a mess. Megacorps want to allow anybody to interview and then once somebody passes those interviews, they'd be able to join any team. I have mixed feelings about this approach.

So the best option IMO is two 2-3 hour pairing sessions. Refactor some code, develop a feature, and diagnose a bug. Have the person comment on the architecture of the code or system they used and describe large scale changes they'd make to improve it or prevent it from arriving at this state.

This requires more pre-work since you need to create an entirely working fake system and environment. And since there is a huge incentive to practice for these things, people would definitely sell knowledge about the fake system. It is also higher variance when there are new interviewers, since they represent a larger portion of the total interview panel.


Make the pie bigger.


Personally, I'd much rather have a strong understanding of Algo/DS than deploy K8s clusters.

To me, not only is that an annoying external cognitive load, but one that will probably be irrelevant within the decade.

I use my knowledge of graphs quite frequently, and what was true about graphs in the 60s is still true today.


What is that supposed to be telling? There are quite a few industries with profitable neighbor industries around breaking into them. Even ones with licensing and blah blah blah instead of whiteboards.

Hell, the cynics would call the entire current university system one! And if not that, then at least the massive industries around getting into better schools...


Why don’t other STEM fields have those neighbor/parasite industries?


Science typically requires advanced degrees, publications, etc. You're not doing much of anything a lay person would consider you a "scientist" for with just a Bachelor's. Getting these degrees serves as the hurdle/filter.

Math is the same as science. You're probably not getting hired as a mathematician if you just graduated with your Bachelor's.

[Non-software] engineering has all the professional certification, shadowing, etc. that is required to get very far in the field.

Software is the only STEM career you can get into right out of high school, being completely self taught. Yes obviously a degree helps and it's much easier with one, but not having advanced degrees is a non-starter for most STEM careers, let alone not having one at all.


Oh, boy, now let me tell you about medical school.


They do. Google “VLSI interview prep”


> I can complain all day to my bosses about how hard it is to hire (it's not that hard) and meanwhile rip their HR budget to shreds when I threaten to leave unless I get paid more (I've done that before, I recently got a promotion and 50% raise on base alone).

Kudos for that. I may be making a similar move soon.


The hiring rigmarole is just an IQ test in disguise.

The only downside: you can prep for IQ tests, and spend unlimited time doing the prep.

If only someone would come up with a way to control for how long they prepped for this: one week on hackerrank, or 2 years of courses.

I bet those that learn the rigmarole faster will also pick up k8s and other such fairly ephemeral trivia rather quickly as well.


> The hiring rigmarole is just an IQ test in disguise.

Do you have any data to back a claim that ridiculous?

I hear this bullshit all the time, yet nothing to back it up.

If you think IQ would be a good hiring signal, then just administer an IQ test instead of LC questions.


I agree with you in that, we should drop this nonsense, and administer IQ tests legally.

Until laws change, this rigmarole will continue though. Administering IQ tests is illegal, that's why this rigmarole exists in the first place.

The military isn't subject to silly laws like that, and they do administer IQ tests directly, and place talent accordingly.


The reason this complaint never changes anything, despite being very popular, is that these interviews serve two purposes that it doesn’t address: 1) they’re an intelligence test; 2) they’re a perseverance/conscientiousness test.

Will you take the time and put in the work to pass one of these interviews? That measures perseverance and conscientiousness. Having put in the time, can you recall and adapt what you learned to perform complex tasks on a whiteboard in a stressful environment? That’s an intelligence test.

In combination, these criteria measure the famed “smart and gets things done” in a way that more practical lines of questioning might not. Or at least that’s the theory.

For the record, I’ve only interviewed at one FAANG company, got to the on-site interviews, did not get an offer (in case you think I’m secretly patting myself on the back here).


> The reason this complaint never changes anything, despite being very popular, is that these interviews serve two purposes that it doesn’t address: 1) they’re an intelligence test; 2) they’re a perseverance/conscientiousness test.

No argument on perseverance, but I don't think it measures conscientiousness. At least, not how I understand the term, which has connotations of intrinsic motivation rather than extrinsic.

Now, if measuring how a person responds to extrinsic motivation is your goal, then this type of interview does just fine. It measures a person's ability to delay gratification for a later reward (you can have perseverance without that; call it stubbornness or determination, e.g. "I'm going to figure this out even if it kills me").


I agree, I don't think it measures conscientiousness well either. There's not a lot of room for careful thinking and trial-and-error in a timed, 45 minute, observed, pass/fail interview.


They are a memory test, not an intelligence test. If you truly wanted to measure intelligence, it would have to be based on a priori knowledge and not on the ability to memorize algorithms and their implementations.


The notion that somebody could simply "memorize algorithms" and then apply that information on-the-fly to questions they won't know in advance -- demonstrating adaptiveness, creativity, and the application of general problem-solving knowledge to novel problems -- and that this doesn't measure intelligence is not credible.

I'd bet my net worth that the ability to perform well on these interviews positively correlates with IQ.


I don't think it is as you describe it though, the leetcode questions in interviews are usually just slightly rehashed. What would be interesting is giving someone a puzzle they hadn't seen before. My wife can't remember formulas or algorithms for anything - she defends that by saying she can just look it up if she needs it. Saying that, we did Advent of Code last December and she came up with quite a few optimal solutions by first principle (albeit slowly). She'd suck getting through a leetcode interview. I'm quite lucky in that I remember these things really really easily but really struggle with truly novel problems. I just reach for my toolbox. I don't believe these interviews correlate with genuine intelligence, just a particular type of memory and question recognition.


There is a bit of reasoning backwards from conclusions here.

For example, you will likely find the applicants rejected numerous times from FAANG have higher age-adjusted bodyfat levels and more dental cavities compared to successful candidates. And of course these measures negatively correlate with IQ too.

So what? Correlation doesn't necessarily make for reasonable selection criteria. Variance is too high.


Algorithm design is a skill you can learn. It probably correlates with IQ, but you have to control for experience to see it. Experience is certainly more important than intelligence.

In my experience, algorithm design is primarily pattern matching. You have a toolkit with a set of abstractions for modeling the problem and a set of algorithmic techniques for solving it. If you have the right tools in the toolkit, potential solutions will jump out. You then pick a promising solution and figure out the details. If you don't have the right tools, you have to go back to reading or start building new tools from basic principles. That can take hours, days, weeks, months, or years, if you succeed at all.

I guess this is similar to proving mathematical theorems, but I haven't done much of that after grad school.


> primarily pattern matching. You have a toolkit with a set of abstractions for modeling the problem and a set of algorithmic techniques for solving it. If you have the right tools in the toolkit, potential solutions will jump out. You then pick a promising solution and figure out the details. If you don't have the right tools, you have to go back to reading or start building new tools from basic principles. That can take hours, days, weeks, months, or years, if you succeed at all.

But you're claiming this is somehow not "intelligence"?


Intelligence is usually understood as a relatively fixed factor that cannot be trained effectively. Algorithm design is mostly about the patterns you already know.


IME how many patterns you were able to rote-memorize is a very small factor in how good you are at algorithm design; the limiting factor is more likely to be your ability to filter out irrelevant details, abstract out the essence of the problem, and see how that relates to your library of patterns, and that part is mostly innate rather than trainable.


> and then apply that information on-the-fly to questions they won't know in advance

But that's the whole point of LC grinding: to claw your way through enough of these problems until, lo and behold... there ends up being an 80 percent chance that any given "medium" problem thrown at you -- is one that you have in fact seen and worked on, already.

I'm just guessing at the 80 percent figure. But it seems pretty clear from the cult of LC grinding that one of the goals of this strategy is not just to learn general techniques for solving these problems -- but to get a significantly high percentage of these problems under your belt already, or nearly so.

While also learning to deploy just the right grunts and moans and pauses to make your interviewer think you're seeing the problem for the first time, of course.


I feel like people who only memorize would not be able to pass the contemporary tests at most of these big companies.


I feel like you don't know what you're talking about. That's not meant to be an insult but to highlight the fact that "feeling" doesn't matter. Because my feelings about your statement have no bearing on the accuracy or content of your statement.


"I feel like" in this context is a synonym for "it is my belief that," so your response is a non sequitur based on a misunderstanding of a colloquialism.


If you aren't interviewing with the team or hiring manager that has the opening, and are just getting assessed by a random SWE in a giant company, that seems subject to a lot of variance.

Why don't they just conduct a proctored test? Doesn't make any sense.


I really think you nailed it, will have to borrow this point in the future.

A lot of it is about gauging the candidate's aptitude, and of course their willingness to put in the work.

It's like those kids in math class that say, "I'm never going to use this in real life."


Those kids are right. School math is mainly an attrition machine. Most students, including in STEM fields, look forward to forgetting all of their school math on graduation day. The math that engineers need is baked into their high level software. Most engineering teams have a "math person" who handles any higher math related problems that crop up. I'm one of those people at my workplace.

I was a college math major, but believe that the math curriculum is ripe for reform.


This is a chicken-and-egg problem that I believe exists in computer engineering/developer positions because of: i) the extremely diverse backgrounds of hires (some of them not being engineers or mathematicians or physicists per se, i.e. they could be coming in with a bio background (ML usually), no degree at all (perhaps a bootcamp, usually front-end work), or other); ii) a lot of businesses doing low-hanging-fruit work with little novelty; iii) a lot of cost that can be thrown at "buy more hardware/server time"; iv) low quality being not just accepted but sought after.

Because of the usual lack of rigor -- e.g. most companies will not require you to prove that the system is going to be up 99.9999% of the time, that the load balancing is optimal, or that the abstract queue in our system is not going to diverge (when it inevitably does, we will just reboot and keep doing it until we get more funding) -- the expectations and targets are set to avoid it too.

Most developer positions are essentially doing the civil-engineering equivalent of designing one- or two-story buildings over and over.

There are positions that will exercise one's engineering skills (including math). However, the culture and expectation of having a "math person" does not promote them. Graphics, cryptography, distributed/byzantine systems (I don't mean using AWS here), optimization, ML (new algorithms/improvements), and maps are some areas where one's CS/math/engineering skills can be used.

But note that when any X of those areas become trendy there is the gut reaction of an "intro to X for programmers" coming out that is a couple of hand-wavy sketches, some sketchy intuition -- most often wrong in the details -- and an API guide.

Having done my undergraduate in Europe, every year was math heavy -- math was the core not the attrition machine. And I do see this mentality in the U.S. startup industry where a lot of times the Edison approach is taken of trying every single filament width choice on a mock-up instead of doing a quick calculation.

Having taught math to undergrads in the U.S., and from discussions with people who TA'd, I would agree with you that the math curriculum needs to be revised from the ground up, as it is extremely inadequate. On the other hand, most software "engineer" positions do not require any engineering to be done.


I don't believe it is a chicken-and-egg problem. I believe there isn't that much demand in the industry for solving "math" problems.

From an (software even!) engineering perspective, nobody needs to solve the same Math problem twice. All "Math problems" are either solved or are open problems (this is almost by definition). The ones which are already solved do not require re-solving again, which means that in the vast majority of cases a competent (software) engineer only needs to learn how to apply them. I don't know about traditional engineering, but in software, often it's just a matter of importing a library and calling an API. The software development ecosystem has developed to a point where you don't need to know how to implement things, you just need to know what you need. Unless you're an expert specialist in a particular field, you often can't do better than that, because whatever idea you have, most likely somebody more competent than you has already written an open source library for that.

I don't think it's useful to equate general incompetence with a lack of mathematical ability -- though the two sometimes correlate somewhat, if only because general intelligence often manifests in mathematical ability. There are simply too many unproven assumptions in the chain "math education" => "writing better software". If anything, good software engineers need to be ready to learn, and learning maths outside of formal schooling is just one of the many things they need to do. The general lack of quality in software is a supply and demand problem -- as long as FAANG can afford to pay 200k+ USD to fresh grads with no experience, companies with a smaller budget must make a trade-off between lowering quality and not delivering at all.

Ironically(?) the market is actually responding. People doing interview prep are learning the minimum subset of mathematics and engineering required for a high-paying software job. Then you get people complaining about grinding leetcode. Imagine what happens if FAANG companies add abstract set theory to their standard interview questions! (I'm willing to bet this would lower software quality...)


> The math that engineers need is baked into their high level software.

Well, someone has to write that software.

When people used to say, "I'm never going to need to know this", my teacher would reply "You're right you won't, but the smart kids will"


Indeed, but I think we could revise the curriculum without causing a shortage of "math people" relative to existing needs. What school math teaches right now is a lot of expression manipulation, and it's all done by hand. That's not how people do math after school, even mathematicians. And it creates a distorted view of math, since it's limited to problems that have closed solutions using relatively straightforward algorithms.

I'd like to see school math place roughly equal emphasis on:

1. Arithmetic, i.e., expression manipulation

2. Computation, both numeric and symbolic

3. Learning from data

4. Theory, i.e., things like sets and proofs

Items 2 and 3 are things that people can use throughout their lives, even outside of STEM careers, and could be blended with the science curriculum. Items 1 and 4 would serve the needs of "math people" and academic mathematics.


Can't that be taught in the higher-education, like most skills that are only required in a certain area? It's not just "the smart kids will", but "the kids who go into the mathematics field".


> math that engineers need is baked into their high level software.

This is only true if you’re working in a mature business with stable usage patterns. Even then, using lower level APIs can be easier and more flexible.


Who writes the software that has the math baked into it?


Reading comments like this makes me think that the real problem is that recruiters have never worked in engineering so have no idea what the job is actually like.


Recruiters have nothing to do with the structure or format of the interview, so I'm not sure I follow.


If we define willingness to cram, cram, cram as "conscientiousness" - then sure.

The idea that this is a meaningful proxy for "ability to get things done" -- about that I'm not so sure.


I interview senior SWEs at a big tech company (I've done in the order of a few hundred interviews, which were a mix of coding and system design sessions).

I'm always open to hearing about ideas to improve my interviewing, but I find that threads like this are repetitive and not really actionable.

One common misconception I keep seeing is the idea that the interview result revolves around solving the question. IMHO, that's a bad approach (which is unfortunately taken by many interviewers) and in fact it's an incomplete way of assessing candidates, given that most big tech companies have a list of competencies and/or values that are supposed to be evaluated. Big tech companies usually want interviewers to assess on multiple dimensions, ranging from communication quality to specialization fit.

As an interviewer, you're supposed to think of the question as a vehicle to go over these topics, not as a pass/fail end goal in and of itself. In my interviews, I pace my hints with the goal of having the candidate complete the question, even if it means dictating answers to unblock them. There's nothing worse than sitting there wasting time feeling smug when the candidate gets stuck on something. Time is short, so it's much better to move on quickly to different topics than to linger on something unproductive.

If you're on the interviewer side, my number one piece of advice would be this: question the calibration of the question bank. It's usually a kitchen sink of everything the person writing it could think of, without regard for time. You might already feel that you don't have time to cover everything that the question is supposed to cover, let alone all the non-technical criteria that leadership asks for. So instead, cover a few actual technical concepts (syntax, recursion, what have you) to rule out complete incompetence, and then take liberties to go into topics that cover a wider variety of aspects. For example, discussions about testing, refactoring, or debugging can all be productive ways to spend time in a coding interview.

</two-cents>


"One common misconception I keep seeing is the idea that the interview result revolves around solving the question."

Candidates know this.

The issue is:

Interviewers who do their job, and build a broad based picture of a developer's skills and give fair ratings, are rare.

The perception is that 75% of interviewers don't even bother. They're tired, they're unmotivated.

Their annual bonus doesn't reflect "conducts fair and balanced job interviews."

They underclock their brain to 10% of its maximum, and at the end of the hour look at how shiny and perfect the piece of code at the end is.

Despite whatever the intentions of the system, I don't think anyone has ever hired a candidate with messy code who spoke and reasoned well.

My extremely strong perception, having never conducted a big tech company interview, is that a candidate who submitted a shiny piece of clean code at the end of a technical interview will always be first pick. No matter how much they mumbled.

A picture is worth a thousand words.

A shiny perfect coding solution is a perfect image in the mind of the interviewer.

Interviewers are human, fallible people. When they're doing the end of day calibration on interview performance, having a shiny perfect solution changes their perception of a candidate utterly.


I had an interview a few weeks ago that ended in an offer and was basically a verbatim Spiral Matrix problem from LeetCode (they had changed the starting direction of the spiral but that's it). Didn't run it once because we ran out of time, and when I pasted it into LC after the interview, noticed several errors to the point where it's honestly probably better we didn't run it. The interviewer was super engaged the whole time.
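(For anyone who hasn't seen it: Spiral Matrix asks you to return a 2D matrix's elements in clockwise spiral order. A minimal sketch of the standard boundary-shrinking approach, not necessarily what either party wrote in that interview:)

```python
def spiral_order(matrix):
    """Return the elements of a 2D matrix in clockwise spiral order."""
    result = []
    if not matrix or not matrix[0]:
        return result
    top, bottom = 0, len(matrix) - 1
    left, right = 0, len(matrix[0]) - 1
    while top <= bottom and left <= right:
        for col in range(left, right + 1):          # left -> right along the top row
            result.append(matrix[top][col])
        top += 1
        for row in range(top, bottom + 1):          # top -> bottom along the right column
            result.append(matrix[row][right])
        right -= 1
        if top <= bottom:
            for col in range(right, left - 1, -1):  # right -> left along the bottom row
                result.append(matrix[bottom][col])
            bottom -= 1
        if left <= right:
            for row in range(bottom, top - 1, -1):  # bottom -> top along the left column
                result.append(matrix[row][left])
            left += 1
    return result
```

The two `if` guards are exactly where the "several errors" tend to hide: without them, single-row or single-column remainders get traversed twice.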

When I'm conducting interviews, I'll do LC easies or mediums depending on the role we're hiring for - basically a couple easies for a new grad or someone with just a couple years of experience. One medium for a mid-level. If you're a senior we're not doing a coding exercise, we're having you teach us a system you've designed in the past, then we're going to change some of the core assumptions/requirements and see how your design changes. This has worked super well for us in the past, especially at the senior level. I would say most of the mid-levels we've hired since I've started doing this did not have a working solution at the end of the interview.


The "Merge intervals" question, or variations on it, makes for such a good interview question.

Off-by-one errors. Test developers on the off-by-one errors.

Whatever your skillset, it's truly the fairest tool in the book, I think.
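For reference, a minimal sketch of the usual sort-and-sweep solution; the off-by-one traps live in the comparison operator and in whether touching endpoints count as overlapping (here they do):

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals; touching endpoints count as overlap."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:   # overlaps (or touches) the last merged interval
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged
```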


That sounds even worse in terms of setting realistic expectations for what the job will involve day-to-day. How many times is someone actually going to redesign a system like that?


Respectfully disagree. Teaching and explaining the current system to new and lower-level folks is arguably one of the most important skills for a senior developer. As for the system, they're not redesigning anything, they're teaching something that (as a senior) they should already know, as if we're juniors. Think typical box-arrow system architecture diagram, DB schema, that sort of thing.


Explaining the system is certainly something that happens, but in terms of hours/month I'd say it's something you do less of than leetcode-like coding work. Actually reconsidering your core design in response to some changing fundamentals, which sounded like it's what you're testing in this interview, is something that you do a few times a year at best.


Yeah, this is admittedly a problem. I don't have a great solution other than relentless education, which is... hard. Especially at scale.

From a cultural side, some companies have a competency pillar that is supposed to embody the idea of being a "good person". In mine, we call it "Citizenship", it's tied to performance appraisal and interviewing is one way to develop that competency.

As for code cleanliness, I've passed candidates with messy or even broken code before, if the candidate demonstrated competence in enough other aspects and the code quality issues can be attributed to nervousness or running out of time half way through a refactor or whatever.


I guess the question is, what's the proportion either side?

Is it 25% will do it properly, 75% don't? Am I too cynical about tech interviews, or too optimistic?

I'd also add, for a couple of companies, Google's approach springs to mind.

I think for places where, you know, they actually do search algorithms, knowing how to print the spanning tree of a graph is probably important.


My understanding is that googlers typically don't know/care what role the candidate is applying for (outside of maybe the hiring manager round). There are arguments both for and against their methodology.

Personally I think there's a strong correlation between necessity and hiring quality (i.e. if teams really need the help, they typically engage candidates more meaningfully than the Google-style clinically impartial methodology)


That makes sense.

Sounds a bit like what my old timer buddy says.

Google's KPI is hiring talent away from competitors who physically live in the San Francisco bay area. And the ads/ML teams that they drench in their 1%ers.

Everyone else is window dressing.


Oh the focus of the interview is very good.

It's Google's interview format after all (actually Microsoft's, but operating systems are hard as well).

The great big sleight of hand trick behind all of this is that the entire industry has copied the interview process of a single, extremely large, company.

And Google makes $$$ writing software built on *extremely complex graph and search algorithms*.

It's a great interview process. For Google.

For everyone else, it's holding up a great big sign that says "I'm a moron and I copied my homework assignment from Wikipedia."


I went to an internal job interview about 5 years into working for the company. They were asking me all sorts of textbook questions about defining stuff like polymorphism and explaining some sorting algorithms. I bombed. Many of these terms and algorithms had never been used during my 5 years, and I had forgotten many since they hadn't been used since college.

I wanted to ask the interviewer if they actually use these terms in meetings or used the algorithms, but I didn't. Since it was internal, I already knew the answers would be that they didn't, and I didn't want to appear confrontational.

I was internal, a high performer on my team at the time, and you're going to ask me this useless stuff?


Haha. At some companies, they made the interviews tougher for internal candidates... Amazon used to require a full loop, but last time I changed jobs there (I no longer work there), all it took was conversations with some senior engineers and somebody reviewing my work sample.

And I totally agree with the absurdity of the questions we ask in tech interviews. At least at my current employer, we use simplified versions of real problems we faced in the past and place more emphasis on how candidates communicate, rather than just throwing out some leetcode medium/hard problems and letting candidates self-select based on how much prep work they put in.


Well, there are really two ways it works at my company. The other way is managers making backroom deals, and then it's just some conversation with the tech lead that counts as the interview for HR's sake. So, sort of similar to what you described.


The next big tech interview craze, I have long thought:

Here is a fucked up environment.

Here are logins for each role.

Here is your team.

Here is your role.

Each of you has a chance at each role. There are three of you. So three challenges.

The environment is experiencing the following symptoms. Diagnose, report, fix.

---

Then throw them at some broken stack with admin rights and have them figure out why it's broken and fix it.

Create various stack models to throw at teams based on team skill-set.

Never hire individuals. Hire strong individuals who can work in a team.


I was recently given something like this as a phone screen. I was provided an IP address and some credentials, and I was expected to ssh in and get some Python scripts working.

I was only given this after passing a difficult HackerRank though.


I'd kill for something like that. Don't have me create and solve some arbitrary math problem in code syntax; this isn't quiz bowl and work isn't a scholastic decathlon. This is tech: give me broken technology, tell me its expected behaviors, give me space to figure out what makes it tick, and I will fix it.


At least have empathy for your predecessors! How often have you chosen an architecture and stack to solve a problem, only to have the slow but constant shifting sands of management's requirements completely undermine the foundation of what you've built, and been given no extra time to migrate and make it right instead of adding on another layer of shortsighted hacks?


I totally agree with you. Some of the content there seems relevant to CS in general, but I hate the fact that you need to study content that you won't use on the job in order to pass an interview. It doesn't make sense to me. I've seen new grads perform as well as senior engineers with these Leetcode-style questions so I've stopped using them to assess engineers. Further, I know we're specifically trying to assess technical ability in an interview, but that's not the whole picture for an engineer. They need to be able to take feedback well and provide good mentorship also. I think we miss this.

Inspired by some realistic interviews that I did recently, I've been working on something[1] that aims to equip companies to hire engineers using realistic assessment methods. It scales like Hackerrank, but I think it's fairer.

[1] https://devscreen.io/


> I've seen new grads perform as well as senior engineers with these Leetcode-style questions so I've stopped using them to assess engineers

To me, this seems to be reasoning from an unproven premise. As tech becomes ever more lucrative, it honestly wouldn't surprise me that the average competence of new grad hires will skyrocket - supply of software engineers is going to increase and each subsequent generation will have more experience with technology.


> I've seen new grads perform as well as senior engineers with these Leetcode-style questions so I've stopped using them to assess engineers.

Good point, it could start a real crisis if the industry collectively admitted that senior engineers are no better than new grads.


do you have a demo video or something for this?


Not yet - I'm going to add one to the homepage this week and also support a free trial.


What’s your suggested alternative? Please remember that while some of us would do well and prefer some live pair programming or debugging sessions, it stresses the hell out of some folks and penalizes them unfairly. Take-home tests penalize people with families and other responsibilities, and probably have a racial bias as well in countries where different races carry different societal loads.

In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job, as much as any other method can be when we look for unbiased markers.

This is similar to how the entrance exams for the IITs in India are pretty much completely discordant with what you actually end up having to know; the rigor and preparation required end up reasonably selecting for well-performing students anyway. Not perfect, though. Any system can be gamed, but it works alright.


> What’s your suggested alternative?

In my experience, any alternative to leetcode provides a better signal-to-noise ratio. You are better off asking trivia questions.

Additionally, this is the common narrative among leetcode advocates - "If no leetcode, then what?" As a hiring manager, you think harder and do better, that's what.

> Please remember that while some of us would do well and prefer some live pair programming or debugging sessions, it stresses the hell out of some folks and penalizes them unfairly.

So does leetcode. And a candidate that is truly a good fit for the role would be more at ease with the interview than if they saw a leetcode question they haven't seen before.

> Take home tests penalizes people with families and other responsibilities...

So does studying for leetcode for hours.

> ...and probably has a racial bias as well in countries where different races have different societal loads.

Share some sources before making a claim this ridiculous.

> In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job, as much as any other method can be when we look for unbiased markers.

In my 6-ish years of experience, this is false. I have actually tested this theory in practice, and through my anecdotal experience, leetcode is a poor signal for job performance.

I can usually screen a candidate accurately for success in about 45 minutes. And, unlike leetcode, you can't pretend you haven't seen the question before, because I can push the candidate to their limit, which is where you will find hire / no-hire signals. I love talking about this stuff, but I find that my engineering peers do not give a shit and would prefer to stay in leetcode hell.


> You are better off asking trivia questions

Bet you most people would disagree with just trivia.

> So does studying for leetcode for hours.

You honestly do not have to do this to pass the LC interview. I didn't.

Certainly not as much as a takehome test.


> You honestly do not have to do this to pass the LC interview. I didn't.

You were able to prep for 2-4 hours and still pass LC interviews? I find this hard to believe given that I, and peers I know, took much longer to prep.


Having just passed Google's algorithmic interviews, and established a leetcode account specifically to prepare for that, I have a good record of how much leetcode I did:

3 easy problems, 4 medium problems, and 3 hard problems.

It would make sense to count 2 additional hard problems that I worked on (with pencil and paper) but didn't submit a working solution for.

This is more than four hours of work, but it's much less work than people suggested was necessary. (For example, my recruiter strongly believed that candidates needed a minimum of 5 weeks to prepare for an interview.) I wasn't really able to make myself do more leetcode, because whenever I started working on it, what I really wanted to do was to write up proofs that the algorithms were correct, not to submit a solution to one and move on to the next one. (OK, it passed all the test cases, but what if that's just a coincidence?)

But writing up a formal proof can easily chew up most of your day, and it does nothing for you in terms of practicing getting the code out. So I found the whole process fairly demoralizing.


> (For example, my recruiter strongly believed that candidates needed a minimum of 5 weeks to prepare for an interview.)

Your recruiter believes this for a reason and they have probably seen many candidates go through the process.

Not saying you're a liar, but if you got into G with that little prep, I have some follow up questions:

* Where did you level when you got your offer? L3 vs L4 vs L5 makes a big difference.

* How many LC questions were you given per round? I would expect 2 is the norm.

* Did you have optimal solutions or were you nudged to a solution by the interviewer?

* Did you pass any other FAANG / leetcode interviews?

* What's you educational background? Do you have computer science / STEM degree from MIT or similar?

* Did you crush the system design interview?

* How do you feel behavioral interviews went?


I can answer for myself as GP, didn't interview with G but got into a similar company.

> L3 vs L4

Slightly above L3 would be the equivalent ($200k-$260k salary band)

> * How many LC questions were you given per round? I would expect 2 is the norm.

6 interviews, 5 were technical -> 3 were algorithmic style, 2 were domain specific. Usually they had 2 questions, but really the first question was very easy and often related to the second one.

> * Did you have optimal solutions or were you nudged to a solution by the interviewer?

No clue, there was one I was struggling on because string parsing is annoying in C++. Not much nudging. I believe my solutions were generally pretty efficient.

> Did you pass any other FAANG / leetcode interviews?

Yes, the only other one I was in. I found actually getting interviews (as entry-level) to be much harder than passing the interviews once you got them.

> What's you educational background? Do you have computer science / STEM degree from MIT or similar?

Yes to the second and I did take a DS&A class in sophomore year of college.

> Did you crush the system design interview?

Probably was my weakest.

> How do you feel behavioral interviews went?

Well, I am good at talking.


> Not saying you're a liar, but if you got into G with that little prep, I have some follow up questions

First off, I didn't get in. I passed the algorithmic interview, I was told to expect a round of "team fit" interviews, and then they notified me that instead of having team fit interviews they were just rejecting me. From what I understand, after passing the algorithm round you're supposed to collect "statements of support" from team leaders at Google, and then your file with the statements of support included goes to a centralized hiring committee. I would guess the difference between passing the algorithm round with no team fit interviews and failing the algorithm round is largely in whether you get rejected before or after that committee sees your file at all.

I can answer the questions anyway:

What level were you applying for?

L3.

How many questions per round?

Well, this depends on what you mean by "round". I had a day consisting of 5 interviews with 5 people. One of those was the behavioral interview; the other four were algorithm questions. Each (non-behavioral) person asked one question, for a total of four questions.

Did you have optimal solutions or were you nudged by the interviewer?

Question by question:

- Near-total failure.

- An interview involving a lot of verbal problem solving verging on system design. But technically not a system design interview. L3s don't get those. The problem was less difficult (which makes sense since the design of the interview allows less time for it) and I didn't need much in the way of help. Preparation was relevant here; one of the things I had done to prepare was to write out a binary search, which came up in this interview.

- A question for which I had a good solution offhand, which the interviewer nudged to a perfect solution.

- A question for which the interviewer did not want an optimal solution. He specifically remarked to me that when he gives this question and the candidate appears to be familiar with the solution, he substitutes a different question. This interviewer spent some time talking to me once my solution was written (and I had taken help from him). I had used a brute force approach. He remarked that my approach was atypical, but that the typical approach was also a brute force solution, and he described the optimal solution for me.

Have you passed any other FAANG / leetcode interviews?

No. I don't do well in interviews. I have worked for Amazon in the past; I got in by winning a programming contest they held at my school. Notably, you do not have to speak to anyone to win a contest. I did not continue to Amazon after graduating for a couple of reasons, but it seems to have been more or less a guarantee for everyone who was hired under that program and chose to continue there.

What's your educational background?

I have a BA in computer science (and in math; double major) from a poorly-regarded school. You might find it relevant that, although I didn't go to a good school, I had good standardized test scores: 36 ACT; 1600 SAT; 800 on each of 3 SAT subject tests.

Did you crush the system design interview?

L3s don't get those.

How do you feel behavioral interviews went?

Iffy.


> What’s your suggested alternative?

I generally give (mostly) simple questions where I explain how to solve them. Assuming you're a competent programmer in the language we're dealing with, you should be able to figure out how to solve the problem in your head in a minute or two. The trick is that I'm gauging accumulated skill, not memorized algorithms.

For example, I'll explain how to use a data reader, explain the constructor for a simple object, and provide a SQL statement. Then you need to write the code that reads a few values from the data reader and constructs an object. After that we discuss edge cases and error handling, and whether you should throw an exception or return null. Easy-peasy! (And if it's not easy-peasy, you really shouldn't be here.)
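A rough Python analogue of that exercise, using a DB-API cursor in place of a .NET-style data reader; the `User` class, table name, and column order are invented for illustration:

```python
import sqlite3

class User:
    """Hypothetical object the candidate is asked to construct."""
    def __init__(self, user_id, name, email):
        self.user_id = user_id
        self.name = name
        self.email = email

def read_users(conn):
    """Read rows with a cursor (the DB-API analogue of a data reader) and build objects."""
    cursor = conn.execute("SELECT id, name, email FROM users")
    users = []
    for row in cursor:  # each row is a tuple in SELECT column order
        users.append(User(row[0], row[1], row[2]))
    return users
```

The follow-up discussion (what to do with a NULL column, whether an empty result set should raise or return an empty list) is where most of the signal comes from, not the loop itself.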

Another example: I'll explain an older API for running a lambda in a background thread, and then an API for running a lambda in the main (UI) thread. (Mac, Windows, iOS, and Android all have the same threading model for their UI.) Easy-peasy if you understand basic scoping and threading. (And if it's not easy-peasy, you shouldn't be working in any job where you have to deal with threads.)

In both cases, the candidate can ask as many questions as they want, and I'll happily steer a candidate.

The point is, I'm not relying on memorized algorithms, or requiring prep. What I'm doing is relying on, and judging, accumulated skill. Only a novice will have trouble with my questions, and I wouldn't use these questions when hiring interns or entry-level engineers. If you've programmed with a database before, you should understand the basic concept of a data reader. If you've programmed for a while, you should understand scope. And, if you've programmed with threads, you should understand running a lambda on another thread. (You probably should understand lambdas, too.)


> I generally give (mostly) simple questions where I explain how to solve them. Assuming you're a competent programmer in the language we're dealing with, you should be able to figure out how to solve the problem in your head in a minute or two. The trick is that I'm gauging accumulated skill, not memorized algorithms.

That's a decent way of interviewing. It depends on the skill of the interviewer though, I can easily imagine someone handling it very poorly.


Take home OR coding challenges is a false dichotomy.

> In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job

They are snake oil. You cannot predict success at some job by asking candidates to prove their ability at something that is only tangentially related to the actual job.


General mental ability is general, that’s the point.

> The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years of Research Findings

> On the basis of meta-analytic findings, this paper presents the validity of 31 procedures for predicting job performance and the validity of paired combinations of general mental ability (GMA) and the 29 other selection procedures. Similar analyses are presented for 16 predictors of performance in job training programs. Overall, the two combinations with the highest multivariate validity and utility for predicting job performance were GMA plus an integrity test (mean validity of .78) and GMA plus a structured interview (mean validity of .76). Similar results were obtained for these two combinations in the prediction of performance in job training programs. A further advantage of these two combinations is that they can be used for both entry level hiring and selection of experienced job applicants. The practical utility implications of these summary findings are substantial. The implications of these research findings for the development of theories of job performance are discussed.


> General mental ability is general, that’s the point

I've seen PhDs melt under pressure at two different jobs. Which tells me that an exceptional IQ or work ethic doesn't necessarily equate with success in the industry. Those two people had something in common, and it was their poor communication skills, both input and output. If your brain cannot process some coworkers feedback efficiently and it cannot spit out feedback efficiently, the logical / mathematical side of your brain is not going to make up for it.


Do you have any insights on what kind of "integrity test" they used?


The system works "alright" if the end goal of the system is to whittle down candidates in a 100:1 ratio. As someone who has been through both the JEE nonsense and passed FAANG interviews, I really question if this is at all necessary.

In the case of the JEE, kids spend anywhere from 2-6 of their teen years preparing for the exam. Leetcoding, on average, takes a few weeks to a few months depending on the person. And you're going to have to repeat the prep every time you interview (maybe spending less time than you did in the beginning, but the ramp-up takes time).

I see only one "scalable" solution to this problem: Something like a bar exam that people can take at regular intervals. This could have different levels that could unlock different jobs. Companies require that you pass the exam at a certain level and if there are too many applicants, a lottery picks the lucky numbers.


> they might still be predictive of success in the job

Since that assumption is pulled out of thin air, we can just as easily throw back the opposite argument: 'they might be predictive of failure in the job'.

The writers of the bad code and questionable choices the OP mentioned were likely hired on the strength of a fantastic HackerRank score, and yet failed to produce good output.


grinding leetcode is just as bad, if not worse, than a take-home test


Perhaps not everyone has to grind it so much. (I would wager they're the same people who regularly find themselves thinking about computational and space complexity, and about using different data structures and algorithms for problems, vs the ones who never see a reason to!)

A take home test requires spending free time for everyone. A whiteboard interview requires variable amounts of free time. I would wager that, on average, the people inside companies using these processes require less time to "grind leetcode" than the people outside those companies do.

You gotta be able to convince the folks on the inside, not the ones on the outside. To them, it being hard to "grind" might even seem like a good thing! "It's hard to cram for this compared to other sorts of tests!"


I’ve passed almost every interview loop I’ve had without ever studying leetcode. I have a (maybe uncharitable) suspicion that people who find they need to study are spending all their time at work gluing software together without actually writing any from scratch.


Knowing the data structures and algorithms is different from actually passing these interviews. Interviews are designed in a way that you are going to have to commit a lot of stuff to memory that you would have just looked up if you were just coding up a problem.

A common example brought up here on HN is binary search. It is usually simple to figure out that a problem requires binary search, and even which variant. But the differences between the variants are really small, and you can very easily make a mistake when implementing one.
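To illustrate how small those differences are, here are the "leftmost" and "rightmost" variants sketched in Python (equivalent to `bisect.bisect_left` and `bisect.bisect_right`); a single comparison operator is the entire difference:

```python
def leftmost(a, target):
    """Index of the first element >= target (left insertion point in sorted list a)."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < target:   # strict < : equal elements stay on the right side
            lo = mid + 1
        else:
            hi = mid
    return lo

def rightmost(a, target):
    """Index of the first element > target (right insertion point in sorted list a)."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] <= target:  # <= instead of < is the entire difference
            lo = mid + 1
        else:
            hi = mid
    return lo
```

Swap one comparison and you silently get the other variant, which still passes many test cases; that's why these are so easy to get wrong under interview pressure.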

I have passed enough of these interviews myself and used to think similarly (all you need to do is learn the fundamentals and the rest will follow) but my hit rate went up substantially when I realized that I had to do the same problem again and again and it almost became part of my muscle memory. Looking at a new problem, analyzing it and then coming up with a novel solution takes a lot of time, and there are several mistakes you can make on the way. The interviewing culture today is to reject "false positives" so anything apart from a perfectly coded response results in a mixed/negative response.

This whole leetcode thing is a farce and has to end. Let's be honest, the Google guys used it because they wanted "like-minded people". In short, they discriminated based on their own criteria and successfully brushed it off as a "fair and unbiased" way of selecting people.


(Speaking as someone who uses theoretical computer science fairly often) I wonder what the solution to this would be, honestly. It seems like there are a lot of jobs where we just expect that gluing things together is an acceptable, productive, necessary task to be done. Should we interview for these differently? Can we avoid a stigma being attached to this (productive) work?


I think we should automate this work away, or at least figure out what’s preventing that. Boilerplate is a problem, not a good investment of human effort.


Seems like most of the industry is made up of such roles these days.


> grinding leetcode is just as bad, if not worse, than a take-home test

For leetcode, I can study once for a week or so, and then do a bunch of interviews at different companies, where each onsite is gonna take me a single day (4-5 hours or so).

For take-home projects, it usually takes way more time than that (about a whole week of a few hours every day). And it doesn't reduce the more I study or do those projects.

So no, thank you, I would rather take a single day to do a full loop with a company, as opposed to spending a week doing the same. Back when I was heavily interviewing last year, I could easily do leetcode interviews with 1-2 companies a week without breaking a sweat. I did a take-home project a couple of times, and I am never going to do it again. The time sink on take-home projects is insane, and it is a much weaker signal. Given that a lot of people here heard the stories last year of people trying to cheat on leetcode interviews (by asking others to take them for them, and eventually getting caught), I imagine cheating on take-home projects is much more common and much more difficult to detect.


Having had quite a bit of experience on both sides of the equation, you’re absolutely right.

The very act of extrapolating how a person would perform based on a brief snapshot in time seems flawed. It is unreasonable to task the interviewer with making a good decision with barely any data.

I used to think that a well calibrated question bank or interview style could be a reliable signal. But the moment you ask something specific is the moment you don’t ask everything else. A poor performing but good candidate’s strengths could lie in the latter category and a well performing but bad candidate’s strengths could be in the former.

There needs to be a reliable and credible way to learn about a persons strengths and weaknesses based on their track record.

I’d weight credible reference checks, previous projects and credentials more than their actual interview performance.


The whole psychology around tech interviews tends not to be focused on the right things. At least in the general sense, a lot of it appears to be people copying others without understanding the motivations.

The theory of interviewing should be to ask questions that are as predictive as possible e.g. success on the interview correlates as closely as possible with success on the job. The strengths needed will vary greatly by the role, company, and so on.

But what you tend to see is that everybody has a handful of "pet" questions that they ask over and over, without applying a strict rationale for why it's predictive of success.

Personally, I think any good interview question must be both iterative and highly correlated with actual work. Questions that hinge on one "trick", judged on that basis, make for a poor interview. Questions that focus on data structures or algorithms that aren't relatively common are also poor. Questions which lack real-world applications: also poor.

At least for me, I have a handful of questions, but one of my basic ones is to implement a certain data structure. And honestly, I care more about seeing the naive solution than the highly optimized one... because I care first and foremost that the candidate can code proficiently. The naive solution is super easy and requires little theoretical strength... but it becomes very obvious how proficient of a coder they are by seeing them write out the solution. How quickly can they write it? Do they make syntax errors? How long to grasp the intention of the question?

Some write the solution in 5 minutes, some take 40 minutes. That gap alone is probably the most strong predictor.

There is a more theoretically optimal solution, but I just followup at the end and we'll talk through what it could be. But if somebody can hammer out a naive solution with good code quality, quickly I weight that about 90%.

But my company also needs strong coders over particularly theoretical people. One of my best coworkers bombed a tree interview question I asked... but I could tell from all the supplemental details that he was very strong.


Seems like the consensus is that the engineering interview is broken, but no one has come up with a better method? You’d think, with the war for talent these days, that if there were a better way to screen candidates, companies would use it.


> Seems like the consensus is that engineering interview is broken, but no one has come up with a better method?

These sort of "consensus"-es appear for most sorts of screening processes where the majority of people are screened out. See: any form of standardized testing.


Self-plug but I think I have an answer

https://haig.io/post/most-underrated-software-skillset/


> I'd be OK with modern interviews if they reflected the job you will be doing to a reasonable degree. But knowing the state of the industry I feel that they fall somewhere between gatekeeping and a bait & switch scheme.

Yep,

Back in the 90s and early 2000s, the interview problems I got were often the actual problems I was meant to solve on the job. Which is to say, I don't think there's any barrier to coming up with questions like that. Of course, if the job is as you describe, putting people through hoops is indeed a way to avoid actually telling people what they'll do.


> Back in the 90s and early 2000s, the interview problems I got were often the actual problem I was meant to solve on the job.

Really? In the early 2000s, half the questions I got for dev jobs were completely irrelevant riddles and nonsense like "which US state would you remove?"


> I have had literally no opportunities of applying the stuff I've learnt on HackerRank

I have. Many times.


Yes, such opportunities exist, but they're exceptional.

For the most part, you will be wiring up CRUD APIs or trying to figure out why your button is misaligned by 12 pixels. Those problems also pay better, because those are business problems. The HackerRank problems have mostly been solved.

Think about that: do you think Edsger Dijkstra or Tony Hoare ever made more than a Facebook SWE?


Tony Hoare works at Microsoft as a Principal Researcher, as well as having won over 5 million dollars just in awards. I'm quite sure he makes more than a Facebook SWE.

Dijkstra won a similar amount of money in prizes and worked at UT Austin which according to their 1985 docket paid 110k a year ($287,418.96 adjusted for inflation) and also worked in private industry in a research role, so once again... likely made more than a FB SWE.


Alright, alright, I stand corrected. Here's your upvote.


You're still right overall though. Most programmers don't get to use the theory they (should still) know


Picking 2 famous computer scientists was not a good comparison on my part. I'll own that.


If you are THE Walter Bright, I have no doubt you have. Respect


There can be only one.


So wait, you are smart enough to get into caltech, smart enough to graduate with a degree in CS. And yet the curriculum on its own does not do a good enough job of preparing you for getting a job? Very hard to graduate with a decent GPA in CS at caltech and not be able to thrive as an entry level engineer. If you got B’s in caltech CS I’d make you an offer for 200k+ without blinking or even asking you anything.

I have a small friend circle; I would say they’re all smarter than I am in raw problem-solving and intellectual capability. They came out with better degrees from better institutions. All have PhDs and yet they’re all facing delays when landing a job. There is something definitely broken. I think the industry has created this problem and the industry should fix it.


Computer Science is not Software Engineering. A PhD does not translate to real world engineering skills. If you just go through a bachelors CS curriculum, designed to give a survey of all topics in academic computer science, that wouldn't make you a competent engineer. This is especially true now that through the internet, just about anyone with a laptop can get as good as they want at software engineering without ever stepping foot in an academic institution. Sites like Free Code Camp, Kaggle etc. have torn down all of the barriers preventing anyone from learning how to be an incredible engineer. Some of the most important software in industry is open source and anyone can take the time to learn how to contribute to it.

If I had the choice between a Caltech grad who had all A's and no experience or a student from some random state school who had interned every summer and contributed to open source, I'd pick the latter. Many people are locked out from ever attending elite schools in the first place despite competence due to coming from poor backgrounds and underfunded highschool districts.


> If I had the choice between a Caltech grad who had all A's and no experience or a student from some random state school who had interned every summer and contributed to open source, I'd pick the latter.

I really wish someone had explained this to me while I was in college.. I had ONE professor who'd actually worked in the real world who explained it, but all the other professors were lifetime academics who just parroted the same idiotic stuff about how important it is to get good grades.

Getting my first engineering job after I graduated was a nightmare.. I ended up having to go with a crappy internship that barely paid enough for me to live in a bedroom in a shared house, with a terrible boss who constantly threatened me and stressed me out. And that's with a 3.8 GPA.. When I told prospective employers that, they were like "What is that? Out of 4? 5? Is that good?"


A lot of talent is wasted by young people doing their utmost best at uni only to find out that no one cared. I applied to FAANG after graduating CS and no one got back to me.

A friend of mine in the HFT world told me “I know how hard some of your courses were. But you can’t highlight them on your cv because no one will believe you that they were that hard. And even if they did, they wouldn’t believe that those skills are transferable.”

Going to yet another job interview, I spoke to my interviewer who had a similar CS specialization but from another uni. He told me “my master was of baaad quality.” I responded with: “mine wasn’t, and my classmates all feel the same because we’ve learned practical skills and our teachers prided themselves in giving difficult yet practical courses.” He looked at me with disbelief.


This is partially why same alumni hire each other as they already have a priori knowledge of the education the other has received e.g. whether or not a course is difficult/easy etc.

Back in undergrad, I was talking to a hiring manager who was an alumnus of my school who said he would hire anyone, no questions asked, if he saw X course on their transcript/resume (listing the project from the course)


>Computer Science is not Software Engineering. A PhD does not translate to real world engineering skills.

OP is not comparing a caltech grad to someone who contributed to open source. OP is comparing caltech grads to someone who studied algorithm interviews.

I agree. Degrees and accreditation are sort of gateway policies to conduct deliberate filtering and maintain a class based social hierarchy. There is some merit as it does take a brutal level of hard work and intelligence to get into caltech.

However, algorithms are the exact same type of filter, both are pretty much divorced from any relationship to the job at hand.


> If you just go through a bachelors CS curriculum, designed to give a survey of all topics in academic computer science, that wouldn't make you a competent engineer.

True, but it's probably not a worse predictor than most interviews.


It's not the interview that counts but the interviewer (or, in some degenerate cases, the "hiring committee").

A good interviewer can assess a candidate's competence even with crappy questions / question bank. A bad interviewer will often make wrong hiring decisions even if they asked good questions.


That’s a fair point but the kind of interview this course is preparing you for is a CS test, not a practical software engineering test.


Yes but the interviews are primarily used to screen candidates for SWE roles


To the extent that companies test CS fundamentals to screen SWE roles, however, we can reasonably argue the implementation of the CS fundamentals test.

It seems to me that four years worth of coursework and exams provide a better assessment than a couple hours of on the spot performance.


Nobody is a "competent engineer" fresh out of university.


They are if they learned how to be an engineer in parallel.


That is basically impossible. Becoming a good engineer requires learning from the consequences of decisions you yourself made years ago. Doing internships will not give you that no matter how many you do.


That's what open source projects are for ;)


Here we see the alternative to whiteboard interviews: credentialism. Get into the right school, muddle through, and get a guaranteed 200k!

Is there more to you than your high school performance? Push back against this if so!

(Calling the interview Leetcode is misleading IME, since I've rarely been asked something that would be near the upper tiers of that site's difficulty).

Top-tier tech companies face a LOT more competitive pressure than top-tier universities; why should I trust the university selection process more than those corporate hiring ones?

Ask yourself why the rest of the industry isn't happy to simply hire every B student from CalTech/Stanford/MIT/Ivies without an interview?


> why should I trust the university selection process more than those corporate hiring ones?

I don't know the US education system enough, but there are some schools out there which are much more selective than any SWE interview process. You can be guaranteed that the worst student from these top schools could nail any algorithm interview with minimal preparation.

But these big companies hire many more people than these few schools can provide, and they do so globally, from countries that may not have such elite schools.


Needing to hire more than just the top schools graduate is certainly a reason companies can't limit themselves to university selection processes.

But that's not all that's going on here! CalTech is pretty selective as far as US schools go (6% acceptance rate). Slightly more than MIT, slightly less than Stanford. SAT score stats all in the same 1500+ range.

But they're offering this course, and that makes me think that no, the worst student from these schools, in the US at least, cannot nail any algorithm interview with minimal prep!

And that makes sense, from a US perspective! Getting into CalTech or Stanford or MIT is about doing well at high school/early-college level material in high school, and about extracurricular activities, with a dash of being able to afford it and being told it's something you should try for. Most of that high school curriculum - even most college-level CS material - isn't super applicable to a software engineering job. It's heavier on memorization (we have google for that!) and it's heavier on theory than on creative application of things.

I might argue that any test or course where the answers to the exams come straight out of the textbook isn't worth as much as it used to be, since we've got Google and SO and all the books too!

That's a big part of why a lot of these whiteboard interviews aren't simply "write out Dijkstra's algorithm" (boring, textbook!) but are about seeing if you can figure out how to apply CS in a situation where the way the problem maps to the algorithm isn't immediately obvious.
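For context, the "boring, textbook" version being dismissed here really is short enough to memorize. A rough Python sketch of Dijkstra over a hypothetical adjacency list (the graph shape and names are made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The interview question, of course, is rarely "write this"; it's noticing that some word problem secretly reduces to it.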

Creative work has a long history of not aligning well with traditional US university admissions criteria and evaluations.


> But they're offering this course, and that makes me think that no, the worst student from these schools, in the US at least, cannot nail any algorithm interview with minimal prep!

I think it's impossible to nail these interviews without at least minimal prep. It's very specific, and every candidate is well-prepared; many of them have done 500+ leetcode questions before interviewing.

Regarding the US universities, I'm not sure but I wonder if some students can somehow compensate a lack of academic mathematics excellence with extracurricular activities, or being rich or from a minority. Maybe companies want to filter those out?

As for creativity, leetcode interview questions is all about preparation. There's little creativity.


> I think it's impossible to nail these interviews without at least minimal prep. It's very specific, and every candidate is well-prepared; many of them have done 500+ leetcode questions before interviewing.

It’s definitely possible. Mostly you’d want to learn your algorithms well in college and continue to exercise them in your work. The rest of the interview is a lot about how you present yourself, which is mostly along the lines of “work out the problems out loud and ask questions”.

Also, FWIW, I’ve had a decent amount of opportunities for creativity on Leetcode-style questions. When permitted I often pick the language that I feel would make the solution simpler or easier to understand. When doing things like linked list or graph traversal you’re often asked to define the datastructure you’ll be using, which is a good chance to use different types of polymorphism or algebraic datatypes that can make your task easier. Even if you’re just churning out for loops, a lot of them can be replaced with functional methods or selective application of interesting utilities such as array rotation.
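To make that last point concrete, here's a toy illustration (hypothetical, in Python) of the kind of small creative choice available even on a churn-out-for-loops question: a right rotation written as an explicit loop versus the sliced one-liner.

```python
def rotate_naive(xs, k):
    """Rotate right by k using an explicit loop of pop/insert."""
    result = list(xs)
    if not result:
        return result
    for _ in range(k % len(result)):
        result.insert(0, result.pop())
    return result

def rotate_sliced(xs, k):
    """The same rotation expressed with slicing -- no loop at all."""
    if not xs:
        return list(xs)
    k %= len(xs)
    return xs[-k:] + xs[:-k] if k else list(xs)
```

Both are correct; which one you reach for is exactly the kind of thing an interviewer can turn into a conversation.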


> Mostly you’d want to learn your algorithms well in college and continue to exercise them in your work. The rest of the interview is a lot about how you present yourself, which is mostly along the lines of “work out the problems out loud and ask questions”.

Not my experience. When I interviewed at Google for a senior position, I mostly had difficult questions with little interaction with the interviewer.

Before my preparation, I could solve all the questions within let say a couple of hours and I knew very well all the algorithms. But that's not what's expected to pass. You're competing with people who have been working very hard on this style of interviews and you need to be among the best.

You're expected to find the answer to a couple of questions within 40 min. That includes finding the solution, writing the code (not pseudo-code), testing your solution and discussing complexity.

Thinking for 15 min plus sketching some pseudo-code is not enough, except maybe for a particularly hard problem.


I can understand not being used to writing pure algorithms for an interview, so I guess that is something you can certainly spend a bit of time working on improving. But I’m a bit curious about being able to solve a question in “a couple of hours”. In my experience most of the interview questions I’ve seen are of the “I know how to do this and I can implement it, maybe needing a little bit at the end for some edge cases” or “I have no idea how to do this, I’m just going to brute force to have something and then spin my wheels for the remainder of the time”. I can see the time being useful for competitive programming, where you often need to put things together in nontrivial ways, but I find that most interview questions are fairly straightforward once you know what the algorithm or data structure is. Unless you’re rederiving it on the fly, which I have done sometimes, but it’s still not something that I would spend a couple hours on.


> But they're offering this course, and that makes me think that no, the worst student from these schools, in the US at least, cannot nail any algorithm interview with minimal prep!

My take on this is that the problem is misaligned expectations. I failed Google's algorithmic interview round twice under the assumption that when the interviewer asked me a question, the point was to see if I could come up with the answer.

But that isn't true. The interview questions are chosen with the expectation that candidates cannot come up with the answer themselves. Instead, the point is for the candidate to cold-read the answer from the interviewer.

When a test -- or a computer -- asks you how to do something, it won't care or even realize that you spent ten minutes thinking about the problem before starting to write a solution down. But an interviewer will fail you for doing that.

That insight was enough to pass the interview round. And it also suggests that a course in "how to pass an interview" should boost CalTech grads' passing rates to nearly 100%, because all they need is to be informed of the "proper" technique.


> I don't know the US education system enough, but there are some schools out there which are much more selective than any SWE interview process. You can be guaranteed that the worst student from these top schools could nail any algorithm interview with minimal preparation.

There’s a whole lot of spread at any college. Assuming a hypothetical ranking from 1-100 with 1 being unable to program or problem solve and 100 being your Jeff Dean/Fabrice Bellard/Linus Torvalds programming god of choice an average state school might give you people from 30-70 and a top school might be 40-80. Lots of smart people go to state schools and lots of idiots come out of top ones.


I went to a highly selective university and I certainly could not nail any algorithm interview with minimal preparation. Algorithms is one class. It consisted mostly of proofs, a little pseudocode, zero working programs. The particular algorithms we spent a lot of time on were pretty obscure in the interview world: Gale-Shapley matching, Ford-Fulkerson max flow. The goal was never to have a wide repertoire of algorithms that you could write down on command, and certainly not anything to do with implementation.


Caltech does get the occasional bad admit, where the admissions office gets bamboozled or takes a risk. The one case I knew personally could write code decently well compared to many Stanford grads, but couldn't handle the Caltech curriculum and would probably do poorly on a coding interview.


> You can be guaranteed that the worst student from these top schools could nail any algorithm interview with minimal preparation.

Considering that CalTech offers this course, they must be realllyyyy misguided, or you’re talking about schools that are far more selective than CalTech (7% acceptance rate).


I don't know what it takes to study at CalTech. But I nailed the interviews at FB and Google and there's no way I could have studied maths at Cambridge for instance. Leetcode is just a matter of solving enough problems so that you can do them almost out of memory on the white board. The curriculum is basically algorithms at undergraduate level. Seriously, it's easy.


> So wait you are smart enough to get into caltech, smart enough to graduate with a degree in CS... If you got B’s in caltech CS I’d make you an offer for 200k+ without blinking or even asking you anything.

I went to Caltech and would definitely not do this. The CS faculty was fantastic, but the undergraduate option has some problems:

- a wildly disproportionate share of cheating cases came from CS undergrads

- CS graduation requirements are very flexible, which is a mixed blessing. On the one hand, giving students more freedom is broadly a good thing, and many of them use that freedom to double major or go deep into specialized grad level work. On the other hand, some of them use it to construct schedules much easier than most other options would allow. Not all of them, but enough that "B's in caltech CS" is not as strong a signal as you might think.

Some signals I would consider stronger for Caltech students:

- CS/Math double majors. Or CS/[Any technical subject], for that matter.

- Summer research: someone paying five figures of opportunity cost to do something they think is interesting is probably not a $/effort optimizer.

- Grad-level CS coursework. CS 139 (Design and Analysis of Algorithms) in particular pulls no punches.


Just out of curiosity, do you consider “$/effort optimizer” to be desirable or not?


obviously not, judging by his tone, people who optimize for money might only study leetcode. Algorithm skills probably account for at most 10-20% of the skills you need to become a good software engineer.


> If you got B’s in caltech CS I’d make you an offer for 200k+ without blinking or even asking you anything

Why? This is not necessarily a good signal for competence on the job and also doesn't cover people who have lied or cheated.

Also, this wouldn't help create a less broken job market. You'd just be breaking it in a different way.


"So wait, you are smart enough to get into caltech, smart enough to graduate with a degree in CS. And yet the curriculum on its own does not do a good enough job of preparing you for getting a job?"

This was my first thought too. A course just for passing interviews seems odd. We had two small activities that were part of a business course (business administration minor), but after hours. One was a mock interview to give you feedback about professional appearance, not fidgeting, etc. The second was a mock business dinner to provide feedback about etiquette, customs, etc. These were valuable, but they focused on soft skills that aren't covered in the regular course content.


Its possible that this is another sign interviewing for software jobs is broken, but it is equally possible that this shows the skills required to excel in high school and be accepted to college are easily gamed and Cal Tech isn’t exactly filled with the most intelligent problem solvers.


This is probably because interview questions are more often than not just brainteasers stacked on top of intro-level content. Much in the same way that juniors in high school taking Calculus still need to study for SAT math: it's not really consistent with measuring the content of advanced courses at top places.


>If you got B’s in caltech CS I’d make you an offer for 200k+ without blinking or even asking you anything.

thanks, it means a lot you trust my ability so much :)

>All have PhDs and yet they’re all facing delays when landing a job. There is something definitely broken.

Everyone agrees it's the leetcode interview format. I agree to some extent, but I wouldn't throw it out completely. For me, I give four problems, and the candidate must solve two of the four with full internet access in an hour. It seriously reduces interview day variation for good candidates. If they can't do graphs but can do text formatting, they get a pass. I'm sure there's still some better way of doing this.


> And yet the curriculum on its own does not do a good enough job of preparing you for getting a job?

Welcome to "Education". We don't teach you life skills, but we will teach you stuff that you'll immediately forget and look up online the one time you need it in life (if at all).


The industry recreates the problem every so often with different variants. Previously it was the Microsoft/Google “thought problems” such as “Why are manhole covers round?” and “How would you estimate the number of gas stations in a city?”.


They change when they realise the current method doesn't actually work. They're not recreating the problem; they're looking for a solution.


In their quest to find a solution they simply recreate the problem with a slightly better approximation of an actual work sample evaluation. It’s like watching a Newtonian approximation algorithm.


No clue if this is even remotely true, but here is what I think: the most successful companies can basically pick and choose from a really large set of applicants. Given this luxury, they can, in essence, filter by any kind of metric they want and still get loads of suitable people.

In short: if you have a million applications you can sort by GPA and still have a hundred people to choose from. Sorting by GPA means you will overlook lots of raw talent, but who cares, there is still enough real talent in your funnel as well.

For me, someone hiring for a way smaller and poorer company, this is actually good. These processes overlook lots of great people, which we can snag up.

This observation is based on the behavior of big companies in Germany (think carmakers and similar), not sure how well it translates to the silicon valley situation, but seems similar.


Yeah, I'd say this is exactly what's going on at big tech companies. Add to that the need for an interview process that is very standardized for (1) scalability in a large company and (2) legal protection against discrimination (at least nominally), and it makes a load of sense.


If you don't think modern interviewing practices are a problem, this should be proof enough. Not only do we have an entire programming book sub-genre dedicated to this nonsense, but now we're also wasting class time on memorizing sorting algorithms. Fantastic.

Of course, we're going to see the inevitable arguments that "40% of the people I interview lie on resumes" or "I interviewed this guy once, and he couldn't even write one line of code"—let me pre-empt that by saying if you ever get to that stage, your screening process must absolutely suck and it's still your fault. I've interviewed dozens of people and never had that experience, because I check OSS contributions, professional history, published material, and ask for code samples before I even engage with a candidate in a technical discussion. It's not rocket science.

I've told at least 5 or 6 companies that wanted to white-board me to suck it. It's honestly insulting that I'm forced to white-board when I have two books with my name on them (published by Apress), contributions to Golang (small commits, but they still got me in the AUTHORS file), and OSS projects with hundreds of stars on GitHub.


> I've interviewed dozens of people and never had that experience, because I check OSS contributions, professional history, and ask for code samples before I even engage with a candidate in a technical discussion.

How are you supposed to hire 50,000 people a year with a process like that? It seems unlikely there are much more than 100,000 people with significant OSS contributions in total. Published material isn’t going to be much more than that, if it even gets that high. And most people who are good at programming will have nothing for either. If you’re not going to spend at least ten, more likely 20, hours calling up references, you’re going to be fishing from a pretty narrow pool there.


> How are you supposed to hire 50,000 people a year with a process like that?

This is a straw-man. What company has to hire 50,000 engineers a year? Facebook has (in total) ~10,000 engineers (hired over a decade+). Someone has nothing published and no OSS? Great, ask for code samples and the candidate to walk you through the code. It's completely trivial to fix the busted status quo, but people are invested in this nonsense—there are entire companies that serve at the white-board interviewing altar (Triplebyte, Coderbyte, Hackerrank, Leetcode, Turing, Toptal).


I think the more important overarching point is that most candidates aren't prolific github people like you, and likewise most companies are not going to cater their interviewing practices to woo your preferred flavor of interview style.

Most candidates don't have code samples laying around (e.g. they only write proprietary code for work), so asking for some is a non-starter in a lot of cases.

Another point worth reflecting on: your accomplishments in OSS say nothing of interviewing expertise; they are completely different skillsets. Companies may not hire on the order of 50k people a year, but interviewer calibration in a large org is absolutely a problem that isn't easily solved by just following the opinion of some random 6-digit salary SWE.

As someone who's interviewed in the order of a few hundred senior SWEs, I think there are good and bad ways to go about any interview format, including leetcode-style.

Complaining about how companies don't just throw away their interview practices in favor of [insert pet format here] doesn't really help, since it isn't really actionable.

We talk about the perils of full software rewrites, I think similar concerns exist for hiring as well.


What percentage of people study leetcode and can't provide coding sample vs can provide coding sample / have something on github / etc


Anecdotes are not data etc, but in my experience (hiring senior SWEs for a big tech co) less than 1 in 20 candidates have a link to github or side project listed in their resume. Distribution-wise, they seem to appear more frequently w/ younger candidates.


If it is completely trivial to fix the busted status quo you should be able to make a great deal of money doing so. Those companies you named exist to get people hired. If you have an alternative system that works you will be fishing from a different pool of candidates. That didn’t work for starfighter.io. If you can make it work for you you’ll get very rich.


Did any company even take Starfighter seriously? The companies the previous comment mentioned are at least used in industry. Even Google or ex-Google employees have gotten into this with Byteboard:

https://techcrunch.com/2022/01/26/byteboard-nabs-5m-seed-to-...


It's pretty simple to get a rough estimate of how many people you need to hire a year. You take your engineering count, say ~10,000, how long engineers stay at each company, say 4 years, and you come out to 2,500 engineers you need to hire every year, assuming you don't need to grow, to maintain the status quo. So then you take a larger company like google, with ~100,000 engineers, and they'll need to hire at least 25,000 engineers a year.
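That back-of-envelope estimate is just headcount divided by average tenure; spelled out with the figures above:

```python
def annual_hires(headcount, avg_tenure_years):
    """Replacement hires needed per year to hold headcount steady,
    ignoring growth and boomerang (returning) hires."""
    return headcount / avg_tenure_years
```

With ~10,000 engineers and a 4-year average tenure that's 2,500 hires a year just to tread water; at ~100,000 engineers, 25,000.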


Google’s average tenure was under three years (as of 2015 anyway), and their acceptance rate is famously below 1%, so they’re hiring at least 33k and screening three million.

edit: I don’t know their boomerang rate, that might matter.


> How are you supposed to hire 50,000 people a year with a process like that?

That's not a valid question. No single hiring manager hires 50000 people in a year.

If the company is giant enough to hire 50K people a year, the work is distributed among the many many thousands of hiring managers.


> How are you supposed to hire 50,000 people a year with a process like that?

It's insane that this is even an expectation. It's like trying to hire 50,000 cabinet makers. There aren't that many! So instead we just require people memorize and sketch the steps for putting together a cabinet and then hire them. Even if they haven't sharpened a block plane in their life. You can then hire enough "cabinet makers", but I wouldn't buy a cabinet from 'em.


> I've told at least 5 or 6 companies that wanted to white-board me to suck it. It's honestly insulting that I'm forced to white-board when I have two books with my name on them (published by Apress), contributions to Golang (small commits, but they still got me in the AUTHORS file), and OSS projects with hundreds of stars on GitHub

That is all commendable but none of that tells me if I can work with you. When I interview candidates I use whiteboarding as a collaborative opportunity to see how the interviewee thinks. I treat it as a similar experience when I am the one being interviewed, which is why I have never understood the hostility toward whiteboarding. How else in the limited window of time that is the interview can I learn as much about the people I would be working with?


> a collaborative opportunity to see how the interviewee thinks.

In principle, this sounds great. In reality, if the interviewee fails to come up with the correct/optimal solution for the duration of the interview, they are going to be rejected.


lmao yeah, ask my ziprecruiter interviewer how well I "collaborated" despite being rejected for not getting the optimal dp solution in 45 minutes


> I have never understood the hostility toward whiteboarding

Maybe you haven't been blown off and treated like an idiot by some interviewer when you couldn't answer their pet question? Even when you're more experienced and skilled than they were?


> I have never understood the hostility toward whiteboarding

If anything a candidate that rejects whiteboarding is a good signal that they're not a good team fit.

I personally prefer to conduct coding interviews in an environment where the code can be executed (if that's the main objection for "whiteboarding"), but even so the environment is often too different from what the interviewee is used to (IDE / availability of familiar libs / OS environment) that it's not that different from whiteboarding anyways.

In my personal experience there are indeed occasionally people that I'd recommend for strong hire with their past experience / credentials alone, but it's too rare to make any rule out of it, and if any candidate outright rejects it, it probably gives a signal that I'd personally interpret as negative anyways.


This is the dumbest excuse for whiteboarding/l33tcode, and it isn’t even applied in real life.

You can be the most pleasant person who’s articulating their thoughts in simple and easy to understand ways and if you don’t solve the bullshit problem in o(-1) time you’ll get rejected.


The problem with looking at OSS and publications is that this penalizes a lot of people with real lives outside of work. Especially problematic if you’re into equitable hiring processes aiming to hire people who aren’t in their 20’s anymore.

Looking at past work penalizes people who primarily have experience in large systems where their work is NDA’d, impossible to understand without a loooot of background, or simply unshareable for any number of reasons. Also really bad for new entrants into the industry. They don’t have past work.

The take-home exam penalizes people who have lives.

The “do smol thing together so we can talk about it” interview penalizes people with performance anxiety.

There is no silver bullet. But if I’d have to choose, I’d pick the least bad typical interview approach that judges everyone based on their performance on a small job-relevant task. At least you avoid a lot of bias.

source: I’ve interviewed over 100 engineers over the past 18 months as we scaled the engineering org roughly 3x. I honestly don’t even look at candidates’ resumes anymore (this is after initial screens have been made) because it always leads to bias.


> The “do smol thing together so we can talk about it” interview penalizes people with performance anxiety.

Does this apply to the Leetcode style interviews?


If they’re any good.

But really, it depends on what people mean by “Leetcode style interview”. There are a lot of coding interviews that superficially look like stupid leetcode but that the job really does benefit from.

A basic example: I ask you to model some toy problem as a state machine. Is that a stupid leetcode challenge? Maybe. But many if not most complex UI interactions really are state machines and if you’re the kind of person who intuitively thinks in state machines, you’re going to write more maintainable code.
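A minimal sketch of what that kind of exercise might look like (the states and events here are invented for illustration; the point is the explicit transition table, not the particular flow):

```python
# Hypothetical interview exercise: model a form-submission flow
# as an explicit state machine. States and events are made up
# for illustration.
TRANSITIONS = {
    ("idle", "submit"): "validating",
    ("validating", "valid"): "sending",
    ("validating", "invalid"): "error",
    ("sending", "ok"): "done",
    ("sending", "fail"): "error",
    ("error", "retry"): "validating",
}

def step(state, event):
    """Return the next state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

# Walking the happy path:
s = "idle"
for ev in ["submit", "valid", "ok"]:
    s = step(s, ev)
# s is now "done"
```

The value in an interview setting is seeing whether the candidate reaches for a structure like this at all, rather than a tangle of booleans.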

My personal blind spot is that thinking in graphs is difficult for me. I just fail to see ”Oh, this is a graph problem!”. A lot of times the code I write would be much easier to maintain and write had I realized I’m solving a graph problem. Better performance too.
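A concrete instance of "oh, this is a graph problem": "in what order should these tasks run?" is really a topological sort over a dependency graph. A sketch using the standard library (the task names are hypothetical):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical build dependencies: each task maps to the tasks it
# depends on. "What order do I run these in?" is a graph question
# (topological sort), even if it doesn't look like one at first.
deps = {
    "deploy": {"test"},
    "test": {"build"},
    "build": {"compile", "lint"},
}

order = list(TopologicalSorter(deps).static_order())
# "compile" and "lint" come before "build", which precedes
# "test", which precedes "deploy".
```

Recognizing the shape of the problem is the hard part; the algorithm itself is a library call.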

edit: if you mean the performance anxiety part, yes. Several research papers have shown that anxiety induced by the interview context impacts people’s performance.


> OSS projects with hundreds of stars on GitHub

I'm in the same boat, yet most companies don't seem to care about OSS contributions, despite their rhetoric.

They want us to solve a few leetcode medium/hard questions within the span of an interview.


> I've told at least 5 or 6 companies that wanted to white-board me to suck it. It's honestly insulting that I'm forced to white-board when I have two books with my name on them (published by Apress), contributions to Golang (small commits, but they still got me in the AUTHORS file), and OSS projects with hundreds of stars on GitHub.

So you've done a bunch of stuff that's easy to game, that requires me to learn a bunch to even evaluate whether it's good or not, and you feel insulted by being asked to do some honest coding? I'd far rather hire a fresh grad who did decently on leetcode and was able to hold a vaguely normal conversation in the interview. In my experience that has a much higher rate of finding people who will contribute positively to actually getting the work done.

There's nothing wrong with the interview process, just a bunch of entitled prima donnas who are always going to think they're better than any standard process. You're better off without those anyway.


The class is being taught in the Computer Science department, not the software engineering or programming department. I think the bigger issue is the conflation of CS and SWE.

To your point about how you interview, unless there is an alternative I would never be able to apply to your company. I have no open source contributions or code samples to share and no interest in writing a book. I also have no interest in applying to places with take home exams where the company gets me to spend time interviewing without a commensurate investment of time on their part.

Where does that leave us? I guess it’s not ideal but, at least for me, the current process seems like it’s at least a local maximum.


> without a commensurate investment of time on their part

Doesn't grinding LeetCode apply here as well?


Perhaps it’s a bit of an unfair/vicious cycle, but I found doing interviews at my previous company was enough to keep me feeling practiced enough. While I have studied up a bit before previous interviews, I’ve never used LeetCode.


> Not only do we have an entire programming book sub-genre dedicated to this nonsense...

Mate, wake up: ~billion dollar "interview prep" start-ups are a thing.


Sell shovels to the miners instead of mining for gold yourself.


No matter what criteria you used in interviews, these people would still be practicing it, because, other than people whose parents donated buildings, that's the type of people who get into elite universities.

It doesn't tell you anything about the interviews, just about the achievement-driven personalities that the universities select for in their admissions process.


I agree with you; the whole interview process is broken.

But looking at the bright side, teaching young people how to “work the system” is not a bad thing. Again, we should fix interviews. But in the meantime, a class specializing in “how to beat the man” looks like a fun class where you can learn more than just CS.


Do you think companies would make more money if they altered their interview practices? Or is this way of interviewing somehow morally wrong ("insulting")?


For context, this is a CS11 class, which is a catch-all small-form-factor elective (3 units; the average class is 9 units, and units are supposed to be that many hours per week) used to offer a dozen or so small topics that are not extensive enough to warrant a full class. In any given term they will usually offer various CS11s for learning new programming languages, plus a few other random topics, some run by grad or undergrad students.


Sometime around my Junior year of college my goals shifted from "I'm here because I want to learn" to "I'm here because I want a job". I was surprised to find that doing well at learning (getting As) not only wasn't good enough for an entry level job, it wasn't even really correlated. The more time I spent on coursework the less time I had to grind through interview questions. I wonder if there are any other industries like this.


Symphonic musician. The first stage is nailing a blind audition, perfectly.


Not anymore! Blind auditions are now a thing of the past:

https://www.nytimes.com/2020/07/16/arts/music/blind-audition...

However, since you can't choose your parents or where you were born, there's little you can do to prepare for the Diversity, Inclusion, and Equity goals. (I could probably get a lift by changing my religion. People with the exact same ethnicity and appearance as me but of a different faith are considered "people of color" by most DEI committees)

I started out wanting a career in music but quickly switched to a math major. While I still perform at amateur recitals, I'm glad I didn't try to pursue a professional career.


Even though blind auditions are less and less common, I do want to add to analog's point that orchestras don't care about anything except how well you can nail a huge group of assorted excerpts and whether or not you suffer from anxiety that may affect your performance. Plenty of great musicians just never get a job because they get a lot of anxiety or don't have the patience to practice the same 12 measures of music for 12 years hoping that the audition they take asks for them.

Orchestra excerpts are just leetcode for musicians, and just as lacking in testing the real qualities of good workers, artists, and people.


Good intentions, terrible execution (if implemented).

This is communism for social currency. In fact, I'll wager this will develop and become _the_ modern form of communism in the next decade or two (in the US/West), provided the following remains true:

We (continue to) live in a world of abundant material resources. And we (continue to) live in a world of limited social resources (and likely always will).

Anyways, my thoughts on the matter as a musician/composer: the people who deserve to be in an orchestra are the ones who work the hardest towards achieving that goal and have the talent to fulfill the requirements of the position. It is an incredibly competitive, limited position, and it would be massively unfair if people were favored/unfavored for admission because of the color of their skin (hmmm...).

I think that would create a lot of inter-orchestra tension (which is already sky high because of how competitive it is[0]). Decades of sweat equity will be thrown out the door (most start playing while still a single digit in age). 45% or more of classical musicians end up leaving the career because they can't earn enough money to survive.[1] It's already an impossible discipline to succeed in, regardless of your background.

It is deplorable to shift and count colored beans on an abacus to keep track of an organization's social currency, particularly when that organization has blind admissions. There is no non-obsessive, won't-be-abused-by-those-who-count way to implement such a gatekeeper. Blind auditions solve all these problems and bring together the most talented people in the world to work together for the benefit of every member of the audience, without the racist undertones.

[0] I used to play with Juilliard musicians. They have plenty of horror stories to tell, such as the time when the NYC Phil woodwinds tallied mistakes made by other musicians during practice. Or when one of the famous string quartets' members refused to stay in the same hotel together.

[1] https://www.nytimes.com/2004/12/12/arts/music/the-juilliard-...


That article is behind the NYT paywall but the bit of it I could see appears to be saying not that

> blind auditions are now a thing of the past

but that they should become a thing of the past, so that orchestras can ensure that their makeup (race, gender, etc.) matches that of "the communities they serve".

There's a big difference between "one person writing in the New York Times thinks orchestras should stop doing blind auditions" and "blind auditions are now a thing of the past".




That article doesn't give one single example of an orchestra that isn't using blind auditions any more.

It does give one single example of something, namely an orchestra looking to appoint something like an assistant conductor (that isn't the job title; the article claims it's an equivalent post; I haven't attempted to find out whether they're right) specifically from the non-white non-Asian population. Maybe that's a good thing, maybe it's a bad thing, but it has nothing to do with an alleged end of blind auditions: you literally cannot give an orchestra conductor a blind audition.

(When I say "one single example", I mean it; the article suggests that this sort of thing is becoming common, and for all I know it may be, but it doesn't deign to give any actual evidence other than this one example, along with a link to the same NYTimes article asking for an end to blind auditions.)


I think getting As correlates pretty well with grinding through interview questions. Perhaps not the same type of questions, but it's still "learning standardized skills from a prescribed area of study".

I don't think most algorithm interviews correlate with what you actually do on the job though. There's a lot more communication (eg trying to figure out what the boss/client/pm really wants, explaining non-trivial concepts to impatient/bored colleagues, or trying to convince other people that your "crazy" idea is actually sane), and a lot less hard problems thrown "in your face" (eg. prove that there is no solution for a^n + b^n = c^n where n > 2), but a lot more hidden traps that don't show themselves until something blows up in production (eg. turns out a minor version difference of python triggered a serious outage).


I was also a CS undergrad. I see it more as a choice between industry and academia. Junior year I also had to make that decision. For industry, that meant getting into internships, which then translated into job offers. You needed a minimum level of grades, say, 3.0, and needed classes on your transcript that looked more like industry software engineering: say, operating systems, databases, design. Companies cared more about clubs, extracurriculars, and side projects, and of course you needed to be able to pass your interview questions.

Whereas if I was going for grad school instead, then I'd need a 4.0, start taking grad school classes, start talking with professors, try to get a job as a research assistant, and try to find a professor I could latch onto and start working with. And that was a path I had considered at some point, when I really did enjoy taking hard classes, going in deep, and pushing the edges of my learning.


Are there other major economic sectors which have circled the wagons around making job applicants do remedial coursework as live performance art?


No. Normal economic sectors without decades-long explosive growth have some kind of licensing system to keep people out so that insiders make more money. Merely being able to teach, engineer, or reason about the law taking precedent into account is not sufficient to get a license for it. In some cases this probably has some attached quality-control implications, but we know education degrees have no effect on teacher effectiveness[1], so there’s at least one case where it's pure rent-seeking, where the insiders benefit and everyone else pays.

[1] It's easier to pick a good teacher than to train one: Familiar and new results on the correlates of teacher effectiveness https://www.sciencedirect.com/science/article/abs/pii/S02727...


I wonder how many “engineering” jobs actually require a PE.

Biotech is an another counterexample to this: there’s no licensing requirement (though many jobs do require a masters or PhD) and the interviews also (mostly) aren’t quiz show lunacy. You usually talk about your past work and answer questions about how you’d approach new problems; no one makes you (say) sketch crystal structures on the board or recite the Krebs Cycle.


You can get into software engineering completely self taught or after a (3-12 month) boot camp. If biotech was that open they’d have similar interviews checking for basic familiarity with the field. Must have a Master’s at minimum is very different from must pass an interview as a minimum hiring filter.


Depends on the field. Even the PE requires you to work in the field before you take the full licensing exam, meaning there is work for non-licensed "engineers in training." Some fields like electrical and mechanical allow an "industrial exemption" if you work for a company that makes things rather than offering engineering services to clients.

An engineer is someone who might say "no" to releasing a product that might fail and cause injury or economic damage. I'm not sure what that would correspond to in the software world, where everything fails.


I feel that it's mostly a certain part of the tech scene. FAANG companies, startups, and those cargo-culting those. I've actually been to dev. interviews that were 1-2 interviews long, almost zero technical questions.

Now, those places were in no way "prestigious", or paying $250k starting salaries. Just your garden-variety F500 companies / gov. agencies / non-tech companies.

Also, my experience with "traditional" engineering companies has been similar: 1/10th the duration of tech interviews, some fundamental questions related to your trade, then mostly questions on personality, your experience, etc.

The closest I've been to tech, was finance. It's very much a predictable process - you know what kind of questions you'll be asked, what the next round will include, and all that...but still not anywhere near the rigor of prestigious or trendy tech firms.


This is largely a result of low barriers to entry into the profession. You do not need to have a qualification from an accredited institution to be a practicing software engineer/programmer/developer.

Unlike doctors, lawyers, dentists, civil engineers...

So you have a well paying profession, hot demand, good working conditions, and no external framework to ensure a base level of competence. Of course people will try and game the system (wittingly or otherwise).

IMO this is a good thing. It allows people from all walks of life to get into the industry, and has helped it to grow more rapidly. But it does introduce a big issue, that its very difficult to know if someone is an expert, or just trying to fool you (and perhaps themselves) into thinking they know what they are talking about.


Nah it’s still broken. Using lawyers as an example because that was my past career, 3-4 years of law school also doesn’t teach many practical skills. In fact, most of us took what amounts to a semester long course just to prep for the bar exam. And even then, you still don’t know anything!

As I shifted into tech, the hiring process didn’t surprise me but for the wrong reason. All I learned was software engineers put up with similar bullshit to lawyers!

All that said I don’t know how to fix any of it easily. Maybe no one else does either, and so here we remain.


The difference being that lawyers generally need to pass the bar exam once. You need to brush up on your leetcode every few years.


I've noticed that people here paint the whole industry with a broad brush that reflects what a handful of giant companies and companies imitating them do. I've been on both sides of the interviewing table at a few places and it's never been the whiteboarding dystopia that everyone talks about here. The last job I took had a long interview process, and at the end of the day someone spoke up and said "oh yeah, we probably should ask you some code question", which I blew since I was exhausted and it was an obscure C++ thing. They laughed it off and I still work there.

HN is a poor barometer for the broader tech industry - everything from hiring practices, tech choices, etc. HN has a tendency to make it sound like all the world acts and smells like a giant tech company or a startup. There are a TON of companies that are not like those.


Actuary?


Interviewing is broken.

When I was at Amazon, it became impossible to hire candidates, and generally the engineers we brought on board were non-US citizens. Typically they're Indians without U.S. residency status, often times U.S. college grads or often even transfers from Amazon's Indian offices. Not only did we pay below other FAANGs, but with these hires, we'd pay them at the low end of our own scale.

What are they going to do? Quit? Ha! If they quit, they have to leave the country. So most of them just suck it up. Many of them had spouses and children. Quitting Amazon meant they'd have to uproot their entire family.

A second-order effect is an entire industry built around working around Amazon's (and other big tech companies') style of interviewing. What amazes me are all the YouTube channels from Amazon engineers and other big tech employees with this "influencer" persona.

At the end of the day, many of these kids at Amazon are doing shitty migration projects, shoveling legacy code around all day, and dealing with on-call hell.

But we have this entire cult of working in big tech and going through all these gymnastics to get through pointless interviews.

Perhaps Google, Netflix, etc. are much better places to work, but it boggles my mind anyone would invest so much energy in getting a job at Amazon. It's absolutely a shit place to work that doesn't treat people like human beings.


Unless you're an L1 transfer, you can still change jobs fairly easily on an H1B, because everyone else in tech is also willing to do H1B transfer paperwork. So you can quit, it just takes longer...

And even as an L1, you can still apply at other jobs, because they will apply for the H1B for you and if successful, hire you then.

So yes, between the waiting periods you do have to keep it up, but otherwise the barriers are not that high compared to citizens.

And this is my personal experience as a visa worker.


The issue is green card sponsorship through EB-3. You have to wait until the process reaches the I-140 or I-485 stage, or otherwise you have to restart the process. This is worse for Indian and Chinese citizens due to their priority dates under the 7% rule.


At least in my day if you changed the employer without changing the role you didn’t lose your place in line after PERM application was completed. I knew many Indians/Chinese nationals who changed several jobs while waiting EB2/3 to come through.

Frankly I think green card golden handcuffs are heavily played up in these discussions to advance a specific narrative


Yeah, the start-to-PERM process to get your priority date is a 6-month to 1-year thing, and you only really need to do it once. And once your priority date is up, maybe you have to stay there for 1 year.


Your info on how long it takes is out of date (edit: at least for companies not large enough to "optimize" the process; see comment below mine).

First, PERM currently takes about 16-18 months. Processing times are way up: the PWD (Prevailing Wage Determination) phase alone is now 6 months; PERM certification itself is another 6 months (source: https://flag.dol.gov/processingtimes). To this you have to add ~3 months for the recruitment phase, plus however many weeks/months it takes management and the lawyers to work out the job description. If you're audited (which, I can tell you from personal experience, can happen for the most straightforward cases), that adds 4 months.

What's worse is that, as per our law firm's briefing, due to a recent lawsuit that Facebook lost, companies now have to be more conservative with the recruitment phase (make a much stronger effort to find Americans). And with that, the recruitment phase should be expected to "fail" (from the perspective of the potential immigrant) more often, which can multiply the PWD and recruitment phase time.

This means that going forward, all in all it could take several years to get to PERM approved. If you switch jobs before you've filed I-140, gotten it approved and waited out 6 months (for portability under AC21), that resets the progress. Also note some companies like to delay starting the process until a few months after the employee starts working for them.

Now remember that the maximum duration of an H-1B (without an approved I-140) is 6 years (5 years for L-1B). So there's definitely a bit of time pressure which makes changing jobs somewhat risky. Possible, but not a decision made lightly.

You are correct that you only need to do it once... as long as your job duties stay within 50% of what your original job description was. If it was somewhat specialized, and the immigrant changes jobs, then they incur some amount of risk of an I-485 denial later.


Just a nit, really: this is true at smaller companies but many bigger companies have a "batch PERM" system such that individuals essentially skip the wage determination and recruitment steps. See e.g. [1]. In such cases hiring to PERM approval is 7-11 months.

[1] https://www.teamblind.com/post/Which-companies-have-a-BATCH-...


So not really that different from 10 years ago, when it took me roughly 6 months. The processing time for everything in USCIS is longer now: it can take up to a year or longer to stamp an N-400.


+1. I was on H1B Visa for a decade (6 + 4). Worked at Amazon (AWS networking) for 5 of those years and changed jobs 4 times in those 10 years. I honestly loved working at Amazon, job was challenging, learnt so much because people around me were incredibly smart, and this job practically gave me a career. I moved back to India last year though and now given the cost of living here, I don't have to work if I don't want to for the rest of my life.


I think you are generalizing Amazon to the rest of the tech world. AFAICT, Amazon appears to be uniquely problematic in this regard.

> it boggles my mind anyone would invest so much energy in getting a job at Amazon.

If they can sponsor a visa... just consider the sort of poverty that exists in much of the rest of the world.


My firm (non-FAANG) does it too. It's like indentured servitude and it's awful.


I’ve never worked at Amazon, but this seems to be a common theme with interview candidates who work there. They talk about burnout and stressful on-call shifts. I’m not sure how this unfolds for Amazon in the long run.


They're already having problems with hiring. I get pinged by one or two Amazon recruiters DAILY. Literally, not even exaggerating here.


The entire post is about how a specialized course is required to pass a technical interview in companies like Amazon and here you are making a comment that implies desperate Indians are hired with zero qualifications for the job only because they are desperate.

I can't comment on the working conditions at Amazon but I have to say that those Indians must be highly qualified to have that job and should be paid fairly.


> here you are making a comment that implies desperate Indians are hired with zero qualifications for the job only because they are desperate

I’m not implying Indians are desperate. Plenty of Indians work all across the American economy and contribute as much as any other ethnicity. I’m making a claim about Amazon hiring practices, especially with Indian transfers. Amazon pays below other top tech companies, and we paid Indian transfers at the bottom of our own pay scale.

I’m not making any claim on these Indian people’s technical qualifications. Between two different candidates, a corporation goes with the one that they can get away with paying less. That’s the reality.

Based on your name, I assume you're Indian. If you’re offended, don’t attack the messenger. Your issue is with corporate America.


Those Indians are indeed qualified people and amazon also pays well enough. I don't know what the OP is comparing to when he/she says pay is low.


Amazon is an exception in the industry as they routinely hire from tier-2 & tier-3 engineering universities. On the contrary, Product Managers come from tier-1 MBAs. My hypothesis is that given Amazon's strong emphasis on engineering processes, they are able to train and uplevel these devs to tier-1 level talent over time.


The pay for a new grad (even non US citizens) entering Amazon is 120k+. This not low at all. It should not be mind boggling why someone would choose to interview at Amazon.


$120K is on the low end and nothing to brag about in Seattle, WA or in the Bay Area. It’s also quite a bit less than other tech companies.


>Interviewing is broken.

Nothing you describe suggests to me that it was the interviewing that was broken, it sounds like the employment was broken.


LC interviews are well past the curve of utility. 10 years ago, FB and others would do a simple fizzbuzz + lc easy/medium demonstrating that you had a good grasp of complexity, pointers, recursion, and iteration. I got a job or two by showing that I could DFS a binary search tree, count letter occurrences in a paragraph, or return the top k elements from two sorted lists.
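One plausible take on the last of those problems (the exact statement varies by interviewer; this sketch assumes ascending-sorted inputs and returns the largest elements first):

```python
def top_k(a, b, k):
    """Return the k largest elements from two ascending sorted
    lists, largest first. Walks both lists from the back, so it
    does O(k) work regardless of input length."""
    i, j = len(a) - 1, len(b) - 1
    out = []
    while len(out) < k and (i >= 0 or j >= 0):
        # Take from whichever list has the larger remaining tail.
        if j < 0 or (i >= 0 and a[i] >= b[j]):
            out.append(a[i])
            i -= 1
        else:
            out.append(b[j])
            j -= 1
    return out

top_k([1, 3, 5, 7], [2, 4, 6], 3)  # → [7, 6, 5]
```

That's roughly the level of difficulty being described: a single merge-style loop, not a multi-insight puzzle.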

Then people learned to practice these simple coding questions, and employers had to up the ante. Now everyone wants an LC expert who can do an LC hard in under thirty minutes with perfect code and no hinting. I wouldn't be surprised if LC has to open a new category of problem difficulty based on current trend lines.

At some point you really are hiring for a skill which is only applicable in interviews. Solving an LC hard in under 30 minutes has little bearing on whether a candidate could solve a novel research problem or difficult refactoring task in their day to day work.


Exactly, it's like an arms race. It has become ridiculous: they want you to solve an LC hard, and within 30 minutes at that. And there are trending questions; for example, if GOOG starts asking segment tree questions, everyone starts asking them.


Other than the course outline, is there any substantive content available?

I saw only a link to the first problem set, and even that requires a Caltech login.


Tech interview discussions always seem to run the same course and hit the same points, so I'll repeat one of mine:

What's wrong with the idea of turning the standard Leetcode algorithms interview into a license that only needs to be completed once, or maybe once every 5-8 years, similar to what physicians must undergo? Much more efficient than having to retake it with every single company during an interview cycle.


> What's wrong with the idea of turning the standard Leetcode algorithms interview into a license that only needs to be completed once, or maybe once every 5-8 years, similar to what physicians must undergo?

How far from this is the A level in Computer Science[1] which should take at minimum 20 weeks full time study for the target 16-18 age group? Or the EdX MicroMaster’s in Data Structures and Algorithms[2]? If any one of the FAANG companies said “Get this qualification and we guarantee you an interview” that would already be massive, never mind skipping one or two steps of the interview process.

[1] https://www.ocr.org.uk/Images/170844-specification-accredite...

The content of this A Level in Computer Science is divided into three components:

• Computer systems component (01) contains the majority of the content of the specification and is assessed in a written paper recalling knowledge and understanding.

• Algorithms and programming component (02) relates principally to problem solving skills needed by learners to apply the knowledge and understanding encountered in Component 01.

• Programming project component (03 or 04) is a practical portfolio based assessment with a task that is chosen by the teacher or learner and is produced in an appropriate programming language of the learner’s or teacher’s choice.

[2] https://www.edx.org/micromasters/ucsandiegox-algorithms-and-...


For the non-UK folk you might have to explain A-levels. But your question is valid.

> How far from this is the A level in Computer Science[1]? Or the EdX MicroMaster’s in Data Structures and Algorithms[2]?

Pretty far from being standardized and the key point of the previous comment you replied to is re-licensing.

> If any one of the FAANG companies said “Get this qualification and we guarantee you an interview” that would already be massive, never mind skipping one or two steps of the interview process.

Is it? Network engineers are required to keep up certifications for jobs in the US (CCNP or rarely CCDE). So do Security Engineers/Professionals, Auditors, Cloud engineers/architects/consultants. Government employees or contractors require them for assurance. It’s not that massive of a shift imo.


“The standard” conceals a lot of variation. Some interviewers want perfect code the first time on a whiteboard, some let you go back and forth with a compiler as much as you wish. Some interviews stick to relatively shallow algorithms knowledge like DFS, others will ask you two LC Hard questions in an hour. Sometimes it’s about whether you solved the problem, sometimes it’s about your thinking and collaboration during even a failed attempt. There are too many opinions and too little hard evidence to sort them out. It’s a challenging coordination problem.


You're completely right, and this is actually another layer to the "whiteboarding technical interviews are broken" narrative: there are no objective standards, even though there is huge overlap in the content being interviewed from company to company (or even team to team). A very discouraging aspect of these interviews is that the rubric is completely hidden from candidates and feedback is almost never provided. At least a licensure or certification test would be more transparent.


There are no objective standards for the output of the software development process. It's a unique mishmash of building the blueprints and the final artifact at the same time. Not because we want to be a bunch of cowboys, but because the unique ability of it to be that flexible, and to be patched over time, means speed is possible in a way it just isn't in other fields, and the market has rewarded companies that move fast.

So until that's changed, I find it hard to see how we could standardize developer hiring. My coworkers and I all wear a different amount of hats with varying competence. The number and variation of certs required would be rough! "I'm level 4 backend engineer with level 3 data design [streaming level 1, traditional ETL level 7], level 1 hacky frontend dev, and level 2 devops" - sheesh, now I'm spending my whole time keeping those up to date.


You are talking about the nitty-gritty aspects of actual software engineering. I’m talking about doing away with the redundancy of having to retake the industry-wide Leetcode fizzbuzz portion instead of passing it once.


I'm asking that since the job itself is wildly unstandardized, how on earth are we supposed to standardize a single "industry-wide" credential for interviews?

There's no "industry-wide Leetcode fizzbuzz portion" - Leetcode and fizzbuzz are famously on complete opposite sides of the whiteboard spectrum! I've only been asked actual fizzbuzz once, and can't immediately think of any other duplicate whiteboard problems I've seen either. Many of them aren't ones I've seen on Leetcode, and would probably rank as "easy" there if they were.

I couldn't pick out any single problem that would make everyone at all my last few jobs happy, let alone the whole rest of the industry.


Leetcode is the new fizzbuzz, I meant.


Why would you want a license? The whole allure of software development is in its accessibility, that it's easy to get started with nothing more than a laptop and an Internet connection. Putting up artificial barriers to entry is anathema to that. What you'll end up with, in practice, is one of two outcomes:

1. A race to the bottom where licenses become easier and easier to get, until they become no more a demonstration of competence than drivers' licenses are of one's driving skills.

2. Protectionist policies make licenses mandatory and difficult to obtain, and then the talent pool and innovation move to places that don't require that license.


By license I mean the current Leetcode interview process except you do it once and are good for at least five years. Obviously when candidates are interviewed there would still be technical questions asked, but perhaps those would be more domain-specific. The whole point is to avoid the redundancy of the existing interview process.


> By license I mean the current Leetcode interview process except you do it once and are good for at least five years

Some companies do seem to offer something similar to this; see CodeSignal's general coding assessment, or Triplebyte back when it was big a few years ago. I think the problem is that even within the specialized topic of LC-style interviews, many companies don't agree on a similar strategy when it comes to types of questions and difficulty. There isn't one standard that every company agrees on, so most have their own flavor of it.


Yea, think of it less as a "government issued license allowing you to do software" and more of a certificate that says "I recently passed a level $LEVEL leetcode grind, you only need to test me for specialized skills or for a level greater than $LEVEL." If it actually worked and couldn't be gamed/cheated (good luck with that), it could reduce interviews to a simple check of domain knowledge not covered by the certificate.


It is already a massive waste of human potential to overstress leetcodes.

I'd rather have candidates studying unit test design, debugging, security hardening, technical writing, localization, or pretty much anything else that comes up more often in practical development.


As someone who is licensed in a different field, awful idea.

Each state tends to have its own (often quite different) license requirements. Licenses tend to be expensive to get and need to be renewed frequently (more $$$). There are usually other hoops you have to jump through to get both licensed and relicensed. I would never wish licensure on my own profession.


I agree on the downsides, but I believe licensing is a good forcing function to get others to 1) gain knowledge and 2) standardize training for a base level of competence.


We've tried that, and MCSD factories popped up and gave that process a bad name.


We tried that once.

https://ncees.org/engineering/pe/software/

But nobody was interested.

https://ncees.org/ncees-discontinuing-pe-software-engineerin...

---

I've interviewed software developers who have on their resume "Oracle Java 8 certified", but aren't able to name three different spots where the static keyword appears in Java or the situations where they'd be used.

There are certainly many possible reasons why someone wouldn't retain that knowledge, but I've encountered far too many candidates claiming mid or senior level experience, with "enhanced" resumes and certifications from everywhere, who are not able to satisfactorily do more than follow instructions (e.g. not able to determine the steps needed to complete tasks, nor able to take larger problems and break them down into tasks).

So, with resumes and certifications not being things I have confidence in, I ask those questions to try to figure out whether the person can approach solving a novel problem. If other companies do the same, then that candidate will likely see similar lines of questioning in each place.
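For what it's worth, the usual answers to that `static` question are static fields, static methods, static nested classes, and static initializer blocks. A minimal sketch (class and member names are mine, purely for illustration):

```java
// Demonstrates the common places the `static` keyword appears in Java:
// static fields, static initializer blocks, static methods, and static
// nested classes.
public class StaticDemo {
    static int counter;              // static field: one copy shared by all instances

    static {                         // static initializer block: runs once at class load
        counter = 0;
    }

    static int nextId() {            // static method: callable without an instance
        return ++counter;
    }

    static class Nested {            // static nested class: needs no outer instance
        int value = 42;
    }

    public static void main(String[] args) {
        System.out.println(nextId());            // prints 1
        System.out.println(new Nested().value);  // prints 42
    }
}
```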


> "But nobody was interested."

No, no one took the PE exam for software engineering because it was effectively impossible.

To become a PE, first the candidate has to pass one of the Fundamentals of Engineering exams to become an engineer-in-training. Except, whoops, there wasn't ever a software-specific FE exam; the most relevant one is the EE/Comp. E. exam. Take a look at the list of topics: https://ncees.org/wp-content/uploads/FE-Ele-CBT-specs.pdf. Most developers aren't going to pass that even with a CS degree.

Secondly, you need 4-8 years of job experience while supervised by a licensed engineer. Again, whoops, there are barely any software developers with a PE license, so who would they get to supervise them?

Only then do you get to take the PE exam for software engineering and become a licensed professional engineer. Frankly, the situation was so absurd that one has to suspect that NSPE didn't want to certify software developers as PEs.


It's interesting how this mirrors the Computer Science GRE subject test being discontinued.


This is a test on software engineering, not computer science.


I would contend that it makes it more applicable to the general "what do you do professionally" for most people outside of academia.


One way to bypass these interview tests is networking. If you are recommended by someone or get an interview through someone, you generally won't be given these tests. After my first job out of college, I've never had to do these types of interviews, as I got my next jobs via former co-workers who moved on to other companies and personally recruited me to their new company.

I'm not sure if licensing will work. We already have "licensing" via Microsoft/etc certificates. And my experience is that we test these people with certificates even harder because of the poor reputation of these certificates.


That’s not always true. I’ve recommended 3 engineers from my previous company to apply at my current company. All of them I deeply respected for their skills and thought they were smarter than I was.

None of them got offers. One didn’t even get past the take-home. I worked with them. I KNOW they had the tech skills to do it.


>One way to bypass these interview tests is networking.

Has that ever worked at a FAANG?


Networking will often get you past a phone screen, to be fair. (But not the offsite which is usually algorithms-heavy.)


I don't necessarily want someone with great algorithm test taking skills... such a cert would be of relatively minor value to me. A single data point out of many and not one with especially great weight. I want someone that can build stuff on time, using the tools our team uses, within budget and do all of this while being a collaborative, reasonable, not unpleasant person to work with.


Sure, and perhaps that means you're currently already not asking algorithm questions in your interviews. Meaning candidates grousing about Leetcode wouldn't have similar grievances interviewing with your company.


An equivalent idea is to hire anyone who has worked at any well-known company with similarly rigorous interviews.

But people from those places still perform really poorly in interviews at about the same rate as everyone else. I wish I knew if it was a fluke that they bombed this time, or a fluke that they passed the original.


> But people from those places still perform really poorly in interviews at about the same rate as everyone else

This doesn’t match my experience at all, at least with respect to coding sessions. I don’t think I ever had someone from goog/fb bomb the coding part, but I regularly had tenured engs from other well-known shops not being able to code 2 nested for-loops // I did 200+ interviews, not counting when I worked for goog myself


A successful engineer at a well-known shop may fail a pressured evaluation where a Google engineer will pass the interview. It makes sense, because Google's interview process is harder.

That says nothing to what happens after they get hired.

It also doesn't tell you a tenured eng can't do two for-loops. It probably tells you that this person hasn't been interviewing in years. You are part of their process, a practice interview. Each time this candidate interviews they will get better, and eventually hired. If a person is really good at interviewing, that probably means they do it a lot and are probably looking to move more often. If you want to hire someone who will stay, I would give the tenured engineer a chance; they will probably blow you away.


> A successful engineer at well known shops will fail a pressured evaluation where a google engineer will pass the interview. It make sense because google's interview process is harder.

Nonsense. Googlers will pass because on average they ship more code than an average eng at all the other companies I worked for combined. More code written -> more practice -> better outcome in coding sessions. It’s not really a great mystery. I can only assume the same is true for fb based on the outcomes I’ve seen personally

> If a person is really good at interviewing that probably means they do it a lot and are probably looking at moving more often

No it doesn’t mean that. It’s also really obvious from the resume


More likely is that any company big enough to have a reputation is also big enough to have strong & weak departments, and it can be hard to tell which the candidate is from.


A license exam would ask practical engineering questions adjacent to safety (otherwise the field would not warrant a license): control systems, finite state machines, power consumption, queueing theory, Poisson processes, etc.


For a lot of space-cadet interviewers this would work well. They just want the right answer and aren't thinking about how easy it would be to work on a team with these people.


Reminder, our field isn't regulated.

Anyone can go online and learn enough to be useful at a high paying entry level job.

At some point there has to be a filter and that's what these interviews are for.


The curse of a field which is both competitive yet difficult to measure. A meritocracy where your true status is opaque.


The kind of license I’m suggesting doesn’t change any of that. It would literally be formalizing the existing Leetcode process so it doesn’t have to be retaken many times.


The trick is proving to an employer what you know. Hard to get a resume past HR when you've worked in other industries and learned all your coding online.


I'd say cert, not license. I don't think anyone should have to pass a LeetCode exam to professionally code. But otherwise a good idea.


Sure, I'm using them interchangeably to mean an "officially-decided standard that allows you to not have to retake the same interview at every company." Whether that standard is created by some sort of industry consortium, the IEEE or ACM, CS academia, or some combination thereof, is left as an exercise to the discourse.


A cert provides formal evidence, but a license grants permission you wouldn’t otherwise have.


Is this the Triplebyte model?


I thought Triplebyte just let candidates skip the tech-screen step; the on-site can and does still include algo/ds questions, depending on the interviewing company.


That’s more the way employers treat Triplebyte in practice, and it is probably how they would treat any similar attempt.


Last time I interviewed, I thought about how it probably would have benefitted me to take one less class a semester in college and just drill leetcode-style questions in that time. Given how almost all "prestigious" companies use these problems heavily for interviewing, it's likely to increase earnings more than any other course.

I think what caltech is doing here makes a lot of sense and demonstrates an awareness of the realities of interviewing at companies a lot of their students will want to work at. College is also a better time to build up this skill than later when other responsibilities tend to pile up.


Idea: Professional CS interviewers!

I'm a good programmer, but not a good interviewer. That makes me pretty typical.

I could spend a lot of time getting better at it, but I would only marginally improve, and it would take time out of the work I'm actually good at.

This feels like a typical problem for division of labor to solve.

Maybe it's already done in some form?


You would not "marginally improve", you would dramatically improve. That's the dirty secret of these interviews: they are not that hard to get good at. There are at most a little over half a dozen data structures and about a dozen algorithmic technique patterns that show up in these interviews; once you do enough of those types of problems, you get better at them.

I am willing to bet that by spending an hour every day on leetcode you can probably be good enough to pass interviews at most companies in a month or two (assuming you had exposure to DS and algorithms in college; if not, add more time, but the general idea applies). Maybe not Google, but I would bet you'd see a double-digit increase in your onsite -> offer conversion percentage. Whether that would be worth it is a different question; the answer for many people who already have good jobs is no.
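To make "patterns" concrete, here is a sketch (in Java, purely for illustration) of probably the most recycled one: trading O(n^2) nested loops for a single pass with a hash map, as in the classic two-sum style of question.

```java
import java.util.HashMap;
import java.util.Map;

public class TwoSum {
    // Classic interview pattern: instead of checking every pair with nested
    // loops, make one pass and record each value's index in a hash map, so
    // the needed complement can be looked up in O(1).
    static int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> index
        for (int i = 0; i < nums.length; i++) {
            int need = target - nums[i];
            if (seen.containsKey(need)) {
                return new int[] { seen.get(need), i };
            }
            seen.put(nums[i], i);
        }
        return new int[0]; // no pair sums to target
    }

    public static void main(String[] args) {
        int[] result = twoSum(new int[] {2, 7, 11, 15}, 9);
        System.out.println(result[0] + "," + result[1]); // prints 0,1
    }
}
```

Once you have internalized the move, dozens of "find a pair/window/subarray with property X" questions collapse into the same few lines.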


Not sure what you mean. If your idea is some kind of technical interview training, a million services for that already exist.


I mean professional technical interviewers that companies hire to interview candidates for them.

I know it's an unusual idea, but I didn't think it would cause such confusion.


This already happens to some extent with online screening software and such. Ultimately though no team is going to trust a fully third party assessment of a candidate that they'll have to work with day to day.


Yeah, I would still interview the candidate for "do you want to work with this person" factors.


I feel that the interview format heavily already favors new grads who just spent an entire year or more focused on algorithms courses and learning a lot of the theory you might get pop quizzed on.


This is absolutely embarrassing for the industry.


Seriously, who teaches this course, the "Cracking the Coding Interview" author?


Why is it not embarrassing for Caltech’s admission department and algorithms courses?


I agree it is embarrassing for Caltech as well. They are one of the top engineering schools in North America, not Greendale Community College; this kind of class is beneath them. No major research university should be offering a course like this.


The Caltech course on algorithms is probably focused more on mathematical proofs of correctness and runtime bounds, rather than the implementation details required to pass coding interviews.


If someone cannot code an algorithm they certainly don’t understand it.


I'd argue it's pragmatic


I understand part of the point of posting this is to demonstrate the problems with the educational system for CS majors and the hiring practices of the industry, but what is the actual point of sharing it here if the resources aren't open for all of us?


Isn't this the sort of thing university is not meant for? In Canada we separate out college and university, and this would mayyybe make sense as a college-level course, since college is more geared towards trades. But this sort of thing just reeks of corporate involvement in the university and the education of future scientists. It's quite clear to me that this course is purely dictated by the "interviewing style of the day" and is not really pushing forward the knowledge of humanity AT ALL. What is next? AWS console UI programming 101? That is not transferable knowledge, is it...


I don't think that my US college education pushed forward the knowledge of humanity at all. The least it could have done was prepare me for the professional world, but it didn't even do that very well.


I'm a bit confused. The course number on the page and in the URL is CS 11. But the Caltech course catalog lists CS 11 as "Computer Language Lab" [1] and the description is nothing like the CS 11 at the submitted URL.

[1] https://catalog.caltech.edu/current/courses/department/CS/20...


It seems to change topics depending on the term


Any way to access the assignments without a Caltech login?


I'm just dumbfounded that an university that has this:

https://www.caltech.edu/about/legacy/awards-and-honors/nobel...

...also has to offer a whole course to prep for job interviews :-|


This is a standard second semester Freshman year intro to programming curriculum. Most Universities have a class that looks exactly like this. I guess the only thing unique here is that when you're ready to graduate, you can take a class you already took 3 years earlier.


all this jumping through hoop bullshit needs to stop. just take a genetic scan at birth, assign someone their lot in life, no more worries.


Wait I'm confused what happened that this link sparked so much discussion about how interviews are broken?


Their "problem solving" section is pure algorithm b.s. Developers rarely ever spend time on algorithms in the professional world, and if they do they aren't going off memory, they're googling "best algorithm for X" and adapting it. There's almost nothing on this page that relates to actual work at a company or the kinds of things employers should care about. It looks like it only exists to try to get you a job at a FAANG with a terrible interview process.


The link looks awesome, but is it just a listing? I couldn't find any of the content


Could anyone kindly provide the materials associated with the course? Thank you!


I think the thing going on with interviews is that (most) of software engineering is actually pretty easy. Anyone can do the job outside of an interview session with the right experience.

Note how I stressed the word (most).

The interviews mostly serve to keep salaries high.


I mean, I guess on a surface level this might be true if you only consider developing small web features, but large and complex code bases and distributed architectures are not easy to work with imho. You can see that it's actually pretty difficult because brand new junior devs can take upwards of 6 months to be useful, not to mention the occasional stand-in dev that subcontracts their job out, then when taken into a room and asked to read "their" code can't even understand what is going on.

I graduated with a physics degree and I still remember how steep the programming learning curve felt when I took my first programming in C course despite having done some python work in the past.

So I don't think the "most" qualifier is necessarily correct either.


>I took my first programming in C course despite having done some python work in the past.

Most stuff isn't done in C anymore. C is way less safe than Python, and that's despite type checking.

Most jobs don't require C or C++. I would say because of this most programming jobs are easier now.

>You can see that its actually pretty difficult because brand new junior devs can take upwards of 6 months to be useful,

This means it's easy. 6 months is incredibly short. Mastery of anything usually requires a couple years.


What you mean is that basic web development is easy; software engineering is a much broader field than web development, and there are lots of hard problems there. Even basic web development can get really complex with scale. I think the issue is people conflate being able to write a few lines of code with being a good programmer. It's like saying writing is easy because everyone knows how to write.


I mean both. Web development is easy. But the majority of software development jobs are web development. A huge majority.

This majority makes it so that the broad statement of software development being easy is mostly true.

Think about it in terms of availability of jobs over the breadth of engineering topics.


The problem is that, the way software works, every part of software interacts with every other part. Web apps use databases, compilers, networks and so on. Now of course most of the time you won't need to understand the details behind those, but it's good to understand them well enough to be able to if you need to.


But this doesn't have anything to do with the topic at hand. Most jobs are easy but interviews filter for something very specific. Algorithms. Algorithm interview skill is largely different from even the details behind compilers and databases.

It's like competitive typing. Everyone can type. But interviewers filter for the best.


I think algorithms allow you to pick up all of those other things more quickly. I'm not saying that the interviews are a perfect way to measure this, but a good understanding of algorithms is really useful for understanding any software.


No it doesn't. You can pick up all the other things without competitive algorithms knowledge. That's the key word: competitive. This is knowledge excessively optimized for coding competitions like top coder.

Think of it this way. Can all theoretical mathematicians do well in a timed math competition? Certainly there are those that will, but certainly there are many great mathematicians who won't do well yet have produced many great papers and discoveries.

These companies are filtering for competitive algorithm skills. It's like only allowing the top competitive typists to work in data entry and paying those data entry people 500k salaries.


Most faang employees would not do well in competitive programming challenges, because those are way harder than interview questions. Some interview-style questions are unreasonable, but a lot can be solved with basic concepts.


Yes, there are degrees of hardness, with competitive programming being the very tip top and FAANG interviews (Google's especially) not so far behind. Think of it like this: competitive programming is S-tier. FAANG interviews are A-tier.

The job only requires C-tier. Just know your basic DFS, BFS, recursion, heaps and all that stuff, and how to look shit up when needed; that's C-tier.
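For illustration, the kind of C-tier staple I mean: a plain BFS over an adjacency list (a Java sketch; the graph and names are made up for the example).

```java
import java.util.*;

public class BfsDemo {
    // Breadth-first search over an adjacency-list graph. Returns nodes in
    // the order they are first reached from `start` -- the bread-and-butter
    // traversal that most day-to-day work actually needs.
    static List<Integer> bfs(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> visited = new HashSet<>();
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(start);
        visited.add(start);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            order.add(node);
            for (int next : graph.getOrDefault(node, List.of())) {
                if (visited.add(next)) { // add() returns false if already seen
                    queue.add(next);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> g = Map.of(
            0, List.of(1, 2),
            1, List.of(3),
            2, List.of(3));
        System.out.println(bfs(g, 0)); // prints [0, 1, 2, 3]
    }
}
```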

Look, the above is just my opinion against yours, but what do the metrics say?

Google's acceptance rate is lower than Harvard's. That's less than 1%. There are two conclusions you can draw from that metric:

   Conclusion 1: Only 1% of people who applied are able to work at google.
   Conclusion 2: The google filter is extreme and filtering way beyond people who are just capable of working for google.
Google even admits this. They bias the interviews to be hard to get rid of false positives who can potentially make it through slightly easier interviews. As a result they end up creating a lot of false negatives...

That's the narrative Google makes up to excuse their less-than-1% acceptance rate. Makes them seem less elitist. But to crank up the hardness of the interview to less than 1% for this excuse? That's a little extreme. If the acceptance rate was 20%, sure... but not less than 1%. They're asking me to believe a reality where there are so many potential false positives that they have to crank up the difficulty to a 1% acceptance rate? LOL.

It's BS. But either way, Google needs to filter somehow, as they can't have everyone working for them. There's not enough money or space for everyone who applies. The question is whether algorithms are the best filter. My argument is no, they are not. Rejecting the creator of Homebrew (and on the Facebook side, the creator of Hadoop), for example, is the kind of stupidity that comes from religiously worshipping algorithms as the one true indicator of programmer skill.


> The interviews mostly serve to keep salaries high.

What incentive do tech companies have in keeping salaries high?


I guess think of it like religion. There's no logical reason behind religion... but it exists. It's more of a cultural phenomenon than a strategic action.

The people who believe in the religion can't differentiate between religion and truth. Hence if a huge number of people in a company believe it as truth, it becomes part of company policy.

In addition to this, many occupations over time develop standardized testing and filters to keep supply low. Keeping supply low is a logical incentive, but there's no one guy who set the policy this way with the purpose of creating a cartel. Much of this happens organically, as a cultural phenomenon. Doctors and lawyers are good examples of this.

Training to become either a doctor or a lawyer is actually quite excessive, and there's no proof that the people who make it through these gateways are better performers. In fact, there's statistical evidence that doctors in the US, who go through one of the hardest filters ever to become a doctor, are actually much less effective than their European counterparts. This indicates that the filters don't exist because they create more effective doctors.

I think software is going through a similar process. It won't get encoded into government accreditation, but it will be used by elite companies to form the typical class-based social hierarchy, a cultural phenomenon that forms across every single culture.


Thanks, could not get the main site to load. This should be at the top.


Thanks! I moved it to the top.

(Just to conserve space there, I detached this comment from https://news.ycombinator.com/item?id=30492698 and marked it off topic.)



