
Every time the topic of algorithmic questions in interviews comes up, I am left wondering if these questions are really as hard as they are made out to be.

I would expect most programmers who are not just plumbing together existing technology to be able to implement push, pop, and so on for linked lists without having to study. You have to do harder things as part of regular programming.
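For what it's worth, the baseline being described here is small. A minimal sketch in Python (the class and method names are my own, purely for illustration):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push(self, value):
        # Prepend: O(1) insert at the head.
        self.head = Node(value, self.head)

    def pop(self):
        # Remove and return the head value; raise on an empty list.
        if self.head is None:
            raise IndexError("pop from empty list")
        value = self.head.value
        self.head = self.head.next
        return value
```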

> take "determine if a linked list has a cycle in it."

I do think this is a bad question if it's asked without any support when the interviewee gets stuck. You would be relying on the interviewee either to be aware of that research, to have a spark of insight, or to be allowed a very suboptimal program.

That said, just being aware of the general solution (two pointers moving at different speeds) is enough for me to be confident I could make a solution.
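The general solution referred to is Floyd's cycle-finding algorithm ("tortoise and hare"). A sketch in Python, assuming a bare node type like the one below (not from any particular library):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle(head):
    # Floyd's tortoise-and-hare: advance one pointer by one node and the
    # other by two. If the list loops back on itself, the fast pointer
    # eventually laps the slow one. O(n) time, O(1) extra space.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```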



"determine if a linked list has a cycle in it."

That's only hard if you can't put a depth count field in each list node, and can't use much memory outside the list. If you can do either of those things, it's easy.

Many of the classic list algorithms assume that memory is expensive but random access is free. Today, memory is cheap but random access to a lot of it is expensive.


Easy is relative. I don't even know what a linked list is, much less how to implement one, and I've been programming for 10 years.


That says a lot more about you than the interview question.


You've never used a linked list in 10 years?


In my opinion, it doesn’t matter what they ask at all. Give me a question about Kubernetes deploys for a shader programming job if you want. I’m a programmer, my job is to solve coding problems both inside and outside my sphere of knowledge.

If you “get stuck”, what are you going to do when you bump up against your limitations at work?

Part of the job is getting yourself unstuck. If you can show you have ways of making progress when you are out of your depth then you aced the interview, even if you were nowhere near the correct solution.


> If you “get stuck”, what are you going to do when you bump up against your limitations at work?

If I got stuck on finding whether a linked list had a cycle in it, I would Google for "how to determine whether a linked list has a cycle in it". Ironically, that's the one thing I'm not allowed to do in a coding interview.

In fact, trying to invent something from first principles as you're theoretically supposed to do in an interview would be a waste of my employer's time.


What do you do when Google doesn’t have a good answer?

The point is I can easily fill an hour applying different debugging and learning techniques that have a chance at getting me unstuck. That’s one of the things that makes me a good programmer.


Realistically, you'd walk the list with a visited set and validate you didn't visit a node twice.

This would solve your problem in any realistic use case and you'd never even recognize a constant space solution was possible.
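That visited-set walk might look like this in Python (a sketch, assuming nodes are plain objects, which are hashable by identity by default):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle_with_set(head):
    # Walk the list, remembering every node seen so far; revisiting one
    # means there is a cycle. O(n) time, but O(n) extra space, unlike
    # the constant-space two-pointer approach.
    seen = set()
    node = head
    while node is not None:
        if node in seen:
            return True
        seen.add(node)
        node = node.next
    return False
```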


What are you going to do when the solution to your problem is not on Google/Stack Overflow? Give up? If you're only going to code things that have already been coded, and can be accessed merely by Googling, then what's the point of your job?


> "If you can show you have ways of making progress when you are out of your depth then you aced the interview, even if you were nowhere near the correct solution."

In my extensive experience on both sides of the interview table, this only works for open ended questions, notably architecture/design. With something like architecture, you can grasp what the candidate's baseline knowledge is, see how they acquire information (asking you questions), and how well they are able to synthesize a solution within their knowledge level/parameters specified. So to your point about Kubernetes deploys for a shader programming job, well, yah, that can be a solid architecture question, but you'll have the freedom to ask me all the questions you want.

I have almost never seen a candidate deemed to have "aced" an algorithms question when they were "nowhere" near the correct solution. This is in part due to the large empathy gap in algo questions -- it's not obvious to me what baseline info the candidate knows and doesn't know. I mean, if I tell you to detect whether a linked list cycles on itself in constant space and O(n) time, what sort of partial solution is there? If you have the insight that this problem falls in the general class of "two pointer" problems, there's a damn good chance you are going to solve it. Absent that, you are just spinning your wheels because the solution space is so large.

(Because of this, I generally find whiteboard algo ("leetcode") coding makes for poor interviews. The only types of whiteboard coding questions I see as useful (for systems engineers) are hybrid architecture/coding questions and/or multithreaded programming, which are more tests of actual programming ability and less of whether you did 50 leetcode questions.)


I agree that "getting yourself unstuck" is part of the job, but interviews also try to get an idea of how well you will do on average.

So a former graphics driver implementer may be a great fit for the shader programming job, but be beaten at Kubernetes deployment by someone who has built multiplayer web-based 3D games.


+1. I 100% agree and this is a great way of putting it.

Being able to think through how you would approach something is infinitely better than having memorized some details (although job-specific details are important too).


Real programming problems are not solved in about 30 minutes, which in my experience is a very typical amount of interview time.


> I am left wondering if these questions are really as hard as they are made out to be

It's not really about how hard they are, and more about:

"I've never once in my career had to use this on the job, so why is it asked in an interview? That's stupid!"

That's usually the answer you'll get. But I think that answer contains a lot more info than it appears to at first glance.

First, good places will give you hints about what the interview might contain long before you show up. If it's for one of the well known big techs, the questions are all over the internet. They know this. The question essentially amounts to: "Can you make sure you know something about computer science if we ask you to know it". If your answer is "No, that's not worth my time, I'm better than this!", well, I can see why someone wouldn't hire you.

The second part is that it is a self-fulfilling prophecy. 20 years ago, a typical team was mostly CS majors, with the occasional odd one out who wasn't (I was one of those odd ones out). That meant if you didn't know something, it didn't really matter. You could just turn around and bounce it off one of your colleagues. If you made a mistake, they'd catch it. Today, however, especially in fields like frontend development, it's extremely likely that ZERO people on the team have a CS background, or that the 1-2 people who do don't remember anything from their college years. That means some problems quickly fall into the category of "This isn't worth trying, let's use a 3rd party solution or wait until the one expert in the company does it for us".

Thus, the self-fulfilling prophecy: you don't need to know these things because the industry punted on the problems you'd need them to solve, or deferred them to other departments. Web apps, for example, are often very animation-light, or low on more advanced graphics, because no one knows the math to make these things happen anymore, aside from using a library to make a pretty chart. Since your competitors are in the same boat, there's no pressure to change that. And then those folks feel it's dumb to ask these questions in interviews (they don't need it!), and the cycle keeps going.

"Why would I need these if I'm building forms in JavaScript all day!" Well, maybe if more people on the team knew how to do more than build forms, we could try to build something fancier than forms.


> Thus, the self-fulfilling prophecy: you don't need to know these things because the industry punted on the problems you'd need them to solve, or deferred them to other departments. Web apps, for example, are often very animation-light, or low on more advanced graphics, because no one knows the math to make these things happen anymore, aside from using a library to make a pretty chart. Since your competitors are in the same boat, there's no pressure to change that. And then those folks feel it's dumb to ask these questions in interviews (they don't need it!), and the cycle keeps going.

I've heard a lot of post-hoc rationalizations for why the "Google-style academic interviewing fetish" is more rational than "ask the interviewee to perform tasks or answer questions relevant to their job", but using it as a justification for continuing the tradition of wheel reinvention in the JavaScript ecosystem has to be the best.

> "Why would I need these if I'm building forms in JavaScript all day!" Well, maybe if more people on the team knew how to do more than build forms, we could try to build something fancier than forms.

Sadly for the aspiring academic fetishist, building non-fancy forms all day is what a lot of businesses want.


> This isn't worth trying, let's use a 3rd party solution

You act as if this is a bad thing. The last thing I want is another bespoke logging solution or ORM.

You can’t imagine the times I gladly ripped out my own bespoke solution for a third party one after finding one was available.

No sane company hires experienced developers to "develop". They hire developers for a breadth of industry knowledge: knowing when to build and when to outsource the "undifferentiated heavy lifting". During the last few years, both companies I've worked for, in architect-level positions, have never blinked at solutions that cost money over costing developer time in both development and maintenance.

> Web apps, for example, are often very animation light, or low on more advanced graphics.... Well, maybe if more people in the team knew how to do more than build forms, we could try and build something fancier than forms.

Again, you act like this is a bad thing. I came into a company where the dev leads were young, relatively fresh CS grads, and even the manager of the department was relatively young; he had been the founder of the company before it got acquired.

They had so many ideas about the “right” way to do things and spent so much time arguing about how many angels can dance on the head of a pin they couldn’t ship software for crap.

You see the same thing at Google. After two decades, billions of dollars, untold cancelled projects and with all the smart people they have, almost all of their revenue still comes from advertising.

I saw so many head scratching overengineered custom developed systems that could have been done a lot simpler if they had had any real world experience.

On the opposite end, I was hired two years ago with a mandate partially to migrate all of the bespoke, complicated systems and use managed services/use third party packages where ever possible.


> The last thing I want is another bespoke logging solution or ORM.

A recent place I worked had a home-grown ORM -and- Object Model written in Perl by one "dedicated" person many years ago. The root cause of many problems...


> You act as if this is a bad thing. The last thing I want is another bespoke logging solution or ORM.

I didn't quite capture the nuance of what I meant; sorry about that. Of course, if a third party solution does what you want, go for it. But in the situations I'm thinking of, sometimes it doesn't, and people just force their requirements into a suboptimal model. Form validation libs are an obvious example. High-level chart libs are another (think Highcharts vs D3).


Business decisions are often made to force requirements onto a commercial third party solution that gets you 90% of the way there, instead of spending money up front and on ongoing maintenance to get 100% there. If that last 10% isn't going to give your company a competitive advantage, or make or save enough money to justify it, the sacrifice is often worth it.

How many companies adjust their entire process around Salesforce, some Oracle enterprise solution, or project management software?


I think there's some of that, but... no, part of it is because straight logic problems are... hard. And lots of candidates, even solid candidates who have held jobs in software engineering, can't write code with simple-yet-non-trivial logic requirements reliably.

"Write a linked list" won't tell you someone is a genius. But it will weed out a lot of people who can do "development" but can't hack.

And if you have a job that needs hackers (not all do!), these questions make for cheap filters that save everyone time.


> 20 years ago, a typical team was mostly CS majors [...] Today however, especially in fields like frontend development, its extremely likely ZERO person in the team has a CS background

Commercially successful frontend development requires visual design skills much more than anything else; no one will visit the site if it looks bad. I disagree with you about software development teams in other fields having no CS background. The CS majors write the backend code and the frameworks that frontend developers use.


> complex animations & advanced graphics

No please don't. I'm very happy with everyone just building normal boring predictable easy-to-use forms.



