The "haters" section leaves out one critical reason for not doing something like this that I think is quite relevant. For those of us who have never had to do a BST before (I don't have a degree in CS, though I've been coding professionally for 8 years), expecting us to solve a BST problem in the 10 or 15 minutes allotted is not reasonable. If I've never seen the problem before, the idea that I can solve it in 15 minutes in a high-pressure environment is unrealistic.
In a real world scenario, if I had to solve this kind of thing, I'd rely on existing works first. I'd google the problem, and I'd see what solutions are out there. For most problems, there's something that can point me in the right direction.
For solving a BST issue from scratch in 15 minutes, well... let's just say the first guy to come up with BSTs probably didn't arrive at his solution in 15 minutes.
So I would give it my best shot, but if I don't succeed, you shouldn't hold it against me. I disagree with the premise that if I can't solve this, I can't solve novel problems. That's just blatantly incorrect. I solve novel problems every day -- just not on a whiteboard and not in 15 minutes.
I would much much much rather see a real-world problem. Something you've actually run into. For instance, in my line of work, I deal a lot with addresses. One of the issues that I've had to solve in the past is parsing addresses. If I give you a string of "123 Main St, Boston, MA 02123", how do you break that up into street, city, state, zip?
That's something I've actually had to solve. It's something that anyone with critical thinking skills can take a decent shot at in 15 minutes. A binary search algorithm? I got a question like this one time, and I floundered because I got 90% of the way there, but I couldn't figure out how to solve one of those edge cases. The guy basically ended the interview right there, but I asked for the solution. When he gave it to me, it was something straight out of an academic textbook.
I said, "Oh, you guys must use binary trees a lot." And he laughed and said, "No, never. It's low-level stuff for databases."
And I just sat there, dumbstruck. Why would you give me a problem that you've never had to solve yourself?
I dunno. I just disagree with the premise of the article. I think there are far better questions to ask. Problem solving in general is required -- solving academic exercises much less so.
As a former interviewer, I also preferred to give algorithmic problems to candidates, though I avoided the kinds of problems that require practice, such as graph algorithms or dynamic programming. The rationale was that developers need to know the basics of CS, since we worked heavily with algorithms. With the caveat that I didn't want to see perfect solutions; I was much more interested in how the candidate approached the problem.
But yes, I agree - my only problem with real-world stuff is that it's hard to come up with a set of 15-minute problems that involve analytical thinking.
I've heard phrases like "I wanted to see how the candidate approaches the problem" and "I want to see how they think" a lot with regards to coding questions. I'm currently working on the thesis that such phrases are nothing but hand-wavy BS to dodge real criticisms of algorithmic questions. As such, I'd be really interested to hear what constitutes good signs and bad signs in questions where you "want to see how they approach it"? What are your actual criteria?
First of all, no interviewing method is perfect, short of working with the person for about 2 months - and unfortunately, people are harder to fire than to reject. The algorithmic question you fail at an interview is almost never the only reason for rejection (maybe the interviewer didn't like talking with you, maybe you've got some red flags on your resume, maybe he saw some samples from your GitHub account and didn't like your style, maybe he doesn't like your blog, maybe he's being a dick and you're better off without them, etc.). Plus, if you interviewed at places like Google, it's not the questions per se that make the interview difficult, but rather the fact that they don't give a shit about losing potentially good candidates, because they can afford to.
The tests I was giving involved 3 problems that were meant to be solved in 60 minutes, in front of a laptop using whatever language the developer wanted to use (I hate whiteboards myself). The interesting thing about my approach is that all 3 problems were almost impossible to solve in 60 minutes, unless the candidate had experience with algorithmic coding contests (in which case it was a question mark btw). And so I didn't expect all 3 problems to be solved for a successful interview.
1. the first problem placed in the set was always the hardest to solve. This problem didn't involve knowing specific algorithms and ideally it would be something that could be solved by brute force - it's very hard to come up with problems with this requirement btw. I had candidates choke on the first problem, trying to solve it within the big-O constraints, while ignoring everything else. This wasn't necessarily bad - one time I had this guy that approached a good solution to an otherwise very hard problem (too hard for a time-constrained interview in fact, it was an accident - advice for the wise, always solve those problems by yourself within the time constraints, before giving them to candidates). He failed to even touch the other two problems, but I liked his idea so I considered him OK. However, choking on this problem with nothing to show and failing to do anything about the other two problems was a red flag.
2. one of the problems was hand-picked to be easy enough to be solved by everybody, in a short amount of time - preferably containing the possibility of an off-by-one error or things like that. I never placed this as the first problem in the set of 3 - pragmatic developers who know how to prioritize their tasks would first read all 3 problems before picking one to solve, and this problem was a no-brainer, while many developers tend to get hooked on the first problem, failing to see there's another apple that's easier to pick (in the context of a time-constrained contest where all 3 problems are allegedly scored equally). Also, for some people I got a hint of an ego problem - seeing that they failed to solve a hard problem, some of them gave up before the time was up, without looking at the others. Personally I don't like ego problems. Mistakes in solving this problem were considered a red flag - for example, I expect everybody to be able to reverse an array in place, on a laptop on which they can test, without any mistakes.
3. The third problem usually involved an implementation of a basic data-structure, like a HashMap. I think well rounded developers should know the data-structures they work with daily. Or maybe something else that's common, like something involving parsing. Mistakes in implementing this fully were tolerated, I only wanted to see that the developer could do it if he wanted to. Again, I never expected people to be able to solve all 3 perfectly in 60 minutes, otherwise I'd start wondering - WTF is this guy doing all day?
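For reference, the in-place array reversal mentioned in problem 2 is only a few lines - a minimal Python sketch of the kind of thing I'd expect with zero mistakes:

```python
def reverse_in_place(arr):
    """Reverse a list in place by swapping from both ends toward the middle."""
    i, j = 0, len(arr) - 1
    while i < j:
        arr[i], arr[j] = arr[j], arr[i]
        i += 1
        j -= 1
    return arr
```

The classic off-by-one traps are the loop condition (`i < j`, not `i <= j`, though the latter is merely wasteful here) and the initial `len(arr) - 1`.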
I could go on, but the truth is judgement was also handled on a case by case basis - I was actually trying to understand what the developer thought of while solving the problem, even though the solution was not correct or broken.
For my address problem, here's a couple of things I want to see:
* You can look at an address and identify the different parts
* You can come up with simple ways to split up the address. Most people start off with saying "Well, there's a comma after the street and a comma after the city, and then state and ZIP have a space in between, so if I split it up like that, I have all the parts."
* You can tackle more difficult addresses (i.e., What do you do if there are no commas? If someone leaves off a ZIP code?) Bonus points if you consider these things before you bring them up
* Actually write an algorithm to solve it. This is a little more involved, but basically, the way you write the algorithm tells me things about what you know. If you choose to use regular expressions, that tells me you're confident enough with regular expressions to use them in an interview. If you decide to explode the string and work word by word, that tells me something else. If you decide to work left to right, right to left, etc. If you make assumptions or not, that tells me things.
The great thing about addresses is that the format is pretty standard and everyone pretty much understands the nuances of it. For instance, what kind of ZIP codes are there? What are the possible values for a state? Given this knowledge, you should be able to make some assumptions and very easily figure out where State and ZIP are, or if they're even present.
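As a rough illustration of the comma-splitting starting point most candidates reach (this is my own sketch, not a production parser - the function name and the assumption of a well-formed US address with commas and a 5-digit ZIP are mine):

```python
import re

def parse_address(address):
    """Naive parse of 'street, city, state zip'.

    Assumes commas separate street/city/state-zip and that the last
    chunk is a two-letter state plus a 5-digit ZIP (optionally ZIP+4).
    Plenty of real addresses will break this -- that's the point of
    the follow-up edge cases.
    """
    street, city, state_zip = [part.strip() for part in address.split(",")]
    match = re.fullmatch(r"([A-Z]{2})\s+(\d{5})(?:-\d{4})?", state_zip)
    state, zip_code = match.group(1), match.group(2)
    return {"street": street, "city": city, "state": state, "zip": zip_code}
```

Every assumption baked in here (exactly two commas, ZIP present, state abbreviated) is one of the edge cases the follow-up questions then poke at.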
You'd be surprised how many people never get that far, even with hints and prompting from me. They'll stop at the comma thing, or they'll say "I'd use machine learning" like that's a magic wand of some kind.
More importantly, when I introduce edge cases they hadn't considered, how do they change the algorithm? Is it a minor change, or do they have to go back to square one? Some edge cases dramatically change the entire situation.
And then there are some really tricky cases. For instance, if you decide to make the (perfectly valid) assumption that streets end in a suffix (e.g., "Street", "Avenue", "Boulevard" or their abbreviations) and then decide to split the string there with the street on the left and the city/state/ZIP on the right, that sounds like a good idea. But what if it's "123 Main St W"? How does that change things? And then imagine that you have "123 Main St West Palm Beach FL"... does that West belong with the street or the city? What if there's also a "123 Main St Palm Beach FL"? How does that change things?
And so on. Most of the time, people solve the simple thing. Then as I add edge cases, I want to see how they solve them. Usually this takes about 15, 20 minutes total. The best candidates wind up getting close enough to the way I did it when I first solved the problem. (I've since then solved for all of these edge cases, but that took me months.)
I don't really care about the exact solution. If you get stuck at the part without commas, that's okay, I'll skip it and give another edge case. I just want to see how you think, how you solve the problem.
I can teach you Python or Ruby or PHP. I can teach you the syntax necessary to parse a string. What I can't and don't have time to do is teach you how to solve problems. The programming bit is trainable. The problem solving bit... not so much.
How is this different than the OP? You said that you're keen on a real world problem, except that BSTs can definitely be involved in a real world problem. If anything, this problem is more complicated than the one the OP presents.
From afar, it sounds like the translation of your posts is "I'm not familiar with BSTs, so that's unfair. I'd rather have a real-world problem like the one I'm familiar with." As someone who never works with addresses, while reading over your various descriptions of the sub-problems, I think your familiarity with the problem is clouding your judgement on the "very easily" parts, at least as compared to the original BST problem space (i.e., "someone can figure out what's necessary in ~10 minutes in a high-pressure situation").
Perhaps that's unfair of me but I don't see how your arbitrary problem which is fairly simple in the standard case but with a variety of interesting edge cases is any different than his problem of a similar description.
I've had to deal with addresses too and I think this would drive me crazy. You can't make any assumptions about addresses because they'll all be wrong eventually. Even your assumption that an address _has_ a zip code is pretty thin. Addresses in the US have zip codes. Internationally, not so much.
The "perfectly valid" assumption is also definitely invalid, even within the U.S. A few counterexamples in NYC: "Broadway," "The Bowery," "Avenue of the Americas." Boston has "The Fenway." Many rural addresses are in the style of "300 State Route 20."
Sure, there are always edge cases. There are cities with no street addresses (check out Carmel-by-the-sea for some craziness: http://en.wikipedia.org/wiki/Carmel-by-the-sea). How do you parse an address when the address is "10th and Lincoln"? Or when you wind up with something like in rural Wisconsin where the address is "N323 W77532 County Road 120, Wherever, WI"? Or when you wind up with two cities that are different but the USPS, the residents, and Google/Bing consider them the same place, like Rockville and Potomac, MD?
Don't even get me started on the Virgin Islands or Puerto Rico...
But for the purposes of the interview question, there are "perfectly valid" assumptions that you can make to solve the task before you. I'm not asking you to solve the entire scope of the problem -- I've been working on this for over a year with varying degrees of commitment, and I assure you I have not solved the entire scope of the problem. I don't expect you to figure it out in 15 minutes.
You can assume that there are 50 states (but that leaves out the Virgin Islands and Puerto Rico). You can assume that ZIP codes have 5 digits (but maybe someone provided all 9 digits, or maybe they left it off completely). You can assume that all streets end in a reasonable suffix (but that ignores post-directional suffixes or the examples you provided).
However, by making assumptions, you can more quickly solve the problem in an interview context. As long as I understand the assumptions you're making, I can judge the quality of your output. It won't solve everything, but maybe it's better to parse 99% of addresses using one simpler algorithm and then handle the edge cases using a second (or third, or fourth, or fifth) algorithm.
I don't know why Carmel-by-the-Sea's addressing is considered odd. It was a very common system until recently (well, within the last century) and is still common in many countries. My grandmother never gave up her name for her house even when the town assigned a street number in the 1970s, and mail came just fine.
More importantly: street addressing was added to support the crude technology of the time (paper maps & phone books) -- with computers we should become MORE amenable to the vagaries and desires of humans rather than changing humans to fit the machines.
So, I guess I should specify that the addresses in question are United States addresses. My company operates entirely within the US and does not deal with international addresses. In this regard, Carmel is very unusual within the US.
In my opinion, the way the interviewer screwed up wasn't giving you that question, it was rejecting you for only getting 90% of the way there. The purpose of a simple artificial question like this is to act as a binary filter, separate people who can program (and will take a decent shot at it) from people who can't (and will just stare blankly at the board for the allotted fifteen minutes). An interviewer who starts expecting it to be more than that is just adding noise to the selection process.
The thing I particularly like about this is that the problem spec is very rigid and clear: no generic solutions (ints only), don't worry about balancing, don't worry about visibility, make it work, and then they give example usage of the API.
It's really easy to screw up a one-page toy problem spec: I've done that myself, and got many replies that failed to either follow the spec or failed to identify ambiguities in it. Having a simple problem as a baseline is really helpful.
That said, what I don't like about this is that it doesn't really test the candidate's ability to reason about systems or architectural things--then again, as a better "fizzbuzz", this seems to fit perfectly.
How many of the candidates you interview start off by writing simple test cases to make sure they've thought of all the edge cases before they dive into coding?
Have you noticed any correlation between the test-first people vs develop-first people?
I was recently in an interview where the interviewer asked me a rather convoluted and not easily explained algorithm implementation question. It took the interviewer some 10 minutes to explain it. This interviewer was the CTO of this particular, rather successful, start-up and had formerly worked at other very large and successful companies all over the Bay Area.
My very first instinct and the thing I started to do was write tests on the board to make sure I understood the requirements. The interviewer stopped me and said "Please, just get to the code, the tests are not important." I tried to explain that the tests are paramount to the success of an implementation because I need to make sure I understand the requirements. However, the interviewer disagreed. I actually told them I was no longer interested in interviewing with their company and excused myself without writing anything further on the board. The funny part is that the next day the CTO in question called and offered me a very nice position (I respectfully declined after a discussion of my concerns).
So, two points. One, not all interviewers care about tests, which is sad. Two, the original post states, "you don’t get to choose how you get interviewed". This is false, you can certainly choose how you will be interviewed, as long as you are ok with the consequences of that decision.
[sarcasm]Please continue to use this question. It’s a wonderful marker for candidates (like me) who don’t want to work with folks who think questions like that are useful.[/sarcasm]
To be honest, I think it’s far more interesting to set aside a real world problem that you have encountered at your company and see how a candidate would think through it (out loud, of course). Many issues you’ll need to solve on a day-to-day basis have nothing to do with programming — and instead have to do with knowing related technologies (like a TCP/IP network, or a database) and how they interact.
You might be shocked to discover how many developers have a deep understanding of algorithms but can’t troubleshoot/design/architect/solve their way out of a paper bag.
Also: the thought of you ACTUALLY chewing your arm off is morbidly humorous. Thanks for that. :-)
"You might be shocked to discover how many developers have a deep understanding of algorithms but can’t troubleshoot/design/architect/solve their way out of a paper bag."
I might, but I have been shocked by the number of developers in the opposite situation: can create wonderful, beautiful designs but can't actually write code better than your average bonobo. And I won't even mention troubleshooting or actually solving problems. If you can't actually write decent code, I don't actually care if you can design or architect and I doubt your other skills would actually help me if I'm looking for a programmer.
This problem in no way requires a "deep understanding of algorithms". Hell, writing a balanced tree is hardly deep understanding, and I admit I'd have a bit of an uphill struggle with that myself. But this is just basic programming logic. If you have any problems with that, I don't care if you're Bob Braden or Edgar Frank "Ted" Codd.
As the article says, this is fizz buzz. The purpose of it is to decide if you can program, not if you're a good programmer and not if you're a good engineer. Also, it's not an algorithm question. Actually, I'm not sure why I'm writing this up, he's specifically addressing your point in the article.
> I think it’s far more interesting to set aside a real world problem that you have encountered at your company
I would too. But the domain space of (things that aren't dreadfully trivial) intersect (things I can explain to an outsider in <10 minutes, including the relevant domain and context) intersect (things that an outsider can make reasonable progress on in the remaining 35 minutes) intersect (things I didn't just bloody fix when I came across them) is very, very small. I'm not even sure all those circles ever overlap.
If this is just Fizz Buzz (ie, a very low-level filter to see if you can program at all), why is he testing for elegance? For cleanliness? For speed of writing on a whiteboard, for god sake? He may claim it's Fizz Buzz, but he gives it and treats it like a normal programming test, with all the problems associated with it.
The whole concept of fizz buzz for an in-person whiteboard interview is absurd. The type of candidate who would fail that test should have been filtered out way earlier in the process and the type of candidate who hasn't been filtered out by that point (i.e., a 'top candidate') would be extremely insulted.
I think the people who ask asinine questions like this haven't done their due diligence going in. They haven't looked at the candidate's online presence (including GitHub) nor even read her resume.
Why are you requiring 'things you didn't fix when you came across them' ? It'd seem to me those'd be ideal - and they should be sitting around in source control.
[sarcasm]Please continue to use this question. It’s a wonderful marker for candidates (like me) who don’t want to work with folks who think questions like that are useful.[/sarcasm]
That's a two way street. Asking this question to ward off people who are offended to be asked it sounds like a net win to me.
Also, some of the replies to the original article got me thinking about my own experiences as an interviewee. I realized that those engineers who have interviewed me in the past and did not ask much technically have been amongst the weakest engineers I have worked with. Really nice people, not really good engineers.
> I think it’s far more interesting to set aside a real world problem that you have encountered at your company and see how a candidate would think through it (out loud, of course).
So how do you do that consistently? Do you just go with whatever you were working on last week? Do you keep your favourite bugs to show to candidates?
> Many issues you’ll need to solve on a day-to-day basis have nothing to do with programming — and instead have to do with knowing related technologies (like a TCP/IP network, or a database) and how they interact.
Both are important, but programming is easier to test. Most candidates haven't used exactly the same technologies, so unless you happen to be an expert on every database under the sun it's very hard to ask a fair question about how to use one. And my impression is that it's easier to cargo-cult how to connect to a database than how to write a binary tree (though I've certainly seen people do both).
I'm being asked to write a job description for a new person.
"What is required?"
Basically I would be happy with someone who didn't have any of the required skills, as long as they had a decent attitude and were willing to learn, (and had some sort of background that was relevant). But apparently we need required skills on the ad.
I was thinking about this the last time we were hiring at work, and decided to try to take notes the next time I find a bug or issue that could be repackaged as a hiring challenge.
(I haven't succeeded yet, have been too distracted trying to solve the actual issue to remember to take notes...)
So is your argument actually that it's not necessary for people to be able to do this? Or is it that it's not nearly in-depth enough? If you're arguing the latter, I'm pretty sure everyone agrees; this was described as his version of fizzbuzz, to weed out people who literally just can't program. If you're arguing the former, that seems insane. Knowing how a TCP/IP network works is great, but if you're incapable of writing code to implement your knowledge, you're not very useful.
C'mon, this is not about knowing some algorithm by heart. He clearly describes the specifications and gives a test case. With that information you should be able to "troubleshoot/design/architect/solve" a solution to his problem even if you've never seen it before.
This is a fizzbuzz - someone posted a quite sparse C++ solution in 75 LOC above. In C++! That would be half the length in Python, less than one laptop screen's height.
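For a sense of scale, a minimal unbalanced, ints-only BST with insert and contains runs to a few dozen lines of Python (my own sketch, not the C++ solution posted above; duplicates are simply ignored):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

class BST:
    """Unbalanced binary search tree over ints; duplicates are ignored."""

    def __init__(self):
        self.root = None

    def insert(self, value):
        if self.root is None:
            self.root = Node(value)
            return
        current = self.root
        while True:
            if value < current.value:
                if current.left is None:
                    current.left = Node(value)
                    return
                current = current.left
            elif value > current.value:
                if current.right is None:
                    current.right = Node(value)
                    return
                current = current.right
            else:
                return  # duplicate: nothing to do

    def contains(self, value):
        current = self.root
        while current is not None:
            if value < current.value:
                current = current.left
            elif value > current.value:
                current = current.right
            else:
                return True
        return False
```

No balancing, no generics, no deletion - which is exactly the scope the original spec carves out.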
> You might be shocked to discover how many developers have a deep understanding of algorithms but can’t troubleshoot/design/architect/solve their way out of a paper bag.
In interviews I'm curious to hear about developers' favorite debugging tools or techniques for precisely that reason.
We do a quick phone call screener and then do some pair programming. We only do half a day of pairing (for which they're paid), but I find that this is the perfect place to do a stealth interview with the candidate. They feel comfortable because they're in their daily environment (we do it remotely using Floobits, so they're sitting at their usual desk with their usual apps, etc.). Here you can get a really good conversation going about things that are actually relevant to the job. Anything else just seems like guessing now.
I find the responses so far on HN quite interesting. The author of the article is pretty clear, but still, people question the appropriateness or whether it's passive aggressive.
What you want to know from a programming question is:
- have you actually written code and can you still do so?
- do you have a "feel" for how the code works or are you a cut 'n paste person?
- do you know "enough"
(The last one is elusive: it's like being able to calculate in your head. Sure, I use a calculator, but I can tell when I see the result if I've made a keypress error along the way, because I have a rough idea of what the answer will look like.)
And this is another reason not to sweat the small stuff. The odd lost semicolon is no big deal; not knowing the difference between recursion and iteration is.
"Real" problem are rarely bite sized. Plus the interviewee won't have enough context to know which parts of the problem statement are significant and which aren't. Much better to provide something simple and abstract that only takes a couple of lines to answer.
Another good starter question is "make a deck of cards however you like and shuffle it." An amazing number of people feel this is under-constrained and ask for more definition. I don't care about the implementation of the deck (integers mod 13 for face and divided by 13 for suit work OK, or you could define an object). It's surprisingly common that people just produce a complicated copy or a reversal rather than an actual shuffle.
One guy reversed the deck and when I gently asked if it was actually random laughed out loud, smacked himself on the forehead, rubbed out a bit of brain damage on the whiteboard and put a random() in. He turned out to be an awesome programmer. No way did I want him to feel bad about making a silly mistake. He knew how to solve that (and a couple of other problems).
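A correct in-place shuffle is the classic Fisher-Yates algorithm - a minimal Python sketch using the integer-encoded deck described above:

```python
import random

def shuffle(deck):
    """Fisher-Yates shuffle: walk from the end of the list, swapping each
    card with a uniformly chosen card at or before its position."""
    for i in range(len(deck) - 1, 0, -1):
        j = random.randint(0, i)  # inclusive of i, so a card can stay put
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = list(range(52))  # card n: n % 13 is the face, n // 13 is the suit
shuffle(deck)
```

The common subtle bug is drawing `j` from the full range on every step instead of the shrinking prefix, which biases the permutation; `random.randint(0, i)` with the decreasing `i` is what makes all 52! orderings equally likely.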
A little problem like this supplies plenty to talk about (what if we did this? How would you deal with this scaling up) to show the interviewee how we think about things and vice versa.
As for the address problem: it is basically a huge set of special cases. That's a different kind of problem and not actually a programming question.
I put this first, because that is why I started writing this comment. I have to disagree with some of the judgments made by the author. The first one is debatable, but usually you don't want duplicates in your binary search tree - I consider all the example implementations broken. But he really lost me when he started talking about coding style. The given example is not perfect, but I consider it better than most of the other examples when it comes to coding style - if and else without braces, if with break but without else, everywhere. It is a lot of religion, a lot of personal preference, and all the more it seems really wrong to judge more on personal style preferences than on correctness and other hard facts.
And here is what should have come first. I think this is a relatively unrealistic task, because writing your own data structures is quite rare in most programming jobs. Nonetheless it is a very basic task, and every professional developer should be able to get it right within 15 to 30 minutes.
While this was easy to do in ~10 minutes using gedit/clang/valgrind, I have to wonder how well I would have done had I been forced to use a whiteboard.
> Even if you haven’t studied or used BSTs in a while – even if I have to explain them to you on the spot – you should be able to complete the problem without much difficulty.
I agree with this statement if you have access to a minimal development environment, but not on a whiteboard, which is entirely unforgiving to making any changes. In the end, this serves only to select candidates who know exactly how to solve this problem beforehand.
I dig this problem. One thing it doesn't touch, though, is the importance of using descriptive variable names to organize your thinking (in this case the var names are kind of a given--left, right, node..."current" is the only one someone might find a useless name for). I've found the candidate's ability to choose descriptive variable names is strongly correlated with general organization and clarity of their thinking.
Generally, when it comes to coding, I'll ask the interviewer if we can just do it on my laptop in a real development environment. Nobody actually codes on a whiteboard, so why should it happen in an interview? I want to play with the code, reason about it, refactor it as my understanding of the problem space expands. I can't do that on a whiteboard.
On the other hand, I think I write better code when I sit down and think about it first. Sketch some ideas. Sometimes implementing bit helps change your way of thinking, as you see problems that you didn't before.
My problem with whiteboarding anything other than pseudocode is that I'm pretty dyslexic and I've spent years learning syntax at the keyboard. That learning does not translate to the whiteboard. To give an example of how much the presentation of problems matters to me: the first time I took the GREs I got a 340 on the verbal. The problem was that I had practiced out of a book, not on a computer. The second time I took the GREs I scored a 740 on the verbal, because I spent the time to learn how to take the test.
Anyway, I don't have the time to learn how to program on the whiteboard. I also don't think it's a valid signal of how well I can program at a computer.
Isn't the optimal strategy for an interviewee to game the system? There are groups like http://codingforinterviews.com where you can find a set of classic and popular problems to memorize. Interview in rounds and refine your problem set.
As interviewers... is this what you're looking for? What are the selection criteria after this then?
This is very common for binary heaps, but for binary trees that can be unbalanced, this won't fly. The space usage is exponential if you insert sequential items into the tree (Since you allocate the space for the pointers to every potential node in every level of the tree).
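To make the blow-up concrete: with the heap-style layout (children of slot i at 2*i+1 and 2*i+2), inserting n sequential keys builds a right-leaning chain whose last node lands at index 2^n - 2, so the backing array must be exponentially large. A hypothetical sketch of that indexing (not anyone's posted solution):

```python
def chain_index(n):
    """Array slot of the n-th sequentially inserted key in a heap-style
    layout where the children of slot i live at 2*i + 1 and 2*i + 2.
    Each insert goes right, so the index roughly doubles every time."""
    index = 0
    for _ in range(n - 1):
        index = 2 * index + 2  # step to the right child's slot
    return index
```

After just 30 sequential inserts the last key sits at slot 2^30 - 2, i.e. the array needs over a billion entries for 30 values.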
People will occasionally try that, but my point to them is that they should be writing a general purpose BST, which should also allow degenerate trees. So no, I do try to steer people away from theoretically correct but desperately inefficient approaches.
To be fair, it's "desperately inefficient" only in the face of unbalanced trees, no? Which mostly only happens for unrealistic scenarios.
As stated, the problem assumes that lookup time isn't important, even though someone who wants a BST almost certainly chose it to get faster-than-linear lookup along with fast insert/delete operations and ordered searches. (Otherwise, why not use a hash table?) A self-balancing tree gives that additional guarantee, and in that case an array-based data layout won't be "desperately inefficient".
Indeed I suspect an array will be competitive or even better than an object based version, where each object has its own pointer overhead and associated memory fragmentation.
At the very least, the early AVL work in the 1960s used a tape for storage, and I suspect they didn't waste all that much tape space.
So someone's intuition from real-world use of BSTs might end up giving you a false negative.
That's when the flustered interviewee can either fight (patch his solution) or flight (start all over using a class with refs). The most obvious patch is to use a large hashmap instead of a massive array. Still inefficient, but no longer that desperate (and actually has some added benefits).
Dude, you lost me when you consider "Switching the left and right sides" a critical mistake. It's an interview. Candidates can be under a lot of stress and make mistakes easily. Also, you are interviewing a human, not a machine.
As daunting and necessary as interviews and phone screens are, there is one thing that's surprisingly underused to vet a candidate: referrals.
At http://referralhire.com we are adding that important data point to your hiring process. To have a person referred by an accomplished developer is extremely valuable and is something that cannot be judged in an interview. It will also help avoid a lot of false negatives!
Very talented developers we have met were rejected at multiple interviews just because of the interview day performance.
All companies need to start using this more in their candidate search and decision making process.
Why would you implement a binary search tree in any of those languages. There are likely to be well written, tested, optimized libraries out there for that sort of thing.