

Developer shortage, or time to rethink the technical interview?

Mitch Pronschinske Senior Editor and Content Manager, HashiCorp
 

Is there a developer shortage? There's research that says yes, but there's also plenty of data indicating that the developer shortage could be an illusion created by picky hiring managers. I'll admit, it's a complex question that doesn't have a simple yes or no answer. For the purposes of this article, let's entertain the possibility that the developer shortage could be a false perception. 

The two trends that are most likely creating the perception of a developer shortage are embedded in the interview process of many software companies. One is the idea that "We only hire the best," and there simply aren't enough of "the best" to go around. (And who can objectively say what "the best" means, anyway?)

The other trend preventing companies from seeing the wealth of developer talent out there is the rise of algorithm-centric technical interviews.

Anyone who has been following developer opinion blogs is probably familiar with this topic already. It raises the question...

Why does it seem as if there's a blog post going viral every other month about how bad developer interviews are?

It seems to indicate a consistent, widespread desire for change in the industry, doesn't it?

Most of the controversy centers on the whiteboard interview, which can mean any technical interview where a candidate is asked to write code, draw design diagrams, or illustrate a technical concept on a dry-erase board. Many developers conflate it with the algorithm-centric technical interview I mentioned earlier. So it's not the whiteboard itself that's the problem, and architecture diagramming certainly isn't. The problem is when candidates are asked to write memorized, bug-free algorithms on the whiteboard; at that point, it has become an algorithm-centric interview.

And in a whiteboard interview, companies often ask candidates to write significant amounts of code by hand. That can feel tedious, especially when you could just as easily use a laptop connected to a projector.

[ See Part II: How to fix the technical interview (and maybe the developer shortage) ]

True, hand-writing code on a whiteboard seems cool in a meeting, but it's not necessarily practical or expedient. So, besides the archaic requirement for legible handwriting, what is the problem with whiteboard interviews?

The short answer: They often involve those algorithm questions I talked about, which are often irrelevant and impractical.

The long answer: Instead of asking questions that check if you can do the actual day-to-day coding that the job requires, whiteboard interviews (as they are often called) tend to involve questions about using various algorithms and calculating the Big-O efficiency of code. Algorithm-centric interviews tend to be more academic and less practical, and that's where the problem lies. These are important topics in computer science, but they are not important to memorize in most cases for business-driven web development. 
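To make that concrete, here is the flavor of a typical Big-O question, sketched in Python. The duplicate-finding task and the function names are my own illustration, not taken from any particular company's interview: the interviewer shows a naive version, asks for its complexity, and then asks for something faster.

    # Typical prompt: "What is the time complexity of this function,
    # and can you do better?"
    def has_duplicates_naive(items):
        # Compares every pair of items: O(n^2) time, O(1) extra space.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_fast(items):
        # Tracks items already seen in a set: O(n) time, O(n) extra space.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False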

In a technical interview, even veteran programmers can't implement all of the possible algorithms they might be asked to describe from memory—not without studying them ahead of time, as you would for an exam. In the real world, if you need to use any of these algorithms, you just Google them.

[Image: "Whiteboard Interviews," an O RLY? parody book cover]

But isn't it important that you at least know what algorithms exist, so that you know what to look up when you have a situation that might benefit from a certain algorithm?

It depends. Many of the algorithms you need are hiding in open-source libraries, which you find by searching for the business problem you're trying to solve.
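For instance (a hypothetical scenario of mine, not one from a real interview): if the business problem is "find the cheapest shipping route between two warehouses," searching for that problem leads you to an existing graph library rather than to a from-scratch shortest-path implementation. A minimal sketch using the open-source networkx package:

    # pip install networkx
    import networkx as nx

    # Model warehouses as nodes and shipping costs as weighted edges.
    routes = nx.Graph()
    routes.add_edge("Chicago", "Denver", weight=900)
    routes.add_edge("Denver", "Phoenix", weight=600)
    routes.add_edge("Chicago", "Phoenix", weight=1800)

    # The shortest-path algorithm is already implemented for us.
    path = nx.shortest_path(routes, source="Chicago", target="Phoenix", weight="weight")
    print(path)  # ['Chicago', 'Denver', 'Phoenix']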

If you want to be familiar with the most useful algorithms and data structures, you could certainly get a sense of them by reading a book or taking a free online course.

But back to studying this material before an interview as you would for an exam: That might be good practice for a junior (entry-level) developer. For industry veterans, though, their work ought to speak for itself. Rehashing school subjects from years ago gives interviewers an inaccurate picture of a developer's skills (again, because these questions ignore the practical skills), and in the worst case it's downright insulting to the veteran dev.

Aren't they just being arrogant? I know some developers have a tendency to act that way. #notalldevelopers. Would it kill them to just work with the interviewers?

It's true, some people can be very prideful once they think they've reached a certain stage in their career, but the thing to focus on here is unfairness and relevance.

Interviewers should know the context of the position and the candidate's experience. Using scripted interview questions "as is" without adapting them to the candidate’s known skills before or during the interview makes the whole process very alienating for the candidate.

So what CS stuff should you study and memorize before a technical interview at a major tech company?

  • Hash tables
  • Linked lists
  • Binary search trees
  • Dynamic programming
  • Breadth-first search, depth-first search
  • Quicksort, merge sort
  • 2D arrays
  • Dynamic arrays
  • Big-O analysis

To name a few. Those are just the really, really common ones.
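To give a sense of what those topics turn into on a whiteboard, here is one of the most common prompts drawn from that list, reversing a singly linked list, sketched in Python. The exact wording varies from company to company; this is just a representative example.

    class ListNode:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def reverse_linked_list(head):
        # Walk the list once, flipping each node's next pointer.
        # O(n) time, O(1) extra space.
        prev = None
        current = head
        while current:
            nxt = current.next
            current.next = prev
            prev = current
            current = nxt
        return prev  # New head of the reversed list

    # 1 -> 2 -> 3 becomes 3 -> 2 -> 1
    head = ListNode(1, ListNode(2, ListNode(3)))
    reversed_head = reverse_linked_list(head)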

Isn't that a lot?

I'm sure it's excessive in a lot of cases, but it always depends.

The worst part about these—I guess let's just call them what they are: algorithm exams—is the luck-of-the-draw aspect. Some candidates may do better in an interview if they get lucky and are asked about an algorithm that happens to be what they studied the night before.

Imagine someone who understands his space and its algorithms very well and who even studied a few extra algorithms outside his day-to-day needs. He goes into a technical interview for a position squarely in his area of expertise and gets a question about an algorithm he doesn't know, because it has been irrelevant to his work and wasn't among the extras he studied. He will feel cheated, as if the outcome came down to bad luck rather than his skills. And I think we can all agree that there should be no element of luck or chance in technical interviews.

Yeah, I agree with that. Aren't these interviews supposed to identify the subjects you've mastered, not the topics that you studied last night?

Exactly! You've touched on another huge issue: If your studying and memorization skills are good enough, you could trick interviewers into thinking that you're the stronger candidate. Plenty of developers have said this happens sometimes.

And does the ability to solve any algorithmic task really fast necessarily mean a candidate can become a good web developer really fast? I don't know. Does a Ph.D. in classical mechanics automatically make me a good car mechanic? Of course not.

Okay, but do we have any research around these technical interviews? Is there any information that proves that academic, algorithm-centric interviews don't work?

Laszlo Bock, the longtime head of Google's HR department (a.k.a. "People Operations"), wrote a book, "Work Rules!", on how to find the best employees. But the biggest takeaway for many readers of that book was this:

Google studied tens of thousands of their own hiring decisions and concluded that their interview process—a process that used puzzler questions and algorithm quizzes—was broken.

The findings seemed to say that they would have hired just as many successful candidates by randomly picking them during interviews. I'm skeptical that their process, specifically some of the sections more relevant to the position, didn't help at all. How did they know that the candidates they didn't pick wouldn't have been worse? 

Well, if you make candidates play the algorithm lottery, you're playing the candidate lottery.

I'll have to write that one down!

Bock's book also cited a 1998 meta-analysis of different types of assessments and how well they predict job performance. It covered 85 years of research, so I don't know how much I trust data from work contexts that existed so long ago, but its conclusion was that a work sample test is the best predictor of performance.

In a work sample test, the candidate completes a sample piece of the actual work. Even this, the study's best predictor, could account for only 29 percent of a candidate's eventual performance.

None of the study's top three predictors (work sample tests, tests of general cognitive ability, and structured interviews) bears much resemblance to the kinds of algorithm questions developers have to answer in many industry interviews. Which makes sense, because knowing an algorithm implementation off the top of your head is never going to be an accurate simulation of the work a developer will do in the real world.

But don't take my word for it. Listen to some of these developers:

“The only world where you would actually need to be able to recall an algorithm would be a post-apocalyptic one, where the hard drives of all the computers connected to the internet were fried, and all copies of foundational academic papers and computer science textbooks had been reduced to ashes.”

Quincy Larson

"How many people really get to think about breadth first or binary trees daily? Most of us spend our days wondering why Chrome doesn't render a div in the expected way or why some API doesn't do what it should and finding workarounds."

maxxxxx

So Google stopped doing this after Bock shared his findings, right?

Well, you have to understand that Google—despite the fact that everyone says it has a system for hiring only the best candidates—is not a monolith. I have friends who have conducted developer interviews at Google. There's a lot of disagreement over interview questions and styles among developers there. 

Before the wave of protest against these algorithm-centric, whiteboard interviews, the backlash was primarily aimed at brainteasers.

"Some of these interview questions [brainteasers] have been and I’m sure continue to be used at the company. Sorry about that. We do everything we can to discourage this, and when our senior leaders—myself included—review applicants each week, we ignore the answers to these questions."

Laszlo Bock, senior vice president of People Operations at Google

So it's possible that Google discourages the algorithm lottery questions, too, but it's doubtful that it has purged them from every corner of the company. Interviewers have requirements they have to follow, but they also have a good deal of freedom to ask their own questions.

Is Google to blame for all of this?

I don't think so. Together with Amazon, Microsoft, and the other tech titans, Google certainly influences companies that are obsessed with duplicating its success by copying its hiring practices. And because everyone assumes the titans hire only the best of the best, plenty of other companies try to imitate them. The Google Interview™ is often cited as the main offender in these technical-interview debates, but it's not just Google.

One of the most popular developer books ever, "Cracking the Coding Interview," is all about preparing for interviews at companies such as Google, Apple, Microsoft, and Amazon. The author knows, because she has been through those interviews.

Now I'm wondering how I can make money off this trend.

You'll be in good company. There's already an entire cottage industry around preparing for algorithm-centric interviews. Even the majority of coding bootcamps have appended algorithm question prep to the final weeks of their programs.

It reminds me of SAT prep.

FYI, the SAT is another example of testing that's a poor predictor of future success.

But isn't interview prep a basic part of any career path?

Normally, yes. But as I've stressed above, algorithm-centric interviewing is a skill that has little to do with real-world software development. Interview preparation is rightly considered a vital part of any career, but developer interview prep now goes well beyond the soft skills of good self-marketing. The time it takes to re-memorize CS fundamentals is time that could be better spent building something great and being judged on that.

A GitHub account isn't a résumé in most cases, but a portfolio should be a major factor in the hiring decision. The interview should be designed around understanding the context of a candidate's body of work, not random knowledge tests. You should give candidates a chance to talk about their work, why they coded things the way they did, and why they're passionate about it!

Okay, okay. But couldn't candidates just suck it up? I mean, don't these companies have a ton of applicants? They need some way to objectively compare them.

Valid points. If there weren't any challenges in hiring large quantities of qualified developers, we wouldn't be having this discussion.

Yes, candidates could suck it up and play ball. Most of them do. Some realize that they don't have to.

Yes, some companies interview so much that they don't have time to look into the work you've done.

Here's a comment from an interviewer complaining about this.

"I don’t have time to read your github projects, or your blog, or your existing code, or your meticulous attention to detail in your commit messages. I get a resume in, I look at it for 10 minutes, if it’s sane I set up a phone screen. ONLY to prove that you can pass the most basic coding test. You need [sic]

Once we meet in person, you need to show me that you can code and articulate a design to a problem. The format sucks, sure. But you need to be prepared. I’m not going to spend 2 hours reading your blog for every candidate that I interview. I don’t have time for that."

And yes, everyone wants to be objective when testing people's aptitude. This is why so many education systems around the world rely on standardized testing, which also gets mountains of criticism, but no one can agree on a better method.

Is there a "but" coming? I sense that there's a "but" coming.

Wait for it. ... BUT these algorithm-centric interviews are far from objective in many cases! Again, unless candidates are likely to use those algorithms in their day-to-day work, the questions don't help you judge people equally on the development skills that matter. If anything, interviewers can ignore the answers when they like a candidate, or point to them as "objective" justification when they don't.

Google and maybe a few other development firms started using this algorithm-centric interview because they have a rare requirement: They want the developers they hire to be able to work on anything (or just about anything). If your company doesn't have this requirement, it's probably a good time for you to stop trying to mimic The Google Interview™.

Well, what if I do want my developers to be able to work on anything?

In that situation, it's perfectly reasonable to go beyond a fizzbuzz test and a review of the candidate's experience. It's not unreasonable to expect a candidate to write code that solves a real-world problem.
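For anyone who hasn't run into it, fizzbuzz is about the most basic coding screen there is. Here is a sketch of the classic 1-to-100 version in Python:

    # Print 1..100, replacing multiples of 3 with "Fizz", multiples of 5
    # with "Buzz", and multiples of both with "FizzBuzz".
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)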

A more substantial exercise, for example: You might describe some objects that each have a value and link to similar objects, and ask the candidate to find a certain value by first looking at all of an object's nearest neighbors, then at their nearest neighbors, and so on. Even if you've never heard of graph theory or BFS, or seen code for either, you as a developer still need to be able to turn ideas like that into code.
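Turning that description directly into code, without ever needing to name it "breadth-first search," might look something like this Python sketch. The Node class and the calling convention are my own illustration:

    from collections import deque

    class Node:
        def __init__(self, value, neighbors=None):
            self.value = value
            self.neighbors = neighbors or []  # Links to similar objects

    def find_value(start, target):
        # Look at the starting object, then its nearest neighbors, then
        # their nearest neighbors, and so on, until the value is found.
        queue = deque([start])
        visited = {id(start)}
        while queue:
            node = queue.popleft()
            if node.value == target:
                return node
            for neighbor in node.neighbors:
                if id(neighbor) not in visited:
                    visited.add(id(neighbor))
                    queue.append(neighbor)
        return None  # Value not reachable from the starting object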

But in real-world software development, looking for a clever solution is almost never the right approach. If a candidate says there's probably an open-source library available to solve a problem, that's good. That's most likely what he should use in a real-life situation. Clever solutions to these algorithm-centric problems are unaffectionately referred to as "reinventing the wheel."

I've heard that reinventing the wheel does tend to happen a bit too often with each new generation of developers.

I'm sure there are some wheels that need to be reinvented too, but it's extremely rare when you can't refer to a textbook or use an existing library that provides a much better solution than your clever one. At a fundamental level, pretty much all of the problems you'll encounter in software engineering have been solved before.

If you need to filter out developers who can't "work on anything," just ask questions that screen out the one-trick ponies. For example, maybe an iOS developer is mainly good at hooking together existing APIs but can't think through to a sane data representation. Focus on the kinds of questions that reveal whether a developer is skilled in only one or two narrow domains.

There's no problem with trying to filter out these non-generalist developers. The problem is that most companies don't need to. Most people who have shown proficiency at learning development in one area can definitely get up to speed in another area as well.

So you're saying companies should hire based on learning potential? With less of a focus on existing knowledge?

Totally.

Another problem I touched on earlier is that companies will use algorithm questions to make unfair judgments. In the worst cases, CS questions can be used to support interviewer biases with "objective" findings. Don't like a candidate because you've got a gut feeling? You can put him right in the "no" pile because he couldn't invert a binary tree (the other candidate couldn't do a bunch of other stuff, but it's all about which questions you decide are important).
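For the record, inverting a binary tree just means mirroring its left and right children at every level, and it's the kind of answer that fits in a few lines once you look it up. A quick Python sketch:

    class TreeNode:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def invert_tree(root):
        # Swap the left and right subtrees at every node, recursively.
        if root is None:
            return None
        root.left, root.right = invert_tree(root.right), invert_tree(root.left)
        return root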

In other cases, the algorithm-centric interview can be taken to the extreme—when missing one algorithm question disqualifies a candidate regardless of the context. Ever heard of the package management tool, Homebrew?

This tweet from the creator of Homebrew always gets a lot of attention in these whiteboard interview conversations.

Google didn't hire the guy who made Homebrew?! Even I've used that software!

Me too, and I don't even code for a living. You can read the reply thread if you're interested in the full story around that tweet (it's very interesting!), but it highlights how algorithm knowledge is not as important as being able to create practical solutions that solve a problem for hundreds of thousands of people.

Why are companies so afraid of hiring the wrong people?

Now you're getting to the core of the issue. No one wants to get in trouble for hiring the wrong person, so companies use these CS-fundamentals questions to shield themselves from criticism when a new hire doesn't work out. Apparently everyone decided (or simply looked to leaders such as Google, Amazon, and Microsoft) that these questions were objective measurements of developer aptitude.

The research I cited above shows that these types of interviews don't identify good candidates very well, but I think many companies either 1) aren't aware of the research (or of how widely developers loathe this kind of interview), or 2) are carried along by the inertia of this interview style and haven't taken the blinders off and forced themselves to change.

I will agree that hiring the wrong person is a significant hit for most companies when you count the wasted effort in recruiting and training. And everyone on a team feels the pain of picking up the slack when a colleague leaves and a replacement can't be brought in fast enough to fill the hole.

But, as you said, algorithm-centric interviews don't fix that. So how DO we fix this?

What? I went to all this trouble to lay out why this is a problem, and now you want me to figure out how to fix it, too? Well, I've written a second article that shares examples of companies I think are doing interviews the right way, plus some simple dos and don'ts. Go read that now.
