Being a good interviewer, with a former Twitter VP

Rich Paret is a tech leader who was formerly a VP of Engineering at Twitter, a Senior Director at Google, and an early leader at many startups. He has built and scaled teams both large and small, and most recently started his own company to fix what’s broken in the hiring process.

I asked Rich a few questions on how he developed a framework for hiring from his experience building teams.

Our tips for hiring managers based on key takeaways from the conversation are:

  • Evaluate candidates along two dimensions, behaviors and functional skills, to get to a “yes.”
  • Move quickly. Focus on the “minimal effective dose”: the minimum number of interviews you need to get to a “yes/no.”
  • Use pair interviewing. Doing it together means you grow and improve as interviewers and also introduce less bias in your analysis.
  • To get consistent results across candidates, develop a matrix that captures company values and write down concrete examples of what that looks like for the role you’re hiring for.
  • Interview like a journalist, anchoring on what happened in the past. You want to understand if the candidate was an agent or a bystander in these stories. For startup hiring, you need agents.
  • Interview tests are important for assessing functional skills but can be biased in subtle ways. Develop assessments for functional skills that closely mirror the types of situations, activities, and decisions candidates will actually experience.
  • Make the requirements of the role concrete by focusing on the first 90-day fit. What would a person need to accomplish in 90 days to make them an outstanding hire?

Interview transcript - edited for length and clarity

Chris Bourdon: As an engineering leader, finding great people to join your team is a critical skill. But it’s an area where we have some of the biggest blind spots.

Where do teams go wrong in their interview process? And how can they improve?

Rich Paret: I’ll zero in on the easy things that we should fix. First of all, especially when we’re small, we feel like every single person needs to be involved in the interview process, because we want them all to meet these potential candidates. So we make every person an interviewer. That is not a very good idea, because you end up exploding the number of interviews and the types of interviews. Instead, I’m a big fan of the idea of the minimal effective dose. Minimal effective dose of exercise. Minimal effective dose of interviewing. What is the bare minimum you need to make a decision?

It turns out you need two things. You need to know if this person has what I’ll broadly call behaviors. Sometimes people call it culture fit or soft skills. You need to know if this person has the right behaviors, values, etc. to be successful in this role. And you need to know if this person has the functional skills and abilities to be successful in this role. Sometimes those things are interrelated. Oftentimes you can draw two non-overlapping circles. And so the perfect interview scenario is one where you do two tests. You clear away the prerequisites: for any kind of knowledge-worker role, you’ve passed what I call the FizzBuzz test. This is the very basic test: if you are an engineer, can you code? If you’re a product manager, can you write sentences that are clear?

…always do pair interviewing. Because interviewing is one of these business skills, everyone knows that you get better with practice, and you get better with feedback.

So now you get down to these two tests. When I try to get people to do this, I say, hey, every time you do these assessments, you should do them with two people. So always pair interview. Because interviewing is one of these business skills: everyone knows that you get better with practice, and you get better with feedback. But traditionally, we say, oh, interviewing is one on one. It’s a candidate and an interviewer in a box. And there’s no inspection of the process, and there’s no feedback.

That’s nuts. You should put two people in the room and take turns driving. One person drives the interview one time, then you switch. You’re going to improve as interviewers much more rapidly, and that has meaningful outcomes on your ability to hire.

Chris: Talk more about that.

What is the core rationale that drives pair interviewing?

Rich: As an interviewer, you’re trying to do a lot of different things. You’re trying to keep the interview on track. You’re trying to listen to the candidate’s questions. You’re trying to drive the interview forward with your questions, etc. And because you are in this operational mode, the amount of listening you can do is sort of detuned. It takes a lot of practice to be able to drive it and also listen. If you have another person, they are just listening. So after the interview, when you debrief these two people, the other person says, “Hey, did you hear when they said x, when they explained this? Did you pick up on the fact that they were really just talking about the thing at a high level?” And you say, “Oh, no. I didn’t realize that they didn’t go into the level of detail. Because we were running behind, I was just trying to hit it quickly.” And then more critically the other person might say, “Hey, Chris, when you’re in this interview, you really started mirroring that person. You said things like, ‘Oh, that must have been tough.’ And you were being really informal with them. As a result, I don’t think you asked enough hard questions and I feel like you didn’t do a thorough enough job.”

Now all of a sudden you get this feedback. Oh, I actually didn’t do the right thing. As an interviewer, I didn’t ask the right questions, or I didn’t pick up on the right pieces, or I used words or phrases that might not necessarily be the most effective. So the next time I go to interview, I am a better, more effective interviewer. So that’s why it’s dangerous to go alone. Go together, grow, and improve.

So the next time I go to interview, I am a better, more effective interviewer. So that’s why it’s dangerous to go alone. Go together, grow, and improve.

The same thing applies on the technical side. There is a person who is proctoring the interview and there is another person who is working with them and providing feedback. They might help them understand subtle hints or flag biases like, “Hey, you gave the cisgender white guy a ton of hints, but you were really hard on the person from an underrepresented background. Did you mean to do that?” These soft, subtle patterns won’t occur to the interviewer, but the observer is listening and identifying them.

Also, when there is one person giving the interview on the technical side, they might get a feeling that this person didn’t do very well in this interview. Is that because the candidate is not doing well? Or is it because our interview needs to be updated? But if you have two people in the room, when you debrief you can say, “Wait a second. You got a funny feeling? I got a funny feeling.” Now maybe it isn’t the candidate. Maybe we have to go back and retool this interview. So you can actually evolve the process.

Chris: So you’ve established the framework for the interview. You have one technical and one behavioral. Use pairs to get the best evaluation of the candidate and to identify areas for improvement.

What behavioral questions should we be asking the candidates?

Rich: Keep it super simple. The only thing you should ask on the behavioral side is past-behavioral questions. A good question is: “In the job that you had most recently, what accomplishment are you most proud of?” And then you get into why and how and these high-velocity questions that allow this person to talk. You just act like you’re a reporter or a journalist, but always anchored on what actually happened in the past. Not hypothetically. Not theoretically. Just what happened.

You’re listening for what the behaviors were. Does this person say things like, “We did this as a team, and here was the role that I played. And here was the impact that happened. And here’s what I learned about it”? You want to understand if they were an agent in these stories or if they were a bystander. That’s what you are trying to suss out because, especially when hiring for startups, you need agents: people who are actively doing things, not people that things are happening to. And if you just listen, it’s basic 101 stuff.

Too many people are focused on what I call knockouts. They want to hit candidates with a ton of questions and find out if they’re not good, super fast, so they can go on to the next person. That’s wrong. Listen to people’s stories and then decide: are these people agents in their own story, or are they bystanders?

My dad is a ski instructor, and has been for many, many years. As a kid I’d get on the quad ski lift with him and two strangers, and by the time that lift got to the top, my dad would have basically learned the life story of the people on the lift with us: what brought them to the mountain that day, where they came from, how many kids they had, where they grew up. When I was a kid, if I could have teleported myself off of that chairlift I would have. But what I realized was that this is his superpower. He’s just very curious about people. People will open up to you; you just have to be curious.

That’s what you are trying to suss out because, especially for hiring for startups, you need agents.

So just act like a reporter, anchored on what actually happened in the past. You want to understand if they were an agent in these stories or were they a bystander. That’s what you are trying to suss out because, especially for hiring for startups, you need agents.

Chris: Being able to judge those answers is rooted in the values of an organization. What you hear from an agent versus a bystander is accountability. If the outcome was good, do they take responsibility? If the outcome was poor, do they become a bystander?

Rich: You raised a very important point. The first step is really listening. That’s the hardest thing, because people just don’t want to listen, full stop. The second thing is to quantify in advance what you are listening for. I start with determining what people value in good employees, and we do a little brainstorm. It turns out that there are basically only five or six master cultural indicators that load for all of these other things. These are adaptability, integrity, collaboration, results orientation, customer orientation, and detail orientation.

Turns out that there are basically only five or six master cultural indicators… These are adaptability, integrity, collaboration, results orientation, customer orientation, and detail orientation.

Everything else you can think of, like: Is this person a leader? Is this person a self-starter? Is this person someone who’s gonna run through walls to overcome adversity? Is this person going to go out of their way to make a customer happy? All of those things, in the research, load onto those major factors.

Chris:

So how do you enable a common framework so you can get consistent results across your organization?

Rich: I like to have a matrix where we can capture our values. If you care about adaptability, what does great adaptability look like for you in the context of this role? What does an okay level of adaptability look like? And what does a bad answer look like? Get interviewers to prefetch these things, and do the same thing for the technical screen. If this person needs a lot of hints, is that bad? If this person doesn’t write tests, is that bad or good? Getting people to prefetch what answers look like makes this process so much easier, because instead of having to fight against their own implicit biases, they can just sit back and listen. At the end, they might say, “I really liked this person. He and I had really good dialogue, and we had a really good chat.” But how much of that was down to familiarity and ease of communication, and how much was down to the things that we care about?
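To make the idea concrete, here is a minimal sketch of what such a matrix might look like in code. This is a hypothetical illustration, not anything from the conversation: the value names come from Rich’s list of cultural indicators, but the example answers, the `RUBRIC` structure, and the `debrief` helper are invented for this sketch.

```python
# Hypothetical sketch of a prefetched interview rubric: for each value the
# team cares about, interviewers write down in advance what great, okay,
# and poor answers look like for this specific role.
RUBRIC = {
    "adaptability": {
        "great": "Describes changing course on concrete evidence and what they learned.",
        "okay": "Acknowledges that plans changed but stays at a high level.",
        "poor": "Blames shifting requirements; no personal adjustment.",
    },
    "collaboration": {
        "great": "Uses 'we' language and names the specific role they played.",
        "okay": "Mentions teammates but is vague about their own contribution.",
        "poor": "All 'I' language when describing team outcomes.",
    },
}

def debrief(scores: dict) -> list:
    """Given each rating per value ('great'/'okay'/'poor'),
    return the values worth discussing before a decision."""
    return [value for value, rating in scores.items() if rating != "great"]
```

In the debrief, each interviewer fills in a rating per value; `debrief({"adaptability": "great", "collaboration": "okay"})` would surface `["collaboration"]` as the point to dig into, which is exactly the kind of written-down evidence that lets a report push back on an excited manager.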

If you have that matrix to refer to, it’s a good way to fix the power dynamic when a manager and their report pair up for an interview. If, as a manager, I come out of the interview and say, “I loved Chris. He was so great. I want to make him an offer right now,” it’s hard for my report to challenge me. It’s hard for people in that position, in that power dynamic. They think, “Rich is excited, so I’ll just be excited.” But if you have something written down, now the report has ammo. They can say to me, “I know that you’re excited, but when this person was talking about collaboration, he was using all this ‘I’ language. I think he’s talking about collaboration but it’s really all about him. Did you hear that?”

Did I just get totally snowed by this person? Am I about to make this big mistake…

All of a sudden, there is an opportunity to hit pause. Wait a second. Did I just get totally snowed by this person? Am I about to make this big mistake where I bring in somebody like that white guy with a beard who fits my bias, while missing a critical cue that someone else heard because they have a different perspective? They were also listening with less rapture and were able to pick up on the signals and bring the evaluation in line with our predetermined values. And that, I think, helps on the behavioral side.

On the technical side, let me talk about this in the context of product managers, because this is one of the more obvious mistakes. A lot of times I see people hire PMs and they don’t know how to interview them. So they make the PM candidate give a presentation on a product. And what ends up happening is they hire people who are good at giving presentations. But a PM’s job, unless you work at a very big company, is not 90% giving presentations. It’s a whole bunch of other things. So you have to be very careful that what you are testing is actually the thing that you value. Testing a PM on giving a presentation, or testing an engineer on being able to do an algorithm from scratch on a whiteboard, is like the drunk who’s looking for his keys under the light. You know the story. You come upon a drunk and ask him what’s going on. He says he lost his keys. You ask where he lost them and he says, “Down the street.” So you ask why he’s looking here and he says, “Well, this is where the light is.”

You have to be very careful that what you are testing is actually the thing that you value. Because you choose what the test covers, you can end up optimizing for the wrong thing. This is a hard problem. You have to devise a test that replicates what actually happens. The canonical way to do this is what’s called a situational judgment test. If you ever went to college or did any kind of schooling, they gave you case studies that came from Harvard. You read the case study and figure out what you are supposed to do. That’s exactly what a situational judgment test is: here is a scenario that presents a dilemma; what do you think the right answer is? For any kind of leader — anybody that is gonna be a manager, anybody that’s going to be a product manager — that kind of test is the gold standard. A lot of times the proxy variable we assign to successful managers is scope: how many people did you manage? This is a very bad idea.

Chris: Last question. In order to get a consistent approach across the organization it seems that preparation goes a long way. You talked about prefetching answers earlier.

What else is important about preparation to make interviews successful?

Rich: It’s really one of these things that, in engineering, we call garbage in, garbage out. There is so much anxiety wrapped up in interviewing, both for the interviewee and for the interviewers, and it’s not a thing that we get a lot of formal training on. It’s not a thing that we do every day. I can tell you about people I’ve interviewed who were big PM directors at Facebook, and a minute before the interview they were Googling PM interview questions. They were just winging it. And we don’t do that in any other context at work. We prepare. And I think that the type of preparation you should do is thinking about the critical factors for success in this role. We tend to overload. We try to find the mythical platonic form of a person. But that person doesn’t exist.

…a minute before the interview they were Googling PM interview questions. They were just winging it. And we don’t do that in any other context at work. We prepare.

We use that person, though, to compare against the very real people in front of us. I always tell people: in a dynamic environment, think about what success looks like in 90 days. In 90 days, what would this person need to have done for you to consider them an outstanding hire? Usually people can back out a mission or mission topics from that. So listen to this person’s story, listen to their past behaviors, test their functional skills, and just ask yourself, can this person fulfill the mission? Because that question is a lot more structured and a lot more concrete. It’s a lot more actionable. The question isn’t, can they be the best software engineer I’ve ever seen? It’s, can they do the thing I need to get done to move the business forward in the next 90 days?

Thanks Rich! You can learn more about evidence-based hiring and leadership from Rich via his mailing list, starting with The Career Story Interview, and sign-up to pilot his platform for hiring managers at tenarch.com.
