Interview Questions

Imagine that you are working for me, and that I want your help in qualifying and hiring new staff. I start by giving you my idea of how to interview a candidate for a job.

“Prepare a set of questions with specific, predetermined answers. Asking questions is expensive, so make sure to come up with as few questions as you can. Ask the candidate those questions, and only those questions. (It’s okay if someone else does the asking; anybody should be able to do that.) Check the candidate’s answers against what you expected. If he gives the answers that you expected, you can tell me that he’s good enough to hire. If he doesn’t, send him away. When he comes back, ask him the original questions. Keep asking those questions over and over, and when he starts giving the right answers consistently, then we’ll hire him.”

Now, a few questions for you.

1) Would you think me a capable manager? Why or why not?
2) What might you advise me about the assumptions and risks in my approach towards interviewing and qualifying a candidate?
3) What happens in your mind when you replace “interviewing a candidate” with “testing a product or service”, “questions” with “test cases”, “asking” with “testing”, “answers” with “results”, “hire” with “release”? Having done that, what problems do you see in the scenario above?
4) How do you do testing in your organization?

8 replies to “Interview Questions”

  1. Very pointed. I love the point; it’s one that I echo to others in my organization.

    Testing is more than validation, or verifying the spec. There is more to quality than black and white validation of output or behavior.

    I ‘test’ in an R&D division. Everything is exploratory, discovery, fuzzy. And many testers that I talk to don’t understand how I can test, how I can evaluate quality.

    Or I integrate with a product in a way that is ‘unsupported’ and find new and interesting (sometimes rather important) bugs.
    The most difficult part is conveying to developers and other testers how I discovered these bugs: in the end, identifying the assumptions in the original testing.

    Michael replies: That’s a skill we call test framing, and it’s a very important thing to practice. I have a number of related posts at this link. I hope some of the material and suggestions help.

  2. I just want to add one more restriction to the described method: “There is only one room for the interview, and it is busy all the time, so there is only one hour for each candidate’s interview. It’s a pity, but 80% of them are about 15–30 minutes late. So you can ask only some of the prepared questions, and you still have to make a decision about hiring.”

  3. I like the path that you brought me through here. This reminds me of BDD. I was thinking that before I got to the questions. The analogy’s weakness, from what I saw, is that the expected result is never announced to the candidate. I would be happy to find a candidate who researched the questions, reviewed the possibilities, and improved her performance. I’d be really impressed if the developer could solve the failed test without that information, whether from a bug report or from a list of the steps and criteria.

    Michael replies: Imagine that we told the candidate “Here are the right answers to the questions—and these are the only questions that the dude is going to ask you!” Assuming that we stuck to that set of questions, wouldn’t the results be even worse? Wouldn’t we get at least people who had swotted up on those questions, and no others? (Does this remind you at all of any software testing qualification scheme?)

    Overall, I like the point you made.

    Thank you. For more—or for those who are still fuzzy on what I’ve tried to show here—please see here.

  4. I think the approach you describe could work well not as an interviewing and hiring process, but as a pre-screening process. If you define a set of “must know” questions, and the candidate fails to answer one of them correctly, then you send him home. If he does give the right answers to your questions, you don’t hire him; you schedule an interview.

    The same approach could work well not as a “testing a product or a service” approach in general, but as a smoke testing approach. If you define a set of “must work” features, and the build you are checking fails on one of these, you send it back as “broken”. If all the checks pass, you don’t ship it; you start testing it.
