
The End of Manual Testing

Testers: when we speak of “manual testing”, we help to damage the craft.

That’s a strong statement, but it comes from years of experience in observing people thinking and speaking carelessly about testing. Damage arises when some people who don’t specialize in testing (and even some who do) become confused by the idea that some testing is “manual” and some testing is “automated”. They don’t realize that software development and the testing within it are design studio work, not factory work. Those people are dazzled by the speed and reliability of automation in manufacturing. Very quickly, they begin to fixate on the idea that testing can be automated. Manual bad; automated good.

Soon thereafter, testers who have strong critical thinking skills and who are good at finding problems that matter have a hard time finding jobs. Testers with only modest programming skills and questionable analytical skills get hired instead, and spend months writing programs that get the machine to press its own buttons. The goal becomes making the automated checks run smoothly, rather than finding problems that matter to people. Difficulties in getting the machine to operate the product take time away from interaction with and observation of the product. As a result, we get products that may or may not be thoroughly checked, but that have problems that diminish or even destroy value.

(Don’t believe me? Here’s an account of testing from the LinkedIn engineering blog, titled “How We Make Our UI Tests Stable”. It’s wonderful that LinkedIn’s UI tests (checks, really) are stable.

Has anyone inside LinkedIn noticed that LinkedIn’s user interface is a hot, confusing, frustrating, unusable mess? That LinkedIn Groups have lately become well-nigh impossible to find? That LinkedIn rudely pops up a distracting screen after each time you’ve accepted a new invitation to connect, interrupting your flow, rather than waiting until you’ve finished accepting or rejecting invitations? That these problems dramatically reduce the desire of people to engage with LinkedIn and see the ads on it?)

Listen: there is no “manual testing”; there is testing. There are no “manual testers”; there are testers. Checking—an element of testing, a tactic of testing—can be automated, just as spell checking can be automated. A good editor uses the spelling checker, while carefully monitoring and systematically distrusting it. We do not call spell checking “automated editing”, nor do we speak of “manual editors” and “automated editors”. Editors, just “editors”, use tools.
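
To make the analogy concrete, here is a toy sketch in Python (the word list and the sentences are invented for illustration). The checker catches exactly what it was designed to catch, and nothing more; noticing that a correctly spelled word is the wrong word remains the editor’s job.

```python
# A toy spelling checker: it flags only tokens absent from its word list.
# Like any check, it detects exactly what it was designed to detect.
WORDS = {"the", "their", "there", "account", "was", "closed"}

def spell_check(text):
    """Return the tokens that the word list does not recognize."""
    return [word for word in text.lower().split() if word not in WORDS]

print(spell_check("Teh account was closed"))    # ['teh'] -- caught
print(spell_check("There account was closed"))  # []      -- misuse missed
```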

All doctors use tools. Some specialists use or design very sophisticated tools. No one refers to those who don’t as “manual doctors”. No one speaks of “manual researchers”, “manual newspaper reporters”, “manual designers”, “manual programmers”, “manual managers”. They all do brain- and human-centred work, and they all use tools.

Here are seven kinds of testers. The developer tests as part of coding the product, and the good ones build testability into the product, too. The technical tester builds tools for herself or for others, uses tools, and in general thinks of her testing in terms of code and technology. The administrative tester focuses on tasks, agreements, communication, and getting the job done. The analytical tester develops models, considers statistics, creates diagrams, uses math, and applies these approaches to guide her exploration of the product. The social tester enlists the aid of other people (including developers) and helps organize them to cover the product with testing. The empathic tester immerses himself in the world of the product and the way people use it. The user expert comes at testing from the outside, typically as a supporting tester aiding responsible testers.

Every tester interacts with the product by various means, perhaps directly and indirectly, maybe at high levels or low levels, possibly naturalistically or artificially. Some testers are, justifiably, very enthusiastic about using tools. Some testers who specialize in applying and developing specialized tools could afford to develop more critical thinking and analytical skill. Correspondingly, some testers who focus on analysis or user experience or domain knowledge seem to be intimidated by technology. It might help everyone if they could become more familiar and more comfortable with tools.

Nonetheless, referring to any of the testing skill sets, mindsets, and approaches as “manual” spectacularly misses the mark, and suggests that we’re confused about the key body part for testing: it’s the brain, rather than the hands. Yet testers commonly refer to “manual testing” without so much as a funny look from anyone. Would a truly professional community play along, or would it do something to stop that?

On top of all this, the “manual tester” trope leads to banal, trivial, clickbait articles about whether “manual testing” has a future. I can tell you: “manual testing” has no future. It doesn’t have a past or a present, either. That’s because there is no manual testing. There is testing.

Instead of focusing on the skills that excellent testing requires, those silly articles provide shallow advice like “learn to program” or “learn Selenium”. (I wonder: are these articles being written by manual writers or automated writers?) Learning to program is a good thing, generally. Learning Selenium might be a good thing too, in context. Thank you for the suggestions. Let’s move on. How about we study how to model and analyze risk? More focus on systems thinking? How about more talk about describing more kinds of coverage than code coverage? What about other clever uses for tools, besides for automated checks?

(Some might reply “Well, wait a second. I use the term ‘manual testing’ in my context, and everybody in my group knows what I mean. I don’t have a problem with saying ‘manual testing’.” If it’s not a problem for you, I’m glad. I’m not addressing you, or your context. Note, however, that your reply is equivalent to saying “it works on my machine.”)

Our most important goal as testers, typically, is to learn about problems that threaten value in our products, so that our clients can deal with those problems effectively. Neither our testing clients nor people who use software divide the problems they experience into “manual bugs” and “automated bugs”. So let’s recognize and admire technical testers, testing toolsmiths and the other specialists in our craft. Let us not dismiss them as “manual testers”. Let’s put an end to “manual testing”.

You’ve read the blog; now see the movie.

24 replies to “The End of Manual Testing”

  1. I really love this discussion. I’ve been trying to humanize the testing process myself by relating testing to the things we do in our daily lives. The tools we use to find the issues within the products we support are the more important functions of what we’re really talking about.

    I’ve taken to trying to explain the concepts as blending the testing tools, approaches and strategies as opposed to manual vs automation. Thanks for the read. Shared with my network too!

    Michael replies: Thanks, Nick. I’d gently dispute that the tools are the more important function of what we’re really talking about. (More than what? Did you mean “among the more?” I could agree with that.) I appreciate your focus on humanity, and on relationships between tools, approaches, and strategies.

  2. Michael,
    I think these sentences say it all “They don’t realize that software development and the testing within it are design studio work, not factory work. Those people are dazzled by the speed and reliability of automation in manufacturing. Very quickly, they begin to fixate on the idea that testing can be automated. Manual bad; automated good.”

    As an “Automation Guy” I know that all I do is mechanize a process with a tool. I automate the execution of a process; it is up to the human to decide whether it is correct or not, whether it is valuable or not. If you don’t use the computer between your ears first, it won’t matter what you do with the computer at your fingertips. This is partly why I use the phrase “It’s Automation, Not Automagic”.

    And yes, we are all Testers.

    Michael replies: Thanks, Jim.

  3. I think the main problem is that “tester” does not differentiate between salary levels. As a non-coding tester, one’s salary will normally be 50%-70% of an equivalent coding tester’s (unless there’s another valued specialization such as performance or security). People use titles to filter job adverts and to communicate. If an employer can afford to pay more, they’ll post for an “automation engineer”; if an employer prefers to pay less and hire a non-coding tester, they need a title for that.
    So far, I’ve seen some use “exploratory tester” (and was a bit offended by it – when I write code I test and explore just as any other tester does), but nothing that means “here’s a good analytical mind that will help do something constructive that isn’t coding”.

    Still looking for a good title for that. Simply “tester” (which I try to use myself until something better comes up) seems not to hold.

    Another point I have against “manual testing” is that this role description is defined by what one is not expected to do (I did rant about it not long ago: https://always-fearful.blogspot.com/2017/03/abolishing-manual-testers.html )

    Michael replies: “Tester” should be up to the task of describing a good analytical (testing) mind, whether coding is involved or not. Why define ourselves in terms of things that we are not? “Non-security tester”? “Non-performance tester”? “Non-analyst”? And I don’t understand why one might get offended when someone asks for an exploratory tester.

  4. A lot of testers I know agree with you, but not many managers, directors, CIOs, or VPs. The word “manual” carries a stigma in board rooms and annual stack-ranking meetings. That stigma can be reduced or erased by demonstrating to stakeholders some practical outcomes of applying systems thinking etc. to the current application under development. Seeing is believing, and is memorable. Arguments, even when successful, aren’t.

    Michael replies: That’s my point here. If we didn’t talk about “manual testing”, we wouldn’t have to argue about it. Nobody argues about “manual programming” or “manual management”. There are no open positions for “manual directors” or “automation CIOs”.

  5. Hello, Michael!

    Thank you for writing on this. I couldn’t agree more.

    I will go further – I am ready to challenge anyone defending the term “manual testing” to explain what “manual” actually means, and how it practically helps him/her to explain their testing better.

    There are no manual neurosurgeons, no manual lawyers, no manual pilots, no manual software developers – what is so specific about software testers that it requires the use of the term “manual”? And another rhetorical question – if I am testing voice recognition, am I still “manual”? Am I doing it without using my hands?

    According to my current knowledge, “manual” was never a correct, nor a useful, term to describe testing. The only thing it was ever useful for is supporting false statements about testing – that it is all an easy, repeatable, sometimes brainless set of steps, cases, and scripts – those actions that you and James refer to as “human checking”.
    I bet any complex action could be considered a “manual” set of steps if it is observed by a person lacking expertise and understanding of its complexity.

    Example: I can consider golf a “manual” activity – to me (having never played it) it looks like just hitting balls on a green terrain. If I did this, I’d show my lack of knowledge about the wind speed, the power of the swing, the choice of the proper golf club, the terrain, the spin of the ball, or whatever else. The same thing happens whenever non-testers take a look at testing. What we are doing wrong here, in my humble opinion, is letting such people define testing and agreeing with their false terminology, instead of objecting to it. This might help us to move on in the short term, but it really harms our craft in the long run.

    Thanks, Michael!

  6. The past few years, I have been telling people that the most important piece in an automated test setup is the human being who configures and sets up the tests, then analyses the results. Brilliant piece, Michael, absolutely brilliant.

  7. It is easy to see how it could have gone:

    First, there was Testing.

    Then came Automated Regression Testing, in short : Automated Testing.

    But, that implies that the non-Automated Testing would be : Manual Testing.

    And, since Automated (regression) Testing is so much more efficient than non-automated (regression) Testing, it begs the question :

    Who still needs Manual Testing? It’s costly and ineffective when compared to Automated Testing!

    Q.E.D

    Michael replies: It’s true, I think, that no one would be talking about “manual testing” if someone hadn’t made the error of thinking that automated checking was “automated testing”.

    It doesn’t take very long to remind reasonable people that automated checking is a tactic within testing; that operation of a product and observation of it can be performed algorithmically; that decision rules about those observations can be applied algorithmically; that a human can be alerted to the output of those decision rules algorithmically. It doesn’t take long for those reasonable people to recognize all the skilled testing and coding work that must precede and follow that algorithmic sequence. Those people don’t think of the preparation of checks and the interpretation of the outputs as “manual preparation” and “manual interpretation”, do they?
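
    (As a minimal sketch, with a hypothetical price_with_tax function standing in for the real product, that entire algorithmic sequence might amount to no more than this. Everything interesting, including deciding that this check is worth writing and what a failure would mean, happens outside the code.)

    ```python
    def price_with_tax(price, rate):
        """Hypothetical stand-in for the product behaviour being checked."""
        return round(price * (1 + rate), 2)

    def check_price_with_tax():
        observed = price_with_tax(100.00, 0.13)  # operate and observe algorithmically
        expected = 113.00                        # a decision rule a human chose earlier
        if observed != expected:                 # apply the rule algorithmically
            print(f"CHECK FAILED: expected {expected}, observed {observed}")
        else:                                    # alert a human to the rule's output
            print("check passed")

    check_price_with_tax()
    ```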

    Harry Collins makes a similar argument: before knowledge was made explicit, there was just knowledge. Explicit knowledge is “parasitic” on tacit knowledge; that is, knowledge must be tacit before it can be made explicit. Until people realized there was such a thing as explicit knowledge, no one felt moved to remark on the idea of tacit knowledge. So while explicit knowledge is parasitic on tacit knowledge, the idea of tacit knowledge is parasitic on the idea of explicit knowledge!

  8. We should do manual testing.

    Poorly tested manuals might be a problem for our users.

    Michael replies: Hate to break it to you, but I did that joke a while back.

    Nonetheless, you’re right.

  9. I am afraid that “manual testing” (which actually is quite OK if you test a manual of some kind 😉) is the tip of the iceberg. Some time ago I noticed there is another problem that may be hard to solve. You see, many people start their electronic equipment and they see: “Testing”, “Self-test”, “Memory test”, “HDD Test”…

    For these people, “testing” is something which a machine can do (since their machines apparently do it). So, they do not get the idea of testing being creative work.
    I wonder if we can do anything about it? Call ourselves “creative testers” or something?

    Michael replies: I offer “naturalistic”, “direct”, “unmediated”, “humanistic”, “conscious”, “brainual”, “analytical”, “cognitive“, “experimental”, “interactive”, “exploratory”, “adaptive”, “risk-oriented”, “attentive”, “investigative”, or “discovery-focused” as options. But better yet, let’s simply ignore what the machines are saying as a bit of self-aggrandizement, enabled by some BIOS writer somewhere. It’s automated checking, of course. Let’s reclaim the word and call what we do, simply, “testing”.

  10. Leonard Haasbroek summed it up in “Then came Automated Regression Testing, in short : Automated Testing.”
    Automated regression testing is not automated testing.

    We still need to test the new features and flows of a product – and a real human (a tester per se) is the best source for getting this done ‘right’.

    Michael replies: A real human tester is the only way for getting this right, because software development, testing, and the value of the product are all human things. Machinery doesn’t do that. Machinery can extend or enable or accelerate humans powerfully as they do that, though.

    Once the feature has shipped, we need something to make sure it does not get broken; enter (automated) regression tests. These are a ripe target for automation.

    Let’s be careful. Automated checks do not make sure that it’s not been broken. They are designed to detect if it has been changed in ways, presumably unwelcome ways, that the automated checks can detect. That’s an important difference.
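
    (A tiny illustration of that difference, with invented values: the check below passes as long as the output is unchanged since it was recorded, which says nothing about whether, say, the rendered screen is usable.)

    ```python
    # A regression check compares output with a previously recorded value.
    # It detects change, not brokenness: a bug fix would fail this check,
    # while a genuine problem outside its observations passes unnoticed.
    RECORDED = "Total: 113.0"   # captured when the feature shipped (hypothetical)

    def render_total():
        """Invented stand-in for the product's current behaviour."""
        return "Total: 113.0"   # the text might render in an unreadable font;
                                # this check would never know

    assert render_total() == RECORDED, "output changed since it was recorded"
    print("check passed: output unchanged (which is not the same as correct)")
    ```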

    But as soon as you try to automate the initial testing of any features where user flows exist – you take away the testing of the ‘feel’ of the product. This cannot be automated. For a successful product, this testing (dare I say manual testing) is crucial.

    Testing is evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, and plenty of other stuff. Machines do none of this stuff, except to the degree that we use “machine learning” as a metaphor for “algorithm refinement”.

    To be clear, I’m not saying that nothing within testing can be automated; I’m saying that testing itself cannot be automated. Tasks within testing can be automated, for sure, to great effect. Delivering input to the program can be automated. The comparison between an output and a list (or a calculated value) can be automated. Logging the outcome of the comparison can be automated. Excel doesn’t automate “manual” accounting (it automates complex calculation); AutoCAD doesn’t automate “manual” design (it automates the mechanical bits of drawing and rendering); tools don’t automate “manual” testing (they automate checking).
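
    (A sketch of those automatable tasks, with an invented sort_numbers function standing in for the product: delivering the input, comparing against calculated values, and logging the outcomes are all automated below; choosing those inputs and interpreting the log remain human work.)

    ```python
    import logging
    from collections import Counter

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

    def sort_numbers(items):
        """Invented stand-in for the product behaviour under test."""
        return sorted(items)

    # Delivering input can be automated; a human decided what to vary and why.
    for data in ([3, 1, 2], [], [5, 5, 5], [-1, 0, 1]):
        observed = sort_numbers(data)
        # The comparison can be automated: the output must be ordered and must
        # contain exactly the same elements as the input.
        ordered = all(a <= b for a, b in zip(observed, observed[1:]))
        same_elements = Counter(observed) == Counter(data)
        outcome = "PASS" if (ordered and same_elements) else "FAIL"
        # Logging the outcome of the comparison can be automated, too.
        logging.info("%s input=%r observed=%r", outcome, data, observed)
    ```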

  11. As for hiring practices that prefer people who will make automated checks run smoothly – is it something that you observe everywhere you look, or only in specific locations?

    Michael replies: I see it myself in a lot of places, and hear from testers where I haven’t been.

    This is definitely my experience, but I am located in Central Europe and I thought of this as something specific to this region. The truth is that we are not IT giants or innovators, but rather cheap labor for companies of the West. Most testing jobs here are in contracting companies (which outsource you to a Western company) or in local branches of Western companies. Either way, testers are detached from business people, clients, and decision-makers. There are clear expectations from higher-ups, and there is little opportunity to talk them out of bad testing decisions (when they can simply find another vendor who will comply).

    It’s not a problem exclusive to Central Europe. And it’s not a problem in every organization. It is, however, a worldwide problem that testers—individually and in communities—need to address. We’re all in this together.

  12. HI Testing aka Human Intelligence Testing?

    Michael replies: I don’t know of testing (evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, and plenty of other stuff—including using tools to do checking) that doesn’t have human intelligence at the centre.

  13. Right, it’s not specific enough. Also, HI Testing could connote something like IQ testing, which I was thinking about last night on my walk home. But it’s hard to avoid saying “manual testing” without having another term for it. It takes a lot of verbiage to explain why we’re not using the term. I don’t like the term, and our team here doesn’t like it either. But we still use it because clients understand it, etc.

    Michael replies: How about saying, simply, testing? People don’t feel obliged to point out that they’re doing manual programming, even though programmers sometimes use tools, sometimes type, and sometimes think (and often all three at once). No one feels obliged to point out that they’re doing manual cooking, even though sometimes they’re using food processors, sometimes using knives, and sometimes pondering whether to use the onions or the shallots—or all three at once, using the blade of the knife to dump onions (or shallots?) into the food processor.

    There’s something very similar to this in Harry Collins’ work. He points out that explicit knowledge is “parasitic” on tacit knowledge, since knowledge can’t be made explicit until there’s tacit knowledge there to explicate. But no one noticed that such a thing as tacit knowledge existed until people realized that explicit knowledge existed, so the idea of tacit knowledge is parasitic on the idea of explicit knowledge!

    Knowledge is knowledge, whether it has been made explicit or has not. Testing is testing, whether it employs checks or other uses of tools, or does not.

  14. Yes, I was listening to a Bolton talk after my last comment and he mentioned that automation *extends* testing. Perhaps explicitly stating that is a simple way to frame automation to stakeholders. The analogy to development being manual, using tools… makes a lot of sense and is likely a good model for presenting this to stakeholders. Thank you for your comments.

  15. I use the term “Manual testing” to indicate that the tests are not to be done using automated checks.

    Michael replies: Isn’t that like saying “manual cooking” to indicate that you are not going to use a blender? Like saying “manual auto repair” because you’re not going to use a tire balancing machine?

    If I want a function to be tested without involving automated checks, how do I communicate that? I will have to say “test that manually”.

    You don’t have to do anything. Just like any professional, you have options for how you express yourself. Here’s how I communicate that. I don’t see automated checking as testing, but as just one tactic that is embedded in testing, one out of many. The focus on automated checking is like saying that there are two kinds of trees: Christmas trees and all the other kinds. Your options for testing a function with or without automated checks might include inducing variation in the data; stress testing; flow testing; variation of the sequence of actions; focus on specific risks; testing according to vague or specific scenarios that include that function; state modeling the function; visualizing outputs… Some of these approaches can be greatly assisted by tools, others less so. To describe everything other than simple automated functional checks as “manual” seems to leave out a lot, don’t you think?

    If I just tell a tester to test a function, they have the option to run automated checks and/or interact with the system like an end user. But if I am explicitly looking for feedback based on human interaction with the software, I have to use the word “manually” (for lack of a better alternative).

    There are lots of ways to interact with the software like a tester, and only one of those is to interact with the software like a user. Why not say “scenario testing” or “user testing” if that’s what you mean? Have a look at this, and tell me whether “manual” and “automated” testing really help in describing the things that I was doing using tools.

