One of the reasons that software development and testing are screwed up is that people often name things carelessly.
Jerry Weinberg was fond of pointing out that “floating point” is the kind of math in which the decimal point stays in the same place, whereas in “fixed point”, the decimal point moves around. People talk about “serverless computing”, when they really mean “computing using someone else’s servers”. “No-code testing tools”… well, there’s always code; it’s just code that you didn’t write.
Here’s a term that’s really poorly considered: “non-functional requirements”.
Imagine referring to almost everything in the world’s forests as “non-Christmas trees”. Consider talking about almost everything we could eat as “non-broccoli foods”.
How is it that the requirements that matter to people have come to be lumped together and, to a degree, linguistically dismissed as “non-functional requirements”? I have a theory.
Part of the problem is due to the social structures of software development. Another part is that we often treat functions, features, and requirements as though they were the same thing, and that can be confusing. Let’s start by trying to untangle that.
People use software products to accomplish tasks, to communicate, to store their data, to learn, to be entertained, to help them solve problems; to get things they need or desire. We tend to call those things people’s requirements.
People’s requirements can be framed through a lens of quality criteria. In Rapid Software Testing, our non-exhaustive checklist of users’ quality criteria includes capability, reliability, usability, charisma, security, scalability, compatibility, performance, and installability. There are quality criteria that might be important to the business and the development organization too, including supportability, testability, maintainability, portability, and localizability. For each of these families of categories, there are subcategories; you can find them here.
In every product, there are features, aspects of the product that enable the users to fulfill their requirements, or that help to defend users from loss or harm or trouble of various kinds. Quality criteria apply in varying degrees to people’s perceptions of each feature.
Functions are aspects of the software that enable the features. When something happens or changes in the product, it’s because of functions in some code somewhere. Move your pointing device: some functions process and track the movement, and other functions draw the pointer in a new position on the screen. Click on the “Save” button, and a function in the application framework calls a function in the application, which in turn collects your data, and then calls yet another function supplied by the operating system to write a new version of the file to the disk.
Whatever features the product offers, we need functions to make them happen. The developer’s overarching job is to bring the necessary features into existence. To do that, developers create and maintain functions to process input or events. Those functions invoke other functions, or are invoked by other functions.
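The chain described above can be sketched in a few lines of code. This is a hedged illustration only, not anyone’s real implementation: every function and name here is hypothetical, standing in for the framework handler, the application function, and the OS-supplied file writer mentioned in the “Save” example.

```python
import os
import tempfile

# A hypothetical "Save" feature realized as a chain of functions:
# framework handler -> application function -> OS-level function.

def on_save_clicked(document, path):
    """Framework-level handler: the entry point for the Save feature."""
    data = collect_document_data(document)  # one function gathers the data...
    write_to_disk(path, data)               # ...and invokes another to persist it

def collect_document_data(document):
    """Application-level function: serializes the user's work."""
    return "\n".join(document["lines"])

def write_to_disk(path, data):
    """Stands in for the file-writing function supplied by the OS layer."""
    with open(path, "w") as f:
        f.write(data)

# Usage: the user "clicks Save", and the function chain produces the file.
doc = {"lines": ["hello", "world"]}
save_path = os.path.join(tempfile.gettempdir(), "report.txt")
on_save_clicked(doc, save_path)
```

No user ever sees any of these functions; the user sees only the feature — a saved file.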
The social organization of software development tends — quite reasonably — to put the developers in the spotlight. You can see it right there in the name: software development, and in the gradual replacement over time of “programmer” with “developer”. We tend to talk about functional requirements when we’re trying to express things that are important to developers: the need for functions, and for functions to do certain things in certain ways.
The functions are indeed important, because they enable everything that happens or changes about the product. Accordingly, it’s a really good idea to check output from the functions as we build the product. Much of the time, we can check functional output mechanistically, algorithmically, with code to exercise the functions and compare their output to some specified and presumably desirable result.
It tends to be a really, really good idea for developers to do that. Developers are skilled at writing code, both production and check code; they have notions of what they want the production code to do; and creating and maintaining their own checks as they go provides very fast feedback. That’s all to the good.
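A minimal sketch of what such a developer-side check might look like, assuming a hypothetical production function (`discount` is invented here for illustration; it appears nowhere in the article):

```python
# Mechanistic, algorithmic checking: exercise a function and compare its
# output to specified, presumably desirable results.

def discount(price, percent):
    """Hypothetical production function: reduce price by a percentage,
    rounded to cents."""
    return round(price * (1 - percent / 100), 2)

def check_discount():
    # Each assertion compares actual output against a specified result.
    assert discount(100.0, 10) == 90.0   # ordinary case
    assert discount(19.99, 0) == 19.99   # boundary: no discount
    assert discount(50.0, 100) == 0.0    # boundary: full discount

check_discount()
```

Checks like these run in milliseconds on every build, which is exactly the fast feedback that makes them worth maintaining alongside the production code.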
Functional correctness is obviously important, because if the functions don’t produce the right output, or don’t make the right changes, Bad Things can happen, or Good Things can fail to happen, such that the users’ requirements won’t be met. But there’s a subtle asymmetry here: functional correctness doesn’t mean that the users’ requirements will be met.
While functions tend to be the principal focus of developers’ work, for the most part, the functions themselves are invisible to people using the product. Those people aren’t really interested in the functions as such; their interest is in the features, the things that the functions deliver. Users want the software to be capable of helping them get their work done; to foster interactions with other people; to protect their data and their privacy; and so forth — all the wonderful things that software can do for them. When it comes to that stuff, functions are only a means to an end.
That is: our customers don’t think in terms of functional or non-functional requirements; they think in terms of requirements. Since we’re in business to provide valuable software and services to people, it seems to me that we should think that way too.
For that reason, testing can’t be limited to checking the output from functions. We must investigate the risk that requirements — expressed in terms of features and quality criteria — might be unfulfilled or unsatisfactory. To do that, we must get experience with the software, explore the hidden corners of it, and experiment with it to find problems. And we must do so to a degree of depth warranted by the risk.
That doesn’t just mean testing like users; it means testing like testers. It includes helping the team to notice quality criteria that are missing or diminished in some sense, every step of the way. It includes performing prospective testing during design and refinement meetings — exercising the product in our minds before we have the product available — to help developers and designers nip problems in the bud. It involves using rich, varied, representative, or pathological data in our experiments to test for reliability. It involves probing the product for security holes, and applying load and stress to shake out performance problems.
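To make the “pathological data” idea above concrete, here is a hedged sketch: feed a hypothetical input-handling function (invented for this example) values well outside the happy path, and check that it degrades gracefully rather than crashing or silently accepting garbage.

```python
# Probing a hypothetical input-handler with pathological data.

def parse_quantity(text):
    """Hypothetical production function: parse a user-entered quantity,
    returning None for anything invalid or out of range."""
    try:
        value = int(text.strip())
    except (ValueError, AttributeError):
        return None                      # reject garbage instead of raising
    return value if 0 <= value <= 1_000_000 else None

# Empty strings, negatives, scientific notation, words, absurdly long
# digit runs, and a wrong type entirely.
pathological_inputs = ["", "   ", "-1", "1e9", "three", "9" * 400, None]
results = [parse_quantity(t) for t in pathological_inputs]
```

A run like this doesn’t prove the function is reliable; it’s one small experiment among many that might shake a problem loose.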
And, of course, testing includes testing from the users’ perspectives. That requires immersion in the users’ worlds. It requires interactive, experiential testing to find out when the product is hard to learn, hard to use, or has accessibility problems. That is, it requires seeing what it’s like to use the damned thing — to discover the kinds of problems that functional output checks can easily miss.
Notice the subtle dismissal when we express real things that real people want from the product in terms of “non-functional” requirements. Let’s not do that.
If we need to refer to specific dimensions of quality, we can refer to them directly (“capability”, say). When we want to refer to specific families of requirements, we can put an adjective in front (“performance requirements”, or “accessibility requirements”). But when we’re speaking more generally, let’s avoid the “non-functional” label. Let’s simply, directly, and clearly call them what they are: requirements.
Want to see a quick video on this? It’s here.