Software development tends to follow a predictable path of chaos.
Full of optimism, your organization embarks on a software project. You put together requirements, start speccing out some design and architecture concepts, and then you get going. For a while, everything seems on track --- ahead of schedule, even.
But then things get weird. You don't get access to stuff you need. And then when you do, it turns out not to be compatible with what you've already done. Rework happens. Before you know it, you're a bit behind schedule.
In the end, you wind up getting to market several months after you planned, having to hustle to even go that quickly. You regretfully cut a few corners, mindful of the fact that you're incurring some technical debt. (Of course, you'll totally take care of that later.) Finally, you wince, ship to production, and watch as defects trickle in --- and then stream in.
Hitting the deadline was supposed to be a relief. But now it just ushers in a period of seemingly endless break/fix work running in parallel to new development. As it turns out, you're never going to pay down that technical debt.
Why does paying off technical debt seem to be the exception, rather than the rule, in software? Why, after all these years, do projects still inevitably seem to work out this way?
Let's Talk About Your Testing Strategy, Shall We?
In the most general sense, we'd expect testing to take care of this. Software developers write the code and then hand it over to the testers, who test in batches as small as they can manage. They create test reports, provide feedback, and kick issues back to the dev team. Seems reasonable enough.
And, in fact, it is reasonable enough. It helps, and this sort of testing is important.
But, alone, it doesn't scale well enough to keep up with modern software practice. Back when software projects would span years and freeze requirements for months, this was enough. But today, companies deliver to market in months, weeks, or even days. Some even do it all day, every day. And manual testing from QA alone simply cannot keep pace. Modern software moves too fast for it to be equal to the task.
So you need comprehensive, soup-to-nuts test automation. And you need it badly.
Automating as much as you can significantly tightens the feedback loop. When developers make mistakes, they can see them within seconds, rather than waiting a week or two for a defect report in JIRA. That dramatically reduces the time required to fix the problem; the offending code is still fresh in their minds.
This soup-to-nuts automation takes on two main forms: unit testing and broader test automation. Let's take a look at each of those in detail.
You Need to Unit Test More
First, let's consider the unit test. I see a lot of misconceptions about this subject in my travels, so let's be clear on a few points:
- Unit tests are tiny pieces of code, each generally exercising a single function or method, that developers write, execute, and maintain.
- These tests exercise the code at such a granular level that it would not usually be interesting or understandable to non-developers.
- It's not unit testing to code a feature and then run the software to see if it did what you wanted it to. That's just basic common sense.
- The practice of unit testing mainly prevents regression defects and gives software developers a sense of confidence in changing the code without breaking things.
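To make those points concrete, here's a minimal sketch of what a unit test looks like, using Python's built-in unittest module. The calculate_discount function is hypothetical, standing in for whatever tiny unit of production code you happen to be testing:

```python
import unittest

def calculate_discount(price_cents, percent):
    # Hypothetical production code: apply a percentage discount to a price in cents.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

class CalculateDiscountTest(unittest.TestCase):
    # Each test exercises one small, granular behavior of the function --
    # the level of detail that typically interests developers, not the business.
    def test_applies_percentage_discount(self):
        self.assertEqual(calculate_discount(10000, 20), 8000)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(calculate_discount(5999, 0), 5999)

    def test_rejects_nonsensical_percentages(self):
        with self.assertRaises(ValueError):
            calculate_discount(10000, 150)

if __name__ == "__main__":
    # exit=False keeps the test runner from terminating the interpreter.
    unittest.main(exit=False)
```

Notice how granular this is: each test pins down one behavior of one function. That granularity is exactly what makes the suite act like a check engine light when a later change breaks something.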
Having a robust unit testing practice means that the developers incorporate writing unit tests into their normal process. It's not some kind of negotiable add-on that you descope when you get behind. You don't tell them to stop writing unit tests any more than you tell them to stop compiling their code or debugging. It's just part of the deal.
More holistically, a robust unit testing practice gives the developers a conceptual safety net. Finishing a feature comes to mean something new and more powerful: the feature is done, and you have the equivalent of your car's check engine light to tell you if something later goes wrong with it.
This tightens the feedback loop on many potential defects from weeks to seconds, letting you get to market more quickly.
And You Need More Test Automation in General
If you hadn't previously heard of the concept, what I've just said about unit testing probably seems...abstract --- arcane, even. Little programs that developers write to test a bigger program? Or something?
You'd probably have assumed that testing of any sort meant taking what the QA folks do and automating it. Don't worry. That's also on the table. In fact, pretty much anything besides the developer-specific practice of unit testing falls under the heading of test automation, assuming that, well, you're automating it.
Let's consider some of the types of test automation that you might take advantage of:
- Integration and end-to-end testing. This is similar to unit testing, but it involves assembling broader components of the code (broader than units) and testing them in conjunction.
- Acceptance testing. Acceptance tests phrase the software's expected behavior in the language of the business, and automating them means automatically checking that it actually behaves that way.
- Performance testing. Here, you automate checks to see that the software behaves acceptably vis-a-vis so-called "non-functional requirements." Does it crash after two hours, or can it run indefinitely? Does it run well with many users logged in?
- Stress/load testing. Automate putting stress on your software to find its breaking points and to confirm that it handles those breaking points somewhat gracefully.
These represent standard forms of test automation, but the category can really include any sort of test that you can fathom and then automate.
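As an illustration of the first category, here's a minimal sketch of an integration test. The OrderService and InMemoryOrderRepository classes are hypothetical; the point is that, unlike a unit test, this test assembles broader components and exercises them in conjunction:

```python
import unittest

class InMemoryOrderRepository:
    # Hypothetical storage component; a real system might back this with a database.
    def __init__(self):
        self._orders = {}

    def save(self, order_id, total_cents):
        self._orders[order_id] = total_cents

    def get(self, order_id):
        return self._orders[order_id]

class OrderService:
    # Hypothetical business component that depends on the repository.
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, items):
        # items is a list of (name, price_in_cents) pairs.
        total_cents = sum(price for _, price in items)
        self._repository.save(order_id, total_cents)
        return total_cents

class OrderIntegrationTest(unittest.TestCase):
    # Wires the two components together and verifies their combined behavior.
    def test_placed_order_is_retrievable_with_correct_total(self):
        repo = InMemoryOrderRepository()
        service = OrderService(repo)
        service.place_order("A-1", [("widget", 999), ("gadget", 2000)])
        self.assertEqual(repo.get("A-1"), 2999)

if __name__ == "__main__":
    # exit=False keeps the test runner from terminating the interpreter.
    unittest.main(exit=False)
```

The mechanics look a lot like unit testing; what changes is the scope. Instead of pinning down one function in isolation, you confirm that the assembled pieces cooperate correctly.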
As with unit testing, this tightens the feedback loop, while also casting a wider net. Generally speaking, developers run unit tests constantly as they work. These other sorts of tests happen less frequently, such as during the team's hourly or daily build. That feedback isn't instantaneous, but it's still dramatically faster than waiting on a manual test cycle.
So What Does QA Do, Then?
Unit testing and test automation significantly speed up the feedback loop on defect detection. And that's a game changer.
But don't go contemplating a future without QA people just yet. There's still most definitely a place for them. They just occupy a different, and frankly more strategic, role. Freed from brainlessly executing test scripts and composing endless defect reports, they can focus on a couple of important things:
- Exploratory testing, wherein they use their knowledge of the domain and testing strategies to find the sorts of oddball issues that automation won't detect. For instance, they'll be thinking things like, "Well, I know that users of this medical software suffer from neurological conditions, so I should probably try clicking in rapid succession on random screens to see if we handle that well."
- Managing all of the test automation. The fact that it's all automated doesn't magically reduce it to zero operational work. You need intelligent folks to run a lot of it, track the results, make tweaks, and start the right conversations.
The Sanity of a World With a Comprehensive Automated Testing Strategy
Will introducing unit tests and test automation save you from ever encountering setbacks, defects, or crunches? No, of course not. It's not a panacea.
But it will give you a significant advantage, saving you time, money, and stress. Think of the relative merits of, say, writing a bunch of employee paychecks by hand versus having payroll software that does direct deposits. The issues won't go away entirely, but your life will go from an endless high-stress grind to one where you can focus on things other than hand-writing checks and trying not to make mistakes out of boredom.
That's what comprehensive test automation is really about. You automate the drudgery of double-checking that you're not breaking things. That way, you can focus on business outcomes, building stuff, and hitting your deadlines.