Heads-up: For the next few weeks, Dev Tester articles will be slightly different from my regular posts while I focus on End-to-End Testing with TestCafe book updates and other projects. These shorter articles are intended to be a bit more thought-provoking and opinionated than usual. I'll return to more detailed and longer posts soon!
One of the most common complaints I hear about test automation is slow test runs. Besides flaky tests, I'm sure you can find someone in software development or testing who thinks their test suite takes a little too long to execute. That person might even be you.
It's an understandable source of frustration. No one wants to wait around for their test suite to finish, only to see that a test failed along the way and the whole run has to start over. If you've had to go through this issue, or still go through it regularly, you know how painful the cycle can become. It feels like a never-ending slog.
It's a tricky issue to deal with, though, because slow tests don't necessarily mean that your tests aren't working. If your tests execute successfully with no failures and keep your application quality high, it's still a working, functional test suite, technically speaking.
However, slowness is also one of the main reasons why teams abandon their testing efforts. Like flakiness, slow tests can become a "broken window". For instance, a team doesn't pay much attention to a couple of slow tests, so they let them slide into their codebase until the suite becomes so sluggish that no one wants to deal with it anymore. The suite creates more work than the value it provides, so no one bothers to correct the issue.
Recently, I was talking with a client about their test automation suite, and the conversation turned toward test execution speed. One of the questions he asked was, "What do you consider to be a slow test suite?" In the moment, I didn't have a straight answer. How could anyone answer this question without knowing plenty about that team's particular situation?
A tale of two test suites
Despite not having an immediate response, the question did get me thinking, particularly about what I've worked on in the past. I told my client a story about two projects I helped with at other organizations:
When I joined one project a few years ago, the development and testing team already had a robust automated test suite in place for their application. The suite contained hundreds of tests, ranging from unit tests that ran in milliseconds to complex end-to-end tests that took a few minutes to execute.
In total, a full test run took about an hour to complete. When I asked the project lead about the test suite, she was delighted with its performance and benefits. "Without it, we wouldn't be able to deliver almost anything in a decent amount of time," she said in our first meeting.
On the second project, a few years later, I joined just as their QA team began adding end-to-end tests from scratch. The project was having severe quality issues, and the testing team struggled to get through regression testing, so the team wanted more automation to help bring manual testing times down. After a few weeks, the team had built an extensive test suite of about 40 tests covering their application's most important sections.
The full end-to-end test suite took about 20 minutes to run. The improvements were immediately apparent - bugs were caught and fixed early, fewer bugs slid through the cracks, and the testing team improved their output by having more time to focus on exploratory testing. But there was a huge problem - the development team hated the test suite and wanted to stop using it. One developer went so far as to say that he thought the work done was a waste of time.
Why did the team behind Project #2, with the shorter test run time, despise their test suite so much while the other team loved theirs despite it taking three times longer to execute? After much thought, I can sum it up in one word: value.
It's all about value, all the time
In both projects, each team valued entirely different things in their workflow. Each organization had its own particular needs and desires to fulfill, and that was reflected in how they felt about their tests.
The team for Project #1 didn't deploy their code to production frequently, despite having a high-quality project. They moved at a slower pace than most modern, agile teams I've worked with. The primary purpose of running their full test suite was to check for regressions sporadically, executing them a few times a day. It didn't matter to them that the test suite took an hour, as long as the tests were stable and provided the results they needed.
The team for Project #2 was entirely different. They wanted to migrate their workflow towards continuous deployment, where most code updates would get pushed to their customers after review. Automation is a critical component of continuous deployment, since the team could push to production multiple times a day. When the feedback loop between a code commit and its test results grows too long, the tests become a blocker.
One team wanted their tests to be stable and robust, no matter the time it took. The other team wanted the tests to run as fast as possible. Each group placed a different value on their test suite.
So, how slow is too slow?
After thinking about these experiences from my career, the question I was asked - "What do you consider to be a slow test suite?" - now had a clear answer: your tests are slow when your team begins complaining that they're too slow.
While that response is mostly tongue-in-cheek, I believe there's a lot of truth behind that statement. The amount of time it takes to execute your tests should only matter when it becomes a hindrance to your needs. You and your team need to determine the right balance of value before deciding how slow is too slow.
Do you value fast feedback and want your tests to run after each code push? You should focus on keeping execution time low. Do you value test coverage and robustness? Then you can spend less time on making your test suite fast. It doesn't mean you should only value one or the other. You can have more than one value, as long as you're willing to put in the effort not to compromise any of them.
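One practical way to hold more than one value at once is to split a suite into a fast "smoke" subset that runs on every push and a full run that happens less often. Here's a minimal sketch of that trade-off; the test names and durations are invented for illustration, not taken from either project in the story:

```python
# Hypothetical suite: test names mapped to rough serial run times in seconds.
SUITE = {
    "test_login": 2,
    "test_checkout": 45,
    "test_search": 30,
    "test_profile_update": 25,
    "test_signup": 40,
}

# The team picks its highest-value paths for a per-push smoke run;
# everything still runs in the full (e.g. nightly) regression pass.
SMOKE = {"test_login", "test_checkout"}

def run_time(tests):
    """Total serial execution time, in seconds, for a set of test names."""
    return sum(SUITE[name] for name in tests)

full = run_time(SUITE)   # full regression run: 142 seconds
smoke = run_time(SMOKE)  # per-push feedback loop: 47 seconds
```

The point isn't the arithmetic - it's that the same tests can serve two different values at once, as long as the team decides deliberately which subset guards the fast feedback loop.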
Everyone's needs are slightly different, so you can't take my advice, or anyone else's, at face value. As long as your tests serve you and not the other way around, it doesn't matter how long they take.
What value does your team need from your test suite?