Today's article continues a series on Dev Tester. The series covers a few replies to the following question: "How do you know when test automation isn't working?"

The Ministry of Testing posed this question on Twitter, and it struck a chord with the testing community. I believe it resonated because we've all had moments where our efforts didn't pan out as we hoped. We've all been there before.

The following article covers a topic that might initially sound counterintuitive. However, it's an area of testing that's easy to miss. It can creep up on us and obliterate our best intentions before we realize what's happening.


All green, all the time


Elizabeth Fiennes (@ElizaFx) mentioned something that might catch testers off guard:

Wait a minute. A test suite that's all green means all tests are passing. How can that possibly be a bad thing?

We all want our tests to pass. But for a truly healthy test environment, you need a little bit of failure on occasion. To quote a famous (and fictitious) boxer, "the world ain't all sunshine and rainbows."

Trusting your tests doesn't mean it's all green


The reason you don't want your automated test suite passing all the time boils down to one word: trust.

If your automated tests always pass, it's difficult to trust that they're doing what you expect. Part of the role testing plays is to alert you to potential problems. Tests aren't there to pass and move along in peaceful harmony all the time. You want your tests to catch issues.

The only time your tests should never fail is when the code never changes. If you have an application that's never updated, then you can expect your tests never to fail. But that's rarely the case. And even if it were, testing a dormant application provides little value; you're just wasting time.

Most applications are constantly evolving. The app receives new features, or existing features get improved. It's a work in progress, always under maintenance. If existing functionality changes and the test suite remains unmodified, you should expect some failures. If the tests never fail, it's a surefire sign that they can't be fully trusted.

Focus on high-risk areas first


One of the most common mistakes teams make in their automated test suites is covering areas of low change or low risk. They write automated tests that are too simple, usually because they want to rack up more tests. That places the focus on quantity instead of quality.

One example of a low-change test is checking a simple, static web page. The page might be critical enough that you want to confirm it's working. But you shouldn't spend time verifying content on these pages, since static pages are likely to remain unchanged. Instead of writing a test case for this, set up an uptime monitoring service.

Another example is a view containing a form with a few fields. Again, it might be an essential part of the application. If the form requires special functionality or logic to submit, it's an excellent candidate for thorough testing. But usually, these forms take some basic input for submission. An end-to-end test case could verify this, but lighter unit tests should suffice.
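As a minimal sketch of that idea, the unit test below exercises basic form input directly, with no browser involved. The validate_contact_form helper is a hypothetical stand-in for whatever validation logic your application actually uses:

```python
# A lightweight unit test for basic form input, assuming a hypothetical
# validate_contact_form(data) helper that returns a list of error
# messages (empty when the input is acceptable).

def validate_contact_form(data: dict) -> list[str]:
    """Hypothetical validator standing in for your app's form logic."""
    errors = []
    if not data.get("name"):
        errors.append("name is required")
    if "@" not in data.get("email", ""):
        errors.append("email is invalid")
    return errors

def test_valid_input_produces_no_errors():
    data = {"name": "Ada", "email": "ada@example.com"}
    assert validate_contact_form(data) == []

def test_missing_name_is_rejected():
    assert "name is required" in validate_contact_form({"email": "a@b.co"})
```

Tests like these run in milliseconds, so you can afford plenty of them without slowing down the rest of the suite.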

Neither example means you shouldn't test areas like these at all. The problem is when teams base their entire testing strategy on these low-value tests.

There are a few reasons why teams bypass higher-value tests when building their automated test suite. Often, it's because those areas are complex or continuously changing. But that's precisely what your team should test first. These areas break first and need the most attention.

In my experience, I've noticed a correlation between complexity, change, and test coverage in any application. The more complex a section is, and the more it's modified, the less test coverage exists. It never fails.

Focusing on these areas of high risk and change increases the stability of your project. It also increases the number of failures in your test suite, but that's not a bad thing. Those failures are what keep your application running smoothly.

How to determine if your tests are trustworthy


If you suspect that your tests are not doing the right thing, validate or disprove that theory. There are a few things you can try to smoke out low-value tests.

Change the data inputs in your tests


A quick and easy way to determine the validity of your tests is to change the data you're using in the test runs. For example, you can:

  • Fill a form field that expects an email address with a string of numbers.
  • Submit a form with empty values.
  • Delete a few records from the seed data inserted in the database before test execution.

Aim for changes that you expect to produce failing tests. If the tests still pass, you've either uncovered a bug or found a pointless test case.
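Here's a minimal pytest-style sketch of the first two bullets. The submit_signup function is a hypothetical stand-in for however your tests drive the real form:

```python
import re
import pytest

def submit_signup(email: str) -> bool:
    """Hypothetical stand-in for driving the real signup form.
    Returns True only when the application accepts the input."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None

# The original test fed the form a valid address and asserted success.
# Temporarily swap the data for input the form should reject; the test
# should now fail. If it still passes, you've either uncovered a
# validation bug or found a test that isn't checking anything.
@pytest.mark.parametrize("email", [
    # "tester@example.com",  # original, valid input
    "1234567890",            # numbers where an address belongs
    "",                      # an empty submission
])
def test_signup_accepts_input(email):
    assert submit_signup(email) is True
```

The failures here are the point: once you've confirmed the test goes red with bad data, restore the original input and move on to the next one.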

Introduce deliberate change to the application under test


If you have access to the application under test, change things a bit. A few good ways to smoke out issues in your test suite are to:

  • Add new fields to an existing form, or remove fields from the same form.
  • Remove entire sections of the application.
  • Introduce bugs or raise errors intentionally during tested flows.

Again, if you run your tests and they're all green, something's wrong in your test suite. Go through the changes you made and what you expected to fail, and start cleaning things up.
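The third bullet is often the cheapest place to start. Below is a minimal sketch of planting an intentional failure behind an environment flag; process_checkout and the flag name are hypothetical stand-ins for a flow your suite supposedly covers:

```python
import os

def process_checkout(cart: list[dict]) -> dict:
    """Hypothetical checkout flow that the test suite claims to cover."""
    # Deliberate, temporary sabotage: run the suite once with
    # INJECT_CHECKOUT_FAILURE=1. Every test that exercises checkout
    # should fail; any that stay green aren't actually touching this
    # code path. Remove the flag once the audit is done.
    if os.environ.get("INJECT_CHECKOUT_FAILURE") == "1":
        raise RuntimeError("intentional failure for test suite audit")

    total = sum(item["price"] * item["quantity"] for item in cart)
    return {"status": "ok", "total": total}
```

Run the suite with the flag set (for example, INJECT_CHECKOUT_FAILURE=1 pytest), note which checkout tests stay green, and investigate those first.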

Implement elements of chaos engineering


Chaos engineering is the practice of deliberately subjecting applications, usually in production, to unstable and unexpected conditions to verify they hold up. While this kind of testing is most beneficial for large-scale systems, you can take a few of its elements and apply them to applications of any size.

Depending on your project and test environment, you can take a few approaches to introduce unexpected scenarios:

  • Temporarily block access to third-party services that your application relies on to work properly.
  • Throttle your network connectivity.
  • Introduce mutation testing elements to both the application under test and the test cases.

If your tests remain green after introducing some chaos engineering principles, question their effectiveness. Performing these experiments takes considerable time and effort from the team, but they go a long way toward building trust in your test suite.
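As a small-scale sketch of the first bullet, the pytest test below simulates a third-party outage by monkeypatching the network call. The fetch_exchange_rate function, the API URL, and the expected behavior are hypothetical stand-ins for your own code, and the example assumes the requests library is available:

```python
import pytest
import requests

def fetch_exchange_rate(currency: str) -> float:
    """Hypothetical code under test that depends on a third-party API."""
    resp = requests.get(f"https://api.example.com/rates/{currency}", timeout=5)
    resp.raise_for_status()
    return resp.json()["rate"]

def test_handles_third_party_outage(monkeypatch):
    # Simulate the external service being unreachable.
    def unreachable(*args, **kwargs):
        raise requests.exceptions.ConnectionError("simulated outage")

    monkeypatch.setattr(requests, "get", unreachable)

    # The application should fail loudly (or degrade gracefully,
    # depending on your design) rather than silently hand back stale
    # or bogus data. If this test passes without the patched call
    # ever being hit, the suite isn't exercising the integration.
    with pytest.raises(requests.exceptions.ConnectionError):
        fetch_exchange_rate("EUR")
```

A variation on the same idea covers the throttling bullet: patch the call with a version that sleeps past your timeout and confirm the application times out cleanly instead of hanging.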

Summary: Failure is the only path to success


If your test suite always shows green, passing results, be quick to suspect its usefulness. Chances are the tests aren't doing a great job of keeping your application healthy. Take a good, hard look at your tests. Are you fully confident in the value they provide?

If you don't have full trust in your test suite, test your assumptions. Introduce deliberate and unexpected changes that should trigger failures. If they don't, eliminate all useless and low-value tests. Replace them with tests covering areas of high risk and high value.

Failure isn't a bad thing. You want your tests to fail when there's an unexpected outcome. That's the only way to ensure your application stays stable over the long run.

Have you ever suspected your test suite is not checking the right things? Did your team do something to fix the issue? Share your story in the comments below!


Elizabeth Fiennes has written many wonderful articles about testing and automation, such as The A word. The BAD A word. You can read more of her writing at https://blog.scottlogic.com/efiennes/.

Photo credit: Tom Coe on Unsplash