We all need to come up with ways to measure our work. Unless you work in a vacuum, metrics are the window to your world. You'll need to show off your hard work, whether it's to your boss, to your peers, or for your own self-improvement. Whatever your goals, you'll need to keep your finger on the pulse of your performance.

When the time comes to provide a summarized view of your work, it's tough to choose what to evaluate. Do you come up with cold, hard facts and actual numbers from your test suite? Do you offer more general information from the team that stats can't provide? As with any metric in a business setting, it's challenging to know what's valuable to you and your team and what's pointless to everyone.

What I've observed from testers is that they tend to seek out whatever is simplest to demonstrate that they're doing their job well. However, the facts they deem indicators of success frequently lack any significant meaning behind them. The metrics might paint an impressive picture of performance on the surface. Dig deeper, beyond the shiny exterior, and there's a good chance they conceal what matters the most.

Often, showing the wrong metrics is done unintentionally. We all seek validation in whatever way, shape, or form is available to us. If we need to scrape together numbers to show that we're doing well, we'll grab whatever makes us look good at the moment. We do this without malicious intent, and that's what makes seeking feel-good numbers - so-called "vanity metrics" - so dangerous.

If your selected metrics are pure fluff with the primary purpose of making someone look good, they can lead to disastrous results in the long run. At best, they'll be ignored and nothing will come of it besides wasted time. At worst, they can mislead you and your team into thinking that things are great when there's a ticking time bomb waiting to go off underneath the fluff. And it will go off when you least expect it.

There are plenty of metrics that testers can provide which don't offer any actionable value to your team. Here are three of the most common metrics that test automation teams offer that you should avoid, along with better alternatives.

Useless Metric #1: Don't measure the number of test cases your test suite has


Typically, the first metric testing teams reach for is the total number of test cases they've automated. It's understandable - the number is easy to measure and provide, and it gives a sense of how much work the team has put in. It's easy to run a count, say something like "The team has a total of 275 test cases, up 10% from two weeks ago", and call it a day. More tests mean people are doing their job, right?

The problem is that these numbers are particularly harmful because they're a big fat lie and can deceive the team. The number of test cases is a horrible metric to track because it doesn't have any real meat behind it. When you report how many more tests your team created, all you're saying is that your team is writing more tests. That doesn't mean your team is testing the right things or that the tests are providing value. Just as having more lines of code doesn't mean better code, the number of tests you have doesn't indicate a valuable test suite.

Instead, a better metric to provide is how much time your test suite is saving the entire team. By time savings, I mean how your project's test suite is making everyone's lives easier. Is the development team able to release new features faster with fewer defects? Are testers spending more time on higher-value tasks like exploratory testing instead of doing repetitive work? Are defects being caught and fixed quicker than before? Your team might not be measuring these numbers directly, but they can be measured in some form depending on your team's workflow. These numbers are juicier pieces of data with actionable insight behind them, so it's best to start keeping track of them.

Be careful about how you measure these metrics. For example, measuring time saved just by comparing how much you've reduced manual testing time is still a useless metric. It still doesn't tell you whether the team is testing the right things. Seek numbers that prove the time and money invested in testing is returning significant value, not merely savings of any kind.
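For instance, one of the questions above - are defects being caught and fixed quicker than before? - can often be answered with data the team already has. Here's a minimal sketch, assuming a hypothetical defects.csv export from your issue tracker with detected_at and fixed_at timestamps plus a period column marking whether each defect predates the automation effort; your tracker and field names will differ.

```python
# Sketch: compare how quickly defects were fixed before and after
# introducing test automation. Assumes a hypothetical defects.csv with
# columns: detected_at, fixed_at (ISO timestamps), period ("before"/"after").
import csv
from datetime import datetime
from statistics import median

def hours_to_fix(row):
    detected = datetime.fromisoformat(row["detected_at"])
    fixed = datetime.fromisoformat(row["fixed_at"])
    return (fixed - detected).total_seconds() / 3600

fix_times = {"before": [], "after": []}
with open("defects.csv", newline="") as f:
    for row in csv.DictReader(f):
        fix_times[row["period"]].append(hours_to_fix(row))

for period in ("before", "after"):
    times = fix_times[period]
    if times:
        print(f"{period}: median time to fix = {median(times):.1f} hours "
              f"({len(times)} defects)")
```

A shrinking median fix time after automation is a number the team can act on; a raw count of test cases is not.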

Useless Metric #2: Don't measure the number of defects caught by tests


Another number that teams love to measure is how many problems their tests have uncovered. In theory, this isn't a horrible metric to have. After all, one of the principal purposes of testing is to discover defects early and often. As testers, we want to know if our tests are doing the job.

The issue with automated test suites is that they're not designed solely to seek out defects. While the tests should catch regressions, the main benefit of automation is what the term itself says - to automate. It's to free you and your team from repetitive tasks so you can do the things a robot can't do as well as you. It's not done solely to catch bugs.

By reporting the number of defects your test suite uncovered, you're not reinforcing the purpose of test automation. Worse yet, this metric can be incredibly discouraging to teams because it inadvertently places blame on someone on the team. If there are too many defects, either developers are singled out for poor quality code, or testers are blamed for building a flaky test suite. Whether one or both of those reasons are true, no one should be made to look bad because of metrics.

In place of reporting defects caught, measure the reliability of the test suite. This metric is easy to track. Is the test suite stable enough to serve the purpose of the application under test? Are there too many false positives or false negatives popping up? When there are defects, is the test suite helpful in uncovering the issues and helping the team fix them quickly? Are testers spending too much time fixing flaky tests? These questions help point out how to improve your test suite for the betterment of the project.
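Flakiness in particular is straightforward to quantify if you keep per-run results. Here's a minimal sketch, assuming a hypothetical results.csv with one row per test execution and columns commit, test_name, and outcome; adapt it to whatever your CI system actually exports.

```python
# Sketch: flag flaky tests - tests that both passed and failed on the same
# commit across reruns. Assumes a hypothetical results.csv with columns:
# commit, test_name, outcome ("pass" or "fail").
import csv
from collections import defaultdict

outcomes = defaultdict(set)  # (commit, test_name) -> set of observed outcomes
with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        outcomes[(row["commit"], row["test_name"])].add(row["outcome"])

flaky = {test for (_, test), seen in outcomes.items() if len(seen) > 1}
all_tests = {test for (_, test) in outcomes}
if all_tests:
    print(f"{len(flaky)} of {len(all_tests)} tests showed flaky behavior "
          f"({len(flaky) / len(all_tests):.0%})")
```

Tracking that percentage over time tells you whether the suite is becoming something the team can trust, which is far more useful than a defect tally.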

Useless Metric #3: Don't measure the percentage of test cases covered


If your team has a list of test cases to automate for a project, chances are you're keeping track of how many are fully automated. You might also track the percentage of the application under test that's covered by your tests. Some teams even use this number as the primary driver of their work - write tests to get as much coverage as possible.

Test coverage is nice to have and can serve to measure specific parts of your work. However, using the percentage as an indicator of the performance of your test suite is inaccurate. Just like the total number of test cases written, the amount of coverage doesn't show the full picture. It doesn't provide a clear indicator of whether or not the team is testing the right things.

The only purpose of looking at your test coverage should be to tell you which areas need better testing. Teams should never use test coverage as a measure of quality. Many teams mistakenly covet test coverage at the expense of a robust test suite. Chasing coverage for its own sake is a complete waste of time. You could have 100% test coverage and still have plenty of bugs sneak by unnoticed. That's why using test coverage as a metric is useless.

What you should be demonstrating instead is how effective your test cases are for your project. You can do this in several ways. You can measure how the test suite has affected the team's regression testing phase. You can point out how your test suite gave the development team confidence to implement a continuous delivery pipeline to deliver new features almost instantly. No matter what your project is, there's something that you can point out to show that your test suite is helping a higher purpose.

Avoid metrics for the sake of metrics


Many companies love gathering metrics because it feels like something tangible. Unfortunately, metrics are often gathered just for the sake of gathering metrics. Bosses and teams ask for them, yet nothing can be done with these metrics when collected with this intention.

The main thing to keep in mind when coming up with ways to measure your work is to avoid "vanity metrics." These are the metrics that exist solely to make you feel or look good but are entirely devoid of meaning. Spotting them can be tricky, since it's easy to think some measurements are useful when they're not. Don't despair - with a couple of questions, you can sniff out those pointless metrics with little effort.

"What actions can the team take with this metric?"


This question is a sure-fire way to figure out if a metric is useful or useless. If you can't think of a way that a metric leads to an action or a decision to improve something, it's most likely a useless metric. "10% more test cases written this week" is pointless. It isn't actionable and can't be improved upon, unless your sole purpose is to write the most tests possible - which is pointless in itself. "Execution time of the test suite decreased to 30 minutes" is much better. This metric shows improvement and can help the team decide to improve upon it further. Metrics that help your team move forward or improve something are the ones worth keeping.
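Execution time is also easy to track as a trend rather than a one-off boast. Here's a minimal sketch, assuming a hypothetical runs.csv with one row per CI run and columns date and duration_minutes; most CI systems can export this kind of data in some form.

```python
# Sketch: track how the test suite's execution time is trending across runs.
# Assumes a hypothetical runs.csv with columns: date (ISO date) and
# duration_minutes, one row per CI run, covering at least two runs.
import csv

with open("runs.csv", newline="") as f:
    runs = sorted(csv.DictReader(f), key=lambda r: r["date"])

durations = [float(r["duration_minutes"]) for r in runs]
baseline, latest = durations[0], durations[-1]
change = (latest - baseline) / baseline
print(f"Suite duration: {baseline:.0f} min -> {latest:.0f} min "
      f"({change:+.0%} across {len(durations)} runs)")
```

A trend like this gives the team something to decide on - speed the suite up, parallelize it, or accept the current runtime - which is exactly what an actionable metric should do.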

"Is this metric showing the whole truth, or is it hiding something beneath the surface?"


Many people take numbers and spruce them up to make themselves or their team look great, yet hide an ugly truth. For example, you might say that you have 500 automated test cases covering 100% of the project. That metric sounds amazing - except you've omitted that the tests are often flaky and take hours to run. Those stats don't look so amazing now, because the numbers don't matter when your team isn't confident in the test suite. Asking yourself whether there's something underneath a metric often uncovers the things that matter the most, even if it's painful to put them on display.

"Is this metric aligned with our goals?"


Every project has its own unique set of goals. Your metrics should reflect those goals to help improve the project. For instance, if one of your goals is to give testers more time to do exploratory testing, measure how much you're reducing regression test times. If your team is seeking to reduce the number of false positives, measure the stability of the test suite. If you can't directly align a metric with the purpose of your testing, re-evaluate it and find something more useful.

Always seek to make everyone better


The common thread here is that you should always keep track of how automated testing is making life better for you and your team. If your automated test suite isn't making your team's lives better, it's a clear sign that something is wrong. It's up to you to figure out how to fix it.

Gathering metrics shouldn't be a pain. Every testing team and every test automation project has different needs. Being mindful about what's useful and what's empty and ineffective is the key to harnessing the real power of metrics.

What other metrics do you consider useless? Add to the list by leaving a comment below!


Photo Credit: Stephen Dawson on Unsplash