Recently, I came across an article on SD Times, a software development blog, titled Test automation: Tools don't work. As someone who's always seeking the latest news and thoughts in the test automation world, the title immediately grabbed my attention. Admittedly, it wasn't attention-grabbing in a positive way.

Before I read the article, I had already formed plenty of opinions about the author, Wayne Ariola. "This person doesn't know what they're talking about," I thought. "Tools are important - what else would drive the efforts so many people are putting in? Maybe the tools he's using aren't the best. Instead of figuring that out, he's just writing an undeservedly scathing article."

When my mind gets going, it gets going.

I realized that I had been forming an argument in my head without even reading the article, something I try never to do. Everyone has a right to an opinion, and everyone deserves a chance to be heard. I clicked on the article, trying to push aside the preconceived notions that had already formed.

As I read the article, I found that my snap judgments were invalid. While the author does reiterate that tools don't work, I can agree with the message behind the arguably clickbaity headline. The article makes many valid points about how organizations treat test automation tools and how those tools can harm testing efforts if we're not careful.

Test automation efforts are increasing everywhere


Organizations everywhere are adopting test automation at an increasing rate. Surveys such as QASymphony and TechWell's Industry Research Report: The Evolution of Test Automation show that many organizations expect their automated testing efforts to grow.

An increased focus on test automation inevitably brings an equal increase in focus on tools. After all, automated tests don't pop out of thin air - you need tools to set up a successful automation effort. Any organization looking to bolster its testing will have to spend some time evaluating the tools at its disposal.

Analyzing test automation tools is fine, but the issue is that many organizations become fixated on tools as the center of their testing efforts. They focus solely on finding the best, fastest, cheapest, or most accessible tool they can, without looking at the bigger picture. When this happens, automation efforts sputter and burn out down the road. It leaves a sour taste in people's mouths, and organizations become wary of future automation efforts.

This is the point the author of the article was trying to make. When tools are the main driver for automation efforts, other pieces of the puzzle get left behind. We set aside people and process while focusing elsewhere - and those are arguably the most critical parts, more important than any tool can be.

A real-life story about choosing tools that almost got us stuck


Last year, my current organization was a bit stuck with its test automation efforts. The company maintains a portfolio of multiple web applications, and our test automation was all over the place. There was a jumble of tools in use throughout the company. Some projects implemented an in-house framework based on Gherkin. Other projects used Ruby on Rails' built-in support for integration and system tests.

The effectiveness of the automated test suites varied from project to project. The main issue was the context switching needed when someone on the QA team shifted from one project to the next. It significantly slowed down testing efforts, especially if the tester wasn't familiar with the tool in question. The company decided to find a more standardized way to run our test automation efforts so that we could unify our learning and test suites across the board.

I was tasked with helping evaluate different testing frameworks to use across our projects. Since all of our applications are web-based, I focused on JavaScript-based testing frameworks. Most of our testers weren't developers and would eventually need to learn how to code; the organization wanted to invest in them and train them to become software developers. As a bonus, developers would feel right at home with a JavaScript-based tool. We wanted the software engineers on the team to contribute to our test automation efforts: one team, one shared responsibility.

During my research, I came across many different JavaScript testing frameworks. There's Puppeteer by Google, which looked very useful for automating our tests. Nightwatch.js was another compelling alternative. The venerable Selenium WebDriver has support for writing test cases in JavaScript. These days, there's no shortage of well-made test frameworks.

Eventually, I came across Cypress. It ticked most of the boxes on my list. It's a JavaScript-based framework that's easy to set up and use. It doesn't use Selenium, which many on the team wanted to avoid for various reasons. The test runner is also one of the best tools I've ever used, giving testers everything they need to debug and improve existing test cases.
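To give a sense of why the setup felt so approachable, here's a minimal sketch of the kind of spec we were writing during the evaluation. The URL, selectors, and error message are hypothetical stand-ins, not taken from our actual applications.

```javascript
// cypress/integration/login.spec.js
// A hypothetical login test: visit a page, fill in bad credentials,
// and assert that an error message appears.
describe('Login page', () => {
  it('shows an error for invalid credentials', () => {
    cy.visit('https://example.com/login');

    cy.get('[data-test="email"]').type('user@example.com');
    cy.get('[data-test="password"]').type('not-the-right-password');
    cy.get('[data-test="submit"]').click();

    cy.contains('Invalid email or password').should('be.visible');
  });
});
```

Running `npx cypress open` launches the interactive test runner, which is where most of the debugging value comes from: you can watch each command execute and inspect the application's state at every step.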

The rest of the team tried Cypress and also had great things to say about the framework. With these glowing reviews, that should have been the end of the discussion: just choose Cypress and live happily ever after. Unfortunately, making that choice wasn't as simple as it seemed.

Great choice, wrong reasons


One of the main things we needed as part of our testing process was the ability to run automated tests in different browsers. As of this writing (October 2019), Cypress does not support running tests in multiple browsers - only in Chrome.

Had we ignored our desired process and moved ahead with Cypress, we would eventually have hit a roadblock. Whenever the organization or one of the project's partners needed testing on anything other than Chrome, we would have gotten stuck in manual testing mode. Manual testing is acceptable if it doesn't take much time. However, when a complex project takes multiple testers a few days to go through the test suite, it becomes an issue.

As expected, this was a bummer for us. The team really liked Cypress, but we still had a requirement as part of our process. We went back to the drawing board and had further discussion. After a while, I realized we were slowly falling into the trap of focusing on which tool to choose. We hadn't been paying much attention to our people and the process of testing.

Bringing the focus back to both of those things got us back on track. We got clear about what our testers and organization wanted, and that clarity guided us towards a tool. In the end, we landed on TestCafe. It's a great testing tool that also checked off most of the items on our list, especially support for multiple browsers.
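For comparison, here's a minimal sketch of the same kind of test written with TestCafe. As with the earlier example, the URL and selectors are made up for illustration.

```javascript
// tests/login.test.js
// A hypothetical TestCafe version of the same login scenario.
import { Selector } from 'testcafe';

fixture('Login page')
    .page('https://example.com/login');

test('shows an error for invalid credentials', async t => {
    await t
        .typeText(Selector('#email'), 'user@example.com')
        .typeText(Selector('#password'), 'not-the-right-password')
        .click(Selector('button[type="submit"]'))
        // Assert that the error message shows up on the page.
        .expect(Selector('.error-message').innerText)
        .contains('Invalid email or password');
});
```

The part that mattered most to us is that the same test runs unchanged in different browsers straight from the command line, for example `npx testcafe chrome,firefox tests/login.test.js`.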

We decided to use TestCafe earlier this year and have been implementing our end-to-end tests using the framework for over six months. The tool is working great and meets our requirements. More importantly, it's helped the people in the company and our process in many ways.

Our company's test engineers are not full-fledged software engineers. With training and mentoring, they've learned to code and have been building our test automation suites with little help from developers. Some former testers have even transitioned from test automation work to directly helping with the development of projects.

The projects that began automating their regression tests with the new framework have seen drastic time savings. Most have cut the time spent in the regression testing phase almost in half. One project in particular shortened its cycle from a few weeks to a couple of days, and expects to shrink that time further.

Tools are easily replaceable - you can't easily replace people and process


The lesson to be learned from the story above is that it didn't matter which tool we chose.

Our goals were straightforward once we understood them. We wanted to use the same framework across projects, have developers be part of the testing process, and help our testers grow into software engineers. Had we spent too much time on tool selection, we would have missed the mark on one or all of those things. Instead, we worked with our people and our desired process and found something that worked for the organization. In the end, that's what matters most.

Tools are easily replaceable. You can add, remove, or swap any tool at any time with minimal disruption if it's done correctly. People and processes, on the other hand, are much more difficult to change. Everything can grind to a halt if you try to change how your organization's people work or how its processes run. That's why you need to focus on those things before anything else.

Don't get fixated on tools. This message is important and bears repeating. Remember that testing tools are merely a means to an end. If your organization doesn't have a definite purpose in mind, the choice of tool is irrelevant. The same is true if the organization isn't thinking about how the choice will affect the people and processes in place. Get clear on your organization's goals for people and processes, because that clarity is the most important thing you need for a successful automation path.

Have you experienced a situation where the focus on tools took all the attention? Post a comment below about your experience and how it turned out!


Photo Credit: You X Ventures on Unsplash