In September 2020, a pre-launch startup hired me to work as a part-time backend software engineer. The company had been working on their web-based product for a few months with other freelancers to get the initial version of the web application up and running. The project had only one other backend software engineer who was wrapping up his engagement, so I would pick up his pending tasks and help push the product towards the finish line.
I was eager to onboard and start contributing. One of the very first things I do when starting work on an existing software project is set up the local development environment and run any automated tests. A decent automated test suite provides some documentation about the ins and outs of the codebase and gives me a safety net for the inevitable mistakes I’d make while getting familiar with the project.
After setting up my local development environment, I went to run the automated test suites. To my surprise, there were fewer than a dozen test cases, and none of them worked. I reached out to the current engineer, who was trying to finish his work before leaving the project in a few days, and asked him about the tests. Our conversation went something like this:
Me: "So I noticed that the tests directory only has a couple of files, and none of them seem to work. Is there something I need to set up before I can run those tests?"
Engineer: "Oh, those files were written by the previous freelancer when the project kicked off earlier this year. I never bothered to check them and didn’t think about adding any since I’ve been heads down adding new features. Whoops, LOL."
It was only my first day on the new job, and I had already been dealt what I considered a severe blow to my effectiveness as a developer. While the web application wasn’t overly complicated, the past and current engineers had already delivered a lot of work, so there was plenty to figure out. It should also come as no surprise that the app was very unstable. In the rush to deliver new features for launch, testing was an afterthought (or completely ignored, depending on who you asked), which led to a brittle codebase that seemingly broke even if all you did was open a file.
Without any clear documentation — most of the completed development work was in the heads of the freelancers who no longer worked on the project and the one who was about to leave — I felt like I had a steep hill to climb. I took a deep breath to calm down and got to work…
Fast-forward to September 2021. I’m still working as a part-time backend software engineer for the same project — the only one who’s contributed to the project since then. However, I’m pleased to report that the situation with the automated tests has been corrected. The backend application went from no automated tests to over 1300 automated tests.
The test suite, mainly consisting of unit and functional tests with API and end-to-end tests sprinkled in, has helped us successfully launch the product. Best of all, a small team accomplished all of this — a team of one, in this case — without sacrificing time to add new functionality and improve the underlying architecture of the application. The days of an unstable codebase breaking at all times are long gone, replaced by rapid iteration and progress.
How did we accomplish this seemingly impossible feat with just one person actively working on the project? Here are the main actions I took that helped the most.
Make time to strategize and build the test suite
When you start working on an existing project with no automated tests, it’s usually because no one took the time to think about testing as part of the project. Small teams like the one I joined have no dedicated team members to help with a testing strategy. Often, the existing team feels pressured to deliver tangible results and bypasses all forms of automated testing, believing it’s useless.
That’s not always the case, however. I’ve been a part of many software projects where team members are interested in building and improving their automated tests. They know it’ll help them, both now and over the long haul. But a pervasive mindset on smaller teams is to defer testing until some future point when they have free time to dedicate to the cause.
Guess what? They never, ever get that free time to do any testing.
Free time won’t suddenly drop into your lap or fall from the sky. If you have spare time at work and nothing planned for it, someone or something will come your way and gladly fill that time for you, whether you like it or not. If you’re serious about improving the long-term quality and health of your software project, you need to make the time for it. It’s the only way you’ll get time to write tests.
For the project I joined, I decided to spend my introductory period strategizing how to get test automation going alongside the work I was hired to do. I spent the first few days implementing the necessary tooling for testing while getting familiar with the codebase. I began to understand the application’s inner workings and took notes on where the project could get the most out of testing.
In most projects, test automation pays off fastest in the handful of areas with the most reported bugs. Almost all software applications have one or two areas that cause the bulk of the reported issues in the project. Unsurprisingly, these sections are also usually the messiest parts of the codebase and the most in need of consistent testing. Starting to automate tests for these sections can yield quick wins early in the process.
Once the project had tooling in place and a clear path to focus on, it was easy to get started with the first few tests and establish how automated testing would become part of the development cycle. I realize I was fortunate to have the space to put these actions into play — many developers and testers aren’t as lucky. Still, anyone wanting to start an automation process for their project has to take the time to start somewhere.
Include automated tests in your definition of done
Many teams that want to get serious with automated testing begin with a sudden blast of energy and interest. They get excited by the prospect of test automation and start setting up tools and strategies, and maybe even build their first couple of tests. Then that surge collapses, the test suite quickly becomes obsolete, and any testing strategy the team attempted to set up evaporates into thin air. That’s what happened in the project I joined, where the original freelancer set up a few initial tests that never received any care from subsequent team members.
Plenty of software projects, particularly for smaller teams and startups, don’t have dedicated people who only focus on writing test cases and setting up test automation. If this is the case for you, the best way to tackle the problem is to make sure testing is part of your completed work. Set a standard — either with your team or by yourself — to ensure that any changes you make to the codebase have accompanying automated tests when it makes sense.
The team I joined in September 2020 was a skeleton crew: a single backend engineer and a handful of part-time frontend engineers, with no dedicated testing resources. Since there was no one to follow through on a testing strategy, my plan was that any work I did needed at least one related test before it got merged into the codebase. If I built a new feature, I’d make sure its most critical parts had test coverage. If I fixed a bug someone else introduced, I’d write a regression test to ensure it didn’t resurface.
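As a concrete sketch of that habit, here’s the shape of a regression test I’d write after a bug fix. The `normalize_email` helper and the bug it guards against are hypothetical stand-ins for illustration, not code from the actual project:

```python
# Hypothetical example: suppose a bug report showed that fully
# uppercase addresses lost their domain during normalization.
# After fixing the helper, a regression test pins the behavior down.

def normalize_email(address: str) -> str:
    """Return a canonical, lowercased email address."""
    local, _, domain = address.strip().partition("@")
    return f"{local.lower()}@{domain.lower()}"


def test_uppercase_address_keeps_its_domain():
    # The buggy version returned "jane@" for this input;
    # this test fails loudly if the bug ever resurfaces.
    assert normalize_email("JANE@EXAMPLE.COM") == "jane@example.com"
```

A test like this takes a few minutes to write, but it documents the bug and guarantees it can’t silently return.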
When working on a team with other developers and testers, it’s best to agree on setting automated testing as part of the “definition of done”. Some teams make sure to officially include it as part of the development process, while others leave it up to each team member. Whatever choice the organization makes is fine as long as it leads to a better product at the end of the day.
I’ve become known on most development teams as the person who always tries to keep others accountable with testing. If I saw new commits to the codebase that could benefit from automated tests, I’d point them out. I’m sure it didn’t make me the most popular person on the team, but that’s the price I chose to pay for the long-term well-being of the project and the people involved with it.
Keep in mind that this doesn’t mean that absolutely every change in a codebase must have tests. You also have to be pragmatic. There will be times when you absolutely can’t spare the time to write tests for various reasons, like a hard deadline. Some tests also aren’t worth the development time. Every situation is different, and knowing when not to test is just as beneficial as knowing when testing is necessary.
Don’t be cheap: pay for valuable tools and services
Thanks to the wonders of open source, it’s easier than ever these days to find all the tools needed to build a fully-fledged automated test suite without spending a dime. Regardless of the programming languages, frameworks, or operating systems you use for development and testing your applications, you’ll find plenty of tools that fit your needs. For startups that have a limited budget, these tools and services can be a godsend.
Even with all the benefits of open source and free tools, teams relying on them need to be aware of the potential downsides. While these tools are functional, many can provide a less-than-ideal solution that can hinder a team with subpar performance or missing functionality. And although the tools don’t cost a penny, there are many other costs involved in the process that aren’t clear up front.
For instance, you may need someone to build and maintain a server to run an open-source CI system, only to discover it doesn’t support mobile application testing once the organization builds a mobile app. Another typical example is developers and testers spending countless hours wading through mediocre documentation because the free tool of their choosing has no official customer support.
As the application and test suite grew for the company that hired me, we began to experience some limitations. Our test runs began slowing down, and a lot of UI bugs slipped through the cracks. Instead of continuing to do everything ourselves, the organization paid for a CI service that runs our tests faster and in parallel, along with a subscription for a visual testing tool. Both of these tools help us tremendously every day, and they’re well worth the money paid each month.
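The article doesn’t name the CI service, but for illustration, here’s roughly what splitting a test suite across parallel jobs looks like on a hosted CI platform. This GitHub Actions sketch, using the pytest-split plugin, is an assumed example rather than the project’s actual configuration:

```yaml
# Hypothetical workflow: shard the test suite across four
# concurrent jobs so the wall-clock run time drops roughly 4x.
name: tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        group: [1, 2, 3, 4]          # four shards run concurrently
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: pytest --splits 4 --group ${{ matrix.group }}
```

The design point is simply that a paid runner pool makes this kind of fan-out cheap; self-hosting enough machines to match it is rarely worth the upkeep.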
Investing in tooling and services isn’t the only way to get the most out of your money. A good investment can also come in the form of external help, like hiring a consultant or freelancer to get you up and running quickly and efficiently. Earlier this year, a company hired me to spend a week getting their test automation setup running as efficiently as possible after their attempts weren’t getting the results they wanted. The project was a success, and the CTO of the company later told me that while I wasn’t cheap, I saved them tons of time and money in the long term.
If your organization always attempts to go the free or cheap route, it may be wasting more money than if it invested in a paid product or hired someone to do the work. Paying for tools, services, and people will save you time and frustration now and for the duration of your business.
Take advantage of downtime
Even in the most fast-paced companies, team members will have at least some moments of downtime throughout their projects. For example, many organizations take a few days after deploying or releasing for bug fixing and maintenance. Other times, you may finish your work for the week or the month a few days early, leaving you with little to do during that time. These pockets of time are crucial for getting some quick wins for your testing.
Almost every software development project has small tasks and enhancements that aren’t high-priority enough to schedule during a sprint but are important enough to spend the time doing. Lots of testing-related tasks fall into this category. Tackle these small items whenever you find yourself with some spare time and no plans in place. Some excellent ways to improve testing during these brief moments are adding or experimenting with new tooling, improving slow and flaky tests, or searching for areas that need more testing love.
As an example, the organization I’m working with uses a weekly or bi-weekly deployment cadence. Every week or two, depending on the functionality worked on during that time, we’d deploy our work to production. Our schedule left us with a few days before the deployment date for QA and handling any unforeseen issues we encountered during the sprint. That often meant that I would have nothing to do for a day or two while the current sprint wrapped up and I had more details on the next sprint.
Since the development schedule didn’t explicitly include any testing-related work, I would use this time to knock out a few issues I had kept track of while working on my tasks. One thing I did was introduce performance testing using Artillery to detect areas where I could improve our API load times. Another task I helped with was setting up some frontend testing tooling for the frontend engineer, who isn’t familiar with current automated testing practices.
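For context, Artillery drives load tests from a small YAML scenario. The target host, endpoint, and traffic numbers below are made-up placeholders, not the project’s real configuration:

```yaml
# Hypothetical Artillery scenario: ramp traffic from 5 to 20 new
# virtual users per second for a minute against one API endpoint.
config:
  target: "https://api.example.com"
  phases:
    - duration: 60        # run the phase for 60 seconds
      arrivalRate: 5      # start at 5 new virtual users per second
      rampTo: 20          # ramp up to 20 per second
scenarios:
  - name: "List products"
    flow:
      - get:
          url: "/products"
```

Running a scenario like this with `artillery run` reports response-time percentiles, which is how slow endpoints tend to surface.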
In fast-paced environments, you might not feel you have the luxury to do this. One challenge I give others who say they don’t have time is to keep track of what they do daily. Most of the time, they realize that they have more pockets of free time available than expected. These are the best moments you have to boost test automation for your project. No one is ever busy 100% of the time — it’s all a matter of prioritizing.
Keep track of progress and celebrate your wins
One of the realities of building test automation when working on an active software project is that it seems like a never-ending grind no matter how much time you spend on it. There’s always something to test, something to fix, or something to improve. At times, it may feel like whatever you’re doing isn’t worth the effort. In my experience, these thoughts are what lead most teams to drop their automated testing plans.
To keep yourself and your team from getting demotivated by the grind of testing, you’ll need to take the time to see how far you’ve come and measure the improvements the work provides. How teams measure their testing efforts varies wildly. For some, it’s counting the number of test cases or test runs completed during a period of time. For others, it’s tracking bugs and defects week over week and seeing if automation keeps that number low. It helps to find what works best for your specific situation.
When I began working on the project with no automated tests mentioned above, I felt I had to set personal milestones to keep myself motivated. At first, I started small by implementing the testing libraries I wanted. Then my goal was to write 10 unit tests. I followed that up by scheduling time to set up a continuous integration service. I also added test code coverage goals, doing my best to reach 80% overall code coverage in the automated test suite. When introducing performance testing, I set service level objectives (SLOs) requiring critical API endpoints to meet a specific median response time. All these small goals added up over time.
The other key component of keeping myself motivated was tracking these metrics every week. Every Monday, I’d check how the numbers changed week over week. I’d give myself a mental “high five” whenever there were noticeable improvements and kept doing my best to keep progress moving forward. Even when the metrics didn’t seem to improve much in the previous week, they reminded me that I started from nothing. That was enough to keep me looking towards the future.
The purpose of setting measurable goals and keeping track of metrics wasn’t to make myself look good or boast about the progress to the rest of the team. I’d argue that metrics used that way are rarely valuable and tend to lack any significant meaning. Their purpose was to see how far the automated testing efforts had come since the first day. It’s a fantastic feeling when your test coverage goes from 0% to 80% in a few months, or you finally crack 1,000 passing test cases in your test suite for the first time. When you’re motivated, you’ll want to keep that motivation going strong, so find whatever works to keep you going.
Desmond Tutu, a human rights activist from South Africa and Nobel Peace Prize winner, is credited with the following quote you may have heard plenty of times before: “There is only one way to eat an elephant: a bite at a time.” My interpretation of this quote is that anything that seems impossible can become possible if you take things one step at a time, no matter how long it takes, as long as you’re consistent.
When it comes to automated testing in an existing software project with nothing in place, it may seem impossible to start testing or maintain a long-term test suite without dedicating tons of time to it. This way of thinking is a self-fulfilling prophecy. However, if you take a bite at a time — taking a few moments to strategize, set up a few tools and test cases during downtime, and so on — that seemingly hopeless task of having solid test automation will eventually become a reality over time.
While reaching over 1,300 automated test cases is a far cry from where we started, there’s still plenty of room for improvement. Most of the test cases right now are unit and functional tests, and I’d like to increase automated coverage for other types like performance, load, and end-to-end testing. I’d also like to take the time to help the other freelancers improve their testing experience. As mentioned earlier, the frontend engineer has little experience with automated testing, so our test coverage is severely lacking in that area. All of these tasks will happen as time goes on.
The purpose of this article is to show that for any project, setting up a healthy testing strategy with test automation is possible, regardless of the current state of the project, team, or organization. It’s not an easy road, especially if there’s little support or interest in dedicating time for automated testing, but it’s doable, and I hope this journey inspires you to do the same.
Have you ever joined a mature software project with no automated tests? Share your experiences with others in the comments section below!