There's a ton of action going on at any given time at smaller tech companies, particularly in pre-launch startups. Between planning, marketing, product positioning, looking for potential investors and partners, development, and so much more, there's always something requiring attention. It's a flurry of activity and energy, non-stop, for months and months on end. Having worked exclusively for early-stage companies throughout my entire career, I know all too well how fun and frustrating it can be—usually at the same time.
With everything happening throughout a tech startup's average day, testing is unfortunately one of the last things to get any sliver of attention. From a pragmatic point of view, it's understandable. Your company has more important things to do than test a product when trying to gain any traction. After all, if your business doesn't have a product that customers want to use and pay for, there's no point in testing since the company probably won't exist for long. Everything you do when time is scarce is a trade-off between current and future needs.
Still, most developers and testers know that testing is vital to any company. Test automation is especially beneficial in smaller organizations like a tiny startup as progress comes in fast and furious with little room to breathe. Sure, the team needs to get the product developed and out the door as soon as possible. But at the same time, pushing out a poorly-constructed application is a sure-fire way to push away potential customers. Learning when to get serious with testing isn't a simple, straightforward path.
"When should I start automating tests?"
My short answer to this question is often "Take the time to start your test automation processes right now." Previously, I would have said "as soon as possible", but I realized that it leaves the question open to interpretation. Does that mean the team should drop all they're doing and work on testing? Does it mean to wait until an opportunity arises later so someone can work on it? When something's not clear, it'll get pushed aside and eventually forgotten.
Admittedly, "right now" doesn't mean this very minute either, so it's still not perfectly clear. While it's preferable to get testing going from the start of product development (or as close to the start as possible), it's not feasible in most companies. As mentioned earlier, startups will often have more pressing matters to deal with that suck up tons of resources. Getting test automation up and running requires time and effort at the beginning—both of which aren't in abundance at a startup when trying to grow a company.
However, an early start on testing can make or break a product in the long haul. Any startup will benefit from some level of automation. It catches issues much earlier and prevents them from becoming massive future headaches as the application grows. With the rapid product development cycle most startups have, it's surprising how quickly an early mistake becomes a painful problem to untangle. Waiting to get started with test automation makes the process more challenging when you do start. The early momentum provided by automation will carry you much farther than you'd expect.
So when should you begin with test automation?
The issue with asking a question like "When should I begin implementing test automation?" is that it assumes you'll have to begin doing all kinds of different tests from the very start. In my work across different teams, this mindset is what trips most organizations up when it comes to test automation. They look at all that automated testing can do for them and become so overwhelmed with the possibilities that they don't start. They think it'll require an enormous commitment and will push it to a later time when there are enough resources.
Here's a secret: there will never be time to begin with test automation if you don't make it.
To make time in a fast-moving environment like a startup, you need to start small. Don't view the goal as having a fully-fledged test automation process in place as early as possible. That plan takes lots of effort and seems like a never-ending process. Instead, focus on what you can do now. An ideal strategy for test automation isn't to look at it as a whole but to divide it into smaller pieces of a complete puzzle.
Not all automated tests are equal
The nice thing about automated tests is that not all are created equal, and getting started with a testing strategy doesn't mean you have to do all of them at once. Automated tests come in various forms, and some benefit from early implementation, while others are most effective later in the product's life. Focusing on which tests help your organization the most at this time—not in the future—will prevent the paralysis around when and where to begin.
Let's go through some of the most common forms of automated testing that startups do to see when's the best time to begin implementing them.
Unit testing
When to implement: As soon as development starts
Unit testing is one of the first things to look at for test automation in any application. A unit test is one of the smallest—if not the smallest—parts of a product the team can test. These tests are simple to write, fast to execute, and easy to maintain. Running these tests won't slow the team down drastically, and fixing tests that break due to internal changes won't take too much of a developer's time. The value a product gets out of unit tests is much greater than the small effort needed to write and execute them.
I call them the first line of defense in any software application because they're quick to set up and will catch issues much earlier than expected. The development team can dedicate some time to unit testing even in a fast-paced startup environment. You don't have to do test-driven development or scope out massive chunks of time for unit tests (I rarely do). But getting into the habit of including some automated unit tests as you code yields tons of value in the long run.
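To make this concrete, here's a minimal sketch of what a unit test can look like in Python with pytest-style assertions. The `calculate_discount` helper is a hypothetical example, not code from any real product:

```python
# A hypothetical pricing helper and two unit tests for it. The function,
# its name, and its behavior are illustrative assumptions.
def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest discovers functions named test_*; plain asserts are enough.
def test_discount_applies_percentage():
    assert calculate_discount(100.0, 20) == 80.0

def test_zero_discount_leaves_price_unchanged():
    assert calculate_discount(49.99, 0) == 49.99
```

Each test here runs in a fraction of a second and touches nothing outside the function under test, which is why a suite like this costs so little to keep alive.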
API testing
When to implement: As soon as testers have access to the APIs in a QA/staging environment
Most applications nowadays have some form of API as part of their ecosystem. Sometimes it's an internal API to interface with other applications, while other times, it's a public API available to customers outside your organization. Regardless of the type of access, APIs often have high business value and require long-term stability. Any API will benefit from automated testing as soon as the testing team has access to them in a QA or staging environment.
Why wait until an API is available in a testing environment before testing? In my experience, it's because at this point in the development process the API is stable enough for the team to build a test suite without worrying about constant changes. API testing covers a more significant portion of a tech stack, so these tests won't run as quickly or be as small as unit tests. However, they're still small and quick enough to run and maintain with less effort than other tests.
End-to-end testing
When to implement: As soon as the application is stable enough to deploy to production or is already deployed to production
While other forms of testing like unit, functional, and API tests benefit from early implementation, end-to-end tests are the opposite. These tests are extensive and much slower to execute than most API tests. That means end-to-end tests are expensive to update and maintain, not to mention a burden on the testing team when things go wrong. When your application changes often like it does in early-stage companies, you'll drive yourself crazy trying to keep up with an end-to-end test suite.
It's best to leave end-to-end testing to the end of an iteration cycle when the application is stable enough to deploy to production. In most instances, it's probably better to build an end-to-end test suite after the application is deployed and your customers are already using it. It may sound like a waste of time to write these tests after shipping, but the focus of these tests should be mainly on avoiding regressions as new functionality gets developed.
Load and performance testing
When to implement: Only when your organization needs it
Lately, I've seen more attention towards load or performance testing, which is good since it's a valuable tool. However, I've also observed many testers introducing these tests into their workflows too early. The reality is that most startups don't need to worry about speed or load at the beginning of their journey—if ever. We'd all like our startups to have massive success out of the gate and require this kind of testing, but most companies might never reach a point where they need to focus on it.
The best time to introduce load or performance testing for an application is when it's needed, and not a moment sooner. Some examples are if you have service-level objectives to keep or if speed is a crucial selling point for your product. If this is the case, start implementing performance and load tests to meet those needs. Avoid premature performance and load testing if you don't have any specific necessity. Be careful not to mistake sluggishness or unreliability for a need for performance or load testing, though. These might be symptoms of a deeper issue that needs attention, like bottlenecks caused by sub-optimal code or your service running on underpowered servers.
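When you do have a concrete objective to verify, the check can stay small. Here's a minimal sketch using only the standard library, assuming a hypothetical health endpoint and a 500 ms p95 service-level objective; dedicated tools like Locust or k6 would be the next step up:

```python
# A minimal load-check sketch. The target URL, concurrency level, and
# p95 objective are all illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "https://staging.example.com/health"  # hypothetical
P95_SLO_SECONDS = 0.5

def timed_request(url: str) -> float:
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    urlopen(url, timeout=10).read()
    return time.perf_counter() - start

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    index = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[index]

def run_load_check(concurrency: int = 20, requests_total: int = 100) -> bool:
    """Fire concurrent requests and check the p95 latency against the SLO."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_request, [TARGET_URL] * requests_total))
    return percentile(latencies, 95) <= P95_SLO_SECONDS
```

The point of a check like this is that it's tied to a stated objective; without one, the numbers it produces have nothing to pass or fail against.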
Manual testing
When to implement: Before development begins
Although this article focuses on automated testing, I believe that manual testing needs to occur at some level before any automation takes place. Manual testing often takes the form of exploratory or scripted testing, but I'm stretching the term here to cover the work testers need to do before the product development cycle kicks off. Testing at the earliest stages of a project is crucial for success. Unfortunately, it's also an overlooked detail in most small tech companies. Most early-stage startups I've worked at don't involve testers until the middle or end of the development cycle.
Testers bring unique insights to the start of a project, and they gain important context for building a long-term strategy that helps execute the plan with few surprises. Some examples are identifying flaws in requirements and designs, estimating which areas will require more attention due to increased risk, and learning how best to execute testing to benefit the rest of the team. Even if you don't have a QA team—almost all tech startups won't have dedicated testers at their inception—having a testing mindset at the beginning of something new goes a long way.
Small startups and early-stage companies spend most of their days trying to get the most out of their scarce available resources. They barely have enough time or money to handle everything needed to have a successful product or to hire people to help. Because of this seemingly never-ending crunch, trade-offs must happen for them to survive. In most cases, one of those trade-offs is reducing the time spent on testing and quality, or deferring it until much later.
Still, my experience is that most startups want to do testing, especially automated testing. They know the benefits of a well-placed test automation strategy but don't know when to start. Ideally, these companies would start their automation as soon as possible, but it's impossible to pinpoint a good time given the possibilities. Instead, a reasonable solution is to focus on which tests will help the organization where they are instead of trying to do too much testing from the outset.
Automated tests come in different forms, each with its distinct benefits. For example, unit testing provides quick execution and feedback during development, while end-to-end tests work best to verify regressions for a completed product. The types of testing covered in this article aren't the only kinds of testing to do in smaller companies. However, the idea is that not all tests are created equal. Working on the right tests at any given time will improve your application's quality more effectively than attempting to start with everything at once.
If you work at a startup or small company with limited resources, how do you focus on testing? Share your stories with others in the comments section below!