Software companies these days, particularly startups, lean toward fast-paced environments where the focus is on launching their products as quickly as possible. Technological advancements have led to an increased demand for rapid development and frequent shipping to customers. Unfortunately, this furious pace also leads teams to cut corners to meet their delivery goals. Guess what's one of the first areas placed on the chopping block?

If you work or have worked at these companies, you know that testing is usually one of the first things to get dropped from the development schedule. My entire career as a developer and tester has been spent working with startups—the largest with fewer than 50 employees in total. In over 19 years of professional experience, I've seen too many project schedules push QA and testing aside, typically at the first sign of slippage or when the organization notices a competitor working on a similar product in the same space.

It's not just startups or small companies that fall victim to this trap. Organizations of all sizes have dealt with the aftermath of skipped testing at some point. Some examples that come to mind are Microsoft's Xbox 360 video game console, which had an early failure rate of 30% in 2008, and Samsung's Galaxy Note 7 cell phone, whose batteries exploded because of unnoticed defects. It's one thing to rush a buggy web application. Still, when your company loses billions of dollars because you sped through the QA process, it reminds everyone of the importance of sufficient testing.

Sometimes, it's alright to skip testing

As someone admittedly obsessed with software quality, I worry about this habit of pushing new features out as quickly as possible, especially since I notice it's on the rise. In most cases, deferring the testing a product needs is a cover for poor project management, inefficient work processes, or other reasons that don't justify skipping QA. It's an easy way to save time now, although that saved time usually comes back to haunt the team at the most inopportune moment. We've all had moments where we let a small thing slide to save a couple of minutes, only to spend hours dealing with it down the road.

However, thinking pragmatically, I can understand why a company occasionally opts to dedicate less time to testing. For smaller organizations struggling to keep their business afloat, time is one of the most valuable—if not the most valuable—resources they have. Getting one's product to market quicker than the competition can become the difference between a thriving company and a dead one. If bypassing some testing today ensures the company can survive tomorrow, it's a no-brainer.

That tradeoff, however, has a caveat. It makes sense to bypass testing only if it's temporary and you'll get back to regular testing practices sooner rather than later. That's where most organizations stumble when they let testing slide for one or two iterations of their product development. There's a chance that skipping testing once or twice breaks nothing, and the application hums along without a hitch. It lulls the development team into a false sense of security—"Hey, things still work even though we didn't test. Maybe we don't need to spend too much time on QA after all!" Then one day... BAM, a massive bug brings the production system to its knees, leaving everyone scrambling to patch something the team could have detected before it became a headache.

How to deal with deferred testing

Suppose you find yourself in a situation where the team pushes QA aside to meet deadlines, get ahead of the competition, or for some other justifiable reason. In that case, you can still get back to it eventually. Your organization can adopt different strategies to mitigate the short-term risk of not testing as thoroughly as needed before deployments. Here are some practical tips you can establish across development and testing teams:

Treat testing like a post-deploy project

Just because you couldn't exhaustively test your application before release doesn't mean you can't test it after it's out in the world. Testing is not a "now-or-never" situation, but a continuous process. Stakeholders typically take some time to verify the state of the project during the development phase to ensure that the company is spending its time building the right things for the product. Ideally, that should happen not just before launch but after as well.

A recurring issue I see in software development teams is that after a big launch, all testing comes to a screeching halt until the next big thing comes down the pike. Even in the most frantic of startups, the days or weeks after deployment tend to slow down before gradually ramping up toward the next cycle. It makes sense since the weeks leading up to a big launch are stressful, and the team should have time to unwind and prepare for what's next. However, slowing or halting all testing after launch is a huge missed opportunity. At times, testing during this downtime works better and can improve quality since the pressures leading up to the deadlines are effectively gone.

A good practice to adopt is to treat testing as a standalone project within your team to tackle after launch. Just like project managers create tickets to keep track of upcoming work for the team, testers can create their own set of issues to remind themselves of the areas they overlooked that need further testing after deployment. This practice will help you schedule the necessary time to do what you couldn't during the development rush and give the rest of the team visibility into areas that might have issues down the road due to the lack of thorough testing.
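
If your team tracks work in an issue tracker, one way to make this concrete is to file the gaps you noticed as follow-up issues the moment the release ships. Here's a minimal sketch of that idea against the GitHub issues API; the repository name, labels, and the list of skipped areas are hypothetical placeholders, and any other tracker's API works just as well.

```python
# A minimal sketch: file the testing gaps noted during the release rush as
# follow-up issues so they become visible, schedulable work after launch.
import os
import requests

REPO = "your-org/your-app"          # hypothetical repository
TOKEN = os.environ["GITHUB_TOKEN"]  # a token with permission to create issues

# Hypothetical list of areas the team skipped before launch.
TEST_DEBT = [
    ("Regression-test the checkout flow", "Skipped during the release rush."),
    ("Load-test the new search endpoint", "Only smoke-tested before launch."),
]

for title, body in TEST_DEBT:
    response = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "body": body, "labels": ["qa", "post-deploy-testing"]},
        timeout=10,
    )
    response.raise_for_status()
    print(f"Created issue: {response.json()['html_url']}")
```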

Observe and gather feedback from the real world

You've likely heard the famous quote "No battle plan survives contact with the enemy." In software development and testing, you can change this phrase to say "No test process survives contact with real-world usage." No matter how much you plan or test before release, there will always be unexpected issues that crop up in production environments. You'll have more defects pop up if you skip adequate testing during the development process, but that doesn't mean you've lost your opportunity to fix them after the product's launch.

There's no better way to know that your application is working reliably than when it's in production and real people are using it. To get that information, however, you'll need systems in place that yield the insight you need to address issues that slipped through because of reduced testing before release. Some example tools and services that will help are:

  • Exception-handling services like Sentry and Honeybadger that instantly alert developers about errors in the application code (a minimal setup sketch follows this list).
  • Monitoring tools such as Datadog and New Relic that show a real-time picture of application performance.
  • Observability tools like Grafana and Prometheus which collect data across your infrastructure to provide further insight into the system as a whole.
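
As a rough illustration of the first category, here's a minimal setup sketch using Sentry's Python SDK, assuming the sentry-sdk package is installed; the DSN, sample rate, and the discount function are placeholders you'd replace with your own values. Honeybadger and similar services follow much the same pattern.

```python
# A minimal sketch of wiring up an exception-handling service (Sentry shown here).
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN from your Sentry project
    environment="production",
    traces_sample_rate=0.2,  # also sample a fraction of requests for performance data
)

DISCOUNTS = {"LAUNCH10": 0.10}  # hypothetical example data

def apply_discount(order_total: float, discount_code: str) -> float:
    # Unhandled exceptions are reported automatically once the SDK is initialized;
    # handled ones can still be sent explicitly, as shown below.
    try:
        return order_total * (1 - DISCOUNTS[discount_code])
    except KeyError as exc:
        sentry_sdk.capture_exception(exc)  # alert the team without crashing the request
        return order_total
```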

With these systems and tools in place, your team will be in a better position to tackle the bugs that will inevitably appear, especially after reducing or skipping the testing phase. Even with little to no testing occurring while development was underway, you'll still have opportunities to detect and correct problems after the work is out there. It's not the ideal situation, of course, but it's better than launching something and flying blind with no idea of the stability of your systems.

Test what you can during the rush

Most of us testers know the consequences of rushing through the development process with little to no actual testing done in an effort to push things out the door. When staring at a looming deadline, it's challenging to believe you can carve out even a tiny sliver of additional time to dedicate to improving quality. Still, even when pressures mount to get things done in the middle or towards the end of a sprint, it's critical to remember that you shouldn't forgo testing entirely.

In most organizations, the team won't have much time for test automation during a crunch, since building and maintaining automated tests adds work that could push the project's schedule out even further. That means we should emphasize manual and exploratory testing more during these periods. This approach allows the rest of the team outside of QA to do quick validations as they go through their day-to-day work, which, more often than not, yields incredibly valuable insights that lead to a higher-quality product at the end of the day.

I know that trying to do sufficient testing when there's a deadline looming is easier said than done. However, even with a lack of dedicated time in the schedule, teams can and should be encouraged to do whatever testing they can as they go through their day. When testing happens as an ongoing activity across the entire team, it drastically increases your chances of detecting issues early and keeping the product stable even in the middle of the chaos towards the end of a cycle. Even if you can't run through your full battery of tests, doing something is better than doing nothing at all.

Summary

Every company nowadays wants to develop and deploy its products as quickly as possible. However, the emphasis on having as rapid a release cycle as possible often leads to poor testing practices, since QA is one of the first things out the window when a project's schedule is in danger of slipping. Developers and testers are usually under constant pressure and aggressive timelines and won't have the time or focus to verify that the work getting done meets a high standard of quality.

If there isn't enough time for testing before launch, don't worry. You and your team can still improve quality after the product is released. Consider setting up a post-deployment project focused on QA to prioritize the testing you skipped. Use exception-handling, monitoring, and observability tools to gather information from production systems and alert your team about potential problems. And remember, even if you're in a rush, it's better to do some manual or exploratory testing than no testing whatsoever.

The current "fail fast" mentality can sometimes surface as an excuse to defer or bypass testing altogether. Still, it can sometimes make sense to skip testing until later, especially if it keeps your business going. The key, however, is to make that a one-off decision instead of the default behavior in the organization. By maintaining healthy testing behaviors in the organization and having plans in place to test after deployment, you'll be able to keep a quality product even in the most frantic of environments.