Any software project that's serious about having acceptable testing practices will have a reliable test strategy. On a basic level, a test strategy describes how the organization approaches testing during the project's development cycle. This strategy can then help establish objectives and guidelines for the work the team will perform, from the project's test plan down to the individual responsibilities of each tester.
A good test strategy helps avoid confusion and problems within your team throughout the project's life cycle by clarifying what the organization wants to accomplish regarding quality. The strategy keeps the project moving along smoothly and the team aligned with the process. It serves as a guiding light, especially when you're in the middle of the project and working hard to get it out the door.
However, not all strategies are sound. A bad test strategy is a massive detriment to the project, the organization, and the team. Many projects have died a slow death due to a lack of clarity and alignment on the direction to take when it comes to testing. I'd argue it's worse than having no strategy at all because it drives people in the wrong direction and makes it more challenging to course-correct when the project goes off the rails.
The problem with a bad test strategy is that most organizations won't realize they have one until it's too late to fix it. Most test strategies begin with improvement in mind, but those improvements might not be what your team should focus on. This article will cover a few bad strategies that I've encountered and what you can do to avoid falling into the same traps.
Bad Strategy #1: Not focusing on your customers
Many test strategies place their attention on improving the development process for the team. For example, a team may decide to strategize around making the development and testing workflows better, like setting up continuous integration and continuous deployment. This lets the organization give the team a better environment for their daily work.
While it's necessary to have an efficient and enjoyable workflow to get quality products out there in less time, this strategy has a flaw. It focuses more on the organization and team and less on the customers - the ones paying you money for your product. It's important to let your team do their work with few issues, but it's more important to ensure your product serves your customers, since they're the ones who allow you to run your business.
For instance, at one company I worked at, the organization had a well-organized workflow for development and testing. They prided themselves on making life easier for their team. However, their product was riddled with issues - poor user experience, accessibility issues galore, and so on. It goes to show that if you pay more attention to your development process than to your customers, there's a good chance you'll miss critical issues that will drive them away from your product.
How to avoid this bad strategy
Your test strategy should place a stronger emphasis on the customers you plan to serve. Your strategy can also include ways of improving your team's daily workflow, but it shouldn't be the focal point of your planning. Prioritize your testing on making the product better for the customer, and your product will come out better for your efforts.
Bad Strategy #2: Only focusing on one form of testing
A common mistake companies make is deciding solely on one form of testing and only doing that throughout the project's entire life cycle. Typically, it happens with automated testing - someone discovers the power of automation and wants to automate everything. However, I've also seen organizations do only manual or exploratory testing for a project.
The problem with this approach is that no single form of testing covers all the bases for your product. Take test automation as an example. Automated tests are great at speeding up regression testing and repetitive processes, but they're horrible for uncovering previously unknown issues. Manual and exploratory testing cover those shortcomings, but at the expense of time, and they keep testers tied up doing the same things over and over again.
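To make the trade-off concrete, here's a minimal sketch of the kind of repetitive check automation handles well (the `discount_price` function is a hypothetical stand-in for real product code, not from any specific project). It guards known behavior on every build, but it will never flag an issue nobody thought to encode:

```python
# Hypothetical product code: apply a percentage discount to a price.
def discount_price(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Automated regression checks: cheap to re-run on every build, but they
# only catch what someone already thought to test.
assert discount_price(100.0, 25) == 75.0    # known case stays correct
assert discount_price(19.99, 0) == 19.99    # no discount leaves price alone
assert discount_price(50.0, 100) == 0.0     # full discount bottoms out at zero
```

Exploratory testing fills the gap these assertions leave: a tester poking at odd inputs, strange workflows, and unexpected combinations will surface the bugs no one wrote a check for.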
There's no silver bullet with any individual testing approach. By focusing on only one form of testing, you're creating huge blind spots in the testing process. If you're lucky, you'll spot the shortcomings and cover them appropriately. Unfortunately, most of the time, you'll remain so focused on what you're doing that by the time you figure out why your QA isn't working well, you'll be far behind on quality.
How to avoid this bad strategy
A good QA environment contains a multi-faceted approach covering automated and non-automated solutions for testing. You can avoid the problems of one form of testing by balancing it out with other ways of testing. Give equal time to automation, manual testing, and exploratory testing in your workflow to handle most scenarios and keep your application in top shape.
Bad Strategy #3: Leaving testing only to QA
Many teams with a QA department rely on them to find bugs and discover issues in the projects, whether it's a large group of multiple testers or a team of one. These organizations think that since that's what testers get paid for, the rest of the team shouldn't bother doing any testing beyond the scope of their work.
Sadly, it's not an uncommon thing to see developers churn out features and proverbially toss them over the wall to QA. I've seen development and QA teams in the same office never interact with each other outside of bug trackers. These days, it happens with increased frequency, thanks to outsourced QA and the rise of remote work. But even with all the tools for remote collaboration, many places still have that invisible barrier between roles.
Shoving all the testing work to QA limits your organization's abilities because you can't expect testers to find everything. Testers are great at uncovering issues that other roles can't spot. However, as testers, we must recognize that we also have plenty of blind spots and limitations of our own. For instance, most testers might not know how to check for security issues like cross-site request forgery (CSRF) or cross-site scripting (XSS) attacks, but a developer can help pick up the slack. These complementary skills provide more coverage for your project.
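As an illustration, a developer could contribute a small check like the following to confirm that user input is neutralized before it reaches the page. This is a hedged Python sketch: `render_comment` is a hypothetical helper, and a real application would lean on its web framework's built-in escaping rather than hand-rolling it.

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in HTML."""
    return f"<p>{html.escape(user_input)}</p>"

# A classic stored-XSS payload should come out inert.
payload = "<script>alert('xss')</script>"
rendered = render_comment(payload)
assert "<script>" not in rendered      # the raw tag never reaches the page
assert "&lt;script&gt;" in rendered    # it was escaped, not silently dropped
```

A tester might never think to write this check, while the developer who built the comment feature knows exactly where untrusted input flows. That's the complementary coverage at work.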
How to avoid this bad strategy
Testing is not a one-team job. It's the responsibility of the organization as a whole. Make testing and quality a part of your team culture. Everyone from QA to development to product roles needs to take responsibility for the result. That's not to say a product manager or developer needs to spend hours doing dedicated testing. Still, they should acknowledge the role they play in the quality of the delivered product.
Bad Strategy #4: Not making the strategy your own
The Internet has unlimited amounts of information to learn anything. Thanks to blogs, newsletters, videos, and podcasts, we have access to insights into how other organizations implement their test strategies. Emulating how others build their products provides guidance and can help you see how they handle issues similar to the ones you're facing.
However, you might not realize that you're taking someone else's successful strategy and applying it without a second thought. There's a risk that you'll implement something that's not ideal for your situation. For example, suppose you see a large organization handling multiple established products with a sophisticated continuous integration system. It works for them, so you want to implement something similar for your single-product startup that launched three months ago. If you proceed with that strategy, you'll likely waste time and effort better spent elsewhere.
Even if you find someone in a situation similar to yours, adopting their successful testing strategy won't automatically mean it'll be successful for you. Every little difference in your project's size and scope affects how that strategy will work for you. Other factors, such as having the right resources at the right time - money, employees with the right skills, and so on - also come into play. No one's situation will be exactly like your own.
How to avoid this bad strategy
Take bits and pieces from other people's testing strategies, as long as you're not copying them blindly. Learn from them while taking into account what you have. Think about your current situation and where your project is heading, and use what you believe will work best. Some teams might worry that you'll put together a "Frankenstein" strategy that no one will understand. But if you choose the pieces that fit your team's needs, everyone will get on board quickly.
Bad Strategy #5: Splitting development and QA schedules
Another frequent mistake I've seen in product teams is establishing separate schedules for development and testing. For instance, an organization decides they want to add a new feature to their product, and they schedule one month to release it. The development team has three weeks to build it, and then the last week is dedicated to testing and bug fixing.
Because of the distinction between development and testing, QA often doesn't see the new work until the final week of the cycle. This separation usually leads to massive delays for the project. Testers will inevitably uncover bugs and other issues in the work done during the previous three weeks. Development gets the baton handed back to them after they believed they were through with their part. After a round of debugging and bug fixing, the project gets shuffled over to QA again, where the cycle is likely to repeat.
Developers feel the pressure to rush through the bug-fixing process, which often introduces even more bugs into the mix. Testers also feel the pressure of finding as many bugs as possible since it's the end of the release schedule, leading them to miss bugs or ignore them altogether. This constant back-and-forth at the end of the schedule kills projects and obliterates trust within these teams.
How to avoid this bad strategy
A good test strategy won't have a set schedule for testing baked into the calendar. Testing needs to be part of the daily workflow for both development and QA. Developers need to make their work available to testers as it's ready, so QA has the time and space to do their work instead of having it thrown at them all at once. Your strategy is at its most effective when testing happens continuously throughout the development cycle.
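One common way to make work available as it's ready is to ship in-progress features behind a feature flag, so testers can exercise them in the main build while customers still see the stable path. Here's a minimal sketch of the idea; `FLAGS`, `new_checkout`, and `legacy_checkout` are hypothetical names, not a specific library.

```python
# Hypothetical feature flag: off for customers, on in QA environments.
FLAGS = {"new_checkout": False}

def legacy_checkout(cart):
    # Stable path that customers currently see.
    return sum(cart)

def new_checkout(cart):
    # In-progress path that QA can test as soon as it's merged.
    return round(sum(cart), 2)

def checkout(cart, flags=FLAGS):
    if flags.get("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)

# Customers get the stable path by default; QA flips the flag to test
# the new path without waiting for a dedicated testing phase.
assert checkout([10, 20]) == 30
assert checkout([10, 20], {"new_checkout": True}) == 30
```

The design choice here is that testing starts the moment code merges, not when the calendar says so, which avoids the end-of-cycle handoff described above.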
To have a good chance of delivering a high-quality product to your customers, you must have a sound testing strategy in place from the start. It will help your team have the alignment and clarity they need throughout the project's life cycle and lets everyone know what you expect in terms of quality for what you're building.
On the other hand, a lousy testing strategy can negatively affect your project much more than no strategy at all. Unfortunately, lots of teams start with a bad strategy in place, and it puts them at a significant disadvantage compared to their competitors. These kinds of strategies are also a quick way to burn through the team's morale, since it'll feel like nothing is going right for them.
This article discusses a handful of bad strategies that I've seen affect real-world projects and ways to keep them out of your test strategy. Strategies like not focusing on the customer, relying primarily on one form of testing, and leaving all testing to QA will sink your project from the start. You can avoid these by focusing your testing on customer value, balancing your testing approaches, and making testing the entire team's responsibility.
No one begins a project intending to create a test strategy that won't serve their team. However, it's easy for these less-than-ideal practices to sneak in without the team noticing until they have a mess on their hands. Make sure you get off on the right foot with a reliable test strategy that will help your team, your project, and your customers.
What test strategies have worked the best for you and your team? Leave your tips in the comments section below!