Earlier this year, I worked with a small startup that had been struggling with quality issues in their SaaS application. The product was brittle and unstable, especially in production. New bugs constantly slipped through the cracks during their bi-weekly sprints, and regressions were frequent. When I joined the team as a consultant, the developers were spending weeks in pure firefighting mode, fixing problems and stabilizing performance instead of creating new functionality. As you might imagine, morale among the group was pretty low.
The application had a few automated tests, but test coverage was disturbingly low: unit tests covered less than 40% of the codebase, and they had no other forms of testing. Because of the constant bug-fixing sessions, their schedule often slipped, and testing was usually the first thing out the window. Being a small startup, they also didn't have a dedicated QA person on the team. Testing the application fell to a customer service representative who was adept at the task and happy to pitch in, since catching bugs before release meant fewer complaints from customers dealing with a buggy application.
As a result, their development cycles kept getting slower and slower. Work that used to take a few days now took weeks to complete because of the constant disruptions. They couldn't seem to dig themselves out of the hole and get the application to a state where they could focus on features instead of bugs. Eventually, they decided to enlist external help, which is where I came into the picture.
Focusing on the Technical Side of Testing: The Right Choice?
The organization hired me primarily to help the team refine their test automation processes. Since my brain is wired more toward the software development side of things, I began helping them chip away at the technical side of their problems. Part of our work was figuring out where to focus their automated testing efforts and how to keep testing work from getting pushed aside, since the project manager usually kicked those tasks out of the schedule.
As a startup, they needed tight deadlines and frequent releases to remain competitive in their space. However, they did this at the expense of quality, and as I've seen countless times, the price you pay for skipping quality now is much higher later, and the bill usually arrives sooner than you think. The technical work was a necessity given the state they were in. Although it was a slow start, I felt it would put them on the right track to reduce risk and increase velocity.
After a week, the team didn't seem to be making much progress. The developers were a brilliant group and the CEO was highly competent, so it wasn't that they lacked the ability to implement these processes well. The developers were even given some leeway to dedicate one person to the automated testing work instead of the never-ending bug-fixing cycle the rest of them went through each week. Yet something was off about how the work was going, and I couldn't put my finger on it.
Testing Problems Aren't Always on the Technical Side
Eventually, I realized I had made a common mistake many engineers make: I focused solely on the technical side of the work at the expense of the other parts of the software development lifecycle. I hadn't spent enough time talking with other stakeholders. In particular, I had barely spoken with the customer service rep who did most of the manual testing. I spent half a day shadowing him during the QA time he squeezed in between customer support duties to understand his process, and that's where the problem became crystal clear.
The customer service representative (I'll call him "the tester" from now on) would get called in by the project manager near the end of each sprint to help with QA. Sometimes he would receive instructions on what to test. Most of the time, though, the developers had nothing specific to validate, so he was asked to poke around and make sure nothing had regressed. The tester had a checklist of things to verify, and he went through them one by one on the staging server. Whenever he spotted something wrong, he'd log a bug in the team's issue-tracking system.
That process sounds like what you'd expect from a tester, and it's certainly better than nothing. However, as I dug deeper into the issue tracker, problems began to surface. The bug reports lacked detail, especially on how to reproduce the issue, and it felt like the tester quickly jotted down a note and threw it over the wall for development to deal with. The issue tracker was also littered with dozens of bug reports containing no comments or any indication that anyone on the development team had seen them.
Lack of Communication: The Quality Killer in Testing
The state of their issue tracker told me there was an evident lack of communication between testing and development. I held separate meetings with the tester and the tech lead on the development side to dig further into my findings.
First, the tester acknowledged the poorly written bug reports. He explained that he had initially spent a lot of time writing them, complete with screenshots, videos, and other supporting material. The developers would then either dismiss the report as not important enough to fix or ignore it outright without leaving any reason why. Frustrated, the tester decided to stop wasting time on thorough reports, assuming a developer would follow up with him if needed. I asked him if he had ever brought these frustrations up with the development team, and he sheepishly admitted that he hadn't.
Later, I sat with the application's tech lead to ask how the team handled the QA process at the end of their sprints. The tech lead launched into a long rant about how the tester constantly brought up bugs that weren't important to fix or submitted bug reports without explaining how to reproduce them. Since the team was slammed with work, they never responded because the back and forth would eat away at their time. I asked the same question I had asked the tester: had they ever discussed this with him? The answer was the same. The development team never had.
Unfortunately, this isn't an uncommon problem in the software development world. I've seen far too many organizations where silos between development and QA lead to quality issues. The problem wasn't evident in how the team interacted during their meetings, but the issue tracker told another story.
Getting Development and Testing on the Same Page
After discovering this communication barrier, I organized a mini-workshop with the tester and some of the developers to resolve the issue. I laid out what I had learned from my conversations with the tester and the tech lead. Both sides seemed surprised at the revelation, even though they were equally responsible for the lack of communication. My intent with this workshop was to help them develop a plan that made communication and accountability more open between development and QA.
Together, we came up with a short-term solution to start. The development team agreed to respond to every bug report, whether to ask for more information, say when they would resolve the bug, or leave a simple acknowledgment that they would triage it later. For his part, the tester would provide as much information as he thought the development team needed and would also hold the developers accountable for responding to issues in the tracker. The idea was to keep information flowing between both sides and not let anything get stuck in limbo as it had before.
It was rough initially, since the developers and the tester had deep trust issues after months of communication gaps. I made sure the plan they had established stayed in place amid their day-to-day work, and I even had to intervene a couple of times when one side or the other began passing blame or was slow to respond. After a few weeks, the situation started to improve, with more timely communication in the issue tracker and far less finger-pointing from either side.
Between the technical improvements, where the team increased their automated test coverage, and the communication improvements, where development and QA were no longer as siloed as before, the application's quality slowly began to improve. Over time, the team noticed fewer regressions in production, the issue tracker began to clear up, the tester fielded fewer customer complaints, and the developers finally felt they could focus on building new features.
Summary: Testing Isn't One-Dimensional
The moral of this story isn't how I served as a guide to help the team overcome their quality issues. The core message is that the technical aspects of software testing are only a fraction of the entire quality picture. Communication between development and testing is critical to any project's success. Without it, no amount of technical work, whether brilliant programming or rock-solid test automation, will make your product the best it can be.
Is this a perfect, one-size-fits-all solution you can adopt on your team? Not at all. Many organizations don't have the luxury of time or money to implement these processes, and maintaining new strategies without falling back into old habits takes significant work. But taking the time to bring development and testing closer together instead of keeping them in separate bubbles will always lead to higher-quality work.