Most software applications these days integrate with all kinds of external services. You'd be hard-pressed to find an app that doesn't connect to another system for payment processing, analytics, social media integration, and more. The ability to enhance your application with someone else's service can help you build your vision quicker and with less hassle.

It's an excellent time to build software as a developer because you don't have to do everything from scratch. For anything outside your project's unique business value, you can find a suitable service that does what you need and plug it in with little work involved. Unfortunately, testers don't have it as easy.

Each piece added to an application introduces another failure point that complicates even the most stable testing environments. Usually, you won't have any control over these systems beyond what their interfaces allow, which leaves you with minimal insight into what's happening on their end. At best, you'll have some status codes and a subset of data that may or may not give you the full story.

Automating tests that involve third-party integrations can become a nightmare. Anything you automate that touches these integrations is likely to run into problems when things don't work as expected, and with the little information these services provide to you as a consumer, you'll likely spend lots of time scratching your head, wondering what went wrong.

Not all is lost when dealing with automation and third-party services. You can work around many of these issues and limitations with strategies that fit your project's needs and goals.

Two strategies for automating tests with third-party integrations

Automated tests involving external services often boil down to two different strategies:

  • Live testing, where your tests exercise the integrations just as they would be used under normal, real-world conditions.
  • Mocking, where you intercept and simulate any responses these integrations typically provide.

These strategies aren't the only ones you can use, but they're the most common ones you'll encounter in test automation projects. Each method has its pros and cons, along with situations where it's best to use one over the other. Here's some information on where each strategy excels and where it can hinder your progress.

Live testing

When you create automated tests that access the application's real integrations, such as end-to-end tests, your tests benefit from going through a typical user's flow. Your automation checks that your system works well with every other service your application relies on to work correctly. That's especially important if your application doesn't function without the integration and has no fallback.

The main drawback of live testing is the reliability of those integrations. Unfortunately, no third-party service is bulletproof, with 100% uptime and reliability, especially when it runs outside of your infrastructure. Many issues can arise between your system and theirs, and you won't be able to do anything besides re-running the tests. It's one of the main reasons why end-to-end tests tend to fail sporadically.

Live testing can also significantly slow down your tests since each one has to make a request and wait for the response. In most applications nowadays, this means making HTTP requests through an API. Even with a speedy network connection, you'll have to deal with latency, and the number of service calls you make can add significant time to your test execution.
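As a minimal sketch of what this looks like in practice, here's a live test written with Python, pytest, and requests. The payments endpoint and payload are hypothetical stand-ins for whatever third-party API your application depends on.

```python
import requests

# Hypothetical third-party endpoint; substitute the real service your app integrates with.
PAYMENTS_API = "https://api.example-payments.com/v1/charges"


def test_create_charge_against_live_service():
    # This request goes over the network to the real service, so the test
    # inherits that service's latency, rate limits, and availability.
    response = requests.post(
        PAYMENTS_API,
        json={"amount": 1000, "currency": "usd"},
        timeout=10,  # fail fast rather than hang on a slow integration
    )

    assert response.status_code == 201
    assert "id" in response.json()
```

Every test like this adds at least one network round trip to the run, which is exactly where that latency starts to pile up.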

Mocking

Mocking handles most of live testing's shortcomings by producing the fast, reliable response you need for your test case. When you set up a mocked response for an external service, your test bypasses that interaction and returns a pre-determined response that your test can use as if it came from the service. Mocks are quick and always return the same results, so you won't have to worry about intermittent network issues or third-party service problems.

However, mocking can lead to a less trustworthy test suite if not used correctly. Since your mocks avoid hitting the external service, your tests won't have any clue whether the integration has changed. This lack of awareness can lead to false positives: your tests pass, but in reality, your application stops working under normal circumstances.

Imagine you have an integration that you expect to return a JSON object with a key called "data". You can create a mocked response in your test that always returns the same structure for the object. But if the third-party service changes its response so that it no longer contains the "data" key, your automation won't know about it. Your automated tests will continue to pass with the mock, but your application will blow up in real usage.
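Here's a minimal sketch of that scenario using Python's built-in unittest.mock alongside pytest. The profile endpoint and the get_user_profile helper are hypothetical, just to give the mock something to intercept.

```python
from unittest import mock

import requests

# Hypothetical endpoint and application code, used only for illustration.
PROFILE_API = "https://api.example-profiles.com/v1/users/42"


def get_user_profile():
    # Application code under test: it expects the response to contain a "data" key.
    response = requests.get(PROFILE_API, timeout=10)
    return response.json()["data"]


def test_get_user_profile_with_mocked_service():
    fake_response = mock.Mock()
    fake_response.json.return_value = {"data": {"id": 42, "name": "Dana"}}

    # Intercept the outgoing call and hand back the canned payload instead.
    with mock.patch("requests.get", return_value=fake_response):
        profile = get_user_profile()

    assert profile["name"] == "Dana"
    # Caveat: if the real service stops returning a "data" key tomorrow,
    # this test keeps passing because the mock never changes.
```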

Which strategy should you use?

By themselves, both strategies can work well as long as you're aware of and prepared to handle the issues each one brings. However, the ideal approach is to use both in your workflow. Depending on your existing project and infrastructure, you have a few ways to set this up.

One standard solution for using both live testing and mocking is to allow your automated tests to switch between them at any given time. For instance, you can add a command-line flag that activates or deactivates any mocks in your tests on execution, or you can use a mocking library that has this function built in. This technique requires some up-front setup, but once it's in place, it allows anyone to use whatever strategy they need.
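As a rough sketch of how that switch might look with pytest, the conftest.py below adds a hypothetical --use-mocks flag and a fixture that patches the outgoing HTTP call when the flag is set. The flag name, fixture name, and canned payload are all assumptions for the example.

```python
# conftest.py
from unittest import mock

import pytest


def pytest_addoption(parser):
    # Hypothetical flag name; pick whatever fits your project's conventions.
    parser.addoption(
        "--use-mocks",
        action="store_true",
        default=False,
        help="Replace third-party service calls with mocked responses.",
    )


@pytest.fixture
def third_party_service(request):
    if request.config.getoption("--use-mocks"):
        # Mocked mode: intercept requests.get and return a canned payload.
        fake_response = mock.Mock(status_code=200)
        fake_response.json.return_value = {"data": {"status": "ok"}}
        with mock.patch("requests.get", return_value=fake_response):
            yield
    else:
        # Live mode: leave the network calls alone so tests hit the real service.
        yield
```

With something like this in place, running plain pytest exercises the live integrations, while pytest --use-mocks swaps in the canned responses, so anyone on the team can pick the strategy they need for a given run.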

Other teams maintain different sets of tests - some with live testing, some with mocks - and run them at different points of their development and testing workflow. For example, one team I worked with ran the mocked tests whenever the automated test suite kicked off after new commits to the code repository during the workday, which kept the feedback loop speedy. At night, a scheduled job ran the test suite against the live integrations, and they also used the live testing strategy at the end of their sprint's testing cycle.
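One way to separate those sets is with pytest markers, sketched below. The live marker, endpoint, and tests are hypothetical, not the actual setup of the team described above.

```python
from unittest import mock

import pytest
import requests

# Hypothetical third-party endpoint used for illustration.
STATUS_API = "https://api.example-service.com/v1/status"


@pytest.mark.live  # register this custom marker in pytest.ini to silence warnings
def test_service_status_live():
    # Hits the real integration; reserved for the nightly scheduled job.
    response = requests.get(STATUS_API, timeout=10)
    assert response.status_code == 200


def test_service_status_mocked():
    # Runs on every commit; the canned response keeps the feedback loop fast.
    fake_response = mock.Mock(status_code=200)
    with mock.patch("requests.get", return_value=fake_response):
        assert requests.get(STATUS_API, timeout=10).status_code == 200
```

The commit-triggered job can then run pytest -m "not live" to stay on mocks during the day, while the nightly schedule runs pytest -m live against the real services.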

Using both live testing and mocking lets each strategy cover the other's weaknesses while allowing you to take advantage of their strengths. It's the best of both worlds.

Your priorities are more important than strategies

Regardless of the strategy you choose or how you set it up in your workflow, always keep in mind your priorities for testing in the project. Don't choose one approach over the other just because it sounds better, and don't give equal time to both by default. Know what's most important to test for your application and adjust your strategy to your needs.

Suppose you notice that a decent number of the bugs in your application come from issues caused by a third-party integration - unexpected data gets returned, responses are slow and flaky, and so on. In that case, you should focus more on live testing. This strategy can help you smoke out these problems and fix how the application under test handles these situations when they inevitably occur.

On the other hand, if your third-party integration is reliable, but your application often has problems processing the responses from those services, you don't need to execute your tests directly against the integration often. You can probably get away with a minimal amount of live testing during off-peak work hours, and use mocks to catch bugs as changes get introduced into the application.

Think about what problems you're trying to solve with your automated testing and third-party integrations. Which areas does QA find the most bugs? Where do your application's users report issues? Keeping track of this information and spotting patterns will help you formulate your automation strategy better.

Summary

In this day and age of plugging any number of services into your applications, it's getting more challenging to ensure these integrated pieces work together in harmony. As a tester, it can feel almost impossible to do any stable automated testing around these services.

You have different strategies for building tests involving third-party integrations, but they often boil down to two: testing the real integration as it is or mocking the service's responses. Each strategy has its strengths, but they also have disadvantages you need to keep in mind when using them in your test suite.

Doing live testing with third-party integrations exercises them as they work in the real world, but it can be slow and flaky. Mocking these integrations addresses those issues by sidestepping the integration, keeping tests fast and stable. However, it can lead to false positives, since your tests won't know if anything changes on the other side.

Ideally, you'll use a combination of both of these strategies as needed. You can set up an environment that makes it easy for testers to switch between live testing and mocking when running the test suite. Being able to choose either strategy when appropriate allows you to take advantage of the strengths each one provides while minimizing their weaknesses.

Don't give both strategies equal weight by default, though. It's up to your team's requirements and your project to determine how to set up your process. If the way your integrations behave is the primary source of bugs, focus on live testing. If your application's handling of these services' responses fails often, you can minimize live testing and use mocks more frequently.

Choosing what's most effective for your needs will ensure your automation continues to run smoothly, even when you have to deal with other systems that you can't control.

What approach do you and your team employ for testing applications that integrate with other services? Share your strategies with others in the comments section below!