One of the main issues with running end-to-end tests for your web application is that they're slow compared to other forms of testing. It comes with the territory since these tests run the full stack of your application. Other forms of automated testing, such as unit or functional tests, only verify a small segment of your code.

While most test suites should keep end-to-end tests at a minimum due to the time they take to process, these tests are valuable for testing what your users experience. Unfortunately, I've seen many teams abandon their end-to-end testing efforts because of delays in their development and deployment process. It's understandable, but they miss out on the tremendous benefits of running these robust tests.

Like all end-to-end test frameworks, TestCafe is not immune to these issues. If you're not careful, you'll find yourself with a suite of slow-running tests clogging up your pipeline. But it doesn't have to be that way. You can take a few steps to make your tests faster - significantly faster, in some cases. This article covers five ways that you can leverage TestCafe's functionality to keep your tests snappy.

1) Manage test setup and teardown efficiently

For many test scenarios, you need to set the application's initial state before executing the steps in the test. The main reason is to ensure that your tests run under the same state every time. Usually, it means setting up your data stores with the appropriate data to run your test consistently. Most testing tools provide a setup step to make it easy for testers to perform these actions before executing tests.

Depending on your application and how your tests are structured, you will most likely need to clean up after your test runs. End-to-end tests usually don't roll back any changes that occur in the application while your tests execute, so a test can leave stale data behind and cause subsequent tests to fail. Just like the setup step mentioned above, most testing tools also have a teardown step where you can deal with the cleanup.

The setup and teardown steps are useful for maintaining consistency between test runs, but they're also one of the main culprits for slowing down tests. It often comes in the form of performing long-running tasks in these steps, like setting up a giant test database with thousands of records or going through a multi-step login process.

TestCafe has a couple of ways to help you deal with this problem. It uses hook functions, or hooks, for the setup and teardown process. There are two different kinds of hooks: test hooks and fixture hooks.

Test hooks are functions that allow you to define setup and teardown steps before and after running each test. On the other hand, fixture hooks are similar in the sense that they're also functions for defining setup and teardown steps. However, the main difference is that fixture hooks only run once per fixture, while test hooks run before and after each test in the fixture.

Please note that TestCafe fixtures can use both types of hooks. It's an important distinction to make because it can have a massive performance impact if you misuse these functions.

Here's an overview of how both test and fixture hooks work with TestCafe fixtures:

fixture("My fixture")
  .page("https://example.com/")
  .before(async ctx => {
    // Setup code that only runs once, before executing any tests in the fixture.
  })
  .beforeEach(async t => {
    // Setup code that runs before every test in the fixture.
  })
  .after(async ctx => {
    // Teardown code that only runs once, after all tests in the fixture finish.
  })
  .afterEach(async t => {
    // Teardown code that runs after every test in the fixture.
  });

You can also define test hooks for individual tests:

test.before(async t => {
  // Setup code that runs before executing this specific test.
})("My Test", async t => {
  // Your test code goes here.
}).after(async t => {
  // Teardown code that runs after executing this specific test.
});

One of the most common mistakes I've seen teams make here is stuffing their setup and teardown code into the fixture's test hooks - the beforeEach and afterEach functions - instead of defining these steps only where necessary. Since code defined in test hooks runs before and after every test, unnecessary work slows down the entire test run. It also creates maintenance issues in the long run for testers who are not familiar with the application under test: they'll assume every test needs all those steps, so no one bothers to refactor the code.

To improve setup and teardown performance and avoid long-term maintainability issues, you need to determine the appropriate steps needed for each test in your fixtures. Once you have a clear idea of what your tests need, you can organize what goes in a fixture hook and what goes in a test hook.

For example, let's say a fixture contains ten test scenarios. A tester sets up the test data using test hooks, adding expensive database operations to the beforeEach and afterEach functions. However, someone notices that only five of the tests in the fixture need the data inserted in the test hook. A better solution is to split the fixture into two separate fixtures: the one containing the tests that need the data keeps the test hooks, while the other fixture skips them entirely. This step alone can cut the setup and teardown overhead for these ten tests in half.

Defining the appropriate amount of setup and teardown functionality takes a bit of work. But when it's done right, it can significantly speed up your tests and help with long-term maintainability by having your tests perform only what is necessary.

2) Keep a handle on long-running tasks and network connections

Most modern web applications do lots of work on a single page. For example, a page can pull in data from third-party services asynchronously on page load or when the user performs an action. Another example is rendering specific elements dynamically, sometimes with animations and special effects.

All of these actions take up precious time during testing. If your application performs multiple network requests or animations, your tests have to wait for them to complete. The issue is that you usually won't have total control over the performance of these actions. Delays can occur, from slow network connectivity between servers to a poor-performing JavaScript library running an animation.

Thankfully, TestCafe can also help you handle these issues. For HTTP requests from a website to an external service, TestCafe has built-in mocking functionality that intercepts these requests. Not only do these mocks prevent potential delays from the network request, but they also allow you to define exactly what you want to test. It's handy for ensuring your tests aren't affected by a service you can't manage. I previously covered how mocks can help you with your TestCafe tests.

For other issues, such as animations or dynamic elements rendering slowly, it's usually a sign that the application under test needs some optimization. If your tests take too long to execute because of a slow-running frontend, there's only so much your testing tool can do.

However, there are a few exciting ways around these issues with TestCafe, depending on your application. There's a nifty function provided by TestCafe that allows you to inject scripts into your application. If you have some particular JavaScript code that you can manipulate through a web browser, you can use this ability to tame low-performing frontend functionality.

An excellent use case where I've seen this function used successfully is with the jQuery JavaScript library. The jQuery library provides different page effects for adding animations to web pages. They look lovely, but if a specific element or page has too many effects going on, it can have a detrimental effect on page speed.

While optimizing the page is a better long-term solution - more on that later - you can manage this through TestCafe if necessary. jQuery provides a property that allows you to turn off all animations. The functionality of the site remains the same, but it removes all delays caused by slow animations, speeding up your tests along the way.

There are a couple of ways to inject scripts on a page during a TestCafe test run. For the example situation described above, you can inject the necessary JavaScript code to turn off jQuery animations during tests using the clientScripts function on a fixture:

fixture("Page with animations")
	.page("https://example.com/")
	.clientScripts({ "content": "jQuery.fx.off = true;" });

In some cases, you might want or need to call external services for testing real functionality. Maybe you need to keep animations or certain slow-rendering elements because it's an essential part of your application. But if you don't need them during testing, using mocks and script injection cuts down the time needed to perform these activities.

3) Run your tests in headless mode

When developing end-to-end tests for web apps, you need to see how each step runs to ensure everything's working correctly. One of the cool things with many end-to-end testing tools is the ability to execute tests and see the tool carry out the sequence automatically and in real-time on a browser. Whenever I show this to someone who doesn't know much about automation testing, they're always amazed.

However, running your tests in a browser has its price, since it launches an instance of the browser. Some browsers eat up plenty of CPU and memory from your machine - I'm looking at you, Google Chrome. Depending on your hardware, it can add significant time to your test run by taking away valuable resources from your development environment.

TestCafe can help you here with its excellent built-in functionality to run tests in headless mode for Google Chrome and Mozilla Firefox. Headless mode runs your tests using the same rendering engine as these browsers, but without the user interface. Tests start quicker in headless mode since there's no browser UI to load, and they'll run snappier since your system uses fewer resources.

You can run your tests in headless mode by appending the :headless parameter to the name of the browser in your TestCafe command, as shown in the following commands. Keep in mind this may not work on older operating systems containing outdated versions of these browsers.

# Run your test using Google Chrome in headless mode.
npx testcafe chrome:headless tests/*.js
# Run your test using Mozilla Firefox in headless mode.
npx testcafe firefox:headless tests/*.js

Note that running your tests in headless mode won't dramatically speed up your tests; you'll see marginal gains in most situations. But if you're developing your tests on a low-end machine, every little bit helps to improve your testing workflow. Even on higher-end systems, headless mode can give a decent boost. I ran a few tests on my development system and saw ~10% faster test runs on average. Those time savings can add up on larger test suites.

When you're developing your tests, you should still run them in a regular, non-headless browser so you can adequately debug them. But once your tests are running as you expect, switch to headless mode to free up processing power that your tests can use instead.

4) Run your tests concurrently

By default, most testing tools - TestCafe included - run your tests sequentially. It goes through each test scenario, one by one, proceeding to the next when the test ends in success or failure. If you have dozens of end-to-end tests, running your test suite serially like this takes a while to complete.

Fortunately, TestCafe comes to the rescue again. TestCafe allows you to run more than one test at a time. When you turn on concurrent test execution mode, TestCafe opens multiple instances of your browser simultaneously, and each instance runs a different test. This option can speed up your total test execution times quite a bit.

For instance, if you want to run TestCafe tests using Mozilla Firefox, you would typically execute your test suite with the following command in your terminal:

npx testcafe firefox tests/*.js

The above command opens one instance of Firefox and runs your tests in order, one at a time.

To use the concurrent test execution mode, add the -c <num> or --concurrency <num> flag to your TestCafe command, where <num> is the number of browsers you want TestCafe to open. For example, if you want to run three tests at a time, you can execute your test suite with the following command:

npx testcafe -c 3 firefox tests/*.js

Instead of running one test at a time, TestCafe opens three instances of Firefox, executing three tests simultaneously. When a test finishes on one of the browsers, TestCafe knows the browser is available and fires up another test, so you'll always have tests running on all browsers.

Running multiple tests in parallel shortens the total execution time of your test suite. However, you can reach a point where adding more concurrent tests slows down the total execution time instead. If your application is heavy or your system is not powerful enough, your environment might struggle to keep up with TestCafe. In my experience, running more than three concurrent tests in TestCafe often makes the test suite slower than running two or three simultaneously. Experiment with different numbers to find the "magic number" for your setup.
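One simple way to experiment, sketched below: time the same suite at several concurrency levels and compare the results. The test path is illustrative; substitute your own.

```shell
# Time the suite at different concurrency levels to find your "magic number".
for n in 1 2 3 4 5; do
  echo "Concurrency level: $n"
  time npx testcafe -c "$n" firefox:headless tests/*.js
done
```
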

Also, note that if your tests require a specific order to run successfully, running tests concurrently won't work since you can't control the order of your tests this way. As a general rule, I would recommend having independent end-to-end tests that don't rely on other tests to pass. But if that isn't an option for any reason, you can't take advantage of TestCafe's concurrent test execution mode.

5) Offload your tests to different hardware

End-to-end tests are bulky by nature, requiring plenty of computing power for the test framework and the browser engine where the tests run. Also, if you're running the application under test in the same environment where you're executing your test suite, it drains further resources from your system.

Sometimes, our hardware simply isn't enough to run our end-to-end tests quickly. If you have a low-end development machine and can't upgrade at the moment, the tweaks described in this article might not feel like enough, and waiting for your test suite to finish can still be painful.

If this is the case, you have an option with TestCafe. You can offload your tests and run them on someone else's hardware. There are services such as BrowserStack and Sauce Labs that allow you to run automated end-to-end tests on their servers. The main benefit of these services is to run your tests in different environments and browsers that you may not have available. But they're still useful for processing your test suite if you have low-spec equipment.

The TestCafe team provides official plugins for integrating TestCafe with BrowserStack and Sauce Labs. These plugins make the process of running your tests on these services effortless. If you want to learn more, I previously covered how to run your TestCafe test suite using BrowserStack.

While offloading your tests to a remote server helps keep usage of your system at a minimum, keep in mind that you might not experience any improvements using these services. Most of these cloud services run your tests in virtualized environments, which often aren't as quick as real hardware. You also might experience other issues like network congestion or a backup in their queue at the time you trigger your tests.

If you have the means to upgrade your computer or work at a company that can provide you with better hardware, that's preferable to relying on a cloud service. To be an effective automation tester, you need to keep your feedback loops short, and better hardware is the way to go for that. But if you can't get better equipment, consider this option as a temporary solution.

Other ways to speed up your test suite

Besides taking advantage of TestCafe's functionality to keep your tests running as swiftly as possible, there are a few more general guidelines you can follow to speed up your tests.

Don't fall into the temptation of automating everything

When testers learn the power of automated end-to-end testing, they tend to want to automate everything they can. In theory, automating their testing duties frees them up for other valuable work. But in practice, it rarely works that way.

More tests mean slower execution time and more time spent in maintaining those tests. Focus on high-risk, critical, and time-consuming areas for automation. Leave the rest to other forms of testing.

Don't run too many steps and assertions in a single test

One common mistake I've seen in end-to-end tests is scenarios that contain too many steps and assertions. These tests take too much time to run successfully and are problematic to maintain.

One of the worst offenders I saw at a previous job was a single test that took over two minutes to complete. Besides its slowness, the other problem was that no one wanted to touch the test when we needed to, like when the application under test changed. Keep your tests short with fewer steps and assertions when possible.

Get together with the team to talk about application performance

If you already spent a significant amount of time tweaking your tests and they still take forever to execute, maybe the problem is the application itself. It's not uncommon for applications to have various performance issues that surface during testing, like memory leaks or too many database connections occurring in certain areas.

Talk with your team, especially with developers, and discuss these issues in detail. Come up with a plan to address them in upcoming iterations. If necessary, implement performance testing so you can back up your claims with evidence. As long as the team communicates and has the space to work on these problems, you'll knock out the slowness in no time.

Summary

No one likes a slow automated test suite. It's a sure-fire way to have your team not run your tests or maintain them in the future, negating the benefits of having these tests in the first place. Part of automation testing is keeping a smooth-running test suite, and that includes the performance of the tests. It's up to us as testers and developers to ensure our tests run quickly.

TestCafe has lots of built-in functionality to help us structure our code correctly and avoid unnecessary waste of time and resources. Tactics like setting up your initial test state efficiently, intercepting slow network requests, and running your tests concurrently keep your test run times low. There are also plugins to help you push your tests to the cloud if your hardware isn't the best.

Even if you don't use TestCafe as your testing tool of choice, there are a few additional paths you can take to speed up your tests. Some examples are avoiding the temptation to automate everything you can and keeping your tests free of excessive steps and assertions.

A slow test suite can get very expensive in the long run, and your application's quality will undoubtedly suffer if you don't tame your testing times. It's well worth the effort to speed up your tests and keep them that way, so take the time to do it as soon as you can.

What other tips for speeding up your tests can you share with the testing community? Leave a comment below, and I'll add them to the article!