When discussing load testing, two immediate thoughts come to mind for most developers and testers: validating application performance and putting systems under immense pressure. Testing for both of these use cases is vital for any modern software development workflow. Your company's customers want your services to work quickly and reliably at all times. You don't want your apps or websites to slow down to a crawl or grind to a complete halt when lots of traffic comes your way.

However, if you've established a load testing practice in your organization and have only been using it for checking how fast your application behaves and making sure it can sustain high traffic, you might not be taking full advantage of what it can offer. Load testing can go beyond measuring basic performance metrics or determining the number of concurrent requests your site can handle without collapsing.

In this article, we'll discuss three uses for load testing that you may not have thought about previously. This kind of testing usually focuses on the application side, but it can also provide plenty of benefits to your application's infrastructure. Many testers and developers don't handle these responsibilities, which usually belong to site reliability engineers, but plenty of small teams and startups don't have anyone in charge of these tasks. It's worth knowing how load testing can help keep your applications up and running smoothly, so you don't have to worry about your site or app going down due to traffic spikes.

1) Use load testing to ensure your infrastructure configuration works as expected

Cloud platforms like AWS, Google Cloud Platform, and Microsoft Azure have drastically improved how we handle our application's infrastructure. Gone are the days when needing a new server to scale your software services meant calling your service provider and waiting weeks for someone to set up a machine in a data center. Nowadays, you can scale your architecture almost infinitely with the click of a button and have it done automatically within minutes.

However, this convenience has also introduced complexity when it comes to setting up your infrastructure correctly. Most cloud platforms have countless options to automate the work of adding and scaling servers, along with balancing the load between all of your resources. While it's helpful to have a "set it and forget it" approach to handling the systems that power your applications, it's also easy to misconfigure a setting that's difficult to notice until much later, usually at the cost of poorly running services or high usage charges.

I've encountered difficulties with this approach at an organization that used AWS to power its web application's backend. Part of the infrastructure consisted of an auto-scaling group with EC2 instances configured to scale up or down automatically, depending on CPU usage. Unfortunately, we accidentally configured the auto-scaling setup so that scale-outs (automatically spinning up more EC2 servers) occurred too quickly, leading to excess resources and wasted money for the company.

Load testing can help you spot incorrect configuration settings that lie hidden deep in your infrastructure's environment by confirming that everything works as intended. For instance, you can craft a load test that sends a sudden spike of traffic to verify that your load balancers or DDoS protection services handle it appropriately. In my example above, a load test run when the team set up the auto-scaling group could have uncovered the issue and saved the company a lot of cash. These kinds of tests ensure that your systems work as you expect and save you a lot of time, money, and headaches.
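If you want to experiment with this idea before reaching for a dedicated load testing tool, a rough spike test doesn't need much. Here's a minimal sketch using only Python's standard library; the URL, burst size, and timeout are placeholders, and you'd point it at a staging environment rather than production.

```python
# Minimal spike-test sketch: fire a sudden burst of requests at an endpoint
# and summarize how it responds. The URL, burst size, and timeout below are
# placeholders -- aim this at a staging environment.
import time
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

TARGET_URL = "https://staging.example.com/health"  # hypothetical endpoint
BURST_SIZE = 500        # number of requests sent at roughly the same time
TIMEOUT_SECONDS = 10

def hit_endpoint(_):
    """Send one request and return (status, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urlopen(TARGET_URL, timeout=TIMEOUT_SECONDS) as response:
            return response.status, time.monotonic() - start
    except HTTPError as err:
        return err.code, time.monotonic() - start
    except URLError:
        return "error", time.monotonic() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=BURST_SIZE) as pool:
        results = list(pool.map(hit_endpoint, range(BURST_SIZE)))

    statuses = Counter(status for status, _ in results)
    latencies = sorted(elapsed for _, elapsed in results)
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"Status codes: {dict(statuses)}")
    print(f"p95 latency: {p95:.2f}s")
```

Watching the status codes and latency while your auto-scaling group reacts to the burst tells you quickly whether your scaling rules and load balancers behave the way you assumed they would.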

2) Use load testing to test the resiliency of your hardware

Even with most modern software companies moving to cloud computing for its ease of use, bare-metal servers and computers still have their place in the market. Some organizations prefer or need to use dedicated hardware for their infrastructure for different reasons. Sometimes it's for performance, like a service or application that needs direct access to the underlying operating system or GPU. Other organizations have strict privacy and auditing requirements that force them to tightly control physical access to their machines.

One of the drawbacks of managing dedicated servers is that the company is responsible for ensuring the system functions correctly at the hardware level. Most server and PC manufacturers run tools to stress-test primary components like the CPU and memory. These tools work well at detecting potential problems by throwing massive amounts of computation at these parts. However, they typically don't run real-world scenarios, especially ones reflecting the hardware's specific use case. The other issue is that often no one bothers to perform additional load or stress testing on the operating system or applications running on the system, so bugs and other issues go undiscovered until the system gets put to use.

A situation that comes to mind is when an organization I worked with set up an in-house server to manage a non-critical web application. The server would completely freeze every couple of days. We tested the hardware successfully, but we still got the occasional lock-up that required a manual restart. Just before scrapping the whole system and buying new hardware, we realized the problem was the network card. It had a rare issue where a sudden burst of heavy network traffic would cause the card to lock up and bring the entire operating system down with it. Once we installed a different NIC, the problem disappeared altogether.

If we had done some load testing after purchasing the hardware and setting up our application, we would have discovered this problem almost immediately. Load tests can also expose other hardware-related issues that the typical stress-testing tools won't uncover. Possible trouble spots include an application performing so many disk writes that it significantly shortens the lifespan of a drive, or components overheating because of a poorly performing application. Although these situations are uncommon nowadays, they can still occur, and load testing helps reveal them early.
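As a rough illustration, here's a minimal soak-test sketch in Python that keeps a steady stream of requests going for hours and logs every failure with a timestamp. The URL, request rate, and duration are placeholders, and a dedicated tool will give you far better reporting for long runs, but even a script this simple would have exposed our flaky network card.

```python
# Minimal soak-test sketch: keep a steady trickle of traffic flowing for hours
# and log every failure with a timestamp. Long, boring runs like this are what
# surface hardware-level problems (flaky NICs, overheating, disks filling up)
# that a short burst never will. URL, rate, and duration are placeholders.
import time
from datetime import datetime
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

TARGET_URL = "https://staging.example.com/"  # hypothetical endpoint
REQUESTS_PER_SECOND = 5
DURATION_SECONDS = 8 * 60 * 60  # eight hours

def run_soak_test():
    failures = 0
    deadline = time.monotonic() + DURATION_SECONDS
    while time.monotonic() < deadline:
        try:
            with urlopen(TARGET_URL, timeout=10) as response:
                response.read()
        except (HTTPError, URLError, OSError) as err:
            failures += 1
            print(f"{datetime.now().isoformat()} request failed: {err}")
        time.sleep(1 / REQUESTS_PER_SECOND)
    print(f"Soak test finished with {failures} failed requests.")

if __name__ == "__main__":
    run_soak_test()
```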

3) Use load testing to verify your serverless applications

Serverless computing is all the rage in software development these days, and with good reason. In many situations, a serverless application lets teams quickly develop flexible and scalable solutions at a fraction of the cost of a typical cloud service platform. Services like AWS Lambda, Google Cloud Functions, and Cloudflare Workers let developers deploy and run their application code without managing infrastructure or resources.

Despite the attractive offering, serverless architecture has its own unique set of challenges. A common mistake for teams moving to serverless architectures is the belief that these services will automatically handle any amount of requests thrown their way. You may think that load testing your serverless infrastructure is pointless because each function invocation runs independently and scales automatically. But it turns out that load testing is essential for serverless architecture because you still need to consider plenty of options when setting up and invoking your functions.

While one of the main selling points of serverless architecture is scalability, that doesn't mean your code has unlimited resources. For example, AWS Lambda provides 1,000 concurrent function executions across all of your functions in a region by default, and Google Cloud Functions has a maximum request size of 10MB. Serverless functions also require other considerations, like setting how much memory they can use and how long they can run before timing out. It's not uncommon to run into these limitations quickly, especially as you ramp up your usage of this functionality in your organization. If you're unaware of the limits your service imposes on your serverless functionality, you'll find yourself scrambling to handle these issues when they eventually occur.
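To get a feel for where those limits sit, you can probe a serverless endpoint with increasing concurrency and watch for throttled responses (typically HTTP 429). The sketch below is a rough Python illustration; the endpoint URL and ramp steps are placeholders for whatever API your functions sit behind.

```python
# Minimal ramp sketch for a serverless endpoint: increase concurrency step by
# step and count throttled responses (HTTP 429) to see roughly where the
# platform's limits kick in. The endpoint URL and ramp steps are placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

FUNCTION_URL = "https://api.staging.example.com/my-function"  # hypothetical
RAMP_STEPS = [50, 100, 250, 500, 1000]  # concurrent requests per step

def invoke(_):
    """Invoke the function once and return the HTTP status code."""
    try:
        with urlopen(FUNCTION_URL, timeout=30) as response:
            return response.status
    except HTTPError as err:
        return err.code
    except URLError:
        return None  # network-level failure

if __name__ == "__main__":
    for concurrency in RAMP_STEPS:
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            statuses = list(pool.map(invoke, range(concurrency)))
        throttled = statuses.count(429)
        errors = sum(1 for s in statuses if s is None or s >= 500)
        print(f"{concurrency} concurrent: {throttled} throttled, {errors} errors")
```

For anything beyond a quick probe, a purpose-built tool such as Artillery (mentioned below) handles ramp profiles and reporting far better than a hand-rolled script.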

As someone who's been using serverless architecture more these days, I've run into plenty of problems as I learn how each service works. A recent issue I had was with an AWS Lambda function I created that would sporadically time out. The root cause was that I had left the function's settings at their defaults, naively thinking my code would run fine. The function used the default memory allocation of 128MB, which wasn't enough for it to run efficiently and finish before the default timeout of three seconds. Raising both settings solved the issue.
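If you'd rather script that kind of change than click through the console, something like the following boto3 sketch can raise both settings. The function name and values are placeholders, and in practice you'd bake the change into your infrastructure-as-code templates rather than run a one-off script.

```python
# Minimal sketch of raising a Lambda function's memory and timeout with boto3.
# The function name and values are placeholders; size them from real
# measurements of your own function, ideally gathered under load.
import boto3

FUNCTION_NAME = "my-sporadically-timing-out-function"  # hypothetical

lambda_client = boto3.client("lambda")

current = lambda_client.get_function_configuration(FunctionName=FUNCTION_NAME)
print(f"Before: {current['MemorySize']}MB memory, {current['Timeout']}s timeout")

# Bump the defaults (128MB / 3s) to values the function actually needs.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=512,   # placeholder value
    Timeout=10,       # placeholder value
)
```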

Load testing can help you prevent trouble by validating your serverless configuration settings and underlying services, from memory restrictions to concurrency limits and throttling. To learn more about the importance of load testing your serverless architecture, check out the following video by serverless advocate Lee James Gilmore, titled Serverless Load Testing with Artillery. The video provides an excellent explanation of why load testing serverless functionality is essential, along with a hands-on example using the Artillery load testing tool to test an AWS serverless application.

Summary

Load testing is a powerful tool for developers, testers, and anyone else responsible for your application's resiliency. Applications are becoming highly complex, with more moving parts, and we need to do our best to deliver a high-quality solution from top to bottom. Teams usually employ load testing to obtain performance metrics or stress-test an application's capacity, but as this article discusses, it can assist with a lot more.

Nowadays, most cloud-based systems consist of multiple parts that need proper configuration to avoid creating bottlenecks as customers use your applications. If you run your own hardware, you also need to make sure all the physical parts inside your system work correctly and won't cause any issues. Finally, as many teams move towards serverless functionality to avoid the hassles of managing servers and hardware, you still need to verify your architecture's settings and test against any limitations your platform imposes.

You don't have to limit yourself to using load tests only for checking your application code. Modern systems require more than a service that responds quickly or withstands a lot of traffic; they also need a stable, resilient foundation to perform at their best. Whether it's ensuring that your systems work as intended, that your hardware is dependable, or that your serverless settings won't create problems in the future, load testing will help you smoke out countless potential issues across your entire infrastructure.

In what other ways can you use load testing to make your entire system — not just your application — more robust? Please share your valuable advice with others in the comments section below!