For years, Heroku has been one of the best ways to deploy your web applications. It completely removes the friction of managing infrastructure, which was a huge pain in the days before cloud computing and easy-to-deploy tools were widely adopted. Instead of struggling with server setup and configuration, developers could focus on building new features and leave the "dirty work" to Heroku. Even though the benefits came at a cost, we were happy to pay for it since there weren't many simpler alternatives at the time. However, things have changed with Heroku's value proposition.
Since Salesforce acquired Heroku over 15 years ago, the platform has stagnated. You could argue that they don't need to innovate as much as they once did—as the saying goes, if it ain't broke, don't fix it. But beyond the slow adoption of new functionality, it's getting increasingly difficult to choose Heroku as your application's home. The removal of the free tier in 2022 pushed away small projects and raised the barrier to entry. Pricing has also continued to climb for those who stayed, especially when there's a need to scale. Moreover, there's very little flexibility in the type of servers you can use, where they're located, and their pricing. For those of us who used Heroku for a long time, the value we once got out of it isn't there anymore.
Fortunately, there are better alternatives today. We have plenty of choices when it comes to hosting web applications: cloud hosting providers that fit any project type, team size, and organization budget, many of which can have your app up and running in minutes. While these services are typically not as hands-off as Heroku, they make an attractive replacement for those who want more flexibility and lower monthly costs.
The Paradox of Choice
Whenever I speak with clients who are looking to migrate off Heroku (typically due to increased costs), one of their initial hurdles is knowing which cloud service provider to choose. With so many options on the market, it's a challenge to know which service is the right one for your needs. There are hundreds of candidates out there, from major players like Amazon Web Services and Digital Ocean to lesser-known providers like RackNerd and Scaleway. Considering the financial and time investment needed to move from one provider to another, it's a choice that companies can't take lightly.
So, how do you know which service is the right one for you? I was curious about this myself, so I set out to do some exploration and test some of the more well-known cloud service providers today. My intent behind this experiment was to do a baseline test of a real-world application on Heroku and compare the same application running on other servers. Since cost is one of the primary factors of teams wanting to move away from Heroku, the focus of the results will be more on price than raw performance.
Testing Out the Major Players
For my tests, I chose seven cloud service providers to use for my experiment:
- Hetzner Cloud
- Digital Ocean
- Vultr
- Linode
- Amazon Web Services (EC2)
- Google Cloud (Compute Engine)
- Microsoft Azure (Virtual Machines)
These providers were selected since they're the most well-known places to spin up the infrastructure needed to run web applications nowadays. They also offer tons of options around server types, networking, and locations, allowing me to choose similar specs for the instances used in the tests to maintain consistency and keep the comparisons between each other as close as possible.
Application and infrastructure
On each of these services, I set up a Ruby on Rails application that I built called Airport Gap. It's a standard Rails application that uses a PostgreSQL database and Redis for handling caching and asynchronous jobs, which is a fairly common setup for a web application nowadays. Caching is disabled for the tests since I want to check the performance of the app itself through a few user flows.
To set up the application's infrastructure, I used Terraform to spin up the following servers on each provider to mimic how Heroku uses separate instances of each:
- 1 server for running the web application
- 1 server for running a worker for asynchronous jobs
- 1 server running PostgreSQL
- 1 server running Redis
In addition, each of these services allows me to set up firewalls and private networks for added security. I set up firewalls to only allow public traffic as needed—port 22 to all servers for deployments and server management through SSH, and ports 80 and 443 to access the application on the web server. I also set up a private network for the web and worker servers to access the PostgreSQL and Redis servers on a non-public subnet and avoid exposing these services online.
For location, all of these services have data centers in or near Frankfurt, Germany, so I chose this region to deploy the infrastructure mentioned above for consistency and easier testing. The only outlier is Hetzner Cloud, which doesn't have a data center in Frankfurt but does have a peering point from Frankfurt to their nearest location in Nuremberg, so the impact in my testing should be minimal.
Heroku baseline setup
In addition to spinning up the Airport Gap application on the mentioned cloud service providers, I also set up the application on Heroku using a baseline setup that reflects a commonly used configuration that I've seen for Rails applications:
- `standard-2x` dynos for the web and worker servers (current price: $50/month per dyno)
- `postgresql:standard-0` add-on for the application's PostgreSQL database (current price: $50/month)
- `heroku-redis:mini` add-on for handling asynchronous jobs (current price: $3/month)
The application was set up using the Cedar stack and deployed in Dublin, Ireland. Heroku doesn't provide many options on where to deploy your application unless you're using Private Spaces, which further ramps up the cost, and I have personally never seen them in use for these types of web applications.
Server instance types used
For testing purposes, I used two different kinds of server types for each cloud service provider. The first type focuses more on "entry-level" instance types. While they're considered to be on the lower tier of each provider's range of services, they can comfortably run a Ruby on Rails application with low or medium traffic, depending on how well they're built.
These instances are typically running on hosts with shared resources, which can create some variations due to "noisy neighbors". Some services like EC2 and Azure VMs offer burstable instances, which provide a baseline level of CPU performance and accumulate credits while the server is idle, allowing the CPU to burst above that baseline under load. These servers provide a cost-effective solution but can produce inconsistent performance.
I used servers with 2 vCPUs and 4 GB of memory as my target, which is a step above Heroku's standard-2x dynos (1 GB of RAM, although we can't compare CPU allocation as they're fundamentally different). Rails apps can run on servers with less memory, and the cloud providers I'm testing out have cheaper instances than the ones I chose for this experiment, but it's nice to be able to have more headroom even at the lower tiers.
| Service | Server Type | vCPUs | RAM (GB) |
|---|---|---|---|
| Hetzner Cloud | cpx22 | 2 | 4 |
| Digital Ocean | s-2vcpu-4gb | 2 | 4 |
| Vultr | vc2-2c-4gb | 2 | 4 |
| Linode | g6-standard-2 | 2 | 4 |
| AWS EC2 | t3.medium | 2 | 4 |
| Google Cloud Compute Engine | e2-medium | 2 | 4 |
| Microsoft Azure VM | Standard_B2s | 2 | 4 |
The second group of servers I used for testing can be considered a step above the entry-level instances I chose. They're more powerful, with a target of 4 vCPUs and 16 GB of memory for most of the cloud service providers to see how the Rails application reacts with the added horsepower. More importantly, these are dedicated, non-burstable instances, which means we'll have a guaranteed base level of CPU performance for each without worrying about the variations that exist in the cheaper servers with shared or burstable resources.
Some services like Digital Ocean and Vultr didn't have dedicated resources matching the vCPU/RAM targets I set, so I opted for the closest one available without going overboard on costs. It won't give a full "apples to apples" comparison between each service, but it will be interesting to see how these small changes affect the application.
| Service | Server Type | vCPUs | RAM (GB) |
|---|---|---|---|
| Hetzner Cloud | ccx23 | 4 | 16 |
| Digital Ocean | g-2vcpu-8gb | 2 | 8 |
| Vultr | vhp-8c-16gb-intel | 8 | 16 |
| Linode | g8-dedicated-8-4 | 4 | 8 |
| AWS EC2 | m6i.xlarge | 4 | 16 |
| GCP | n2-standard-4 | 4 | 16 |
| Azure | Standard_D4ds_v6 | 4 | 16 |
Server configuration
I wanted to keep things as consistent as possible throughout each provider. I used Ubuntu 24.04 as the operating system for all the providers, using their available images for spinning up the VPS. After spinning up each server instance, I used Ansible to run a playbook that handled some initial server setup for all servers equally:
- Update all installed packages and the kernel to the latest available version.
- Set up a `deploy` user to use for application deployments, since each provider sets up the primary user differently.
- Install Docker for deployments.
- Update the server's `sshd` configuration to allow TCP port forwarding (`AllowTcpForwarding`) and allow listening on those ports publicly (`GatewayPorts`).
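As a rough sketch, the steps above could be expressed as an Ansible playbook like the following. This is illustrative only — the task names, the `docker.io` package choice, and the exact `sshd_config` handling are assumptions, not the exact playbook used:

```yaml
# playbook.yml — illustrative sketch of the initial server setup
- hosts: all
  become: true
  tasks:
    - name: Update all installed packages and the kernel
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Install Docker for deployments
      ansible.builtin.apt:
        name: docker.io
        state: present

    - name: Create the deploy user for application deployments
      ansible.builtin.user:
        name: deploy
        groups: docker
        shell: /bin/bash

    - name: Allow TCP forwarding and public gateway ports in sshd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?{{ item.key }}"
        line: "{{ item.key }} {{ item.value }}"
      loop:
        - { key: AllowTcpForwarding, value: "yes" }
        - { key: GatewayPorts, value: "yes" }
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: ssh
        state: restarted
```

Running the same playbook against every provider's inventory is what keeps the servers equivalent regardless of each provider's base image differences.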
Application deployment and tuning
Deployment of the Ruby on Rails application was handled by Kamal, which is my preferred deployment tool for web applications. Kamal handles bundling up your application using Docker and sets it up on the server through SSH, allowing me to use the same method to deploy the app on all providers with minimal changes (primarily the IP addresses/hostnames).
I used the same deployment configuration for all servers used in this test, which deploys the Rails app to the web and worker servers and PostgreSQL and Redis as accessories to separate servers. In addition, I did some performance tuning on both the Rails application and the PostgreSQL database via the Kamal configuration on deploy.
For the Rails app on both Heroku and the cloud providers, I set the environment variables WEB_CONCURRENCY (for the number of Puma workers to spawn) and RAILS_MAX_THREADS (for how many threads each Puma worker can handle simultaneously). The values set for the servers are higher than the defaults so we can use the additional resources available, but not too high that it would overwhelm the servers. The numbers used here can be tweaked further depending on the environment, but for this experiment I wanted to keep them manageable:
- Entry-level servers: `WEB_CONCURRENCY=6`, `RAILS_MAX_THREADS=15`
- Higher-tier servers: `WEB_CONCURRENCY=8`, `RAILS_MAX_THREADS=15`
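In a Kamal deploy config, these environment variables can be set under `env`. The excerpt below is a sketch following Kamal's `config/deploy.yml` conventions — the service name, image, placeholder IPs, and the Sidekiq worker command are illustrative assumptions, not the exact configuration used:

```yaml
# config/deploy.yml (excerpt) — illustrative, entry-level values shown
service: airport-gap
image: user/airport-gap

servers:
  web:
    hosts:
      - 10.0.0.1          # placeholder IP for the web server
  worker:
    hosts:
      - 10.0.0.2          # placeholder IP for the worker server
    cmd: bundle exec sidekiq   # assumes Sidekiq for async jobs

env:
  clear:
    WEB_CONCURRENCY: 6
    RAILS_MAX_THREADS: 15
```

Because only the hosts change between providers, the same file can be reused across all seven services with minimal edits.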
For the PostgreSQL database, I overrode the default postgres command to pass a few command-line flags that better utilize the available memory on the server. One note here is that we can't tweak any settings from the managed Heroku PostgreSQL add-on, so these flags were only set up when deploying to the other cloud service providers. I'm not an expert on PostgreSQL performance tuning and these values can be tweaked with more testing, but they were sufficient for this experiment.
- `shared_buffers`: PostgreSQL's cache for data pages, set to 25% of the server's RAM as a good starting point.
- `effective_cache_size`: Available memory for caching, set to 75% of the server's RAM.
- `maintenance_work_mem`: Speeds up maintenance operations, set to 256 MB (entry-level) or 512 MB (higher-tier).
- `work_mem`: Memory to use per query operation, kept conservative at 16 MB to prevent going overboard during performance testing spikes.
- `max_connections`: Max concurrent connections, set conservatively based on web/worker usage (100 for entry-level, 150 for higher-tier).
- `wal_buffers`: Buffer for PostgreSQL's write-ahead log, explicitly set to 16 MB.
- `checkpoint_completion_target`: Smooths out the I/O load for checkpoint writes, set to 90% (0.9) of the checkpoint interval.
- `checkpoint_timeout`: Time between PostgreSQL checkpoints, set to 15 minutes to reduce I/O overhead.
- `random_page_cost`: Cost estimate for random disk reads, set to 1.1, significantly lower than the default since the servers have relatively fast SSDs.
- `effective_io_concurrency`: Number of concurrent I/O operations for storage, set to 200 to take advantage of the servers' faster SSD storage.
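These flags can be passed to PostgreSQL through the accessory's command in the Kamal config. The sketch below shows how that might look for an entry-level 4 GB server (so 25% of RAM is 1 GB and 75% is 3 GB); the image tag and host IP are placeholders, not the exact values used:

```yaml
# config/deploy.yml (excerpt) — PostgreSQL accessory with tuning flags
accessories:
  db:
    image: postgres:16      # illustrative image tag
    host: 10.0.0.3          # placeholder private-network IP
    port: 5432
    cmd: >
      postgres
      -c shared_buffers=1GB
      -c effective_cache_size=3GB
      -c maintenance_work_mem=256MB
      -c work_mem=16MB
      -c max_connections=100
      -c wal_buffers=16MB
      -c checkpoint_completion_target=0.9
      -c checkpoint_timeout=15min
      -c random_page_cost=1.1
      -c effective_io_concurrency=200
```

Overriding the container command this way keeps the tuning in version control and applies identically on every provider at deploy time.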
Performance tests
I wanted to test how the different cloud providers perform through typical workflows in a web application like Airport Gap, using the excellent Artillery load testing tool. I created two performance tests: one test will focus on read-heavy workloads, hitting API endpoints that simply retrieve data from the database and return it to the user. The other test will be write-heavy, hitting multiple endpoints that will write data into PostgreSQL.
These tests aren't exhaustive and only run for a few minutes in total, since the purpose is to have a basic idea of the performance these servers provide with the same throughput and not how much load they can handle. Having both of these flows will give a good idea of how you can expect a typical Rails app to perform under these different scenarios.
Artillery provides a lot of data for each test run, but for this experiment I'll focus on three data points for comparison:
- Number of requests per second to measure the server's capacity of processing the test requests.
- p95 response times, the time under which 95% of requests completed, which reflects the typical user's experience better than an average.
- Timeout errors to find out where the servers can't keep up with the rate of requests.
One of the best features of Artillery is that it allows you to run performance and load tests using your AWS account. I took advantage of this to run my tests using AWS Lambda in the eu-central-1 region, which is their Frankfurt, Germany region. This way I was able to keep test runs consistent and reduce latency issues as much as possible, especially since I'm in Japan (which adds about 200 milliseconds of latency when connecting to servers in Europe). The only exception was using the eu-west-1 region when performing the baseline test on Heroku due to the application being deployed in the same location.
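An Artillery test definition along these lines might look like the following sketch. The target hostname, endpoint paths, phase durations, and arrival rates are illustrative assumptions, not the exact scripts I ran:

```yaml
# read-heavy.yml — illustrative Artillery test definition
config:
  target: "https://airportgap.example.com"  # placeholder hostname
  phases:
    - duration: 60
      arrivalRate: 5
      name: Warm up
    - duration: 120
      arrivalRate: 5
      rampTo: 50
      name: Ramp to peak

scenarios:
  - name: Read-heavy API flow
    flow:
      - get:
          url: "/api/airports"      # list endpoint (illustrative path)
      - get:
          url: "/api/airports/KIX"  # single-record endpoint (illustrative path)
```

Artillery's AWS integration can then execute this same definition from the desired region, so every provider receives an identical traffic pattern from the same location.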
Testing Read-Heavy Workloads
The first batch of tests I ran targeted read-only API requests, not writing anything to the database. Typically, web applications are significantly faster in reads since we don't have to deal with server-side validations, database transactions, disk speeds, and all the other things that make write requests more demanding on the server. We can expect a decent level of performance for the API endpoints without running into capacity issues.
Test results for entry-level servers
| Service | Server Type | req/sec | p95 (ms) | Timeouts |
|---|---|---|---|---|
| Heroku (Baseline) | standard-2x | 40 | 30.9 | 0 |
| Hetzner Cloud | cpx22 | 39 | 37.7 | 0 |
| Digital Ocean | s-2vcpu-4gb | 38 | 37 | 0 |
| Vultr | vc2-2c-4gb | 38 | 36.2 | 0 |
| Linode | g6-standard-2 | 39 | 27.9 | 0 |
| AWS EC2 | t3.medium | 39 | 27.9 | 0 |
| GCP | e2-medium | 39 | 41.7 | 0 |
| Azure | Standard_B2s | 39 | 40 | 0 |
The entry-level servers chosen for this experiment could handle the performance tests without breaking a sweat. None of the services had any timeouts and showed similar response times compared to Heroku. The number of requests is also virtually the same, but this is because it's the amount of traffic that Artillery sent during the performance test. In other words, all the services were able to handle all the requests thrown at them without dropping any.
While I didn't explicitly measure memory usage or other stats directly on the servers, most of them were consuming around 50% to 60% of the server's resources at the peak of the performance test (ramping up to 50 virtual users per second). Heroku, on the other hand, was well over 80%, so it was already showing signs of reaching its limit if I threw more virtual users in the performance test.
Given an already-promising result, let's see how the application handled the performance tests on the higher-tier servers:
Test results for higher-tier servers
| Service | Server Type | req/sec | p95 (ms) | Timeouts |
|---|---|---|---|---|
| Heroku (Baseline) | standard-2x | 40 | 30.9 | 0 |
| Hetzner Cloud | ccx23 | 41 | 32.8 | 0 |
| Digital Ocean | g-2vcpu-8gb | 39 | 23.8 | 0 |
| Vultr | vhp-8c-16gb-intel | 39 | 25.8 | 0 |
| Linode | g8-dedicated-8-4 | 39 | 19.9 | 0 |
| AWS EC2 | m6i.xlarge | 39 | 23.8 | 0 |
| GCP | n2-standard-4 | 39 | 25.8 | 0 |
| Azure | Standard_D4ds_v6 | 40 | 22.9 | 0 |
We can see better p95 numbers across the board for the more powerful servers, with most of them performing better than Heroku's baseline due to the extra processing power they have. As mentioned in the previous set of test results, the number of requests remained the same since it's the max that Artillery threw at the servers. If I were to ramp up the number of virtual users in the performance test, I'm sure these higher-tier servers would handle a lot more with ease.
For read-only workloads, there's no issue with even the lower-tier servers. Your choice on using a lower-cost instance type versus using a more powerful one for a read-heavy app depends on how many users you expect and how well-optimized your database queries are for the data you have. Airport Gap doesn't have a ton of data, and the queries are well-optimized for their data retrieval, which shows through the low response times.
Testing Write-Heavy Workloads
The situation gets a lot more interesting when we begin throwing virtual users who write to the database simultaneously. In my experience, this is where the type of server you choose matters, since most web applications need to save information in a database. For this test, I scaled down the number of users at its peak, with the maximum point reaching 20 virtual users per second.
Test results for entry-level servers
| Service | Server Type | req/sec | p95 (ms) | Timeouts |
|---|---|---|---|---|
| Heroku (Baseline) | standard-2x | 27 | 4231.1 | 21 |
| Hetzner Cloud | cpx22 | 21 | 7117 | 319 |
| Digital Ocean | s-2vcpu-4gb | 12 | 4231.1 | 786 |
| Vultr | vc2-2c-4gb | 16 | 4770.6 | 586 |
| Linode | g6-standard-2 | 18 | 6976.1 | 462 |
| AWS EC2 | t3.medium | 16 | 6312.2 | 578 |
| GCP | e2-medium | 13 | 3605.5 | 729 |
| Azure | Standard_B2s | 17 | 6702.6 | 530 |
We can clearly see the limitations of lower-powered hardware when it comes to writes in a database and to disk. Where the read-only tests ran without any hiccups, every single server started timing out once the performance test began to ramp up. Some servers were more affected than others, which was interesting to see.
Heroku surprisingly handled the test much better than I expected. The p95 performance wasn't great, but it did have the highest throughput and fewest timeouts by far compared to the other entry-level servers on this list. My guess is that it's due to the Heroku PostgreSQL add-on service being tuned much better than my database tuning for these services.
There were some other interesting results. Hetzner Cloud had the worst p95 performance but had the fewest timeouts besides Heroku, while Digital Ocean had more timeouts but a relatively better p95 number. Considering all the servers were configured equally, it appears that each service handles requests differently on shared/burstable server instances.
From what I could uncover, shared instances of Hetzner Cloud servers throttle CPU access during heavy loads, slowing everything down on the server but still letting requests come in. On the other hand, Digital Ocean and others look like they just drop requests if there's too much going on in the server. This can explain why Hetzner Cloud was much slower on average but had fewer timeouts, while Digital Ocean had more timeouts but better performance on the requests that did go through.
Doing tests like these shows how each service can vary not only in raw computing power but also in its underlying configuration. These settings are often in the hands of the service provider and cannot be changed, so it's important to do real-world evaluations to uncover these types of problems before they occur in production. With that in mind, would the more powerful server also suffer the same issue?
Higher-Tier Servers
| Service | Server Type | req/sec | p95 (ms) | Timeouts |
|---|---|---|---|---|
| Heroku (Baseline) | standard-2x | 27 | 4231.1 | 21 |
| Hetzner Cloud | ccx23 | 32 | 354.3 | 0 |
| Digital Ocean | g-2vcpu-8gb | 28 | 608 | 0 |
| Vultr | vhp-8c-16gb-intel | 29 | 278.7 | 0 |
| Linode | g8-dedicated-8-4 | 29 | 273.2 | 0 |
| AWS EC2 | m6i.xlarge | 31 | 820.7 | 0 |
| GCP | n2-standard-4 | 28 | 1200.1 | 0 |
| Azure | Standard_D4ds_v6 | 30 | 596 | 0 |
We can see a different story emerge when bumping up the available computing power. The same performance tests that choked the entry-level servers now breezed through without a single timeout and with much-improved response times. Once the app had more headroom, the problem that existed on lower-end hardware completely evaporated.
Looking at the raw numbers, there are some takeaways. Hetzner Cloud's p95 response time dropped drastically (about 95%) merely by scaling up to a dedicated instance. Seeing providers like Vultr and Linode complete the testing with a p95 under 300 milliseconds is also impressive, after the previous tests took multiple seconds to complete the workflows that didn't time out.
Another story that seems to surface from this experiment is that the hyperscalers (AWS, GCP, and Azure) lag behind the VPS services when it comes to response times, despite the cost per instance type for these services being higher. In terms of pure raw performance, it seems like a VPS is the better choice, but this highly depends on your organizational needs since hyperscalers offer much more than the rest.
Pricing
One of the main reasons to leave Heroku for another service is costs. Heroku is typically pricier than a do-it-yourself VPS service, although you're paying for the convenience of not having to manage your infrastructure. Still, the lack of flexibility between tiers as you grow can really put a strain on a team's budget, and that's where alternatives shine. You can get servers for all kinds of workflows at a cheaper cost, or you can get pricier servers with significantly more resources when needed, with plenty of choices in between. Heroku's offerings are limited, and going from a lower tier to the next will likely shock you when you receive your next monthly bill.
For this experiment, I wanted to get a mix of servers that are either cheaper or pricier than Heroku. In most cases, you can get more cost-effective infrastructure for your web apps, but I also wanted to include more powerful servers to show how prices can vary between providers, especially on the higher end of the spectrum.
The prices below are approximations for the regions tested in this article at the time of writing in January 2026. Prices and availability can vary by region and can change at any time, so please verify the pricing pages for each provider to get the most up-to-date pricing information.
| Service | Entry Level: Approximate Cost Per Server (Monthly) | Higher Tier: Approximate Cost Per Server (Monthly) |
|---|---|---|
| Hetzner Cloud | €6.49 (cpx22) | €24.49 (ccx23) |
| Digital Ocean | $24 (s-2vcpu-4gb) | $63 (g-2vcpu-8gb) |
| Vultr | $14 (vc2-2c-4gb) | $96 (vhp-8c-16gb-intel) |
| Linode | $24 (g6-standard-2) | $90 (g8-dedicated-8-4) |
| AWS EC2 | ~$40 (t3.medium) | ~$150 (m6i.xlarge) |
| GCP | ~$28 (e2-medium) | ~$145 (n2-standard-4) |
| Azure | ~$34 (Standard_B2s) | ~$185 (Standard_D4ds_v6) |
On the lower end, we can see that all the tested cloud service providers offer cheaper alternatives to Heroku's standard-2x dynos, which currently stand at $50/month per dyno. Since these servers perform great for workloads that aren't write-heavy, they'll do an excellent job while saving you a lot of money, with comparable or sometimes even better performance.
At the higher tier, we can see that almost all the servers used in this experiment are more expensive. However, the performance test results indicate that they're significantly more powerful than what Heroku has with the standard-2x dyno, meaning that you'll have a lot more headroom to scale without needing to change servers. In the long haul, this can come out to be cheaper since you won't need to jump to pricier servers as your app grows.
One note to make here is that spinning up instances on AWS, GCP, and Azure carries additional costs. Pricing on a typical VPS includes fixed storage for the operating system and a public IPv4 address at no additional charge, but you'll need to pay for these separately on hyperscalers. EC2 instances, GCP instances, and Azure VMs require separate paid storage for the OS, and you also need to pay for allocating public IPv4 addresses to your servers. These charges are relatively minimal, but they're often a forgotten part of the total cost of infrastructure when using these services.
This pricing table is just a small fraction of what each provider has available. The great thing about using one of these services is that they have tiers for different types of applications without jacking up the price too much between them. On Heroku, going one level above the standard-2x dyno to Performance-M dynos gives you a lot more power but at five times the cost at $250/month. That's a lot pricier than even the highest-priced one tested here, and it also has significantly less RAM than all of them.
Beyond Price: Other Considerations You Need To Make
You might be looking to leave Heroku simply due to price, but there are other factors you should consider when seeking an alternative service for your web applications.
Customer support
When evaluating alternatives, the quality of available customer support is something to keep in mind as it varies across providers. Services like DigitalOcean and Linode have human support that often resolves tickets within minutes, regardless of what you're paying. Cheaper services like Hetzner Cloud can be hit-or-miss and often take longer to receive a response. For hyperscalers, customer support is virtually non-existent unless you pay for a support plan, with costs varying according to the level of service you need.
Reliability and uptime
We don't want our applications to suffer from sudden downtime due to provider troubles, so it's a key factor to consider when switching from Heroku. Budget and mid-tier providers generally have solid reliability, especially considering their price point. AWS, GCP, and Azure have enterprise-grade reliability, although it's not uncommon for these services to go down. Something else to consider is if the provider has Service Level Agreements (SLAs) that clearly define what to expect and how they remedy any issues if they don't meet it. Hyperscalers often have clear SLAs, while others like Hetzner Cloud don't have any.
Location
Unless you use the costlier Private Spaces service on Heroku, you're limited to two locations (the United States and Europe). On the other hand, the services explored here offer locations across the globe, so you can set up your application closest to your primary user base. It's worth noting that some providers still have limited locations to launch your server (for example, Hetzner Cloud has 4, while Vultr has over 30) and may also have different availability depending on the region, such as different server configurations.
Ease of use
It's important to learn how each provider spins up resources to make the transition from Heroku easier. Some providers used in this article make it easy to provision and manage your servers through various methods. Services like Hetzner Cloud and Digital Ocean have polished dashboards and simple-to-use APIs that can get you up and running in seconds, while hyperscalers like AWS and Azure require some understanding of their ecosystem to ensure you're setting up your infrastructure in the right way.
Recommendations
If your primary concern is getting the most performance for your dollar, there's no better choice than Hetzner Cloud. Hetzner Cloud outperforms the rest of the field in price-performance, and their powerful hardware configurations make them possibly the best choice for cost-conscious teams.
For a mix of self-managed and managed services, DigitalOcean and Vultr offer a sweet spot between both. These providers offer fully managed databases so you can delegate the complexity of these essential components while still taking advantage of their less expensive self-managed servers.
If you need a larger ecosystem of additional services, consider hyperscalers like AWS, Azure, and Google Cloud. These providers aren't the cheapest, but they go beyond offering simple servers, with dozens of other services from data analytics and content delivery to machine learning and AI. For organizations that have strict security requirements, they also give you all the tools you need to ensure compliance.
If you're happy with Heroku and don't need to worry about costs, stay on Heroku. Heroku still offers high value for teams that prioritize developer productivity and simplicity over price. If your costs are within your budget and your application doesn't need more resources than what you currently have, remaining on Heroku is still a valid choice.
Ready to Make the Switch?
If you and your team are ready to transition from Heroku to another cloud service provider, I offer a done-for-you Heroku Exit Plan service that handles the entire transition. I cover choosing the best infrastructure setup for your needs, easy deployment configuration, database migration, and ongoing support to make the switch easier for you.
Book a free consultation so we can review your current Heroku setup and see how much you could save and the benefits you'll get from the switch.