
What You Need to Know About ElastiCache Serverless

In this blog post, we discuss the emerging trend of serverless solutions, focusing on AWS ElastiCache Serverless. While we acknowledge its innovative approach, our calculations suggest that the pricing model may not be practical for medium to large workloads.

December 12, 2023


Introduction

At AWS re:Invent 2023, ElastiCache Serverless was announced. This continues the trend of databases offering a serverless service, so understandably, there is excitement from both the Redis and serverless communities. It's not surprising that in-memory data stores are the last ones to go serverless. Creating a true serverless experience for sub-millisecond data stores is extremely complicated, as data must remain in memory even when there is no traffic at all.

So, how did AWS pull it off? How did they create a serverless offering for a service that must remain up at all times? And which use cases is ElastiCache Serverless a good fit for? The answer to those questions can be found in the service pricing model.

What is a Serverless Database?

A serverless database is a managed database service that eliminates the need for infrastructure management, scaling, and provisioning. This means developers can focus on building and deploying applications without worrying about the underlying data infrastructure they are using.

Serverless databases are extremely easy to start with, as there is almost no setup involved. Developers do not need to think about data requirements like capacity, throughput, scaling, durability, etc. They simply register for the service to get an endpoint, and from that point on, they can store and retrieve data at any scale.

A good serverless database infrastructure provides the experience of virtually infinite scalability without degrading the service: whether your database holds hundreds of entities or billions, you get the same performance.

But the most appealing feature of serverless offerings is that you pay only for what you use. The serverless pricing model is built to scale with your workload: as scaling is elastic, so is your billing. If you consume only a small amount of resources, your bill will be small; if you consume more storage and more throughput, your bill grows accordingly.

The key technique used to deliver a serverless database is decoupling compute from storage. With this approach:

  1. Most data remains in cloud storage, which offers practically unlimited capacity at relatively attractive pricing.
  2. Computational resources are multi-tenant and fully elastic. They scale with query traffic.

In this architecture, a client with a lot of data and low query throughput pays mostly for storage, while a client with high traffic volumes pays mostly for compute resources (and networking). When there is no traffic, no compute resources are needed. This configuration satisfies both the scalability and the resource-efficiency requirements.
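
As a rough illustration of how that split shows up on a bill, here is a minimal sketch with made-up prices (they are not any particular vendor's rates): a data-heavy, low-traffic client is billed mostly for storage, while a query-heavy client is billed mostly for compute.

```python
# Hypothetical decoupled-pricing model: storage and compute are billed
# separately. The rates below are invented purely for illustration.
STORAGE_PRICE_PER_GB_MONTH = 0.10        # cheap, object-store-like storage
COMPUTE_PRICE_PER_MILLION_REQUESTS = 0.50

def monthly_bill(gb_stored: float, requests_per_month: float) -> dict:
    storage = gb_stored * STORAGE_PRICE_PER_GB_MONTH
    compute = requests_per_month / 1_000_000 * COMPUTE_PRICE_PER_MILLION_REQUESTS
    return {"storage": storage, "compute": compute, "total": storage + compute}

# A client with lots of data and little traffic pays mostly for storage...
print(monthly_bill(gb_stored=5_000, requests_per_month=10_000_000))
# {'storage': 500.0, 'compute': 5.0, 'total': 505.0}

# ...while a client with little data and heavy traffic pays mostly for compute.
print(monthly_bill(gb_stored=10, requests_per_month=5_000_000_000))
# {'storage': 1.0, 'compute': 2500.0, 'total': 2501.0}
```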

What is ElastiCache Serverless?

ElastiCache Serverless is a serverless version of ElastiCache, which is a managed Redis service. It gives AWS users the ability to set up a Redis service that is provisioned, scaled, and managed by AWS. Developers using ElastiCache Serverless do not need to think about provisioning for peak traffic, sharding, rebalancing, snapshotting, and all other operations of managing a Redis deployment.
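
For developers, the experience boils down to receiving an endpoint and pointing an ordinary Redis client at it. Here is a minimal sketch using redis-py; the hostname is a placeholder, and we assume a TLS connection, which ElastiCache Serverless endpoints expect.

```python
# Minimal sketch: connect to an ElastiCache Serverless endpoint with redis-py.
# The hostname below is a placeholder; TLS is assumed to be required.
import redis

cache = redis.Redis(
    host="my-cache-abc123.serverless.use2.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    ssl=True,                # serverless endpoints are assumed to require TLS
    decode_responses=True,
)

cache.set("greeting", "hello from a serverless cache")
print(cache.get("greeting"))  # -> "hello from a serverless cache"
```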

The challenge of delivering a serverless in-memory database is far greater than with traditional databases. To start with, compute and storage (in this case, memory) cannot be decoupled: there is no low-priced, infinite-memory service available in the cloud. While Amazon has not disclosed how they solved this challenge, a common approach is to add a proxy layer that handles most of the compute, backed by a second layer of instances holding the data in memory. The downsides of this architecture, compared to a traditional in-memory offering, are increased latency and limited scalability; and because memory is expensive and must always be on, the cost climbs very quickly.
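
AWS has not published the internals, but a toy version of the proxy-plus-memory-tier pattern described above might look like the following sketch. It is purely illustrative and is not ElastiCache's actual design.

```python
# Toy sketch of the proxy + memory-tier pattern: a stateless proxy hashes each
# key to one of a fixed pool of always-on memory nodes.
import hashlib

class MemoryNode:
    """An always-on node holding a shard of the keyspace in RAM."""
    def __init__(self) -> None:
        self.data: dict[str, str] = {}

class Proxy:
    """Stateless routing layer; this tier can scale with query traffic."""
    def __init__(self, nodes: list[MemoryNode]) -> None:
        self.nodes = nodes

    def _node_for(self, key: str) -> MemoryNode:
        shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.nodes)
        return self.nodes[shard]

    def set(self, key: str, value: str) -> None:
        self._node_for(key).data[key] = value

    def get(self, key: str) -> str | None:
        return self._node_for(key).data.get(key)

proxy = Proxy([MemoryNode() for _ in range(3)])
proxy.set("user:42", "alice")
print(proxy.get("user:42"))  # "alice" -- note the extra hop through the proxy
```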

How is ElastiCache Serverless Priced?

Since ElastiCache Serverless must always keep the data in memory, even when you do not query the service, memory becomes the predominant component of your bill. ElastiCache Serverless is priced based on the GB of data stored, which happens to be the same way we price Dragonfly Cloud. AWS charges $0.125 per GB-hour (roughly $90 per GB-month) for data stored, plus $0.0034 per million ECPUs (1 ECPU corresponds to 1 KB of data transferred, for both reads and writes). What does this look like in practice when compared to ElastiCache On-Demand?
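
To make the pricing model concrete, here is a back-of-the-envelope estimator based on those two dimensions. It assumes 730 hours per month and that every request moves roughly 1 KB (i.e., 1 ECPU); real ECPU consumption can be higher for complex commands, so treat the result as a floor.

```python
# Back-of-the-envelope ElastiCache Serverless cost model (US East (Ohio),
# December 2023 prices). Assumes ~1 KB (1 ECPU) per request.
HOURS_PER_MONTH = 730
PRICE_PER_GB_HOUR = 0.125          # data stored
PRICE_PER_MILLION_ECPU = 0.0034    # ElastiCache Processing Units

def serverless_monthly_cost(gb_stored: float, avg_requests_per_sec: float,
                            kb_per_request: float = 1.0) -> float:
    """Estimate the monthly bill in USD."""
    storage = gb_stored * PRICE_PER_GB_HOUR * HOURS_PER_MONTH
    ecpus = avg_requests_per_sec * kb_per_request * 3600 * HOURS_PER_MONTH
    compute = ecpus / 1_000_000 * PRICE_PER_MILLION_ECPU
    return storage + compute

# Example: 4GB stored with an average of 10,000 ~1KB requests per second.
print(round(serverless_monthly_cost(4, 10_000)))  # ~454 (365 storage + 89 ECPU)
```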

For small workloads, the price difference is not dramatic. Let's take an example workload with 4GB of memory and 100,000 max queries per second. Using ElastiCache On-Demand, you could run this on a cache.m7g.xlarge (a 13.88GB instance; you must over-provision memory with ElastiCache for Redis, so this is your best option) and pay ~$230/month. The same workload on ElastiCache Serverless would cost you $360/month for storage, plus ECPU costs (which can vary widely). An extra $130 a month to avoid thinking about provisioning any infrastructure might still make sense here.

Now let's take a slightly larger example workload of 100GB of memory and 100,000 max queries per second. In this case, on ElastiCache On-Demand, you could run it on a cluster of 3 cache.r6g.2xlarge instances (170.16GB of total memory) for ~$1,800/month, or pay more than $9,000/month for ElastiCache Serverless. Is the provisioning pain worth an extra $7,200/month?

For truly large workloads of, say, 800GB, ElastiCache Serverless would cost you more than $72,000/month. With ElastiCache On-Demand, you could operate a cluster with 16 instances of cache.m7g.4xlarge (897.76GB of total memory) for ~$15,000/month.

| Workload | ElastiCache On-Demand | ElastiCache Serverless |
| --- | --- | --- |
| 4GB | ~$230/month | $360/month (excluding ECPU cost) |
| 100GB | ~$1,800/month | $9,000/month (excluding ECPU cost) |
| 800GB | ~$15,000/month | $72,000/month (excluding ECPU cost) |
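
The storage-only figures above can be reproduced directly from the $0.125/GB-hour rate; the small sketch below does so (results land slightly above the table, which rounds to $90/GB-month) and compares them with the approximate on-demand cluster prices cited above.

```python
# Reproduce the storage-only serverless figures (ECPUs excluded) and compare
# them with the approximate on-demand cluster prices from the text.
on_demand = {4: 230, 100: 1_800, 800: 15_000}   # ~$/month, as cited above

for gb, od in on_demand.items():
    serverless = gb * 0.125 * 730   # $/GB-hour * hours/month
    print(f"{gb:>4} GB: on-demand ~${od:,}/mo vs serverless ~${serverless:,.0f}/mo "
          f"({serverless / od:.1f}x)")
#    4 GB: on-demand ~$230/mo vs serverless ~$365/mo (1.6x)
#  100 GB: on-demand ~$1,800/mo vs serverless ~$9,125/mo (5.1x)
#  800 GB: on-demand ~$15,000/mo vs serverless ~$73,000/mo (4.9x)
```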

Conclusion

ElastiCache Serverless is an exciting product offering, but it's clear that AWS has not designed this service for heavy data workloads; at that scale, the pricing simply makes no sense. If you are running a small cache and want to get up and running quickly, ElastiCache Serverless is probably a great option for you. It's also a great solution for applications with little data but spiky traffic. However, if your application requires more than 10GB or 20GB of memory, serverless may not be the best fit, and you might want to explore alternatives like Dragonfly Cloud.


Appendix - Pricing Details

  • ElastiCache pricing can be found here.
  • Our calculation is based on the AWS US East (Ohio) region as of December 2023.

ElastiCache Serverless as of December 2023

| Pricing Dimension | Price |
| --- | --- |
| Data Stored | $0.125 / GB-hour = $90 / GB-month |
| ElastiCache Processing Units (ECPUs) | $0.0034 / million ECPUs |

ElastiCache On-Demand as of December 2023

| Cache Node Type | vCPUs & Memory | Price |
| --- | --- | --- |
| cache.m7g.xlarge | 4 vCPUs, 12.93GiB = 13.88GB | $0.315/hour = $229.95/month |
| cache.m7g.4xlarge | 16 vCPUs, 52.26GiB = 56.11GB | $1.257/hour = $917.61/month |
| cache.r6g.2xlarge | 8 vCPUs, 52.82GiB = 56.72GB | $0.821/hour = $599.33/month |
