
Question: How can you scale out Memcached to handle growing demand?

Answer

Scaling out Memcached is primarily done by adding more servers to your Memcached pool and redistributing keys among the available servers, an approach usually referred to as "sharding".

Here is a basic example of how this works in practice:

  1. When you start, you might have one server:

    import memcache
    servers = ["192.0.2.1"]
    memcache_client = memcache.Client(servers)
  2. As you need to scale, you add more servers to your list (a short usage sketch follows the list):

    servers = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
    memcache_client = memcache.Client(servers)

How are keys redistributed when you add or remove servers? The mapping of keys to servers is handled by the client library rather than by the Memcached servers themselves, which are unaware of one another. Most clients use "consistent hashing": servers and keys are hashed onto a ring, and each key belongs to the first server found clockwise from its position. When a new node is added, only the keys that fall between it and its immediate neighbor map to a different server, which minimizes the number of keys affected. Note that cached values are not physically moved; keys that remap simply miss on their new server and are repopulated from the backing data store.
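To make the idea concrete, below is a minimal, hypothetical hash ring in Python. It is a sketch of the technique only, not how any particular Memcached client implements it, and it omits the "virtual nodes" real clients add for smoother balancing:

    import bisect
    import hashlib

    def _point(value):
        # Map a string to a position on the ring with a stable hash.
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, servers):
            # Place each server at one position on the ring, sorted clockwise.
            self._ring = sorted((_point(s), s) for s in servers)

        def server_for(self, key):
            # Walk clockwise from the key's position to the next server.
            points = [p for p, _ in self._ring]
            idx = bisect.bisect(points, _point(key)) % len(self._ring)
            return self._ring[idx][1]

    old = HashRing(["192.0.2.1", "192.0.2.2", "192.0.2.3"])
    new = HashRing(["192.0.2.1", "192.0.2.2", "192.0.2.3", "192.0.2.4"])

    # Only the keys that now fall between the new node and its neighbor remap.
    moved = sum(old.server_for(f"key:{i}") != new.server_for(f"key:{i}")
                for i in range(1000))
    print(f"{moved} of 1000 keys map to a different server")  # a fraction, not all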

Note that while Memcached itself does not replicate data (i.e., store the same data on multiple servers), some client libraries provide support for replication, which can improve read performance and provide some degree of failover capability.
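
As a rough illustration only (the plain python-memcached client has no built-in replication), a naive form of client-side replication writes every value to two independent pools and falls back on reads; production setups usually rely on a client or proxy that supports replication natively:

    import memcache

    primary = memcache.Client(["192.0.2.1:11211", "192.0.2.2:11211"])
    replica = memcache.Client(["192.0.2.3:11211", "192.0.2.4:11211"])

    def replicated_set(key, value, ttl=300):
        # Write to both pools so either can serve the key later.
        primary.set(key, value, time=ttl)
        replica.set(key, value, time=ttl)

    def replicated_get(key):
        # Read from the primary pool, falling back to the replica on a miss.
        value = primary.get(key)
        return value if value is not None else replica.get(key)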

To use memory and compute efficiently, pay attention to how keys are balanced across servers. Consistent hashing is the most common choice with Memcached because it spreads data evenly and keeps remapping to a minimum when nodes are added or removed.
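
How consistent hashing is enabled depends on the client. As one hedged example, the pylibmc client (a wrapper around libmemcached) exposes it through the "ketama" behavior; check your client's documentation for the exact option names:

    import pylibmc

    # Enabling the ketama behavior turns on consistent hashing, so adding or
    # removing a server remaps only a fraction of keys instead of most of them.
    mc = pylibmc.Client(
        ["192.0.2.1", "192.0.2.2", "192.0.2.3"],
        binary=True,
        behaviors={"ketama": True},
    )
    mc.set("user:42:profile", {"name": "Ada"})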

Keep in mind that Memcached is an in-memory cache, so the total amount of data it can hold is bound by the combined RAM of all Memcached instances; scaling out raises that ceiling only by the memory each new node contributes. When a node reaches its memory limit, it evicts the least recently used items to make room for new writes.

Lastly, remember that scaling out comes with increased complexity. For example, debugging becomes more intricate because a problem may need to be traced across several servers, and network latency can become an issue as more servers are added. Always monitor performance as you scale out.

