Boost performance, simplify production.

Dragonfly is a drop-in Redis replacement that scales vertically to support millions of operations per second and terabyte-sized workloads, all on a single instance.

Benchmark on AWS r6gn.16xlarge. Snapshot benchmark on AWS c6gn.16xlarge.

Featured Posts

Dragonfly Is Production Ready (and we raised $21m)

March 21, 2023

We are pleased to announce that Dragonfly 1.0, the most performant in-memory datastore for cloud workloads, is now generally available.

Top 5 Reasons Why Your Redis Instance Might Fail

March 13, 2023

In this article, we explain the main reasons why your Redis instance might fail and provide advice on how to avoid them.

Redis vs. Dragonfly Scalability and Performance

February 27, 2023

A thorough benchmark comparison of throughput, latency, and memory utilization between Redis and Dragonfly.

Fully compatible with Redis

Dragonfly gives you so much more with a complete, modern engine architecture that’s fully compatible with the Redis and Memcached APIs. See why it’s the fastest memory store in the universe.

  • Redis API compatible
  • Snapshotting speed: 1260 MB/s (Dragonfly) vs. 107 MB/s (Redis)
  • Lua: 5.4.4 (Dragonfly) vs. 5.1 (Redis)
  • QPS per instance: 3.9M (Dragonfly) vs. 150K (Redis)
  • Async core
  • LRFU eviction
  • Memcached API compatible
  • Native OpenTelemetry

Ultra performant

25X the throughput of Redis

With non-contending, multi-threaded processes, Dragonfly is architected to deliver the performance that modern applications require: millions of operations per second, all from a single instance.

View the benchmarks
  • Throughput: 3,970,000 QPS (Dragonfly) vs. 148,000 QPS (Redis)
  • Snapshotting speed: 1260 MB/s (Dragonfly) vs. 107 MB/s (Redis)

QPS benchmark on AWS r6gn.16xlarge. Snapshot benchmark on AWS r6gd.16xlarge.


More QPS than Redis


Faster snapshotting than Redis

Highly Scalable

Simple Vertical Scaling

Dragonfly is architected to scale vertically on a single machine, saving teams the cost and complexity of managing a multi-node cluster. For in-memory datasets up to 1TB, Dragonfly offers the simplest and most reliable scale on the market.

1 TB

In-memory datasets on a single instance


Less memory usage

Unparalleled efficiency

30-60% better memory utilization than Redis

Dragonfly utilizes an innovative hash table structure called dashtable to minimize memory overhead and tail latency. Dragonfly also uses bitpacking and denseSet techniques to compress in-memory data, making it on average 30% more memory efficient than Redis. Lastly, Dragonfly maintains consistent memory usage during snapshotting, eliminating the need to over-provision memory as is typical with Redis.


Memory usage under BGSAVE: filling with 5GB of data using debug populate 5000000 key 1024, sending update traffic with memtier, and snapshotting with bgsave.

All-new architecture

A new in-memory data store, rearchitected for today

Memory Efficient

While classic chaining hash tables are built upon a dynamic array of linked lists, Dragonfly's dashtable is a dynamic array of flat, constant-size hash tables. This design allows for much better memory efficiency.
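As a rough illustration of the idea (a simplified sketch, not Dragonfly's actual C++ implementation), a dashtable-style structure resembles extendible hashing: a directory points at fixed-size flat segments, and when a segment overflows, only that segment is split. Unlike chaining tables, no per-entry linked-list pointers are needed:

```python
# Sketch of a dashtable-like structure: a dynamic directory of flat,
# constant-size segments. Segment size is tiny here for illustration.

SEGMENT_SIZE = 4

class Segment:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.slots = {}  # flat storage; stands in for an open-addressed array

    def full(self):
        return len(self.slots) >= SEGMENT_SIZE

class DashTable:
    def __init__(self):
        self.global_depth = 1
        self.dir = [Segment(1), Segment(1)]

    def _segment(self, key):
        h = hash(key) & ((1 << self.global_depth) - 1)
        return self.dir[h]

    def set(self, key, value):
        seg = self._segment(key)
        if key not in seg.slots and seg.full():
            self._split(seg)
            return self.set(key, value)  # retry: key may now map elsewhere
        seg.slots[key] = value

    def get(self, key):
        return self._segment(key).slots.get(key)

    def _split(self, seg):
        # Double the directory only when the overflowing segment is already
        # at global depth; otherwise split just this one segment.
        if seg.local_depth == self.global_depth:
            self.dir = self.dir + self.dir
            self.global_depth += 1
        seg.local_depth += 1
        sibling = Segment(seg.local_depth)
        # Re-point directory entries whose new bit is 1 to the sibling.
        for i in range(len(self.dir)):
            if self.dir[i] is seg and (i >> (seg.local_depth - 1)) & 1:
                self.dir[i] = sibling
        # Rehash the split segment's entries between the two segments.
        old, seg.slots = seg.slots, {}
        for k, v in old.items():
            self._segment(k).slots[k] = v
```

Growth happens one segment at a time, so inserts never trigger a full-table rehash, which also helps tail latency.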

High Hit Ratio

Dragonfly utilizes a unique 'least frequently recently used' (LFRU) cache policy. Compared to Redis' LRU cache policy, LFRU is resistant to fluctuations in traffic, does not require random sampling, has zero memory overhead per item, and has very small run-time overhead.
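The flavor of such a policy can be sketched as follows (a hypothetical illustration, not Dragonfly's implementation, which keeps its counters inside dashtable buckets rather than in a separate map): each entry carries a small saturating frequency counter that is periodically halved, so items that were hot long ago age out — combining frequency with recency — and eviction deterministically picks the lowest-scored entry instead of random sampling:

```python
# Hedged sketch of an LFRU-style eviction policy: a small saturating
# frequency counter per entry, decayed periodically so the score also
# reflects recency. No random sampling is used for eviction.

class LFRUCache:
    def __init__(self, capacity, decay_every=16):
        self.capacity = capacity
        self.data = {}   # key -> value
        self.freq = {}   # key -> counter (a few in-place bits in a real design)
        self.ops = 0
        self.decay_every = decay_every

    def _touch(self, key):
        self.freq[key] = min(self.freq[key] + 1, 255)  # saturating bump
        self.ops += 1
        if self.ops % self.decay_every == 0:
            for k in self.freq:                        # periodic decay = recency
                self.freq[k] >>= 1

    def get(self, key):
        if key in self.data:
            self._touch(key)
            return self.data[key]
        return None

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)  # lowest score, no sampling
            del self.data[victim], self.freq[victim]
        self.data[key] = value
        self.freq.setdefault(key, 0)
        self._touch(key)
```

A burst of one-off keys cannot displace a steadily accessed hot key, since the hot key's counter stays high while the burst's entries never climb above one or two.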

High Throughput

Dragonfly's new in-memory engine, optimized for throughput, uses a thread-per-core architecture without locks to deliver stable, low latencies. By implementing true async interfaces, Dragonfly takes full advantage of the underlying hardware to deliver maximum performance.
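The shared-nothing idea behind this can be sketched simply (an illustration only; Dragonfly implements it in C++ on a fiber-based, thread-per-core runtime): each key hashes to exactly one shard, and only that shard's thread ever touches the shard's data, so no locks are needed on the hot path:

```python
# Sketch of shared-nothing key routing: a key deterministically maps to
# one shard, and each shard is owned by a single thread/core.

import hashlib

N_SHARDS = 8  # would match the number of cores on the machine

def shard_of(key: str) -> int:
    """Stable hash of the key, reduced to a shard index."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_SHARDS

# Every operation on a given key is executed by the same shard's thread,
# so that thread can mutate its portion of the keyspace without locking.
```

Multi-key commands that span shards then need coordination between shard threads, which is where the async interfaces mentioned above come in.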

Start building today

Dragonfly is fully compatible with the Redis ecosystem and requires no code changes to implement.
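Compatibility works at the wire-protocol level: Dragonfly speaks RESP, the Redis serialization protocol, so existing Redis clients work unchanged and only the host/port they connect to differs. The hypothetical encoder below shows the bytes any such client puts on the wire, which are equally valid against a Dragonfly instance:

```python
# Minimal RESP (Redis serialization protocol) encoder, for illustration.
# A command is sent as a RESP array of bulk strings.

def encode_command(*args):
    """Encode a command and its arguments as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(args)]
    for a in args:
        data = a if isinstance(a, bytes) else str(a).encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

wire = encode_command("SET", "greeting", "hello")
# wire == b'*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n'
```

Because the protocol is identical, switching typically means changing a connection string, not application code.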