Dragonfly is a drop-in Redis replacement that scales vertically to support millions of operations per second and terabyte-sized workloads, all on a single instance.
docker run --network=host --ulimit memlock=-1 docker.dragonflydb.io/dragonflydb/dragonfly
Dragonfly Is Production Ready (and we raised $21m)
March 21, 2023
We are pleased to announce that Dragonfly 1.0, the most performant in-memory datastore for cloud workloads, is now generally available.
Top 5 Reasons Why Your Redis Instance Might Fail
March 13, 2023
In this article, we explain the main reasons why your Redis instance might fail and offer advice on how to avoid them.
Redis vs. Dragonfly Scalability and Performance
February 27, 2023
A thorough benchmark comparison of throughput, latency, and memory utilization between Redis and Dragonfly.
Dragonfly gives you so much more with a complete, modern engine architecture that’s fully compatible with the Redis and Memcached APIs. See why it’s the fastest memory store in the universe.
Ultra performant
With a non-contending, multi-threaded architecture, Dragonfly is built to deliver the performance modern applications require: millions of operations per second, all from a single instance.
QPS benchmark on AWS r6gn.16xlarge. Snapshot benchmark on AWS r6gd.16xlarge.
25x more QPS than Redis
12x faster snapshotting than Redis
Highly Scalable
Dragonfly is architected to scale vertically on a single machine, saving teams the cost and complexity of managing a multi-node cluster. For in-memory datasets up to 1TB, Dragonfly offers the simplest and most reliable scale on the market.
1 TB of in-memory data on a single instance
30% less memory usage
Unparalleled efficiency
Dragonfly utilizes an innovative hash table structure called dashtable to minimize memory overhead and tail latency. Dragonfly also uses bitpacking and denseSet techniques to compress in-memory data, making it on average 30% more memory efficient than Redis. Lastly, Dragonfly keeps memory usage steady during snapshotting, eliminating the need to over-provision memory that is typical with Redis.
Memory usage under BGSAVE: the instance is filled with 5 GB of data using debug populate 5000000 key 1024, update traffic is sent with memtier, and a snapshot is taken with bgsave.
All-new architecture
Memory Efficient
While a classic chaining hash table is built on a dynamic array of linked lists, Dragonfly's dashtable is a dynamic array of flat, constant-size hash tables. This design allows for much better memory efficiency.
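To make the contrast concrete, here is a minimal Python sketch of the idea: a small directory of fixed-size, flat segments that stores entries in plain slots instead of per-item linked-list nodes. The names, the segment capacity, and the doubling growth strategy are illustrative assumptions, not Dragonfly's actual C++ dashtable, which splits individual segments rather than redistributing the whole directory.

```python
# Illustrative sketch of a dashtable-like layout (not Dragonfly's implementation).
SEGMENT_CAPACITY = 56  # flat, constant-size slot array per segment (assumed value)

class Segment:
    def __init__(self):
        self.slots = [None] * SEGMENT_CAPACITY  # no per-entry pointer overhead
        self.size = 0

    def insert(self, key, value):
        free = None
        for i, slot in enumerate(self.slots):
            if slot is not None and slot[0] == key:
                self.slots[i] = (key, value)     # update in place
                return True
            if slot is None and free is None:
                free = i
        if free is None:
            return False                         # segment full: caller grows
        self.slots[free] = (key, value)
        self.size += 1
        return True

class DashTable:
    def __init__(self, initial_segments=4):
        # The dynamic part is a small directory of segments, so growth touches
        # segments rather than rehashing one huge bucket array.
        self.directory = [Segment() for _ in range(initial_segments)]

    def _segment_for(self, key):
        return self.directory[hash(key) % len(self.directory)]

    def set(self, key, value):
        if not self._segment_for(key).insert(key, value):
            self._grow()
            self._segment_for(key).insert(key, value)

    def get(self, key):
        for slot in self._segment_for(key).slots:
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def _grow(self):
        # Toy growth strategy: double the directory and redistribute entries.
        # A real dashtable splits only the overflowing segment.
        old = self.directory
        self.directory = [Segment() for _ in range(len(old) * 2)]
        for seg in old:
            for slot in seg.slots:
                if slot is not None:
                    self._segment_for(slot[0]).insert(*slot)
```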
High Hit Ratio
Dragonfly utilizes a unique 'least frequently recently used' (LFRU) cache policy. Compared to Redis' LRU policy, LFRU is resistant to fluctuations in traffic, does not require random sampling, has zero memory overhead per item, and has a very small run-time overhead.
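The paragraph above describes the policy's properties rather than its mechanics. As a rough, hedged illustration of how an item's position inside a fixed-size segment can stand in for per-item counters or timestamps, here is a toy Python sketch; the class name, capacity, and mid-segment insertion point are assumptions for illustration and are not Dragonfly's implementation.

```python
# Toy LFRU-style eviction inside a fixed-size segment (illustration only).
# An item's slot position doubles as its "heat", so no extra per-item
# metadata is stored.

class CacheSegment:
    def __init__(self, capacity=8):
        self.slots = []            # hottest items sit near the front
        self.capacity = capacity

    def get(self, key):
        for i, (k, v) in enumerate(self.slots):
            if k == key:
                if i > 0:          # promote one position on a hit
                    self.slots[i - 1], self.slots[i] = self.slots[i], self.slots[i - 1]
                return v
        return None

    def set(self, key, value):
        for i, (k, _) in enumerate(self.slots):
            if k == key:
                self.slots[i] = (key, value)   # update in place
                return
        if len(self.slots) >= self.capacity:
            self.slots.pop()                   # evict the coldest item (the tail)
        # New items enter mid-segment, so a burst of one-off keys cannot
        # immediately displace items with a proven access history.
        self.slots.insert(min(len(self.slots), self.capacity // 2), (key, value))
```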
High Throughput
Dragonfly's new in-memory engine, optimized for throughput, uses a lock-free, thread-per-core architecture to deliver stable, low latencies. By implementing true async interfaces, Dragonfly takes full advantage of the underlying hardware to deliver maximum performance.
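To make the shared-nothing idea concrete, the Python sketch below routes each key to the single worker thread that owns its shard, so the per-shard dictionaries never need locks. Dragonfly itself is a C++ engine built on async I/O; the shard count, the queue-based message passing, and the helper names here are assumptions for illustration only.

```python
# Minimal sketch of a shared-nothing, thread-per-core layout (illustration only).
import queue
import threading

NUM_SHARDS = 4   # in a real engine, one shard per CPU core

class Shard(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.inbox = queue.Queue()
        self.data = {}                       # touched only by this thread, so no locks

    def run(self):
        while True:
            op, key, value, reply = self.inbox.get()
            if op == "SET":
                self.data[key] = value
                reply.put("OK")
            elif op == "GET":
                reply.put(self.data.get(key))

shards = [Shard() for _ in range(NUM_SHARDS)]
for s in shards:
    s.start()

def route(key):
    return shards[hash(key) % NUM_SHARDS]    # key ownership never changes

def set_key(key, value):
    reply = queue.Queue(maxsize=1)
    route(key).inbox.put(("SET", key, value, reply))
    return reply.get()

def get_key(key):
    reply = queue.Queue(maxsize=1)
    route(key).inbox.put(("GET", key, None, reply))
    return reply.get()

print(set_key("greeting", "hello"))          # -> OK
print(get_key("greeting"))                   # -> hello
```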
Dragonfly is fully compatible with the Redis ecosystem and requires no code changes to adopt.
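For example, an application already using redis-py should work unchanged when pointed at Dragonfly. The snippet below assumes a local instance listening on the default port 6379, such as one started with the docker command above.

```python
# Existing Redis client code, pointed at a locally running Dragonfly instance
# (assumes the default port 6379).
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.set("greeting", "hello, dragonfly")
print(r.get("greeting"))   # -> hello, dragonfly
```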