Redis API, Dragonfly Performance

Dragonfly is an in-memory datastore designed to scale vertically and make extremely efficient use of the underlying hardware. It is fully compatible with the Redis ecosystem and requires no code changes to adopt.

QPS benchmark on AWS r6gn.16xlarge; snapshot benchmark on AWS c6gn.16xlarge.

Featured Posts

Redis Analysis - Part 1: Threading model

December 09, 2021

Following my previous post, we are going to start with the “hottest potato”: the single-threaded vs. multi-threaded argument.

Dragonfly Cache Design

June 23, 2022

I talked in my previous post about Redis eviction policies. In this post, I would like to describe the design behind Dragonfly cache.

Infrastructure should be boring

October 21, 2022

Infrastructure should be boring. Boring is good. Boring means that it just works, and you don’t have to worry about it. A year ago, we went on a quest to build a boring in-memory store.

Fully compatible with Redis

Dragonfly gives you so much more with a complete, modern engine architecture that’s fully compatible with the Redis and Memcached APIs. See why it’s the fastest memory store in the universe.

Feature                    Dragonfly    Redis
Redis API compatible       ✓            ✓
Snapshotting speed         1260 MB/s    107 MB/s
Lua version                5.4.4        5.1
QPS per instance           3.9M         150K
Async core                 ✓            ✗
LFRU eviction              ✓            ✗
Memcached API compatible   ✓            ✗
Native OpenTelemetry       ✓            ✗
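The compatibility rows above are wire-level: Dragonfly speaks RESP, the same serialization protocol Redis clients already use, which is why existing code connects unchanged. As a minimal sketch, written here from the public RESP spec rather than from any Dragonfly internals, these are the bytes any Redis client sends for a SET:

```python
def encode_resp_command(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings -- the wire
    format both Redis and Dragonfly accept, which is why existing
    Redis clients need no changes to talk to Dragonfly."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        parts.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(parts)

# The same payload redis-cli would produce for `SET greeting hello`:
wire = encode_resp_command("SET", "greeting", "hello")
print(wire)  # b'*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n'
```

Because the server side of this protocol is identical, swapping the endpoint from a Redis host to a Dragonfly host is the only change a deployment needs.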

Ultra performant

25x the throughput of Redis

With non-contending, multi-threaded processes, Dragonfly is architected to deliver the performance that modern applications require: millions of operations per second, all from a single instance.

View the benchmarks
                           Dragonfly    Redis
Throughput (QPS)           3,970,000    148,000
Snapshotting speed (MB/s)  1260         107

QPS benchmark on AWS r6gn.16xlarge; snapshot benchmark on AWS r6gd.16xlarge.

  • 25x more QPS than Redis
  • 12x faster snapshotting than Redis

Highly Scalable

Simple Vertical Scaling

Dragonfly is architected to scale vertically on a single machine, saving teams the cost and complexity of managing a multi-node cluster. For in-memory datasets up to 1TB, Dragonfly offers the simplest and most reliable scale on the market.

  • 1 TB of in-memory data on a single instance
  • 30% less memory usage

Unparalleled efficiency

30-60% better memory utilization than Redis

Dragonfly utilizes an innovative hash table structure called dashtable to minimize memory overhead and tail latency. Dragonfly also uses bitpacking and DenseSet techniques to compress in-memory data, making it on average 30% more memory efficient than Redis. Lastly, Dragonfly maintains consistent memory usage during snapshotting, eliminating the need to over-provision memory that is typical with Redis.

Dragonfly
Redis

Memory usage under BGSAVE. Filled with 5GB of data using debug populate 5000000 key 1024, with update traffic sent via memtier and a snapshot taken with bgsave.

All-new architecture

A new in-memory data store, rearchitected for today

Memory Efficient

While classic chaining hash-tables are built upon a dynamic array of linked-lists, Dragonfly's dashtable is a dynamic array of flat hash-tables of constant size. This design allows for much better memory efficiency.
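As a rough illustration of that layout, here is a toy sketch in Python. The segment size, growth rule, and overflow handling are simplifications invented for this example; the real dashtable (which builds on extendible hashing) is considerably more sophisticated:

```python
SEGMENT_SLOTS = 8  # illustrative constant segment size, not Dragonfly's actual value

class DashTableSketch:
    """Toy model of the dashtable idea: a dynamic directory of flat,
    constant-size segments, instead of per-entry linked-list nodes."""

    def __init__(self):
        # directory of segments; each segment is a flat, bounded list of (key, value)
        self.segments = [[] for _ in range(2)]

    def _segment_for(self, key):
        return self.segments[hash(key) % len(self.segments)]

    def set(self, key, value):
        seg = self._segment_for(key)
        for i, (k, _) in enumerate(seg):
            if k == key:              # key already present: update in place
                seg[i] = (key, value)
                return
        while len(seg) >= SEGMENT_SLOTS:
            self._grow()              # segment full: double the directory, rehash
            seg = self._segment_for(key)
        seg.append((key, value))

    def get(self, key, default=None):
        for k, v in self._segment_for(key):
            if k == key:
                return v
        return default

    def _grow(self):
        old = self.segments
        self.segments = [[] for _ in range(len(old) * 2)]
        for seg in old:
            for k, v in seg:
                self._segment_for(k).append((k, v))

d = DashTableSketch()
for i in range(20):
    d.set(f"key:{i}", i)
print(d.get("key:7"))  # -> 7
```

The memory point is visible even in the toy: entries live inline in flat segments, so there is no per-entry node or next-pointer overhead as in a chained table.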

High Hit Ratio

Dragonfly utilizes a unique 'least frequently recently used' (LFRU) cache policy. Compared to Redis' LRU cache policy, LFRU is resistant to fluctuations in traffic, does not require random sampling, has zero memory overhead per item, and has a very small run-time overhead.
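To make the contrast with LRU concrete, here is an illustrative frequency-plus-recency eviction sketch. It is not Dragonfly's algorithm (which achieves zero per-item overhead, whereas this toy stores a counter and timestamp per entry), but it shows why frequency-aware eviction resists one-off traffic spikes where pure LRU does not:

```python
import itertools

class LFRUCacheSketch:
    """Illustrative eviction combining frequency and recency: the
    victim is the least frequently used entry, with recency as the
    tie-breaker. A pure-LRU cache would instead evict whatever was
    touched least recently, even a long-term hot key."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = itertools.count()   # monotonically increasing "time"
        self.entries = {}                # key -> [value, hits, last_used]

    def get(self, key, default=None):
        entry = self.entries.get(key)
        if entry is None:
            return default
        entry[1] += 1                    # bump frequency
        entry[2] = next(self.clock)      # bump recency
        return entry[0]

    def set(self, key, value):
        if key not in self.entries and len(self.entries) >= self.capacity:
            # evict lowest (frequency, recency) pair
            victim = min(self.entries,
                         key=lambda k: (self.entries[k][1], self.entries[k][2]))
            del self.entries[victim]
        self.entries[key] = [value, 1, next(self.clock)]

cache = LFRUCacheSketch(capacity=2)
cache.set("hot", "h")
for _ in range(5):
    cache.get("hot")          # "hot" accumulates frequency
cache.set("scan:1", "a")      # one-off key from a scan
cache.set("scan:2", "b")      # evicts scan:1 (freq 1), not "hot" (freq 6)
print(cache.get("hot"))       # -> h
```

Under pure LRU, the two scan keys would have pushed "hot" out; frequency awareness is what makes the policy stable under bursty traffic.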

High Throughput

Dragonfly's new in-memory engine, optimized for throughput, uses a thread-per-core architecture without locks to deliver stable and low latencies. By implementing true async interfaces, Dragonfly takes full advantage of the underlying hardware to deliver maximum performance.
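The shared-nothing idea behind thread-per-core can be sketched as deterministic shard routing: every key is owned by exactly one shard, and if each shard is only ever touched by one thread, no locks are needed. The shard count and hash choice below are illustrative stand-ins, not Dragonfly's internals:

```python
import hashlib

NUM_SHARDS = 4  # stand-in for "one shard per core"

def shard_of(key: str) -> int:
    """Route a key to the single shard that owns it. With each shard
    handled by exactly one thread, operations on that shard need no
    locks -- the shared-nothing principle of a thread-per-core design."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Every key deterministically lands on exactly one owner shard:
shards = [dict() for _ in range(NUM_SHARDS)]
for key, value in [("user:1", "ada"), ("user:2", "lin"), ("cart:9", [3, 5])]:
    shards[shard_of(key)][key] = value
```

Since routing is a pure function of the key, any thread can compute the owner and hand the operation off to it, which is how contention is avoided without shared locks.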

Start building today

Dragonfly is fully compatible with the Redis ecosystem and requires no code changes to adopt.