Rueidis with Dragonfly: Auto-Pipelining & Client-Side Caching

Explore how our new integration with Rueidis enables auto-pipelining, client-side caching, and distributed locking for Go applications.

July 3, 2024

Introduction

Hi everyone! This is Kostas, a software engineer at Dragonfly. I am here today with some exciting news! We're thrilled to announce our latest integration with Rueidis, a comprehensive Redis client library for Go. This integration aligns with our mission at Dragonfly to offer a modern, scalable, high-throughput in-memory data store that's fully compatible with the Redis wire protocol and command APIs.

Rueidis with Auto-Pipelining & Client-Side Caching

Rueidis is a relatively new Redis client library for the Go programming language, designed to enhance performance through auto-pipelining and client-side caching. Auto-pipelining improves throughput by automatically batching commands from concurrent callers into pipelines, reducing the number of network round-trips between the client and the server. You can read more about pipelining and other batch operations available in Dragonfly in this blog post.
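
Here is a minimal sketch of what auto-pipelining looks like in practice, assuming a Dragonfly instance listening locally on the default port; the key names and value are placeholders. Commands sent from concurrent goroutines through the same client are batched into pipelines automatically, with no explicit pipelining API to call.

package main

import (
	"context"
	"fmt"
	"sync"

	"github.com/redis/rueidis"
)

func main() {
	client, err := rueidis.NewClient(
		rueidis.ClientOption{
			InitAddress: []string{"127.0.0.1:6379"}, // Dragonfly server address.
		},
	)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := context.Background()
	var wg sync.WaitGroup

	// Commands issued concurrently on the same client are automatically
	// batched into pipelines, so these 100 SETs share a handful of
	// network round-trips instead of 100 individual ones.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			key := fmt.Sprintf("key_%03d", i)
			if err := client.Do(ctx, client.B().Set().Key(key).Value("value").Build()).Error(); err != nil {
				panic(err)
			}
		}(i)
	}
	wg.Wait()
}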

Client-side caching further boosts performance by avoiding unnecessary server calls and minimizing network bandwidth usage. It achieves this through a server-assisted, opt-in notification mechanism for key changes. This means that a client can cache a key/value pair locally and will only need to invalidate and update this cache when notified by the server of any modifications. This approach eliminates the need for the client to constantly poll the server for key changes. Moreover, if a key remains valid and available on the client side, the client can respond to requests without reaching out to the Redis server at all, significantly reducing latency. By leveraging client-side caching, you can ensure more efficient data handling and faster response times, enhancing the overall performance of your application.

Client Tracking Advancements in Dragonfly

In a previous blog post, we discussed how we integrated Dragonfly with Relay, a Redis client library featuring client-side caching capabilities. If you're interested in the implementation details and benchmarks, you can find them there, detailing Dragonfly's original journey to support client-side caching.

Until now, Dragonfly only supported unconditional tracking per client connection, limited to the CLIENT TRACKING ON/OFF command. However, with our latest release, we've expanded this functionality significantly. We now support the OPTIN, OPTOUT, and NOLOOP subcommands for the CLIENT TRACKING command. Additionally, we've introduced the CLIENT CACHING YES/NO command, which provides users with granular control over which keys are being tracked.
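
To make the new commands concrete, here is an illustrative redis-cli session against Dragonfly; the key names are placeholders, and the client connects with RESP3 (the -3 flag), which is required for the server to push invalidation messages.

$ redis-cli -3
127.0.0.1:6379> CLIENT TRACKING ON OPTIN
OK
127.0.0.1:6379> CLIENT CACHING YES
OK
127.0.0.1:6379> GET key_01
(nil)
127.0.0.1:6379> GET key_02
(nil)

In this session, key_01 is tracked because its read was preceded by CLIENT CACHING YES, while key_02 is not; the server will push an invalidation message only when key_01 is later modified.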

These enhancements allow for more flexible and efficient client-side caching, giving users the ability to fine-tune their caching strategy and optimize performance based on their specific needs. Under the hood, these are precisely the features Rueidis uses to provide seamless, configurable, server-assisted client-side caching, as shown in the examples below.

package main

import (
	"context"
	"time"

	"github.com/redis/rueidis"
)

func main() {
	client, err := rueidis.NewClient(
		rueidis.ClientOption{
			InitAddress: []string{"127.0.0.1:6379"}, // Dragonfly server address.
		},
	)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := context.Background()

	// The opt-in mode of server-assisted client-side caching is enabled by default
	// and can be used by calling 'DoCache()' or 'DoMultiCache()'
	// with client-side TTLs specified as shown below.
	client.DoCache(
		ctx,
		client.B().Hmget().Key("hash_key").Field("field_01", "field_02").Cache(),
		time.Minute*1,
	).ToArray()

	// Multiple cacheable commands can also be batched together,
	// each with its own client-side TTL.
	client.DoMultiCache(
		ctx,
		rueidis.CT(client.B().Get().Key("key_01").Cache(), time.Minute*1),
		rueidis.CT(client.B().Get().Key("key_02").Cache(), time.Minute*2),
	)
}
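
As a quick way to observe this behavior locally, the result of a cacheable command reports whether it was served from the client-side cache. The fragment below is a sketch meant to sit inside the main function above, reusing the same client and ctx; key_01 is a placeholder.

	// First call: the value is fetched from Dragonfly and cached locally.
	first := client.DoCache(ctx, client.B().Get().Key("key_01").Cache(), time.Minute*1)
	_ = first.IsCacheHit() // false: the response came from the server.

	// Second call within the TTL: served from the local cache with no server
	// round-trip, unless Dragonfly has invalidated "key_01" in the meantime.
	second := client.DoCache(ctx, client.B().Get().Key("key_01").Cache(), time.Minute*1)
	_ = second.IsCacheHit() // true: the response came from the local cache.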

Distributed Locks with Client-Side Caching

Rueidis is a comprehensive client library with additional features that abstract complexity away for developers, such as the auto-pipelining and client-side caching we discussed above. It's worth noting that client-side caching is particularly useful when implementing distributed locks as well. Although distributed locking is a topic of its own, the gist is that Rueidis ships a module called rueidislock, an implementation of the Redis distributed lock pattern (Redlock) that relies heavily on client-side caching to avoid polling the server for key changes at intervals, reducing the load placed on server resources.
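
Below is a minimal sketch of acquiring a lock with rueidislock against a single local Dragonfly instance; the lock name and the KeyMajority value are illustrative choices rather than recommendations.

package main

import (
	"context"

	"github.com/redis/rueidis"
	"github.com/redis/rueidis/rueidislock"
)

func main() {
	locker, err := rueidislock.NewLocker(
		rueidislock.LockerOption{
			ClientOption: rueidis.ClientOption{
				InitAddress: []string{"127.0.0.1:6379"}, // Dragonfly server address.
			},
			KeyMajority: 1, // Number of lock keys that must be acquired; 1 is illustrative here.
		},
	)
	if err != nil {
		panic(err)
	}
	defer locker.Close()

	// Block until the lock named "my_lock" is acquired. The returned context is
	// canceled if the lock is lost, and cancel() releases the lock.
	ctx, cancel, err := locker.WithContext(context.Background(), "my_lock")
	if err != nil {
		panic(err)
	}
	defer cancel()

	// Do work that requires the lock, checking ctx for cancellation.
	_ = ctx
}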

To Conclude

We believe that the features mentioned above, namely auto-pipelining, client-side caching, and a Redlock implementation enhanced by client-side caching, fit naturally into Dragonfly's shared-nothing, thread-per-core architecture. These capabilities enable Go users to achieve extraordinary scalability and performance with Dragonfly and Rueidis.

For more information on using Rueidis with Dragonfly, be sure to check out our documentation and GitHub repository, where you'll find a wealth of resources to get started with this powerful combination.
