Valkey Installation + Setup Guide

August 15, 2024

Valkey is a high-performance, open-source key-value store created as a fork of Redis, offering features such as clustering, enhanced observability, and multi-threading. Suited to both small applications and large distributed systems, it is engineered for high-throughput workloads and fast, scalable data access. This guide walks you through installing and setting up Valkey, from basic configuration to more advanced deployments with Docker, Kubernetes, and the Valkey Operator.

Prerequisites

Before starting, ensure you have the following:

  • A compatible operating system (Linux, macOS, or BSD).
  • Basic knowledge of command-line operations.
  • Access to a server or virtual machine for installation.
  • Docker installed if you plan to use Docker for deployment.
  • A Kubernetes cluster set up if you plan to use Kubernetes for managing Valkey.

Installation Guide

Installing Valkey on Linux

  1. Clone the Valkey Repository

    git clone https://github.com/valkey-io/valkey.git
    cd valkey
    
  2. Build Valkey from Source

    make
    
  3. Run the Valkey Server

    ./src/valkey-server
    

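With the server running, you can check that it responds. The make step also builds the valkey-cli client in the same src directory; the commands below assume the server is listening on the default port 6379.

    # Connect with the bundled CLI and send a PING; a healthy server replies with PONG.
    ./src/valkey-cli ping

    # Optionally set and read back a key to confirm basic operations.
    ./src/valkey-cli set greeting "hello"
    ./src/valkey-cli get greeting
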
Installing Valkey on macOS

  1. Clone the Repository

    git clone https://github.com/valkey-io/valkey.git
    cd valkey
    
  2. Build Valkey

    make
    
  3. Start the Valkey Server

    ./src/valkey-server
    

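Note that building from source requires a C compiler and make. On macOS, these are typically provided by the Xcode Command Line Tools; if the make step fails because a compiler is missing, installing them usually resolves it:

    # Install the Xcode Command Line Tools (provides clang and make), if not already present.
    xcode-select --install
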
Basic Configuration

After installation, you can configure Valkey by editing the valkey.conf file. This file allows you to set parameters such as port numbers, memory limits, and security settings.

Editing the Configuration File

  1. Open the configuration file:

    # The valkey repository we cloned earlier has a default configuration file.
    # Make sure you modify and use the desired 'valkey.conf' file.
    nano /path/to/your/valkey.conf
    
  2. There are many configuration options available, for example (a minimal configuration sketch is shown after these steps):

    • port: Change the default port if required.
    • maxmemory: Cap memory usage at the specified number of bytes (size suffixes such as 100mb or 2gb are also accepted).
  3. Restart the Valkey server to apply the changes from the configuration file:

    ./src/valkey-server /path/to/your/valkey.conf
    

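As a reference, below is a minimal valkey.conf sketch that ties these options together. The values are illustrative assumptions only; adjust them to your environment.

    # valkey.conf (minimal example with assumed values)

    # Listen on a non-default port.
    port 6390

    # Cap memory usage at roughly 256 MB.
    maxmemory 256mb

    # Evict the least recently used keys once the limit is reached.
    maxmemory-policy allkeys-lru

    # Require clients to authenticate (choose a strong password).
    requirepass change-me
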
Valkey Cluster Setup

To utilize Valkey in a distributed environment, you can set up a cluster. A cluster shards data across multiple primary nodes and, once replicas are added, also provides redundancy alongside higher throughput.

Step-by-Step Cluster Setup

  1. Configure cluster mode by editing the configuration file of a node:

    # Cluster configuration in the 'valkey-6380.conf' file.
    
    # Set the port number for the current node to '6380'.
    port 6380
    
    # Run the current node as a cluster node.
    cluster-enabled yes
    
    # Specify the name of the cluster configuration file.
    # The specified file is not intended to be edited by hand.
    # It is created and updated by each node automatically.
    cluster-config-file nodes-6380.conf
    
    # Set the number of milliseconds a node may be unreachable
    # before it is considered to be in a failure state.
    cluster-node-timeout 5000
    
  2. Start multiple Valkey instances, each with its own configuration file (create 'valkey-6381.conf' and 'valkey-6382.conf' in the same way, adjusting port and cluster-config-file):

    ./src/valkey-server /path/to/your/valkey-6380.conf
    ./src/valkey-server /path/to/your/valkey-6381.conf
    ./src/valkey-server /path/to/your/valkey-6382.conf
    
  3. Join nodes to form a cluster:

    # Each node has a general format of 'host:port'.
    # In this example, we are creating a cluster with three
    # primary nodes running locally on ports 6380, 6381, and 6382.
    ./src/valkey-cli --cluster create '127.0.0.1:6380' '127.0.0.1:6381' '127.0.0.1:6382'
    
  4. Verify cluster status:

    ./src/valkey-cli --cluster check '127.0.0.1:6380'
    
    # The example output shows the status of the cluster and slot distribution.
    
    # 127.0.0.1:6380 (92febc03...) -> 0 keys | 5461 slots | 0 replicas.
    # 127.0.0.1:6382 (494be2af...) -> 0 keys | 5461 slots | 0 replicas.
    # 127.0.0.1:6381 (629ec8bb...) -> 0 keys | 5462 slots | 0 replicas.
    # [OK] 0 keys in 3 primaries.
    # 0.00 keys per slot on average.
    # >>> Performing Cluster Check (using node 127.0.0.1:6380)
    # M: 92febc03... 127.0.0.1:6380
    # slots:[0-5460] (5461 slots) master
    # M: 494be2af... 127.0.0.1:6382
    # slots:[10923-16383] (5461 slots) master
    # M: 629ec8bb... 127.0.0.1:6381
    # slots:[5461-10922] (5462 slots) master
    # [OK] All nodes agree about slots configuration.
    # >>> Check for open slots...
    # >>> Check slots coverage...
    # [OK] All 16384 slots covered.
    

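With the cluster reporting full slot coverage, you can confirm that keys are routed across the primaries. The -c flag tells valkey-cli to operate in cluster mode and follow MOVED redirects; the key name below is just an example.

    # Connect to one node in cluster mode and write a key.
    # If the key's hash slot belongs to another node, the CLI prints a
    # redirect line and follows it automatically.
    ./src/valkey-cli -c -p 6380 set user:1000 "alice"

    # Read the key back; any redirect is again followed transparently.
    ./src/valkey-cli -c -p 6380 get user:1000
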
Valkey Docker Image

Docker provides a convenient way to deploy Valkey, especially in environments where consistency and portability are crucial.

Deploying Valkey with Docker

  1. Pull the Valkey Docker image:

    docker pull valkey/valkey
    
  2. Run Valkey in a Docker container:

    docker run -d --name valkey -p 6379:6379 valkey/valkey
    
  3. Persistent Storage: Use a bind mount (or a named Docker volume) so data survives container restarts. If the container from the previous step is still running, remove it first with 'docker rm -f valkey':

    # For the following example, make sure that the directory is shared with Docker.
    docker run -d --name valkey -p 6379:6379 -v /mydata:/data valkey/valkey
    

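To confirm the container is healthy, you can inspect its logs and run the CLI that ships inside the image:

    # Check the container logs for startup messages.
    docker logs valkey

    # Run valkey-cli inside the container and send a PING; expect PONG.
    docker exec -it valkey valkey-cli ping
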
Using Valkey with Kubernetes

Kubernetes is ideal for managing Valkey in a scalable, resilient environment. You can deploy Valkey using Helm charts or Kubernetes Operators for automated management.

Installing Valkey using Helm

Bitnami offers secure, up-to-date, and easy-to-deploy Helm charts for many popular open-source applications, including Valkey. We can use the Bitnami Valkey Helm chart to deploy Valkey on Kubernetes.

  1. Create a namespace:

    kubectl create namespace valkey
    
  2. Install Valkey:

    helm install my-valkey oci://registry-1.docker.io/bitnamicharts/valkey --namespace valkey
    
  3. Check the deployment:

    kubectl get pods,svc -n valkey
    

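Once the pods are running, you can connect from your workstation. The service and secret names below (my-valkey-primary and my-valkey with a valkey-password key) are assumptions based on common Bitnami chart conventions and may differ by chart version; check the output of the previous kubectl get command for the actual names.

    # Read the generated password (assumed secret name and key; verify first).
    export VALKEY_PASSWORD=$(kubectl get secret my-valkey -n valkey \
      -o jsonpath='{.data.valkey-password}' | base64 -d)

    # Forward the (assumed) primary service to your local machine.
    kubectl port-forward svc/my-valkey-primary 6379:6379 -n valkey

    # In another terminal, connect with a locally built valkey-cli.
    ./src/valkey-cli -a "$VALKEY_PASSWORD" ping
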
Managing Valkey with Kubernetes Operators

Valkey Operators allow you to automate the deployment and management of Valkey clusters in Kubernetes. At the time of editing (November 2024), there is no official Valkey operator available. However, you can use third-party projects such as the Redis Operator by Spotahome or the Valkey Operator by Hyperspike. Let's use the Hyperspike Valkey Operator for demonstration:

  1. Install the Valkey Operator in the Kubernetes cluster:

    # Make sure to use the latest release version, which may differ from 'v0.0.39'.
    curl -sL https://github.com/hyperspike/valkey-operator/releases/download/v0.0.39/install.yaml | kubectl apply -f -
    
    # Create a namespace for the Valkey instances the operator will manage.
    kubectl create namespace valkey-operator
    
  2. Deploy a Valkey instance using a minimal configuration:

    # valkey-operator.yaml
    apiVersion: hyperspike.io/v1
    kind: Valkey
    metadata:
       labels:
          app.kubernetes.io/name: valkey-operator
          app.kubernetes.io/managed-by: kustomize
       name: my-valkey
    spec:
       nodes: 1
    
    # Apply the Valkey instance configuration.
    kubectl apply -f valkey-operator.yaml -n valkey-operator
    
  3. Monitor the deployment:

    kubectl get pods -n valkey-operator
    

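You can also inspect the Valkey custom resource itself to see what the operator has reconciled. The resource name matches the manifest above; the exact resource type name accepted by kubectl (valkey or valkeys) depends on how the CRD is defined.

    # List Valkey custom resources in the namespace.
    kubectl get valkey -n valkey-operator

    # Show detailed status and events for the instance defined above.
    kubectl describe valkey my-valkey -n valkey-operator
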
Monitoring and Management

To ensure Valkey is running optimally, you should regularly monitor its performance.

Tools for Monitoring

  • Valkey CLI: Use commands like INFO to get real-time data (see the example below).
  • Prometheus and Grafana: Integrate for advanced monitoring and visualization.
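
For example, INFO returns statistics grouped into sections (server, memory, replication, and so on) that can be queried individually:

    # Show memory statistics for a locally built server on the default port.
    ./src/valkey-cli INFO memory

    # A quick look at server facts such as version and uptime.
    ./src/valkey-cli INFO server | head -n 15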

Conclusion

Valkey offers a robust alternative to Redis, with features tailored for modern, scalable applications. Whether you are deploying Valkey with Docker or managing it with Kubernetes, this guide provides the steps you need to get started efficiently. Be sure to monitor your Valkey instances regularly to maintain optimal performance and reliability.

For more detailed documentation and community support, explore the Valkey GitHub repository and official documentation.
