Automating Dragonfly Cloud Private Networks with Terraform
Learn how to configure private networks for Dragonfly Cloud data stores using Terraform—improving security, lowering latency, and optimizing costs.
March 25, 2025

Private Networks in Dragonfly Cloud
When creating data stores in Dragonfly Cloud, you can configure either public or private endpoints for communication. While the platform supports both options, I recommend private endpoints whenever possible—they enhance security, improve performance with lower latency, and often reduce data transfer costs.
In this post, we’re going to take a look at the specific benefits of private networks for Dragonfly data stores and provide a step-by-step guide to implementing them using the Dragonfly Cloud Terraform Provider, allowing you to automate what would otherwise be a complex, manual process.
We will also take this opportunity to learn more about the recently released Dragonfly Cloud Terraform Provider. If you’ve not read the previous Terraform announcement post, which covers the basics of our provider, I’d really recommend checking that out before reading this!
Why Private Networks Are Essential for Your Data Stores
Private networks in cloud environments are secure, isolated networks that allow you to connect your data stores to your applications or other resources from your cloud vendors without exposing them to the public internet. If you want to configure a private endpoint for your Dragonfly data store, you need to create a private network first.
| Private Endpoints | Public Endpoints |
|---|---|
| Accessible only within a private network (e.g., a VPC), reducing exposure to attacks. | Accessible over the internet, making them targets for DDoS and unauthorized access attempts. |
| Traffic is routed directly within the cloud provider's internal network using high-speed connections, keeping latency low. | Traffic routed over the public internet passes through multiple hops and routers, increasing latency and degrading performance due to congestion and inefficient routing. |
| TLS is disabled by default, reducing computational overhead. | TLS encryption is recommended for security, but it adds computational overhead and can reduce performance. |
| Traffic routed through private networks typically avoids internet egress fees, drastically reducing expenses. | Data transferred over the public internet incurs cloud provider bandwidth fees, which can escalate quickly with high-volume workloads. |
Choosing private endpoints over public endpoints provides several benefits:
- Better Security: Public endpoints are accessible over the internet, making them potential targets for attacks. This risk is mitigated by using an endpoint that is only accessible within a private network, like a VPC in AWS.
- Higher Throughput: When using private endpoints, data transfer occurs within the cloud provider’s internal network, which is optimized for high throughput. Additionally, TLS encryption, which is mandatory for public endpoints, adds computational overhead and can reduce the effective throughput, especially for data-intensive workloads.
- Lower Latency: With private endpoints, traffic stays within the cloud provider’s internal network, which is designed for low-latency communication. These networks are also less congested than the public internet.
- Cost Efficiency: Cloud providers often charge for data transfer over the public internet, while internal network traffic is typically free or significantly cheaper. Reduced latency and improved throughput also allow your apps to operate more efficiently, potentially lowering compute and storage costs.
How to Create a Private Network in Dragonfly
I think by now you can start to understand what makes private networks so useful. But configuring them can be a bit of a cumbersome process. Let’s see what the flow looks like when you’re configuring things manually:
- You first create a private network on the Dragonfly Cloud platform.
- Then you create a VPC and all the cloud provider-specific things you need for your network (internet gateway, subnets, security groups, etc.). This can be a really tedious and time-consuming process. And there are a lot of settings you need to make sure you get right each time.
- After that, you create a peering connection. A peering connection is what enables you to connect a private network in Dragonfly Cloud to the VPC in your cloud provider. This peering connection enables the two networks to talk over the private IP space.
- Once all of this is done, you can create a data store with a private endpoint by choosing the private network you created earlier.
As you can already see, this doesn’t make for a friendly developer experience. Solving this problem was one of the motivations behind the Dragonfly Cloud Terraform Provider. Terraform allows you to define all the steps we discussed above in the form of code. We already discussed how to create a data store using the Terraform provider in the previous blog. Now, let’s take a look at how you can configure private endpoints for data stores using it!
Exploring the Dragonfly Cloud Terraform Provider
We’ll configure private endpoints for a Dragonfly Cloud data store by creating a private network and linking it to your AWS VPC. This guide focuses on the Dragonfly-side setup, since your cloud provider may vary; you can review the complete Terraform code example in our repository for an end-to-end implementation.
Setting up the Terraform Provider
The first thing we do is initialize the Dragonfly Cloud provider and specify its source in the Terraform registry:
```hcl
terraform {
  required_providers {
    dfcloud = {
      source = "registry.terraform.io/dragonflydb/dfcloud"
    }
  }
}

provider "dfcloud" {}
```
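With the provider configured, a typical workflow is to initialize the working directory and preview changes before applying them. Note that the API key environment variable name below is an assumption on my part; check the provider documentation for the exact authentication setup:

```shell
# Provide the Dragonfly Cloud API key to the provider.
# (Assumed variable name -- verify against the provider docs.)
export DFCLOUD_API_KEY="your-api-key"

# Download the dfcloud provider plugin and initialize the directory.
terraform init

# Preview the resources Terraform plans to create.
terraform plan
```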
Creating a Private Network
After that, we create a private network in the Dragonfly Cloud account:
```hcl
resource "dfcloud_network" "network" {
  name = "network"

  location = {
    region   = "us-east-1"
    provider = "aws"
  }

  cidr_block = "192.168.0.0/16"
}
```
The configuration we pass to the resource above is the same as what we see in the UI when creating a network. One thing to keep in mind: the network's CIDR block must not overlap with that of the VPC you plan to peer it with, since VPC peering requires non-overlapping address ranges.
Establishing a Peering Connection
After creating our private network, we need to create a peering connection. A peering connection is what will allow us to connect our private network to the VPC in AWS so that other applications and resources in this VPC can communicate with our data store.
```hcl
resource "dfcloud_connection" "connection" {
  depends_on = [aws_vpc.client, dfcloud_network.network]

  name = "connection"

  peer = {
    account_id = data.aws_caller_identity.current.account_id
    region     = "us-east-1"
    vpc_id     = aws_vpc.client.id
  }

  network_id = dfcloud_network.network.id
}
```
The `depends_on` field tells Terraform to pause the creation of this resource until the resources it depends on have been fully created. Here, you can see we use the ID of the VPC (`aws_vpc.client`) that was also created in AWS using Terraform. If you check the `aws.tf` file in the sample repo, you'll find the code that creates this VPC:
```hcl
resource "aws_vpc" "client" {
  cidr_block = "172.16.0.0/16"

  tags = {
    Name = "tf-client-vpc"
  }
}
```
We also use the `aws_caller_identity` data source, defined in the `aws.tf` file, to pass the AWS account ID to our peering connection:

```hcl
data "aws_caller_identity" "current" {}
```
This retrieves the current AWS account ID, which can be used later in the configuration.
Accepting the Peering Connection
Just creating this peering connection isn’t enough. We also need to accept the peering connection from AWS. The following code does this with the help of a peering connection accepter:
```hcl
resource "aws_vpc_peering_connection_accepter" "accepter" {
  depends_on = [dfcloud_connection.connection]

  vpc_peering_connection_id = dfcloud_connection.connection.peer_connection_id
  auto_accept               = true
}
```
You’ll see that we get the peering connection ID from the `dfcloud_connection.connection` resource that we created above.
Configuring Network Routes
The next thing to do is add a route to the AWS route table to allow traffic to the Dragonfly network via the VPC peering connection:
```hcl
resource "aws_route" "route" {
  depends_on = [aws_vpc_peering_connection_accepter.accepter]

  route_table_id            = aws_route_table.route-public.id
  destination_cidr_block    = dfcloud_network.network.cidr_block
  vpc_peering_connection_id = dfcloud_connection.connection.peer_connection_id
}
```
Here, besides the Dragonfly Cloud resources we just created, we also refer to the AWS route table, which we create in the `aws.tf` file:
```hcl
resource "aws_route_table" "route-public" {
  vpc_id = aws_vpc.client.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.test_env_gw.id
  }

  tags = {
    Name = "public-route-table-demo"
  }
}
```
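The route table above references an internet gateway, `aws_internet_gateway.test_env_gw`, which isn't shown in this post. In the sample repo it is defined along these lines (a sketch; see `aws.tf` for the exact code):

```hcl
resource "aws_internet_gateway" "test_env_gw" {
  # Attach the gateway to the same VPC as the route table.
  vpc_id = aws_vpc.client.id
}
```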
Setting up Security Groups
Finally, we create a security group in AWS that allows traffic on port `6379` (the default port for Dragonfly, Redis, and Valkey) between the AWS VPC and the Dragonfly network:
```hcl
resource "aws_security_group" "allow_dfcloud" {
  depends_on = [aws_vpc.client]

  vpc_id = aws_vpc.client.id

  egress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = [dfcloud_network.network.cidr_block]
  }

  ingress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = [dfcloud_network.network.cidr_block]
  }
}
```
Creating a Data Store with a Private Endpoint
After all of this is done, we create a data store using code similar to the one we saw in the previous blog:
```hcl
resource "dfcloud_datastore" "cache" {
  depends_on = [dfcloud_connection.connection]

  name = "cache"

  location = {
    region   = "us-east-1"
    provider = "aws"
  }

  network_id = dfcloud_network.network.id

  tier = {
    max_memory_bytes = 200000000000
    performance_tier = "enhanced"
    replicas         = 1
  }
}
```
Outputting Connection Information
Finally, we output the endpoint of our created data store so that the users of our script can see what URI to use to connect to it:
```hcl
output "redis-endpoint" {
  sensitive = true
  value     = "redis://default:${dfcloud_datastore.cache.password}@${dfcloud_datastore.cache.addr}"
}

output "instance-ip" {
  value = aws_instance.vm.public_ip
}

output "instance-id" {
  value = aws_instance.vm.id
}
```
Since our data store has a private endpoint, there is no way to communicate with it from our local machine. So, in this example, we also create an AWS EC2 instance in our VPC, which we can SSH into to check whether we can reach the data store over the private network. That's why we output the IP and ID of this EC2 instance. You should be able to verify your connection to the data store by using `redis-cli` from within the EC2 instance.
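The check from the EC2 instance might look like this, with placeholder values for the key path, instance IP, and data store address (taken from the Terraform outputs):

```shell
# SSH into the EC2 instance using the public IP from the instance-ip output.
ssh -i ~/.ssh/your-key.pem ec2-user@<instance-ip>

# From inside the instance, ping the data store's private endpoint.
# The password and address come from the redis-endpoint output.
redis-cli -u "redis://default:<password>@<private-address>" PING
```

A `PONG` reply confirms that traffic is flowing between your VPC and the Dragonfly private network over the peering connection.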
Streamlining Your Data Infra with Private Networks
I hope this blog has given you a much better understanding of the benefits of communicating with your Dragonfly Cloud data stores over private endpoints as opposed to public ones. If you decide to configure private endpoints, the initial setup might be a little complicated, but the benefits clearly outweigh that in the long run. Using an infrastructure-as-code approach with the Dragonfly Cloud Terraform Provider is one way to make the process of creating networks easier to manage and reproduce, as we saw in this tutorial.
As next steps, I’d recommend trying out what we learned in this blog and creating your own data stores with private endpoints. If you feel stuck at any point, here is the complete documentation for the Dragonfly Cloud Terraform provider. The code example we discussed in this tutorial is available on GitHub to help you get started!