
Question: What are the scaling limits of PostgreSQL?

Answer

PostgreSQL, widely known for its robustness and flexibility, serves a broad range of applications, from small projects to large-scale enterprise systems. However, like any database system, it has a set of scaling limits that necessitate planning and strategy for growth and performance optimization.

Vertical Scaling Limits

Vertical scaling refers to adding more resources (CPU, RAM, storage) to the existing database server. PostgreSQL can effectively utilize the hardware up to a certain point:

  • CPU Cores: PostgreSQL scales well across multi-core processors, but beyond roughly 64 cores the return on adding more diminishes due to lock contention among backend processes.
  • Memory: Increasing memory helps with caching (shared_buffers) and can significantly improve read performance. However, there is an optimal size for shared_buffers (often cited as 25% of total RAM), beyond which additional memory may not yield proportional gains; see the sketch after this list.
  • Disk I/O: SSDs greatly outperform HDDs, particularly for write-heavy workloads. Even so, disk throughput and latency eventually become bottlenecks, especially without techniques like partitioning to manage large datasets.
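As a rough illustration of the memory point above, here is a minimal sketch of inspecting and tuning shared_buffers. The '8GB' value assumes a hypothetical server with about 32 GB of RAM and follows the 25% guideline; it is a starting point, not a hard rule.

    -- Inspect the current setting.
    SHOW shared_buffers;

    -- Set shared_buffers to ~25% of RAM on a hypothetical 32 GB server.
    -- Written to postgresql.auto.conf; takes effect only after a restart.
    ALTER SYSTEM SET shared_buffers = '8GB';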

Horizontal Scaling Limits

Horizontal scaling involves distributing the load across multiple database servers or instances.

  • Read Replicas: PostgreSQL supports streaming read replicas, letting you scale out read operations. The main limitation is replication lag: replicas apply changes asynchronously by default, so reads may return stale data, which is unsuitable for some real-time applications (the snippet after this list shows how to measure the lag).
  • Write Scaling: This is more challenging because PostgreSQL uses a single-primary architecture, meaning all writes must go through one server. Scaling writes typically means sharding data across multiple databases, which requires application-level changes or third-party tools (e.g., Citus) to manage the added complexity.
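To judge whether replication lag is acceptable for a given workload, you can measure it directly. A minimal sketch (PostgreSQL 10+; assumes streaming replication is already configured):

    -- On the primary: byte lag of each connected replica.
    SELECT client_addr,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;

    -- On a replica: approximate lag as wall-clock time.
    SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;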

Connection Limitations

Each connection is served by a separate backend process that consumes memory and CPU. PostgreSQL's default limit (max_connections) is 100; it can be raised, but beyond a few hundred connections performance often degrades. Connection poolers such as PgBouncer manage and reuse connections efficiently. The snippet below shows how to inspect and adjust the limit.
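A minimal sketch of checking and raising the connection limit; the value 500 is purely illustrative, and in practice a pooler in front of the database is usually preferable to a large max_connections:

    -- Current limit and current usage.
    SHOW max_connections;
    SELECT count(*) AS open_connections FROM pg_stat_activity;

    -- Raise the limit (illustrative value; requires a restart).
    ALTER SYSTEM SET max_connections = 500;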

Logical Limitations

  • Database Size: PostgreSQL places no practical limit on total database size; a single table, however, is limited to 32 TB (with the default 8 kB block size). For most applications this is more than enough, but extremely large tables require partitioning or other special handling. The queries after this list show how to check current sizes.
  • Row Size: The maximum row size is 1.6 TB (with large values stored out of line via TOAST), though practical considerations usually keep rows much smaller.
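To see how close a deployment is to these limits, you can check sizes directly. A quick sketch ('mydb' and 'events' are placeholder names):

    -- Total size of one database.
    SELECT pg_size_pretty(pg_database_size('mydb'));

    -- Size of one table, including its indexes and TOAST data.
    SELECT pg_size_pretty(pg_total_relation_size('events'));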

Overcoming Limits

To effectively scale PostgreSQL, consider a combination of strategies:

  • Partitioning: Breaks large tables into smaller, more manageable pieces (see the sketch after this list).
  • Indexing: Proper indexing strategies can significantly reduce query times.
  • Connection Pooling: Use tools like PgBouncer to manage database connections.
  • Read Replicas & Load Balancing: Distribute read queries across multiple servers.
  • Sharding/Citus: For write-heavy applications requiring horizontal scaling, consider using extensions like Citus that enable sharding across multiple PostgreSQL nodes.
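As an illustration of the partitioning strategy above, here is a minimal sketch of declarative range partitioning (PostgreSQL 10+). The events table and its columns are hypothetical:

    -- Parent table, partitioned by month on created_at.
    CREATE TABLE events (
        id         bigserial,
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    -- One partition per month; queries filtered on created_at are pruned
    -- down to the relevant partitions.
    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');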

In conclusion, while PostgreSQL has its limits, careful architectural planning, resource management, and use of the scaling techniques above allow it to handle most workloads efficiently.
