Can't Resize your Postgres Kubernetes Volume? No Problem!
You've built an application and are using Postgres to run it. You move it into production. Things are going great. So great that you've accumulated so much data that you need to resize your disk.
Before the cloud, this often involved either expanding your disk partitions or getting a new disk, both of which are costly operations. The cloud has made this much easier: disk resizes can occur online and transparently to the application, and can be as simple as clicking a button (such as in Crunchy Bridge).
If you're running your database on Kubernetes, you can also get fairly cheap disk resizes using persistent volumes. While the operation is simple, it does require you to reattach the PVC to a Pod for the expansion to take effect. If uptime is important, you will want to use something like PGO, the open source Postgres Operator from Crunchy Data, which uses a rolling update strategy to minimize or eliminate downtime.
There is a catch to the above: not every Kubernetes storage system supports storage resize operations. If yours does not, then in order to expand the storage available to your Postgres cluster, you have to create a new cluster and copy data to a larger persistent volume.
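If you're not sure whether your storage supports resizing, you can check the allowVolumeExpansion field on your StorageClass. A minimal check, assuming a storage class named nfs-client (the one that appears in the output below):
# nfs-client is the storage class from this walkthrough; substitute your own
kubectl get storageclass nfs-client \
-o jsonpath='{.allowVolumeExpansion}'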
Though copying data to new volumes is a bit inconvenient, there is still a way to resize your Postgres data volumes while minimizing downtime with PGO. Let's take a look at how we can do that!
"Instances Sets": Creating Postgres Cluster That Are Similar But Different
Following the PGO quickstart, let's create a Postgres cluster that looks like the one below, which adds an additional replica:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-14.0-0
  postgresVersion: 14
  instances:
    - name: inst1
      replicas: 2
      dataVolumeClaimSpec:
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.35-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - 'ReadWriteOnce'
              resources:
                requests:
                  storage: 1Gi
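Assuming you save this manifest to a file named hippo.yaml (the filename is ours; any name works), you can apply it with:
# hippo.yaml is an assumed filename for the manifest above
kubectl apply -n postgres-operator -f hippo.yaml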
Notice that our disk size is only 1Gi. We can verify the PVC capacity using the following selector:
kubectl -n postgres-operator get pvc \
--selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance-set=inst1
which should return something similar to this:
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
hippo-inst1-4pt6-pgdata   Bound    pvc-22f00ce4-6128-4187-ab25-0cfbcac49345   1Gi        RWO            nfs-client     2m12s
hippo-inst1-8k5m-pgdata   Bound    pvc-d92498d1-2968-4f58-a0a8-e7579d90ea52   1Gi        RWO            nfs-client     2m12s
Let's pause and take a look at the postgres-operator.crunchydata.com/instance-set label. In PGO, an "instance set" is a group of Postgres instances that share similar properties, such as what resources are allocated to them. You can provide a name for an instance set (e.g. in the example above it's inst1). If you don't provide a name, the instance set name will default to an incrementing sequence, e.g. 00.
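For example, here is a sketch of a minimal instance set entry with no explicit name (the values are illustrative):
  instances:
    - replicas: 1  # no "name" field: PGO defaults this instance set's name to 00
      dataVolumeClaimSpec:
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 1Gi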
Effectively, PGO instance sets let you create heterogeneous Postgres clusters, which can be useful for creating BI/analytics databases, sizing down your PVCs (as Kubernetes does not let you automatically resize down) or...yup, resizing your Postgres cluster's storage when PVC resizing is unavailable.
Before we create a new instance set, let's first add some data to our database. While PGO provides many different ways to connect to your Postgres cluster, we will use the kubectl exec method to quickly populate the database:
kubectl exec -it -n postgres-operator -c database \
$(kubectl get pods -n postgres-operator --selector='postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master' -o name) -- \
psql -c 'CREATE TABLE abc (id int); INSERT INTO abc SELECT * FROM generate_series(1,50000) x; SELECT count(*) FROM abc;'
which should return:
 count
-------
 50000
(1 row)
Cool. Let's work on resizing this cluster.
When You Can't Resize Your PVC
Before we proceed, note that if your storage class or driver is able to resize your PVC, you should use that method instead.
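For reference, if your storage class does support expansion, a resize can be as simple as patching each PVC's storage request. A sketch, using one of the PVC names from the earlier output:
# PVC name taken from the earlier "kubectl get pvc" output; substitute your own
kubectl -n postgres-operator patch pvc hippo-inst1-4pt6-pgdata \
--type merge -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'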
Let's say we need to resize our Postgres cluster to have 5Gi of storage available. First, let's add a new instance set. The manifest may look similar to this:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-14.0-0
  postgresVersion: 14
  instances:
    - name: inst1
      replicas: 2
      dataVolumeClaimSpec:
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 1Gi
    - name: inst2
      replicas: 2
      dataVolumeClaimSpec:
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 5Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.35-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - 'ReadWriteOnce'
              resources:
                requests:
                  storage: 1Gi
Note the addition of this block in the instances array:
    - name: inst2
      replicas: 2
      dataVolumeClaimSpec:
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 5Gi
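As before, apply the updated manifest (again assuming it's saved as hippo.yaml):
kubectl apply -n postgres-operator -f hippo.yaml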
We can validate that the new instances have larger PVCs with the following command:
kubectl -n postgres-operator get pvc \
--selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance-set=inst2
which should return something similar to:
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
hippo-inst2-2fjb-pgdata   Bound    pvc-f19aee82-88b3-422b-b7f2-a71f0c960c76   5Gi        RWO            nfs-client     5m38s
hippo-inst2-7mbv-pgdata   Bound    pvc-cc7a17d7-17e7-40bd-8991-c701e2ddee86   5Gi        RWO            nfs-client     5m38s
How about our data? Was it copied over to our new instances? We can do a quick test of that. Connect to Postgres in one of the inst2 Pods (the selector matches both inst2 Pods, so we grab the first one):
kubectl exec -it -n postgres-operator -c database \
$(kubectl get pods -n postgres-operator --selector='postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance-set=inst2' -o name | head -n 1) -- \
psql -c 'SELECT count(*) FROM abc;'
You should see the row count returned:
 count
-------
 50000
(1 row)
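If you want an additional check, you can confirm that the new instances are streaming from the primary by querying pg_stat_replication. A sketch, run against the current primary:
# lists each connected standby and its replication state
kubectl exec -it -n postgres-operator -c database \
$(kubectl get pods -n postgres-operator --selector='postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master' -o name) -- \
psql -c 'SELECT application_name, state FROM pg_stat_replication;'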
Great, we're now ready to resize. You may want to watch the resize occur. In a separate terminal, you can run the following command:
watch kubectl get pods -n postgres-operator \
--selector='postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance-set' \
-L postgres-operator.crunchydata.com/instance-set \
-L postgres-operator.crunchydata.com/role
You should see something like:
NAME                 READY   STATUS    RESTARTS   AGE    INSTANCE-SET   ROLE
hippo-inst1-cgkc-0   3/3     Running   0          100s   inst1          master
hippo-inst1-gzwf-0   3/3     Running   0          100s   inst1          replica
hippo-inst2-nqt5-0   3/3     Running   0          24s    inst2          replica
hippo-inst2-wfdp-0   3/3     Running   0          24s    inst2          replica
Now let's remove the original instance set, leaving only the Postgres instances with larger disks:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-14.0-0
  postgresVersion: 14
  instances:
    - name: inst2
      replicas: 2
      dataVolumeClaimSpec:
        accessModes:
          - 'ReadWriteOnce'
        resources:
          requests:
            storage: 5Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.35-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - 'ReadWriteOnce'
              resources:
                requests:
                  storage: 1Gi
Watch what happens: when we remove the inst1 instance set, one of the Postgres instances in inst2 is promoted and becomes the new primary. This means that the application now has access to the larger disks! You can see the changes in the watch view:
NAME                 READY   STATUS    RESTARTS   AGE   INSTANCE-SET   ROLE
hippo-inst2-nqt5-0   3/3     Running   0          95s   inst2          master
hippo-inst2-wfdp-0   3/3     Running   0          95s   inst2          replica
Likewise, you should see only the inst2 set of PVCs available:
kubectl -n postgres-operator get pvc \
--selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance-set
which yields only the inst2 PVCs:
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
hippo-inst2-nqt5-pgdata   Bound    pvc-72f84b9f-e9fb-463a-b448-f5db1c956d49   5Gi        RWO            nfs-client     3m33s
hippo-inst2-wfdp-pgdata   Bound    pvc-44bfe992-1c88-4166-ba2b-43742069d424   5Gi        RWO            nfs-client     3m33s
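As a final sanity check, you can confirm the data is still available on the new primary, using the same kubectl exec approach as before:
# should return a count of 50000, as in the earlier checks
kubectl exec -it -n postgres-operator -c database \
$(kubectl get pods -n postgres-operator --selector='postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master' -o name) -- \
psql -c 'SELECT count(*) FROM abc;'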
Conclusion
For a typical high availability Postgres setup, you want your Postgres instances to use the same resources: this ensures a smooth application experience in the event of a failover.
That said, there are cases where you may need to create different-sized Postgres instances in your HA cluster to accomplish specific goals, such as the disk resizing example above. PGO's grouping of Postgres instances into "instance sets" gives you additional flexibility in how you build out your Postgres cluster, and even allows for functionality that does not currently exist in Kubernetes itself, such as sizing down a disk.
(Interested in seeing PGO in action? Join us for a webinar on Wednesday, Nov 17th.)