
Kubernetes Operator Meets Fully Managed Postgres

Andrew L'Ecuyer

10 min read

As a team, you often get handed a piece of software to deploy and manage, for example Red Hat's Ansible Automation Platform (AAP) or Quay. Red Hat's guidance is to run and manage it in OpenShift, and that's great: you're already comfortable with OpenShift and have a decent-sized deployment. But it turns out that early on you have a decision to make that you may not even have realized was a decision: what are you going to do about the database? Most software needs a database, and the database of choice is overwhelmingly Postgres. Managing stateful systems, however, is a different commitment than managing stateless apps.

In fact, we have this conversation all the time with customers who need and want Postgres. Some want to manage and be responsible for their database themselves; others don't. Well, today you've got another choice. You can still have your database integrated with Crunchy Postgres for Kubernetes, but not actually have to worry about monitoring it. With the latest release of Crunchy Postgres for Kubernetes, you can provision Postgres fully managed by Crunchy Bridge from within your kube cluster.

Crunchy Bridge is a fully managed cloud Postgres service. At a high level, it offers:

  • Deployment on any major public cloud: AWS, Azure, or GCP
  • Plans to fit a variety of needs, from hobby projects to large enterprises
  • In-place scaling and resizing, plus read replicas for horizontal scaling
  • High availability, point-in-time recovery, and managed backups
  • Database monitoring and insights
  • Full support from a team of Postgres experts

Our latest release of Crunchy Postgres for Kubernetes now has an API for deploying and managing Postgres clusters in Crunchy Bridge. Today I’ll walk through the steps of setting up a Kubernetes environment that can deploy a fully managed cloud Postgres cluster.

Running Crunchy Postgres for Kubernetes

To start we will want to get the Postgres Operator up and running if we haven’t already. Naturally, we will need to have a Kubernetes cluster provisioned. In your terminal, make sure that kubectl has its context set to your desired Kubernetes cluster. You can then follow the Installation section in our Quickstart guide to get the operator up and running.
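Before installing anything, it's worth a quick sanity check that kubectl is pointed at the right place and, once you've followed the Quickstart, that the operator is actually up. A minimal sketch (assuming the operator was installed into the postgres-operator namespace, as in the Quickstart):

```shell
# Confirm kubectl is pointed at the intended cluster
kubectl config current-context

# After following the Quickstart, confirm the operator pod is running
kubectl get pods -n postgres-operator
```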

To use the CrunchyBridgeCluster feature, you will need to be running Crunchy Postgres for Kubernetes version 5.6 or greater and have the crunchybridgecluster CRD installed. If you have an earlier version installed, follow the upgrade instructions to bring your installation up to 5.6.

Let’s verify the existence of the crunchybridgecluster resource before moving forward:

$ kubectl get crd crunchybridgeclusters.postgres-operator.crunchydata.com
NAME                                                      CREATED AT
crunchybridgeclusters.postgres-operator.crunchydata.com   ...

Connecting to Crunchy Bridge

  1. Create an account: If you have not used Crunchy Bridge before, create an account. You will also need an active payment method. Note that Crunchy Bridge invoices for prorated fees, so you can create a test cluster and destroy it without incurring a full month's cost.
  2. Create an API key: Each account can create API keys that allow the operator to authenticate with the Bridge API. You can find API keys under My Account in the left sidebar.
  3. Get your team ID: The URL in your web browser should take the form https://crunchybridge.com/teams/<Your Team ID>/dashboard, where <Your Team ID> is a string of random characters. Copy your Team ID and keep it handy, as we are about to use it.

Set up Kubernetes API for Crunchy Bridge

Create a Secret in our Kubernetes cluster for Bridge API key

We are now ready to create a Secret in our Kubernetes cluster that the operator will use to authenticate with Bridge. Execute the following command, filling in the API Key and the Team ID as specified:

kubectl create secret generic bridge-connection-secret \
  -n postgres-operator \
  --from-literal=key=<your Crunchy Bridge API Key here> \
  --from-literal=team=<your Crunchy Bridge Team ID here>

Note: if the operator is running in a different namespace, you will want to create the secret in that namespace.
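To confirm the Secret landed with both expected keys before moving on, you can dump its data map (the values will be base64-encoded at this point):

```shell
# Both 'key' and 'team' should appear in the output
kubectl get secret bridge-connection-secret -n postgres-operator \
  -o jsonpath='{.data}' ; echo
```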

Creating a Bridge Cluster CRD

With Crunchy Postgres for Kubernetes now running and our connection secret in place, we are now ready to create a Postgres cluster in Bridge. To do this we will need to create a manifest for creating and manipulating the CrunchyBridgeCluster Custom Resource in Kubernetes. Here is a basic example:

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: CrunchyBridgeCluster
metadata:
  name: my-test-cluster-cr
  namespace: postgres-operator
spec:
  isHa: false
  clusterName: my-test-bridge-cluster
  plan: standard-8
  majorVersion: 16
  provider: aws
  region: us-west-2
  secret: bridge-connection-secret
  storage: 100Gi
  roles:
    - { name: 'application', secretName: 'application-role-secret' }
    - { name: 'postgres', secretName: 'postgres-role-secret' }

Let’s walk through this manifest. When we kubectl apply this YAML file, we create a CrunchyBridgeCluster custom resource named my-test-cluster-cr in the postgres-operator namespace. The operator will see this CR and begin creating a cluster in Bridge with the specifications in the spec:

  • isHa: false will ask that the cluster have the High Availability feature turned off
  • clusterName: my-test-bridge-cluster will request that the name of the cluster in Bridge be my-test-bridge-cluster
  • plan: standard-8 will request a standard-8 plan, which sets the amount of memory, number of CPUs, and other resources (see the Plans and pricing documentation for details)
  • majorVersion: 16 sets the Postgres version to 16
  • provider: aws asks that the cluster be provisioned in AWS (other options are azure and gcp)
  • region: us-west-2 tells Bridge to provision the cluster in the us-west-2 region
  • secret: bridge-connection-secret tells the operator the name of the secret that we created earlier that should be used to authenticate with Bridge
  • storage: 100Gi will ask that the cluster be provisioned with 100 gibibytes of storage space
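Before creating the cluster for real, you can have the API server validate the manifest against the installed CRD schema with a server-side dry run (assuming the manifest is saved as myTestCluster.yaml):

```shell
# Validate the manifest against the CRD schema without creating anything
kubectl apply -f myTestCluster.yaml --dry-run=server
```

This catches schema errors, such as a misspelled field name, without contacting Bridge or incurring any cost.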

Apply the CR to create the cluster

Save the file and then apply it to create the cluster:

$ kubectl apply -f myTestCluster.yaml
crunchybridgecluster.postgres-operator.crunchydata.com/my-test-cluster-cr created

Cluster details

If we do a kubectl describe on the crunchybridgecluster resource type we can look at the details of our cluster:

$ kubectl describe crunchybridgecluster -n postgres-operator
Name:         my-test-cluster-cr
Namespace:    postgres-operator
Labels:       <none>
Annotations:  <none>
API Version:  postgres-operator.crunchydata.com/v1beta1
Kind:         CrunchyBridgeCluster
Metadata:
  Creation Timestamp:  2024-03-19T19:36:13Z
  Finalizers:
    crunchybridgecluster.postgres-operator.crunchydata.com/finalizer
  Generation:        1
  Resource Version:  41244527
  UID:               b40169d0-a3dc-445d-8221-5392c54a0690
Spec:
  Cluster Name:   my-test-bridge-cluster
  Is Ha:          false
  Major Version:  16
  Plan:           standard-8
  Provider:       aws
  Region:         us-west-2
  Secret:         bridge-connection-secret
  Storage:        100Gi
Status:
  Conditions:
    Last Transition Time:  2024-03-19T19:36:14Z
    Message:               Bridge cluster state is creating.
    Observed Generation:   1
    Reason:                creating
    Status:                False
    Type:                  Ready
    Last Transition Time:  2024-03-19T19:36:15Z
    Message:               No upgrades being performed
    Observed Generation:   1
    Reason:                NoUpgradesInProgress
    Status:                False
    Type:                  Upgrading
  Host:                    p.ft36khfslre6hhwykh5m5u2sau.db.postgresbridge.com
  Id:                      ft36khfslre6hhwykh5m5u2sau
  Is Ha:                   false
  Is Protected:            false
  Major Version:           16
  Name:                    my-test-bridge-cluster
  Plan:                    standard-8
  Responses:
    Cluster:
      Cpu:         2
      created_at:  2024-03-19T19:36:14Z
      dashboard_settings:
      Host:                 p.ft36khfslre6hhwykh5m5u2sau.db.postgresbridge.com
      Id:                   ft36khfslre6hhwykh5m5u2sau
      is_ha:                false
      is_protected:         false
      is_suspended:         false
      major_version:        16
      Memory:               8
      Name:                 my-test-bridge-cluster
      network_id:           kdn7pmp2wfbefosfuzrlnl2zve
      plan_id:              standard-8
      postgres_version_id:  h5xjatdwujh4fnfn7ab6uh5xsu
      provider_id:          aws
      region_id:            us-west-2
      reset_stats_weekly:   true
      State:                creating
      Storage:              100
      tailscale_active:     false
      team_id:              kjtsp4kmyrdyfmatpwbt6s2hfq
      updated_at:           2024-03-19T19:36:14Z
    Status:
      disk_available_mb:   0
      disk_total_size_mb:  0
      disk_used_mb:        0
      State:               creating
    Upgrade:
      cluster_id:  ft36khfslre6hhwykh5m5u2sau
      Operations:
      team_id:  kjtsp4kmyrdyfmatpwbt6s2hfq
  State:        creating
  Storage:      100Gi
Events:         <none>

This command gives us a great amount of information about our cluster. We see the things we supplied in our manifest, namely the settings in the spec along with the name and namespace, but also a lot of information in the status, all of which comes from Bridge. We can see that the status.state is creating, and that the Ready condition is false because the cluster is still in a “creating” state. We can corroborate this by going back to the Crunchy Bridge GUI in the web browser and seeing that the cluster state is “creating” there as well. If we re-run kubectl describe every few minutes, we should eventually see the Ready condition transition to true, indicating that the cluster is fully bootstrapped and ready.
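Rather than polling kubectl describe by hand, you can let kubectl block until the Ready condition flips to true. A sketch, assuming the CR name and namespace from the manifest above (adjust the timeout to taste):

```shell
# Block until the CR reports the Ready condition as True (or the timeout elapses)
kubectl wait crunchybridgecluster/my-test-cluster-cr \
  -n postgres-operator \
  --for=condition=Ready \
  --timeout=30m
```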

Connecting to the cluster

Now that we have a Postgres cluster up and running in Bridge, we want to connect to it so that we can start using it. While you can use the Crunchy Bridge dashboard or the CLI to connect to your database, the operator can also retrieve the connection information for us and store it in a Secret in our kube cluster. This provides a convenient interface for any application running in the kube cluster to get connection details for the database.

To get the credentials, we need to add the desired role(s) to our CrunchyBridgeCluster spec. Bridge has two default roles, postgres and application, and a role for each specific user account. We can get a particular role’s connection information by adding an object to spec.roles that has a name property, where we enter the name of the desired role, and a secretName property, where we can set a Secret name of our choosing. See the below example of how the roles are added.

spec:
  roles:
    - { name: 'application', secretName: 'application-role-secret' }
    - { name: 'postgres', secretName: 'postgres-role-secret' }

We should then see a Secret created in our namespace for each role. Let’s look at the contents of application-role-secret by running the following command:

$ kubectl get secret/application-role-secret -n postgres-operator -o yaml

apiVersion: v1
data:
  name: YXBwbGljYXRpb24=
  password: WHBwNnlOeVpDUnFHMlZoeUE4cENFa01CUWZXeUdvNVd0R0tLZW1RWlpMOVVGU3ZCNERXRmdva1E4RWtsbDhUQg==
  uri: cG9zdGdyZXM6Ly9hcHBsaWNhdGlvbjpYcHA2eU55WkNScUcyVmh5QThwQ0VrTUJRZld5R281V3RHS0tlbVFaWkw5VUZTdkI0RFdGZ29rUThFa2xsOFRCQHAuZnQzNmtoZnNscmU2aGh3eWtoNW01dTJzYXUuZGIucG9zdGdyZXNicmlkZ2UuY29tOjU0MzIvcG9zdGdyZXM=
kind: Secret
metadata:
  creationTimestamp: "2024-03-19T21:02:37Z"
  labels:
    postgres-operator.crunchydata.com/cbc-pgrole: application
    postgres-operator.crunchydata.com/cluster: my-test-cluster-cr
    postgres-operator.crunchydata.com/role: cbc-pgrole
  name: application-role-secret
  namespace: postgres-operator
  ownerReferences:
  - apiVersion: postgres-operator.crunchydata.com/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CrunchyBridgeCluster
    name: my-test-cluster-cr
    uid: b40169d0-a3dc-445d-8221-5392c54a0690
  resourceVersion: "41284965"
  uid: 57fb11be-8c47-44be-a818-236b8a6d970d
type: Opaque

Note that the data values are base64-encoded, so to get their string values we will have to decode them. The following commands retrieve, decode, and store each piece of data in a separate environment variable:

$ APPLICATION_USERNAME=$(kubectl -n postgres-operator get secrets application-role-secret -o go-template='{{.data.name | base64decode}}')
$ APPLICATION_PASSWORD=$(kubectl -n postgres-operator get secrets application-role-secret -o go-template='{{.data.password | base64decode}}')
$ APPLICATION_URI=$(kubectl -n postgres-operator get secrets application-role-secret -o go-template='{{.data.uri | base64decode}}')
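Under the hood, those go-templates are simply base64-decoding the Secret's data fields; the base64 CLI does the same thing. For instance, decoding the name value from the example Secret shown above:

```shell
# Decode the 'name' field from the example Secret shown earlier
encoded='YXBwbGljYXRpb24='
printf '%s' "$encoded" | base64 -d
# prints: application
```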

You can now plug these values into your application and connect to your database!
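For a quick smoke test from your terminal (assuming psql is installed locally and your network can reach the Bridge host), the decoded URI can be used directly:

```shell
# Connect with the decoded URI and run a trivial query
psql "$APPLICATION_URI" -c 'SELECT version();'
```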

Updating or resizing your cluster

We’ve created a cluster, connected to it, and maybe even started adding data. But what if we want to make a change to the cluster? Maybe we realized after creating it that 100 gibibytes isn’t going to be enough space and we want to resize to 200Gi. Luckily, making these kinds of changes is as easy as editing our spec:

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: CrunchyBridgeCluster
metadata:
  name: my-test-cluster-cr
  namespace: postgres-operator
spec:
  isHa: false
  clusterName: my-test-bridge-cluster
  plan: standard-8
  majorVersion: 16
  provider: aws
  region: us-west-2
  secret: bridge-connection-secret
  storage: 200Gi
  roles:
    - { name: 'application', secretName: 'application-role-secret' }
    - { name: 'postgres', secretName: 'postgres-role-secret' }

Once we’ve applied the changes to our manifest, we can do a kubectl describe on the crunchybridgecluster resource and we will see in the status.conditions that an upgrade of type “resize” is in progress:

$ kubectl describe crunchybridgecluster -n postgres-operator

...
  Conditions:
    Message:               Bridge cluster state is ready.
    Observed Generation:   3
    Reason:                ready
    Status:                True
    Type:                  Ready
    Last Transition Time:  2024-03-19T22:54:59Z
    Message:               Performing an upgrade of type resize with a state of creating.
    Observed Generation:   3
    Reason:                resize
    Status:                True
    Type:                  Upgrading
  Host:                    p.ft36khfslre6hhwykh5m5u2sau.db.postgresbridge.com
...

Once the resize is complete, you will see the upgrading status condition go back to false and status.storage will reflect the new value.
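A quick way to confirm the new size once the Upgrading condition clears is to pull status.storage directly:

```shell
# Should print 200Gi once the resize has completed
kubectl get crunchybridgecluster my-test-cluster-cr \
  -n postgres-operator -o jsonpath='{.status.storage}' ; echo
```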

Deleting the cluster

While we’ve enjoyed having you and hope you’ll stay for a long time, whenever you need to delete your Postgres cluster, it is as simple as deleting the CR from your kube cluster; the operator will take care of the rest:

$ kubectl delete -f myTestCluster.yaml
crunchybridgecluster.postgres-operator.crunchydata.com "my-test-cluster-cr" deleted

Note: If the is_protected setting is true, the cluster will not be deleted. This is a helpful safeguard against accidentally deleting clusters.

Tips for working with Kubernetes & Crunchy Bridge

  • If you’re a current Crunchy Postgres for Kubernetes user, get familiar with the Crunchy Bridge docs and our CLI. You’ll love how easy our CLI is to use for other management tasks. You’ll probably need to use Crunchy Bridge for now to do things like create a fork for backups or create a replica.

  • If you’re a current Bridge customer, get familiar with the Crunchy Postgres for Kubernetes docs. Also, there’s a great Discord community.
  • Keep the cluster spec updated whenever you change key machine settings like plan, storage size, or HA. While you can make these changes in the Crunchy Bridge dashboard or CLI, managing them through your Kubernetes manifests will keep everything more closely aligned and minimize drift.

Summary

Next time you need to deploy an app in Kubernetes and it needs a database, you have more choices: Postgres you manage yourself with Crunchy Postgres for Kubernetes, or fully managed Postgres with Crunchy Bridge.

Co-authored with Drew Sessler.