pgAdmin for All of Your Postgres Kubernetes Clusters
We recently announced the latest update, Crunchy Postgres for Kubernetes 5.5. A key feature we would like to highlight in this release is the introduction of a new pgAdmin API.
The notable changes in this feature include:
- The ability to manage all Postgres clusters through a single interface
- Automatic cluster detection
- A new custom resource file for pgAdmin
Read on to explore the full range of functionality and learn how to set up and use this new pgAdmin experience. Additionally, make sure to consult our comprehensive documentation, which provides practical how-tos, best practices, and detailed step-by-step guides to maximize your Crunchy Postgres for Kubernetes experience.
Updated pgAdmin deployment
If you are already familiar with our previous method of providing pgAdmin and prefer not to make changes, don't worry! You don't have to. We have not made any modifications to how you request a pgAdmin instance attached to a single PostgresCluster. You can still add it to your PostgresCluster definition like this:
kind: PostgresCluster
spec:
  userInterface:
    pgAdmin: ...
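Under that pgAdmin key, the single-cluster method accepts settings such as storage for the pgAdmin data volume. As a rough sketch only (the cluster name and storage values are placeholders, and other required PostgresCluster fields are omitted), a filled-in version might look like this:

kind: PostgresCluster
metadata:
  name: hippo
spec:
  userInterface:
    pgAdmin:
      # Storage for the single-cluster pgAdmin instance; adjust to your needs.
      dataVolumeClaimSpec:
        accessModes:
        - 'ReadWriteOnce'
        resources:
          requests:
            storage: 1Gi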
The most noticeable change is the creation of a new API and a new Custom Resource specifically for pgAdmin. This pgAdmin instance will be separate and independent from any PostgresCluster. Here’s a sample of the new PGAdmin Custom Resource file.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGAdmin
metadata:
  name: rhino
spec:
  dataVolumeClaimSpec:
    accessModes:
    - 'ReadWriteOnce'
    resources:
      requests:
        storage: 1Gi
  serverGroups:
  - name: supply
    postgresClusterSelector: {}
One advantage of this new method is that the pgAdmin is not limited to any specific PostgresCluster. In fact, this new approach allows you to easily create a single pgAdmin instance that can manage multiple PostgresClusters within a namespace. We refer to this new approach as "namespace-scoped" because it enables the management of several clusters in a namespace.
Alongside this change, we have released a new pgAdmin image for this implementation, which is compatible with the most recent versions of Postgres.
We believe you will appreciate these updates, and we have prepared a walkthrough to demonstrate the flexibility and power of this new experience.
Walkthrough
Let's set up a new namespace-scoped pgAdmin instance.
Checking the CRD Existence
Before we can create a pgAdmin instance from the Custom Resource, we need to ensure that we are running PGO v5.5 with the new pgadmins Custom Resource installed. Let's verify the existence of the pgadmins resource:
-> kubectl get crd pgadmins.postgres-operator.crunchydata.com
NAME                                         CREATED AT
pgadmins.postgres-operator.crunchydata.com   ...
If we see the pgadmins Custom Resource, we are ready to proceed with creating a pgAdmin instance by defining a YAML and sending it to the Kubernetes cluster.
Creating a pgAdmin Instance
Let's define a basic pgadmin instance:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGAdmin
metadata:
  name: rhino
spec:
  dataVolumeClaimSpec:
    accessModes:
    - 'ReadWriteOnce'
    resources:
      requests:
        storage: 1Gi
  serverGroups:
  - name: demand
    postgresClusterSelector:
      matchLabels:
        owner: logistics
In this example, we define a pgadmin named "rhino" with 1Gi of storage. 1Gi is the amount we typically use in examples, but it can be adjusted for your specific needs. We will discuss the serverGroups section in a moment, but first, let's create this pgAdmin instance and access it.
To create the instance, we can use the kubectl create command with this YAML, or any other command or tooling you typically use to create Kubernetes resources.
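For example, assuming the manifest above is saved locally as pgadmin.yaml and that we are working in the postgres-operator namespace (both the file name and the namespace are assumptions for illustration), creating it could look like this:

# Create the PGAdmin resource from the manifest; file name and namespace are placeholders.
kubectl create -n postgres-operator -f pgadmin.yaml

# Confirm the pgAdmin Pod comes up, using the label the Operator applies.
kubectl get pods -n postgres-operator -l postgres-operator.crunchydata.com/pgadmin=rhino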
Accessing the pgAdmin
To access the pgAdmin instance, we need to expose its port and retrieve the administrator credentials.
Exposing the Port
The pod exposes pgAdmin on port 5050. To access it, we can create a Service and port-forward to that Service.
If you already have a Service or want to port-forward directly to the Pod, you can do that as well.
To create a Service, we can use kubectl expose, which conveniently creates a Service for us if we provide the name of a Pod. First, we need to obtain the name of the Pod by selecting it using a label. Assuming we want to connect to the rhino pgAdmin example mentioned earlier, we can run the following commands:
# Select the pod using the `postgres-operator.crunchydata.com/pgadmin=rhino` label
# and save the name to the variable PGADMIN_POD for easier reuse.
PGADMIN_POD=$(kubectl get pod -n postgres-operator -l postgres-operator.crunchydata.com/pgadmin=rhino -o name)
# Create a Service with the expose command.
kubectl expose -n postgres-operator ${PGADMIN_POD} --name rhino-pgadmin
# Port-forward to the new service
kubectl port-forward -n postgres-operator svc/rhino-pgadmin 5050:5050
Once we have completed these steps, we can open our browser and navigate to http://localhost:5050, which will display the pgAdmin login screen. Now we need to obtain our credentials.
Getting the Credentials
In this version, pgAdmin requires an initial administrator user. As part of the deployment, the Operator sets up an administrator user and generates a password, which we can use.
The admin username is hardcoded to prevent issues with username redefinition, and takes the form admin@<pgAdmin_name>.<pgAdmin_namespace>.svc.

The password for that user is stored in the Secret with the label postgres-operator.crunchydata.com/pgadmin=<pgAdmin_name>.
For example, in our rhino example (assuming the pgadmin was created in the postgres-operator namespace), the user would be admin@rhino.postgres-operator.svc. To retrieve the password, we can run the following command:
kubectl get secret -n postgres-operator -l postgres-operator.crunchydata.com/pgadmin=rhino -o jsonpath='{.items[0].data.password}' | base64 -d
This command will output the password.
With the port exposed, and with the username and password, we can now log in as the administrator and manage other users.
Customizing Your pgAdmin
Configuring External User Management
But what if we don't want to manage users? Similar to the previous method of deploying pgAdmin, this new namespace-scoped pgAdmin accepts custom configurations, including LDAP configuration.
For instance, if we want to change the configuration to disable Gravatar images for users, we can set the following in the pgadmin spec:
spec:
  config:
    settings:
      SHOW_GRAVATAR_IMAGE: False
Additionally, we can mount files to /etc/pgadmin/conf.d inside the pgAdmin container using projected volumes. For example, the following spec mounts the useful.txt file from the mysecret Secret to /etc/pgadmin/conf.d/useful.txt:
spec:
  config:
    files:
    - secret:
        name: mysecret
        items:
        - key: useful.txt
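If that Secret does not exist yet, one way to create it with kubectl is shown below; the local source path is an assumption for illustration.

# Create the Secret holding the file to be mounted; the source path is a placeholder.
kubectl create secret generic mysecret -n postgres-operator --from-file=useful.txt=./useful.txt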
For LDAP configuration, we need to set certain configuration options through the spec.config.settings section of the YAML. If we have an LDAP bind password, we would also need to mount a Secret with that password using the special field spec.config.ldapBindPassword. (We use this special field to remind users not to include passwords directly in the YAML.)

To illustrate, if we wanted to add LDAP configuration and had a Secret named ldap-pass with the password stored in a field called my-password, our pgadmin spec might look like this:
spec:
  config:
    settings:
      AUTHENTICATION_SOURCES: ['ldap']
      LDAP_AUTO_CREATE_USER: True
      LDAP_SERVER_URI: ldaps://my.ds.example.com
    ldapBindPassword:
      name: ldap-pass
      key: my-password
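One way to create that ldap-pass Secret is sketched below; the literal value is a placeholder for your real bind password.

# Store the LDAP bind password in a Secret rather than in the YAML itself.
kubectl create secret generic ldap-pass -n postgres-operator --from-literal=my-password='<ldap-bind-password>'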
Automated Server Discovery
In our previous architecture, each pgAdmin instance was tied to a PostgresCluster. With the new approach, a pgAdmin instance can automatically register multiple PostgresClusters. How does this work?
This is where the serverGroups section of the PGAdmin spec, mentioned earlier, comes into play. In this section, we can define multiple selector groups that the Operator uses to select specific PostgresClusters based on labels.
The Operator accomplishes this in a Kubernetes-native manner: it lists all the PostgresClusters in the namespace and then filters them based on the labels or selectors we have defined.
For example, if we had a pgAdmin set up with the spec mentioned above, this is the section that defines our filtering:
spec:
  serverGroups:
  - name: demand
    postgresClusterSelector:
      matchLabels:
        owner: logistics
This spec instructs the Operator to "list the PostgresClusters in the namespace, but filter that list for clusters with the label owner: logistics."
The Operator then builds that list, and the pgAdmin instance registers each cluster in it. In fact, the pgAdmin process registers these Postgres instances as "shared servers" owned by the initial administrator user set up by pgAdmin. A "shared server" is owned by one user (the administrator) but can be viewed in the pgAdmin list by any user.
Please note that for namespace-scoped pgAdmin deployments, there is no automatic synchronization between pgAdmin and Postgres users. Even if you can sign in to pgAdmin and see PostgresClusters, you will still need a valid Postgres user and credentials to access the Postgres database.
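For reference, here is a hedged sketch of how you might look up those Postgres credentials, assuming a PostgresCluster named hippo whose user Secret follows the Operator's usual <cluster>-pguser-<user> naming; adjust the names for your own cluster.

# Read the Postgres username and password from the cluster's user Secret.
kubectl get secret -n postgres-operator hippo-pguser-hippo -o jsonpath='{.data.user}' | base64 -d
kubectl get secret -n postgres-operator hippo-pguser-hippo -o jsonpath='{.data.password}' | base64 -d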
It is possible to omit the serverGroups field or to have no PostgresClusters discovered with the given selectors. Due to how pgAdmin currently functions, when zero servers are discovered, the initial administrator user will be unable to manually add new ServerGroups or Servers.
But what if we wanted to register PostgresClusters with different labels? We can handle this scenario by adding multiple serverGroups, which will create separate "Server Groups" in pgAdmin.
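A sketch of what that might look like, reusing the demand group from above and adding a second group whose name and label value are assumptions for illustration:

spec:
  serverGroups:
  - name: demand
    postgresClusterSelector:
      matchLabels:
        owner: logistics
  # Second group; the name and label value here are illustrative.
  - name: supply
    postgresClusterSelector:
      matchLabels:
        owner: warehouse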
And what if we wanted to register all the PostgresClusters in a namespace? Since we are using Kubernetes-native idioms for label matching, we can add a serverGroup with an empty postgresClusterSelector like this:
spec:
  serverGroups:
  - name: supply
    postgresClusterSelector: {}
This Kubernetes-specific idiom matches all PostgresClusters in the namespace.
If you want to deploy one pgAdmin to manage all the PostgresClusters in a namespace and share those servers with all pgAdmin users, you can configure your pgadmin deployment to automatically register all those PostgresClusters, avoiding the need to manually import them one-by-one! Here’s a sample of databases added from two clusters, hippo and rhino.
Finishing Up
Upgrading to Crunchy Postgres for Kubernetes v5.5.0 is generally a simple single command. If you installed Crunchy Postgres for Kubernetes using the Kustomize installer available in the Postgres Operator examples repository, you would issue: kubectl apply --server-side -k kustomize/install/default. For additional upgrade guidance, please see the Crunchy Postgres for Kubernetes Upgrade documentation.
The entire team at Crunchy Data is thrilled to introduce this new feature to our customers and the community. As always, we welcome your feedback, so feel free to join the discussion on our Discord server.