Use CI/CD to Automatically Update Postgres Images with Argo CD
When working with containers, you always have to be mindful of their age. New CVEs are discovered every day and turn up in image scans. One benefit of having a CI/CD pipeline is the ability to implement security automation. Let's assume you release a monthly update of your containers, built on the latest version of the base image with all of the most recent patches applied. This ensures that each month you can remediate any CVEs that have popped up in your images since their initial release. In this blog we show you how to use Argo CD Image Updater as part of your CI/CD pipeline to automatically deploy, test, and promote your updated images, all by doing nothing more than putting them into your registry.
This is part 2 of the CI/CD with Crunchy Postgres for Kubernetes and Argo CD series, and it picks up where part 1 left off. We will use Argo CD Image Updater to monitor a private Docker registry for changes to the Postgres image tag. The image updater will update the image tag in GitHub, and the Argo CD application will deploy those changes to the postgres-dev namespace. Once deployed, the self-test will run, and if all tests pass the changes will be applied to the postgres-qa namespace.
Prerequisites
There are a few prerequisites you will need to handle if you plan on following along with this example:
- A fully functional Argo CD deployment and a Crunchy Data Postgres cluster as described in my previous CI/CD blog.
- A private container registry containing the images you want to deploy. Most organizations will pull images, tag them, and then upload them into their private registries. For this blog I am using a private registry for all images except the self-test image, which lives in a public repo in my Docker registry.
- An access token for your private registry.
- A git repository containing the Crunchy Postgres for Kubernetes manifest to be deployed. Here's a sample manifest you can use or you can fork my git repository.
- A deploy key with write access to your git repo.
Secrets
We will need to create some secrets for the registry access token and git deploy key in the argocd namespace. Sample files are provided in my git repository. Fill in the relevant values in the sample files and apply them to the argocd namespace.
kubectl apply -n argocd -f secrets/privaterepo.yaml
kubectl apply -n argocd -f secrets/privatereg.yaml
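For reference, the registry secret is a standard Docker pull secret. If you would rather create it directly than edit the sample file, something like this works (a sketch -- substitute your own registry server, username, and access token; the secret name privatereg must match the pullsecret reference used in the image updater configuration below):
kubectl -n argocd create secret docker-registry privatereg \
  --docker-server=docker.io \
  --docker-username=<your_username> \
  --docker-password=<your_access_token>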
You should already have a secret called argocd-token in the postgres-dev and postgres-qa namespaces. This secret contains the base64-encoded JWT token that was created for the sync role in the cicd project in Argo CD as part of the initial CI/CD blog. If you do not have the secret, you can create it now:
kubectl apply -n argocd -f secrets/argocd_token.yaml
Argo CD Image Updater
Argo CD Image Updater is a tool to automatically update the container images of Kubernetes workloads that are managed by Argo CD. We will use it to monitor Postgres container images in our private Docker registry and update our image tag in our git repo.
Installation
We will install Argo CD Image Updater into the argocd namespace in our Kubernetes cluster. We already have Argo CD installed there from the previous blog.
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/stable/manifests/install.yaml
You should now see it in your pod list:
$ kubectl -n argocd get po
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 80m
argocd-applicationset-controller-685679ccb9-x9962 1/1 Running 0 80m
argocd-dex-server-8f9dbcfb6-zrhgp 1/1 Running 0 80m
argocd-image-updater-56d94c674d-8ddpf 1/1 Running 0 38s
argocd-notifications-controller-5d65949b4b-spp8p 1/1 Running 0 80m
argocd-redis-77bf5b886-67s9t 1/1 Running 0 80m
argocd-repo-server-5b889d7495-r9clr 1/1 Running 0 80m
argocd-server-785bc6f697-pb6cn 1/1 Running 0 80m
We need to inform the argocd-image-updater container about the location of the private registry that it will be monitoring for updates. We can do this by adding the following to the data property in the argocd-image-updater-config configmap in the argocd namespace:
data:
  registries.conf: |
    registries:
    - name: Docker Hub
      prefix: docker.io
      api_url: https://registry-1.docker.io
      credentials: pullsecret:argocd/privatereg
      defaultns: library
      default: true
We also need to assign policy information for the image updater. We can do this by adding the following to the data property in the argocd-rbac-cm configmap in the argocd namespace:
data:
  policy.csv: |
    p, role:image-updater, applications, get, */*, allow
    p, role:image-updater, applications, update, */*, allow
    g, image-updater, role:image-updater
  policy.default: role:readonly
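A straightforward way to make both configmap changes is to edit them in place (you could also patch them or manage them declaratively in git):
kubectl -n argocd edit configmap argocd-image-updater-config
kubectl -n argocd edit configmap argocd-rbac-cm
If the image updater does not pick up the new registry configuration on its own, restarting its deployment forces a reload:
kubectl -n argocd rollout restart deployment argocd-image-updater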
Argo CD Applications
In the previous CI/CD blog we created two applications:
- postgres-dev
- postgres-qa
Annotations
We want Argo CD Image Updater to update our kustomization file in git with new image tags for images that it finds in our private repo. In order to do that we will have to add annotations in the postgres-dev application.
In the Argo CD UI, click on Applications in the left pane. Click on the postgres-dev application, click App Details in the top bar, then click Edit in the top right of the application pane. Click the + button to add a new annotation and enter the key and value. Do this for each annotation listed below.
- argocd-image-updater.argoproj.io/image-list: postgres=<your_registry_name>/<your_image_name>
- e.g., argocd-image-updater.argoproj.io/image-list: postgres=bobpachcrunchy/crunchy-postgres
- argocd-image-updater.argoproj.io/postgres.update-strategy: latest
- argocd-image-updater.argoproj.io/write-back-method: git
- argocd-image-updater.argoproj.io/git-branch: main
- argocd-image-updater.argoproj.io/write-back-target: kustomization
Click Save.
Note: In this demo we are only monitoring one image. You can monitor multiple images by adding them to the image-list annotation. See the Argo CD Image Updater docs for more information.
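If you manage your Argo CD Applications declaratively rather than through the UI, the same annotations can be set directly on the Application resource. A rough sketch using the example values above:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: postgres-dev
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: postgres=bobpachcrunchy/crunchy-postgres
    argocd-image-updater.argoproj.io/postgres.update-strategy: latest
    argocd-image-updater.argoproj.io/write-back-method: git
    argocd-image-updater.argoproj.io/git-branch: main
    argocd-image-updater.argoproj.io/write-back-target: kustomization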
Kustomize
In order for Argo CD Image Updater to update our images in git we need to change how we reference our image tags. In the previous CI/CD blog we referenced them in the PostgresCluster custom resource itself. Now we will move the Postgres image tag into the kustomization.yaml file. We will add a transformer and remove the tag from the custom resource.
Transformer
postgres-cluster-image-transformer.yaml
images:
- path: spec/image
  kind: PostgresCluster
Kustomization.yaml
kustomization.yaml
Before
resources:
- postgres-self-test-config.yaml
- postgres.yaml
After
configurations:
- postgres-cluster-image-transformer.yaml
images:
- name: bobpachcrunchy/crunchy-postgres
  newTag: ubi8-15.1-5.3.0-1
resources:
- postgres-self-test-config.yaml
- postgres.yaml
Postgres Cluster
postgres.yaml
Before
spec:
  image: bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1
After
spec:
  image: bobpachcrunchy/crunchy-postgres
These changes need to be checked into your git repo before you push an updated Postgres image to the registry.
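You can sanity-check the transformer locally first. Assuming the kustomize CLI is installed and you are in the directory containing kustomization.yaml, the rendered PostgresCluster spec should show the full image reference assembled from name and newTag:
kustomize build . | grep 'image:'
A minimal commit might then look like this (adjust file paths to match your repo layout):
git add kustomization.yaml postgres.yaml postgres-cluster-image-transformer.yaml
git commit -m "Move the Postgres image tag into kustomization.yaml"
git push origin main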
Deploy the clusters
If you don't already have your Postgres clusters up and running, sync the postgres-dev Argo CD application. This will deploy the Postgres cluster to the postgres-dev namespace and run the self-test, which syncs the Postgres cluster to the postgres-qa namespace.
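If you prefer the command line to the UI, the same sync can be triggered with the argocd CLI (assuming you are logged in to your Argo CD server):
argocd app sync postgres-dev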
Verify that both clusters deployed:
$ kubectl -n postgres-dev get pods
NAME READY STATUS RESTARTS AGE
hippo-backup-5vvr-h2zmd 0/1 Completed 0 3m41s
hippo-pgha1-7lng-0 5/5 Running 0 4m2s
hippo-pgha1-g8xx-0 5/5 Running 0 4m3s
hippo-pgha1-nnrm-0 5/5 Running 0 4m2s
hippo-repo-host-0 2/2 Running 0 4m2s
$ kubectl -n postgres-qa get pods
NAME READY STATUS RESTARTS AGE
hippo-backup-r4p8-z4689 0/1 Completed 0 3m32s
hippo-pgha1-4992-0 5/5 Running 0 3m54s
hippo-pgha1-4cmv-0 5/5 Running 0 3m54s
hippo-pgha1-mzfm-0 5/5 Running 0 3m54s
hippo-repo-host-0 2/2 Running 0 3m54s
Both Postgres clusters are up and running. Let's take a look at the Postgres image version deployed in each cluster by listing the stateful sets in each namespace. Notice the crunchy-postgres image tag is currently ubi8-15.1-5.3.0-1. This is our starting image.
Note: Results shown below have been truncated for readability.
$ kubectl get -n postgres-dev sts -o wide | grep crunchy-postgres
hippo-pgha1-7lng 1/1 7m2s bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1
hippo-pgha1-g8xx 1/1 7m3s bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1
hippo-pgha1-nnrm 1/1 7m3s bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1
$ kubectl get -n postgres-qa sts -o wide | grep crunchy-postgres
hippo-pgha1-4992 1/1 7m1s bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1
hippo-pgha1-4cmv 1/1 7m1s bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1
hippo-pgha1-mzfm 1/1 7m1s bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1
Time to Automate Updates
We have completed all of the prep work. Now it's time to automate updates. We need to make one more change to the postgres-dev application in Argo CD. We will edit the application and enable auto-sync.
- Click on Applications in the left panel of the Argo CD UI.
- Click on the postgres-dev application.
- Click the App Details button in the top panel.
- Click the Enable Auto-Sync button in the Sync Policy pane of the panel.
- Click OK on the confirmation dialog.
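Auto-sync can also be enabled from the command line instead of the UI, for example with the argocd CLI:
argocd app set postgres-dev --sync-policy automated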
We are ready to push an updated image into our private registry. At this point my registry contains only the original crunchy-postgres image tag, ubi8-15.1-5.3.0-1.
Ensure you have Docker running. We will pull the new image, tag it, and push it into our private registry.
docker pull registry.crunchydata.com/crunchydata/crunchy-postgres:ubi8-15.3-5.3.2-1
docker tag registry.crunchydata.com/crunchydata/crunchy-postgres:ubi8-15.3-5.3.2-1 bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
docker push bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
After the push, the new ubi8-15.3-5.3.2-1 tag shows up in my private repo.
Argo CD Image Updater is monitoring my Docker registry. It sees that there is a new image and connects to my git repo to update the image tag in my kustomization.yaml file.
A quick look at the argocd-image-updater logs shows:
time="2023-08-03T19:36:23Z" level=info msg="Processing results: applications=1 images_considered=1 images_skipped=0 images_updated=0 errors=0" │
time="2023-08-03T19:38:23Z" level=info msg="Starting image update cycle, considering 1 annotated application(s) for update" │
time="2023-08-03T19:38:23Z" level=info msg="Setting new image to bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1" alias=postgres application=postgres-dev image_name=bobpac │
time="2023-08-03T19:38:23Z" level=info msg="Successfully updated image 'bobpachcrunchy/crunchy-postgres:ubi8-15.1-5.3.0-1' to 'bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3. │
time="2023-08-03T19:38:23Z" level=info msg="Committing 1 parameter update(s) for application postgres-dev" application=postgres-dev
A look in git shows a commit message of: "build: automatic update of postgres-dev". The kustomization.yaml file now has the updated image tag.
configurations:
- postgres-cluster-image-transformer.yaml
images:
- name: bobpachcrunchy/crunchy-postgres
  newTag: ubi8-15.3-5.3.2-1
resources:
- postgres-self-test-config.yaml
- postgres.yaml
The postgres-dev Argo CD app is set to auto-sync. By default, auto-sync fires every 3 minutes. When it fires, it sees that the application is out of sync with the git repo and re-applies the manifest. The Postgres pods then go through a rolling restart: each replica pod is taken down one at a time and re-initialized with the new image. After all replica pods are updated, a failover elects an updated replica as the new primary. The former primary is then brought down and re-initialized as a replica with the new image. At this point, all Postgres pods in the cluster are running the new image, with the only downtime being the few seconds it takes for failover to happen.
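You can watch the rolling restart as it progresses, for example:
kubectl -n postgres-dev get pods -w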
The self-test container runs after a replica is promoted to primary. It then calls the sync command on the postgres-qa Argo CD app, and that Postgres cluster gets its images updated in the same manner.
Both Postgres clusters have now been updated. Let's take a look at the Postgres image version now running in each cluster by listing the stateful sets in each namespace. Notice the crunchy-postgres image tag is now ubi8-15.3-5.3.2-1. This is our updated image.
Note: Results shown below have been truncated for readability.
$ kubectl get -n postgres-dev sts -o wide | grep crunchy-postgres
hippo-pgha1-svkv 1/1 17m bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
hippo-pgha1-x7qf 1/1 17m bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
hippo-pgha1-zgcq 1/1 17m bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
$ kubectl get -n postgres-qa sts -o wide | grep crunchy-postgres
hippo-pgha1-fw9r 1/1 18m bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
hippo-pgha1-r9zk 1/1 18m bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
hippo-pgha1-tf22 1/1 18m bobpachcrunchy/crunchy-postgres:ubi8-15.3-5.3.2-1
Summary
Using Argo CD, Git, Docker Hub, and the self-test container, we were able to automatically deploy, test, and promote a new Postgres image to Postgres clusters in the dev and QA namespaces by doing nothing more than pushing an updated image into a Docker registry that argocd-image-updater was monitoring. Many organizations will use additional gating mechanisms, like manual user acceptance testing and management sign-off, before allowing promotion to prod. The processes outlined in this blog can be modified to align with your specific prod gating requirements.
Test automation is a critical component of any automated CI/CD pipeline. Today's test harnesses are designed to support concurrency, use-case permutations, and process iterations. The more testing you automate, the more rapidly you can move packages through your pipeline. CI/CD automation decreases time to market for your new container images without sacrificing quality or reliability.
To recap the full workflow: Argo CD Image Updater monitors the registry for a new image tag, writes the updated tag back to kustomization.yaml in git, Argo CD auto-syncs the postgres-dev cluster, and the self-test promotes the change to the postgres-qa cluster.