Introducing the Crunchy Bridge CLI
Many years ago, a group of colleagues and I latched onto the idea of flow and what it means for developer experience. Flow was one of those things that was always hard to measure, but you could tell when it was right or wrong. It wasn't about KPIs or deploy metrics; it was about the overall efficiency of the process. A big part of flow, when it came to DX, was a great CLI experience.
A great CLI is intuitive: it lets you work efficiently while staying simple. A great CLI can be key to developer flow.
In developing our CLI for Crunchy Bridge we made some explicit decisions that felt right for us, and we couldn't be happier with the results. Large tech companies may have entire teams dedicated to their CLI experience: they meet, converge on norms and guidelines, and hand those out to vertical teams, which then implement their APIs and CLIs to those standards. We didn't have the luxury of a large team solely dedicated to our CLI, so we looked to do what we do best - lean on technology to provide a great experience.
Get started
You can install the Bridge CLI in one of two ways:
- Homebrew: `brew install CrunchyData/brew/cb`
- Install from the latest release section on our repository
First get your application ID and application secret from https://www.crunchybridge.com/settings/.
Then run `cb login` to register the CLI with your Crunchy Bridge account.
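If you're starting from scratch on a Mac, the whole setup comes down to two commands; `cb login` will use the application ID and secret you grabbed above:
# Install the CLI via Homebrew, then register it with your account
brew install CrunchyData/brew/cb
cb login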
You can get started by listing your clusters with `cb list`, securely connect to your cluster with `cb psql <cluster id>`, or use `cb scope` to run health checks against it. There are many more commands that you can use to manage things such as log destinations, firewall rules, and more.
To see what commands are available, run `cb --help`; for more detailed information on a given command, add `--help` to it, for example `cb create --help`.
$ cb list
und13yzac5emzl7ojsqfwp25wm demo personal
475ow3natngrhaffymv7fbxmha sampledatabass staging-team
$ cb create --help
Usage: cb create <--platform|-p> <--region|-r> <--plan> <--team|-t> [--size|-s] [--name|-n] [--ha] [--network]
cb create --fork ID [--at] [--platform|-p] [--region|-r] [--plan] [--size|-s] [--name|-n] [--ha] [--network]
cb create --replica ID [--platform|-p] [--region|-r] [--plan] [--name|-n] [--network]
-h, --help Show this help
--ha <true|false> High Availability (default: false)
--plan NAME Plan (server vCPU+memory)
-n NAME, --name NAME Cluster name (default: Cluster date+time)
-p NAME, --platform NAME Cloud provider
-r NAME, --region NAME Region/Location
-s GiB, --storage GiB Storage size (default: 100GiB, or same as source)
-t ID, --team ID Team
--network network Network
--replica ID Choose source cluster for read-replica
--fork ID Choose source cluster for fork
--at TIME Recovery point-in-time in RFC3339 (default: now)
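Putting those flags together, a create call might look like the following sketch. The platform, region, plan, and team ID here are illustrative placeholders only, so substitute real values from your own account:
# Hypothetical values for illustration; see `cb create --help` above
cb create --platform aws --region us-east-1 --plan hobby-2 \
  --team <team id> --name flow-demo --ha true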
If you use the fish shell and have the completions installed (either automatically through Homebrew or otherwise), nearly all arguments can be completed for you. This includes all cluster IDs available to your account, in addition to the normal subcommands and flags. Where possible, the arguments you've already given are taken into consideration: for example, if you're creating a new cluster on AWS, instance sizes on Azure or regions in GCP will not be shown.
Fish?
Our development of the CLI sparked a recurring internal theme: really, everyone should be using `fish` as opposed to `bash` or `zsh` or some other shell. Will, my colleague, would regularly remark whenever the topic came up that you should simply use `fish`. The result: each week someone new would show up in the Slack channel saying they'd swapped their shell to `fish` and couldn't be happier. So... why `fish`?
- Rich tab completion out of the box
- Suggestions based on what you've recently done
- Generally just a nicer environment that improves your flow
And before you say it: yes, you can get much of the above with zsh and a lot of customization, but we're fans of great software experiences that just work. For the Crunchy Bridge CLI, the entire amount of code it took to support tab completion in `fish` was:
complete --command cb --arguments '(cb --_completion (commandline -cp))' --no-files
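That single line works because the heavy lifting lives in the CLI itself: fish calls back into `cb` with the current command line, and the hidden `--_completion` flag prints matching candidates. If the completion wasn't installed for you automatically, a manual setup might look like this sketch, assuming fish's conventional auto-load directory:
# Assumes fish auto-loads completions from ~/.config/fish/completions
mkdir -p ~/.config/fish/completions
echo "complete --command cb --arguments '(cb --_completion (commandline -cp))' --no-files" \
  > ~/.config/fish/completions/cb.fish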
A better database experience with cb scope
As someone who works with Postgres on a daily basis, I'm pretty familiar with the ins and outs of what Postgres gives me. These days most folks love Postgres and know about things like JSONB and other advanced datatypes, but when it comes to tuning a database, that's still a black box. Under the covers, Postgres actually has a lot of valuable data on how your database is doing from a performance perspective.
Within Postgres there are tools like pg_stat_statements that record your queries and give you easy ways to analyze them. There is the ability to monitor your connections to know whether you need to add something like pgBouncer (which is included in Crunchy Bridge). But you have to know where to look. To make getting these insights easier, we've simplified this within our CLI. With `cb scope` you can see key metrics and insights into your database, making it easier for you to monitor and improve its health.
$ cb scope --cluster undp3yxac5emzl7ojsqfwp25wm --suite all
╭───────╮
──┤ Bloat ├─────────────────────────────────────────────────────────────────────
╰───────╯ Table and index bloat
type │ schemaname │ object name │ bloat │ waste
──────┼────────────┼────────────────────────────────────┼───────┼────────
table │ public │ clicks │ 3.1 │ 205 MB
index │ public │ impressions::idx_impressions_city │ 8.7 │ 15 MB
index │ public │ impressions::idx_impressions_state │ 7.7 │ 15 MB
...
Give `cb scope` a try with your Crunchy Bridge database to see what insights it highlights for you.
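If you'd rather poke at the underlying data yourself, here is a minimal sketch, assuming the pg_stat_statements extension is enabled and Postgres 13+ column names; `$DATABASE_URL` stands in for your cluster's connection string:
# Top five statements by total execution time, via pg_stat_statements
psql "$DATABASE_URL" -c "
  SELECT calls,
         round(total_exec_time::numeric, 1) AS total_ms,
         left(query, 60) AS query
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 5;"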
A thank you
As I was finishing this post, news came out that Mihaly Csikszentmihalyi, the author of the theory of "flow", had passed away. His thoughts on flow remain deeply interesting, and a great thanks is owed for his contributions, which have greatly influenced my thinking and that of others about creating great experiences for developers, and about life in general.