Christopher Winslett
It is never immediately obvious how to go from a simple SQL query to a complex one, especially when intricate calculations are involved. One of the “dangers” of SQL is that you can write a query that executes successfully but returns the wrong data. For example, it is easy to inflate the value of a calculated field by joining to multiple rows.
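To make that concrete, here is a minimal sketch of the join-inflation trap, assuming a hypothetical schema where one customers row matches many orders rows:

```sql
-- One customers row joins to many orders rows, so customer-level
-- values repeat once per order before the aggregate runs.
SELECT c.id,
       SUM(c.credit_limit) AS inflated_credit_limit
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.id;
-- The SUM returns credit_limit * order_count, not credit_limit.
```

Aggregating the many-side in a subquery before joining, or summing the customer table alone, avoids the inflation.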
Use Crunchy Playground to follow along with this blog post using a Postgres terminal:
Christopher Winslett
You followed all the best practices: your sales dates are stored in perfect timestamp format … but now you need reports by day, week, month, and quarter. You need to bin, bucket, and roll up sales data into easy-to-view reports. Do you need a BI tool? Not yet, actually. Your Postgres database has hundreds of functions that let you run date-based analytics on your data. With some good old-fashioned SQL, you have powerful analysis and business intelligence with date detail on any data set.
In this post, I’ll walk through some of the key functions for querying data by date.
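As a preview, date_trunc is the workhorse for this kind of rollup; a minimal sketch against a hypothetical sales table:

```sql
-- Bin timestamped sales into calendar months and total them.
SELECT date_trunc('month', created_at) AS sales_month,
       SUM(amount) AS total_sales
FROM sales
GROUP BY 1
ORDER BY 1;
```

Swapping 'month' for 'day', 'week', or 'quarter' gives the other rollups with no schema changes.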
Christopher Winslett
Since its launch, Ruby on Rails has been a preferred open source framework for small-team B2B SaaS companies. Ruby on Rails follows a convention-over-configuration mantra. This approach reduces routine technical choices, elevating the decisions that remain. With it, developers get an ORM (ActiveRecord), a templating engine (ERB), and helper methods (like number_to_currency
Christopher Winslett
Automating pull requests to deploy staging applications is a game changer for large teams shipping quality products. Using Crunchy Bridge’s CLI or API, you can easily automate the entire process for these staging deployments. The simplest workflow would look something like the following:
In this example, during the “Create Postgres Cluster” step, we’ll create a hobby-0 cluster for Postgres. Then, when the PR is closed, the cluster is torn down. We keep this example simple, but you can expand the capabilities to fit your use case.
Teams that like to have an anonymized dataset for staging can use the Crunchy Bridge CLI to fork the production cluster
Christopher Winslett
Over the past 12 months, AI has taken over budgets and initiatives. Postgres is a popular store for AI embedding data because it can store, calculate, optimize, and scale using the pgvector extension. A recent addition to the Ruby on Rails ecosystem, the neighbor gem, makes working with pgvector and Rails even better.
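Under the hood, neighbor builds its nearest-neighbor queries on pgvector’s distance operators; the SQL it ultimately runs looks roughly like this sketch (table name, column name, and vector values are hypothetical):

```sql
-- <-> is pgvector's L2 distance operator; LIMIT bounds the neighbor count.
SELECT *
FROM items
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'
LIMIT 5;
```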
Christopher Winslett
I just spent last week at Rails World in Amsterdam and had a blast digging back into the Rails and Active Record world. In conversations with developers over the week, I had some notable takeaways from the newest version of Ruby on Rails that I just had to get written up.
A quick summary before we dig in:
async queries
Christopher Winslett
Postgres has been steadily building on the JSON functionality initially released more than 10 years ago. With Postgres 16, working with JSON has gotten a couple of nice improvements. Primarily, this release adds features that ease converting data to JSON and improve the standard SQL/JSON functionality.
TL;DR:
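As a quick taste of the Postgres 16 additions covered in the post, a minimal sketch of the SQL-standard JSON constructors and the IS JSON predicate:

```sql
-- SQL-standard constructors added in Postgres 16.
SELECT json_object('name': 'Ada', 'id': 1);   -- build a JSON object
SELECT json_array(1, 2, 3);                   -- build a JSON array

-- The IS JSON predicate validates JSON shape without casting.
SELECT '{"a": 1}'::text IS JSON OBJECT;       -- true
```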
Christopher Winslett
Postgres’ pgvector extension recently added HNSW as a new index type for vector data. This levels up the database for vector-based embeddings output by AI models. A few months ago, we wrote about approximate nearest neighbor pgvector performance using the available list-based indexes
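For context, creating and querying an HNSW index in pgvector looks like this sketch (table name, vector dimensions, and tuning values are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));

-- m and ef_construction trade index build time and size for recall.
CREATE INDEX ON items USING hnsw (embedding vector_l2_ops)
  WITH (m = 16, ef_construction = 64);

-- ef_search widens the candidate set at query time for better recall.
SET hnsw.ef_search = 40;
SELECT id FROM items ORDER BY embedding <-> '[0.1, 0.2, 0.3]' LIMIT 5;
```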
Christopher Winslett
Note: We have additional articles in this Postgres AI series.
Vector data has made its way into Postgres and I’m seeing more and more folks using it by the day. As I’ve seen use cases trickle in, I have been thinking a lot about scaling data and how to set yourself up for performance success from the beginning. The primary trade-off is performance versus accuracy. When seeking performance with vector data, we use approximate nearest neighbor algorithms, which are built around the probability of proximity. If your use case requires 100% accuracy on nearest neighbor, performance will be sacrificed.
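With pgvector’s list-based ivfflat indexes, that trade-off is an explicit knob; a minimal sketch, assuming an items table with a vector embedding column (names and values illustrative):

```sql
-- More lists = finer clustering of the vector space at index build time.
CREATE INDEX ON items USING ivfflat (embedding vector_l2_ops)
  WITH (lists = 100);

-- probes sets how many lists each query scans: higher is more accurate,
-- lower is faster but more approximate.
SET ivfflat.probes = 10;
SELECT id FROM items ORDER BY embedding <-> '[0.1, 0.2, 0.3]' LIMIT 5;
```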
After choosing between performance and accuracy, the next tools in the toolbox are caching and partitioning. Caching is obvious in some situations: if your product is finding “similar meals,” “similar products,” or “similar support questions,” the similarities will not change rapidly.
For the most part, the keys to scaling AI data are the same as scaling any other data type: reduce the number of rows in the index and reduce the concurrent queries hitting the database. Once the index has done its work, CPU becomes the primary constraint: how fast can you calculate and compare distances between vectors? Scaling vector data is currently about performance mitigation as much as it is about overpowering the data.
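One way to reduce rows per index is declarative partitioning, so each partition carries its own, smaller vector index; a sketch assuming pgvector and a hypothetical multi-tenant documents table:

```sql
-- Partition embeddings by tenant so each partition stays small.
CREATE TABLE documents (
    id bigserial,
    tenant_id int NOT NULL,
    embedding vector(3)
) PARTITION BY LIST (tenant_id);

CREATE TABLE documents_tenant_1 PARTITION OF documents FOR VALUES IN (1);

-- Each partition gets its own, smaller ANN index.
CREATE INDEX ON documents_tenant_1
  USING ivfflat (embedding vector_l2_ops) WITH (lists = 10);

-- Filtering on the partition key prunes to a single partition and index.
SELECT id FROM documents
WHERE tenant_id = 1
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'
LIMIT 5;
```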
In the next few weeks, the Postgres pgvector extension is launching HNSW indexes (see the commit history for pgvector
Christopher Winslett
We have a lot of Postgres foreign data wrapper (FDW) users here at Crunchy. The postgres_fdw extension lets you connect to remote Postgres servers, and in some cases it can be an alternative to traditional ETL/ELT processes. As I see use of the Postgres foreign data wrapper expanding, I wanted to make some recommendations on how to approach performance. When you query from a single Postgres cluster, the database cleverly uses internal statistics to auto-tune performance
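One of the first settings to reach for is use_remote_estimate, which has the remote server plan and estimate the foreign scan instead of relying on local guesses; a minimal sketch (host, database, and credentials are placeholders):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- use_remote_estimate asks the remote planner for row estimates;
-- fetch_size controls how many rows come back per round trip.
CREATE SERVER remote_db FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'remote.example.com', dbname 'app',
           use_remote_estimate 'true', fetch_size '1000');

CREATE USER MAPPING FOR CURRENT_USER SERVER remote_db
  OPTIONS (user 'app_user', password 'secret');

-- Pull in remote table definitions as local foreign tables.
IMPORT FOREIGN SCHEMA public FROM SERVER remote_db INTO public;
```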