
Featured Post

8 min read

Crunchy Data Warehouse: Postgres with Iceberg for High Performance Analytics

Marco Slot

We are excited to release Crunchy Data Warehouse, a modern data warehouse for Postgres. Crunchy Data Warehouse combines Postgres with Iceberg, Parquet, and other data lake formats for fast analytics queries and cost-efficient storage.

Read this article
  • 12 min read

    Sidecar Service Meshes with Crunchy Postgres for Kubernetes

    Andrew L'Ecuyer

    One of the great new features recently added to Kubernetes - native Sidecar Containers - continues to get closer to GA with each new Kubernetes release. I was reviewing all of the great progress recently made by the Kubernetes Enhancement Proposal (KEP) on Sidecar Containers

    Read More
  • 11 min read

    pg_incremental: Incremental Data Processing in Postgres

    Marco Slot

    Today I’m excited to introduce pg_incremental, a new open source PostgreSQL extension for automated, incremental, reliable batch processing. This extension helps you create processing pipelines for append-only streams of data, such as IoT / time series / event data workloads.

    Notable pg_incremental use cases include:

    • Creation and incremental maintenance of rollups, aggregations, and interval aggregations (a pipeline for this is sketched below)
    • Incremental data transformations
    • Periodic import or export of new data using standard SQL
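    For a taste of the first use case, here is a minimal sketch of a sequence-based rollup pipeline. The incremental.create_sequence_pipeline call and the $1/$2 range parameters follow the extension's announcement; the events and view_counts tables are hypothetical.

    ```sql
    -- Hypothetical source and rollup tables.
    CREATE TABLE events (
        event_id   bigserial PRIMARY KEY,
        event_time timestamptz NOT NULL DEFAULT now(),
        page_id    bigint NOT NULL
    );

    CREATE TABLE view_counts (
        day     date,
        page_id bigint,
        count   bigint,
        PRIMARY KEY (day, page_id)
    );

    -- pg_incremental periodically re-runs the command, with $1/$2 set
    -- to the first and last newly assigned event_id values.
    SELECT incremental.create_sequence_pipeline('view-count-rollup', 'events', $$
        INSERT INTO view_counts (day, page_id, count)
        SELECT event_time::date, page_id, count(*)
        FROM events
        WHERE event_id BETWEEN $1 AND $2
        GROUP BY 1, 2
        ON CONFLICT (day, page_id)
        DO UPDATE SET count = view_counts.count + EXCLUDED.count;
    $$);
    ```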
    Read More
  • 6 min read

    Smarter Postgres LLM with Retrieval Augmented Generation

    Paul Ramsey

    "Retrieval Augmented Generation" (RAG) is a useful technique in working with large language models (LLM) to improve accuracy when dealing with facts in a restricted domain of interest.

    Asking an LLM about Shakespeare: works pretty well. The model was probably fed a lot of Shakespeare in training.

    Asking it about holiday time off rules from the company employee manual: works pretty badly. The model may have ingested a few manuals in training, but not yours.
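    The retrieval half of RAG maps naturally onto Postgres. Here is a minimal sketch of the idea; the pgvector extension, the table layout, and the embedding dimension are illustrative assumptions rather than the post's exact setup.

    ```sql
    -- Store chunks of the employee manual alongside their embeddings.
    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE manual_chunks (
        id        bigserial PRIMARY KEY,
        chunk     text NOT NULL,
        embedding vector(1536)  -- dimension depends on the embedding model
    );

    -- Retrieval step: given the embedding of the user's question ($1),
    -- fetch the most similar chunks and paste them into the LLM prompt
    -- as context before asking the question.
    SELECT chunk
    FROM manual_chunks
    ORDER BY embedding <=> $1  -- pgvector cosine distance
    LIMIT 5;
    ```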

    Read More
  • 16 min read

    Postgres Partitioning with a Default Partition

    Keith Fiske

    Partitioning is an important database maintenance strategy for a growing application backed by PostgreSQL. As one of the main authors of pg_partman and an engineer here at Crunchy Data, I spend a lot of my time helping folks implement partitioning. One of the nuances of PostgreSQL’s partitioning implementation is the default partition
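    The syntax itself is standard declarative partitioning; a default partition simply catches rows that match no other partition. A minimal sketch with hypothetical table names:

    ```sql
    CREATE TABLE measurements (
        id          bigserial,
        recorded_at timestamptz NOT NULL,
        value       double precision
    ) PARTITION BY RANGE (recorded_at);

    CREATE TABLE measurements_2024_12 PARTITION OF measurements
        FOR VALUES FROM ('2024-12-01') TO ('2025-01-01');

    -- Catches rows outside every defined range. Note the nuance: creating
    -- a new partition later requires scanning the default partition to
    -- verify it holds no rows belonging to the new range.
    CREATE TABLE measurements_default PARTITION OF measurements DEFAULT;
    ```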

    Read More
  • 8 min read

    Iceberg ahead! Analyzing Shipping Data in Postgres

    Marco Slot

    PostgreSQL is one of the most versatile data storage and processing tools available. We enhanced it even further by adding Iceberg tables to PostgreSQL in Crunchy Data Warehouse with a fast analytical query engine.

    What is Iceberg? Iceberg tables are stored in a compressed columnar format for fast analytics in object storage (S3). This means storage is cheap and there are no storage limits. Yet the tables are still transactional and work with nearly all PostgreSQL features. Crunchy Data Warehouse can also query raw data in object storage directly, or load it into Iceberg tables, using PostgreSQL commands.

    A pattern we repeatedly see in data analytics scenarios (sketched below) is:

    • Use temporary or external tables to collect raw data
    • Use Iceberg as a central repository to organize data
    • Use PostgreSQL tables or materialized views for querying insights
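    A minimal sketch of that pattern, assuming Crunchy Data Warehouse's USING iceberg access method and a hypothetical S3 bucket:

    ```sql
    -- Central repository: an Iceberg table (compressed columnar files
    -- in object storage, yet still a transactional PostgreSQL table).
    CREATE TABLE ship_positions (
        ship_id     bigint,
        recorded_at timestamptz,
        longitude   double precision,
        latitude    double precision
    ) USING iceberg;

    -- Load raw data from object storage with a regular COPY command.
    COPY ship_positions FROM 's3://my-bucket/raw/positions.csv' WITH (format 'csv');

    -- Query insights directly, or cache them in a materialized view.
    CREATE MATERIALIZED VIEW daily_position_counts AS
    SELECT recorded_at::date AS day, count(*) AS positions
    FROM ship_positions
    GROUP BY 1;
    ```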
    Read More
  • 8 min read

    PostGIS Day 2024 Summary

    Paul Ramsey

    In late November, on the day after GIS Day, we hosted the annual PostGIS Day online event: 22 speakers from around the world, in an agenda that ran from mid-afternoon in Europe to mid-afternoon on the Pacific coast.

    We had an amazing collection of speakers, exploring all aspects of PostGIS, from highly technical specifics, to big picture culture and history. A full playlist

    Read More
  • 8 min read

    Crunchy Data Warehouse: Postgres with Iceberg for High Performance Analytics

    Marco Slot

    PostgreSQL is the bedrock on which many of today’s organizations are built. The versatility, reliability, performance, and extensibility of PostgreSQL make it the perfect tool for a large variety of operational workloads.

    The one area in which PostgreSQL has historically been lacking is analytics, which involves queries that summarize, filter, or transform large amounts of data. Modern analytical databases are designed to query data in data lakes in formats like Parquet

    Read More
  • Loading the World! OpenStreetMap Import In Under 4 Hours

    Greg Smith

    The OpenStreetMap (OSM) database builds almost 750GB of location data from a single file download. The import notoriously takes a full day to run. A fresh OpenStreetMap load involves both a massive write process and large index builds. It is a great performance stress-test bulk load for any Postgres system. I use it to stress the latest PostgreSQL versions and state-of-the-art hardware. The stress test validates new tuning tricks and identifies performance regressions.

    Two years ago, I presented (video

    Read More
  • 5 min read

    Easy Totals and Subtotals in Postgres with Rollup and Cube

    Elizabeth Christensen

    Postgres is being used more and more for analytical workloads. There are a few hidden gems I recently ran across that are really handy for data analysis in SQL: ROLLUP and CUBE. They don't get a lot of attention, but follow along with me in this post to see how they can save you a few steps and enhance your date binning.
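    For example, ROLLUP produces subtotals for every prefix of the grouping list plus a grand total in a single query (the sales table is hypothetical):

    ```sql
    SELECT region,
           product,
           sum(amount) AS total_sales
    FROM sales
    GROUP BY ROLLUP (region, product)
    ORDER BY region NULLS LAST, product NULLS LAST;
    -- NULLs in region/product mark the subtotal rows; the all-NULL row is
    -- the grand total. GROUP BY CUBE (region, product) would additionally
    -- add per-product subtotals across all regions.
    ```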

    Read More
  • 8 min read

    A change to ResultRelInfo - A Near Miss with Postgres 17.1

    Craig Kerstiens

    Version 17.2 of PostgreSQL has now been released; it rolls back the changes to ResultRelInfo. See the release notes for more details.

    Since its inception Crunchy Data

    Read More
  • 5 min read

    Accessing Large Language Models from PostgreSQL

    Paul Ramsey

    Large language models (LLMs) provide some truly unique capabilities that no other software does, but they are notoriously finicky to run, requiring large amounts of RAM and compute.

    That means that mere mortals are reduced to two possible paths for experimenting with LLMs:

    • Use a cloud-hosted service like OpenAI
    Read More