Greg Nokes
We are excited to introduce Crunchy Postgres for Kubernetes (CPK) 5.6, the latest version of our PostgreSQL Kubernetes operator. This release brings several new features for better management, automation, and scalability of your PostgreSQL clusters.
Brian Pace
In the evolving world of data management, ensuring consistency and accuracy across multiple database systems is paramount. Whether you're migrating data, synchronizing systems, or performing routine audits, the ability to compare data across different database platforms is crucial. Enter pgCompare.
Greg Nokes
When your company has decided it's time to invest in more open source, Postgres is the obvious choice. Managing databases is not new, and you already have established practices and requirements for rolling out a new database. One of the big requirements we frequently help new customers with as they adopt Postgres is data encryption. While the question is simple, there are a few layers to it that determine which approach is right for you. Here we'll walk through the pros and cons of each approach and help you identify the right path for your needs.
Craig Kerstiens
Today we're excited to announce a new scheduler for Crunchy Bridge. Scheduler makes it easy for you to create and manage automated database maintenance tasks.
Marco Slot
Last month we launched Crunchy Bridge for Analytics, a new managed PostgreSQL offering that lets you query your data lake directly from PostgreSQL. Since then, we have had quite a few exciting conversations with customers handling large amounts of data in PostgreSQL. A common question is, of course: How does it work?
In this post, I wanted to shed some light on the internals. Crunchy Bridge for Analytics abstracts the query engine to offer fast analytics in PostgreSQL on data in Amazon S3. In principle, it can support multiple query engines, and it likely will in the future, but the current query engine is DuckDB.
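As a rough illustration of the kind of work that embedded engine does, the sketch below uses standalone DuckDB SQL to scan Parquet files directly out of S3. The bucket, path, and column names are hypothetical, and this shows the general technique rather than the product's actual internals:

```sql
-- Standalone DuckDB SQL (not PostgreSQL).
-- httpfs is DuckDB's extension for reading remote object stores.
INSTALL httpfs;
LOAD httpfs;

-- Scan Parquet straight from S3; bucket, path, and columns are illustrative.
SELECT event_type,
       count(*) AS events
FROM read_parquet('s3://my-bucket/events/*.parquet')
GROUP BY event_type
ORDER BY events DESC;
```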
Elizabeth Christensen
I love taking random spatial data and turning it into maps. Any location data can be put into PostGIS in a matter of minutes. Often when I’m working with data that humans collected, like historic locations or other things that haven’t traditionally been handled computationally, I’ll find traditional Degrees, Minutes, Seconds (DMS) data. To get this into PostGIS and QGIS, you’ll need to convert it to decimal degrees. There are probably proprietary tools that will do this for you, but we can easily write our own code to do it. Let’s walk through a quick example today.
Let’s say I found myself with a list of coordinates that look like this:
38°58′17″N 95°14′05″W
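The conversion itself is just degrees + minutes/60 + seconds/3600, negated for S and W. Here is a minimal PostgreSQL sketch using the values above; the parsing assumes the exact °/′/″ notation shown:

```sql
-- Convert DMS strings like '38°58′17″N' to decimal degrees.
-- decimal = degrees + minutes/60 + seconds/3600, negative for S and W.
SELECT dms,
       round((split_part(dms, '°', 1)::numeric
              + split_part(split_part(dms, '°', 2), '′', 1)::numeric / 60
              + split_part(split_part(dms, '′', 2), '″', 1)::numeric / 3600)
             * CASE WHEN right(dms, 1) IN ('S', 'W') THEN -1 ELSE 1 END,
             6) AS decimal_degrees
FROM (VALUES ('38°58′17″N'), ('95°14′05″W')) AS coords(dms);
```

That pair comes out as 38.971389 and -95.234722, ready for PostGIS (remember that a point takes longitude first, so ST_MakePoint(-95.234722, 38.971389)).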
Keith Fiske
Whether you are managing a large table or setting up automatic archiving, time-based partitioning in Postgres is incredibly powerful. Enter pg_partman.
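As a taste of what that looks like, here is a minimal sketch, assuming pg_partman 5.x installed in a partman schema; the events table and its columns are illustrative:

```sql
-- The parent must be a declaratively partitioned table.
CREATE TABLE public.events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- Hand partition maintenance to pg_partman: it premakes daily
-- child tables and keeps creating new ones as time moves on.
SELECT partman.create_parent(
    p_parent_table => 'public.events',
    p_control      => 'created_at',
    p_interval     => '1 day'
);
```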
Marco Slot
One of the unique characteristics of the recently launched Crunchy Bridge for Analytics is that it is effectively a hybrid between a transactional and an analytical database system. That makes it a powerful tool for data-intensive applications, which may require, for example, a combination of low-latency, high-throughput insertion, efficient lookup of recent data, and fast interactive analytics over historical data.
A common source of large data volumes is append-mostly time series or event data generated by an application. PostgreSQL has various tools to optimize your database for time series, such as partitioning.
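Another such tool is a BRIN index, which tends to suit append-mostly timestamps well because block ranges correlate strongly with insertion order. A hedged sketch on a hypothetical readings table:

```sql
-- Append-mostly time series: rows arrive roughly in timestamp order,
-- so a BRIN index stays tiny while still pruning most of the table.
CREATE TABLE readings (
    recorded_at timestamptz NOT NULL,
    device_id   int NOT NULL,
    value       double precision
);

CREATE INDEX readings_recorded_at_brin
    ON readings USING brin (recorded_at);

-- A typical rollup over recent data that benefits from the index.
SELECT date_trunc('hour', recorded_at) AS hour, avg(value)
FROM readings
WHERE recorded_at >= now() - interval '1 day'
GROUP BY 1
ORDER BY 1;
```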
Marco Slot
A lot of the world’s data lives in data lakes: huge collections of data files in object stores like Amazon S3. There are many tools for querying data lakes, but none is as versatile or has as wide an ecosystem as PostgreSQL. So, what if you could use PostgreSQL to easily query your data lake with state-of-the-art analytics performance?
Today we’re announcing Crunchy Bridge for Analytics.
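The interface is plain PostgreSQL. As a hedged sketch (the server name, bucket, and file layout below are assumptions for illustration, not a definitive reference), querying Parquet files in S3 can look like an ordinary foreign table:

```sql
-- Sketch only: server name, bucket, and file layout are illustrative.
-- With an empty column list, the schema is inferred from the files.
CREATE FOREIGN TABLE trips ()
SERVER crunchy_lake_analytics
OPTIONS (path 's3://my-bucket/trips/*.parquet');

-- From here it is just SQL.
SELECT count(*) FROM trips;
```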
Keith Fiske
You could be saving money every month on database costs with a smarter data retention policy. One of the primary reasons for partitioning, and a huge benefit of it, is automatically archiving your data. For example, you might have a huge log table. For business purposes, you need to keep this data for 30 days. This table grows continually over time, and keeping all the data makes database maintenance challenging. With time-based partitioning, you can simply archive off data older than 30 days.
The nature of most relational databases means that deleting large volumes of data can be very inefficient and that space is not immediately, if ever, returned to the file system. PostgreSQL does not return the space it reserves to the file system when normal deletion operations are run except under very specific conditions: a plain VACUUM can only truncate completely empty pages at the very end of the table, so otherwise reclaiming the space means rewriting the table with something like VACUUM FULL.
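Dropping or detaching a whole partition sidesteps that problem entirely: the files backing the child table are removed and the space goes straight back to the file system. A minimal sketch against a hypothetical daily-partitioned log table, with the pg_partman equivalent declared as a retention policy (assuming the table is already managed by pg_partman):

```sql
-- DELETE marks rows dead and leaves reclamation to (auto)vacuum;
-- dropping the partition removes its files outright.
ALTER TABLE public.logs DETACH PARTITION public.logs_p20240101;
DROP TABLE public.logs_p20240101;

-- With pg_partman, the same policy can be declared once and
-- enforced automatically during its maintenance runs.
UPDATE partman.part_config
SET retention            = '30 days',
    retention_keep_table = false
WHERE parent_table = 'public.logs';
```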