Elizabeth Christensen
Wouldn’t it be awesome if money worked just like time in Postgres? You could store one canonical version of it, and it would work across the globe. Sadly, money is a whole different ball of wax. Like time, though, money is part of most database implementations, and I wanted to lay out some of the best practices I’ve gathered for working with money in Postgres.
I also have a tutorial
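As a minimal sketch of the usual best practice, amounts go in a `numeric` column (never `float`, which accumulates rounding error) with an explicit currency code beside them. The `payment` table and its columns here are hypothetical, just for illustration:

```sql
-- Store amounts as exact numeric, with an ISO 4217 currency code
CREATE TABLE payment (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    amount numeric(12, 2) NOT NULL,
    currency char(3) NOT NULL DEFAULT 'USD'
);

INSERT INTO payment (amount, currency) VALUES (19.99, 'USD');

-- numeric arithmetic is exact, so sums don't drift
SELECT currency, sum(amount) AS total
FROM payment
GROUP BY currency;
```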
Elizabeth Christensen
Postgres databases are very compliant: they do what you tell them until you tell them to stop. It’s really common for a runaway process, a query, or even something a co-worker runs to accidentally start a never-ending transaction
Elizabeth Christensen
Beyond a basic query with a join or two, many queries require extracting subsets of data for comparison, conditionals, or aggregation. Postgres’ use of the SQL language is standards-compliant, and SQL has a world of tools for subqueries. This post will look at many of the different subquery tools. We’ll talk about the advantages and use cases of each, and provide further reading and tutorials to dig in more.
I’ll take a broad definition of “subquery”. Why am I calling all of these subqueries? Because they are all queries that work on subsets of data. Having read the article title, you might have come here to say that a subquery is a specific thing vs
Paul Ramsey
In a previous life, I worked on a CRM system that really loved the idea of tags. Everything could be tagged, users could create new tags, tags were a key organizing principle of searching and filtering.
The trouble was, modeled traditionally, tags can really make for some ugly tables and equally ugly queries. Fortunately, and as usual, Postgres has an answer.
Today I’m going to walk through working with tags in Postgres with a sample database of 🐈 cats and their attributes
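One common Postgres answer to the tagging problem is a `text[]` column backed by a GIN index, which avoids the join table entirely. The `cat` schema and sample rows below are a hypothetical sketch, not necessarily the exact one the post uses:

```sql
CREATE TABLE cat (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL,
    tags text[] NOT NULL DEFAULT '{}'
);

-- GIN indexes make array containment searches fast
CREATE INDEX cat_tags_idx ON cat USING GIN (tags);

INSERT INTO cat (name, tags) VALUES
    ('Mittens', ARRAY['fluffy', 'indoor']),
    ('Shadow',  ARRAY['black', 'outdoor']);

-- @> (contains) uses the GIN index to find cats with a given tag
SELECT name FROM cat WHERE tags @> ARRAY['fluffy'];
```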
Elizabeth Christensen
Since humans first started recording data, they’ve been keeping track of time. Time management is one of those absolutely crucial database tasks and Postgres does a great job of it. Postgres has a lot of options for storing and querying time so I wanted to provide an overview of some of the most common needs for storing and retrieving time data.
This blog is also available as a hands-on tutorial
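The workhorse for most of those needs is `timestamptz`, which stores one canonical instant (normalized to UTC internally) and renders it in the viewer’s time zone. A minimal sketch, with a hypothetical `event` table:

```sql
-- timestamptz stores a single canonical instant
CREATE TABLE event (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    happened_at timestamptz NOT NULL DEFAULT now()
);

-- The same stored instant displays differently per session time zone
SET timezone = 'America/Chicago';
SELECT happened_at FROM event;

-- Or convert explicitly for a single query
SELECT happened_at AT TIME ZONE 'UTC' AS happened_utc FROM event;
```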
Paul Ramsey
It's the easiest thing in the world to put a timestamp on a column and track when events like new records or recent changes happen, but what about reporting?
Binning large data sets like time series lets you group the data into obvious buckets and then use SQL to pull out results that drop straight into a graph.
Here are some PostgreSQL secrets you can use to build complete reports of time-based data.
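As one hedged sketch of the binning idea, `date_trunc` groups timestamps into buckets, and `generate_series` fills in empty buckets so a graph has no gaps. The `events` table and its `created_at` column here are hypothetical:

```sql
-- Hourly buckets over a hypothetical events(created_at timestamptz) table
SELECT date_trunc('hour', created_at) AS bucket,
       count(*) AS events
FROM events
GROUP BY bucket
ORDER BY bucket;

-- LEFT JOIN against a generated series so empty hours still appear as 0
SELECT g.bucket, count(e.created_at) AS events
FROM generate_series(
         date_trunc('hour', now() - interval '24 hours'),
         date_trunc('hour', now()),
         interval '1 hour') AS g(bucket)
LEFT JOIN events e
       ON date_trunc('hour', e.created_at) = g.bucket
GROUP BY g.bucket
ORDER BY g.bucket;
```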
Philip Hurst
Backups in the database world are essential. They are the safety net protecting you from even the smallest bit of data loss. There’s a variety of ways to back up your data and this post aims to explain the basic tools involved in backups and what options you have, from just getting started to more sophisticated production systems.
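The simplest starting point is a logical backup with `pg_dump`. This sketch assumes a database named `mydb` (a placeholder) and a custom-format archive, which lets `pg_restore` restore selectively and in parallel:

```shell
# Logical backup of one database to a custom-format archive
pg_dump -Fc -f mydb.dump mydb

# Restore into a freshly created database
createdb mydb_restored
pg_restore -d mydb_restored mydb.dump
```

For production systems, tools like pgBackRest add continuous WAL archiving and point-in-time recovery on top of base backups.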
Jacob Coblentz
Geocoding is the process of taking addresses or location information and getting the coordinates for that location. Anytime you route to a new location or look up a zip code, the back end geocodes the location and then uses the geocode inside other PostGIS functions to give you the routes, locations, and other data you asked for.
PostGIS comes equipped with an easy way to use the US Census data with the Tiger geocoder
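With the Tiger geocoder extension installed and its census data loaded, the `geocode()` function turns an address string into candidate points. The address below is made up; the function, `geomout`, `addy`, `rating`, and `pprint_addy` are part of the `postgis_tiger_geocoder` extension:

```sql
-- Best single match (the second argument caps the number of results)
SELECT g.rating,
       ST_X(g.geomout) AS lon,
       ST_Y(g.geomout) AS lat,
       pprint_addy(g.addy) AS normalized_address
FROM geocode('100 Main St, Anytown, PA', 1) AS g;
```

Lower `rating` values indicate better matches.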
Paul Ramsey
In geospatial terminology, a "raster" is a cover of an area divided into a uniform gridding, with one or more values assigned to each grid cell.
A "raster" in which the values are associated with red, green and blue bands might be a visual image. The rasters that come off the Landsat 7
Paul Ramsey
Over and over when I look at applications for performance, the lesson I learn and re-learn is: do more things right inside the database.
Create, read, update, delete! All the things you do to a table or collection of tables to work with your ever-changing data.
Most CRUD examples, and most CRUD thinking, tend to focus on one table at a time. That's easy to understand. It's also unrealistic. Even the simplest application will be working with several interlinked normalized tables.
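As a sketch of what “inside the database” can look like across interlinked tables, a data-modifying CTE can create a parent row and a child row in a single statement, with one round trip. The `customer` and `customer_order` tables here are hypothetical illustrations:

```sql
-- Two interlinked, normalized tables
CREATE TABLE customer (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE customer_order (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint NOT NULL REFERENCES customer (id),
    total numeric(12, 2) NOT NULL
);

-- Create the customer and the order together: the CTE's RETURNING
-- clause feeds the new id straight into the second insert
WITH new_customer AS (
    INSERT INTO customer (name) VALUES ('Ada') RETURNING id
)
INSERT INTO customer_order (customer_id, total)
SELECT id, 42.00 FROM new_customer;
```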
Here are our working example tables.