Christopher Winslett
Postgres has been steadily building on the JSON functionality it first released more than 10 years ago. With Postgres 16, working with JSON has gotten a couple of nice improvements. Primarily, this release added features that ease converting data into JSON and improve support for the SQL standard's JSON functionality.
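As a rough sketch of the standard SQL/JSON constructors that arrived in Postgres 16 (the table and column names here are made up for illustration):

```sql
-- Hypothetical table, just for illustration
CREATE TABLE employees (
    id   int PRIMARY KEY,
    name text,
    team text
);

INSERT INTO employees VALUES
    (1, 'Ada',   'platform'),
    (2, 'Grace', 'data');

-- SQL/JSON constructors available in Postgres 16
SELECT JSON_OBJECT('name': name, 'team': team) FROM employees;
SELECT JSON_ARRAYAGG(name) FROM employees;

-- The IS JSON predicate checks whether text parses as JSON
SELECT '{"valid": true}' IS JSON;
```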
Christopher Winslett
"Last time I had data like this, I stored the data in MongoDB." --Me, last week.
I told this to a friend while talking through some of their data problems. As Craig likes to say, Crunchy Data is "Just Postgres."
Christopher Winslett
MongoDB got an early start as a developer-friendly database. Mongo was trivial to get started with and has some good tooling for scaling. Since you didn't have to create a schema, you were off and running in no time. Don't get us wrong: we firmly believe a well-designed schema is important. If you're not managing your schema, it's managing you. Because of these capabilities, MongoDB has been used by teams for:
Christopher Winslett
JSON is everywhere, even in your SQL database. Since Postgres added JSON support 10 years ago, we've seen widespread adoption. When JSON support was first released in 2012, the implementation was a fairly flat representation of the JSON data type, with limited querying and indexing capabilities.
Paul Ramsey
Too often, web tiers are full of boilerplate that does nothing except convert a result set into JSON. A middle tier could be as simple as a function call that returns JSON. All we need is an easy way to convert result sets into JSON in the database.
PostgreSQL has built-in JSON generators.
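A minimal sketch of those built-in generators, using a made-up table name to show the shapes of output you can get straight from a query:

```sql
-- Hypothetical table, just for illustration
CREATE TABLE customers (
    id    serial PRIMARY KEY,
    name  text,
    email text
);

-- One JSON object per row
SELECT row_to_json(c) FROM customers c;

-- The whole result set as a single JSON array
SELECT json_agg(c) FROM customers c;

-- Build objects by hand when you want to choose the keys
SELECT json_build_object('id', id, 'contact', email) FROM customers;
```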
Craig Kerstiens
Postgres has had "JSON" support for nearly 10 years now. I put JSON in quotes because, well, 10 years ago when we announced JSON support we kinda cheated. We validated that the JSON was valid and then put it into a standard text field. Two years later, in 2014 with Postgres 9.4, we got more proper JSON support with the JSONB data type.
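For a flavor of what JSONB buys you over a validated text field, here is a small sketch; the table, column, and payload are invented for illustration:

```sql
-- Hypothetical table, just for illustration
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb
);

INSERT INTO events (payload)
VALUES ('{"type": "signup", "user": {"id": 42, "plan": "free"}}');

-- Arrow operators pull values out; ->> returns text
SELECT payload -> 'user' ->> 'plan' FROM events;

-- Containment queries, which a GIN index can speed up
CREATE INDEX events_payload_idx ON events USING GIN (payload);
SELECT * FROM events WHERE payload @> '{"type": "signup"}';
```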
Steve Pousty
Let's imagine a scenario in which you are dealing with JSON in your application and you want to store it in your database. You let out a heavy sigh and think, "I guess I am going to have to add something besides my favorite DB (Postgres) to my architecture."
Jonathan S. Katz
If you have been asked to provide a CSV that someone can open up in their favorite spreadsheet editor, chances are you have used the PostgreSQL COPY command. COPY has been around since the early open source releases of PostgreSQL back in the late 1990s and was designed to quickly get data in and out of PostgreSQL.
COPY is also incredibly helpful for ingesting data into a table, especially if you have a lot of it to ingest, and will generally outperform INSERT. Let’s explore a few ways to use COPY to load some data into a table.
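A quick sketch of COPY in both directions; the table definition and the file path are hypothetical, and the server-side form assumes appropriate file-read privileges:

```sql
-- Hypothetical table, just for illustration
CREATE TABLE weather (
    city     text,
    temp_c   numeric,
    recorded date
);

-- Server-side COPY reads a file on the database host
-- (needs superuser or the pg_read_server_files role); path is made up
COPY weather FROM '/tmp/weather.csv' WITH (FORMAT csv, HEADER true);

-- From psql, \copy streams the file from the client instead
-- \copy weather FROM 'weather.csv' WITH (FORMAT csv, HEADER true)

-- COPY also goes the other way, exporting a query result as CSV
COPY (SELECT * FROM weather WHERE temp_c > 30)
    TO STDOUT WITH (FORMAT csv, HEADER true);
```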