Elizabeth Christensen
In my day to day, I'm surrounded by great database engineers. They talk about things like HA, the Raft protocol, and the right and wrong ways to configure synchronous vs. asynchronous replication. There is a lot of value in all that deep technical knowledge, but when interacting with customers, I like to boil it down a bit. What I've seen is that for many folks the basics of key database principles can get lost in the details. What follows is a summary of conversations I've had with customers on how to think about key tenets of database management: high availability and disaster recovery.
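For readers who want to see what the synchronous vs. asynchronous distinction looks like in practice, here is a minimal, illustrative postgresql.conf sketch on a primary; the standby names are placeholders, and the right topology depends on your environment:

# Illustrative settings on the primary (standby names are hypothetical)
# With synchronous_standby_names set, a commit waits for at least one
# of the listed standbys to confirm the WAL write (synchronous replication).
synchronous_standby_names = 'ANY 1 (standby1, standby2)'
synchronous_commit = on
# Leaving synchronous_standby_names empty keeps replication asynchronous:
# synchronous_standby_names = ''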
Joe Conway
If you run Linux in production for any significant amount of time, you have likely run into the "Linux Assassin," that is, the OOM (out-of-memory) killer. When Linux detects that the system is using too much memory, it will identify processes for termination and, well, assassinate them. The OOM killer has a noble role in ensuring a system does not run out of memory, but this can lead to unintended consequences.
For years the PostgreSQL community has made recommendations
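One commonly cited recommendation from the PostgreSQL documentation is to disable memory overcommit on a dedicated database host so allocations fail up front rather than being granted and later OOM-killed. A rough sketch of what that looks like; the ratio value is illustrative and should be tuned for your system:

# /etc/sysctl.conf on a dedicated Postgres host (illustrative values)
vm.overcommit_memory = 2    # refuse allocations beyond the commit limit instead of OOM-killing later
vm.overcommit_ratio = 80    # percent of RAM (plus swap) the kernel will commit
# apply without a reboot:
#   sysctl -p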
Paul Laurence
There is increasing consensus that Postgres is a great choice of database for a broad range of use cases. As our friends at RedMonk have said:
the answer is postgres, now what's the question again? ;-)
— James Governor (@monkchips) April 29, 2017
Dave Cramer
My colleague @craigkerstiens recently wrote about some guidance for cleaning up your Postgres database. One of the things he mentioned in his post, "Don't put your logs or messages in your database," got a number of questions
Craig Kerstiens
Last week I was on a call with someone giving an overview of Crunchy Bridge, our multi-cloud fully managed database as a service. During the call they asked what the best way was to get a sense of how their database was doing: a health check
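As a taste of the kind of check that conversation leads to, a query along these lines (a sketch, not the post's exact checklist) reports the buffer cache hit ratio for user tables:

-- Rough table-data cache hit ratio; on a steady-state workload, values well
-- below ~99% can hint that the working set no longer fits in memory.
SELECT round(100.0 * sum(heap_blks_hit) /
             nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2) AS cache_hit_pct
FROM pg_statio_user_tables;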
Paul Laurence
While every year feels like the year of Postgres these days, 2012 did not. For most observers, 2012 was the year of "Big Data" as NoSQL technologies like Hadoop and MongoDB were demonstrating powerful new data management use cases.
At the same time, Crunchy Data was still just an idea, and we were beginning to engage with various consumers of database technology on how this wave of new open source tools was impacting their data strategy. During these early discussions - and many since - we heard how organizations were building a modern data management toolbox, with tools selected to support the next generation of application development. Organizations were including a NoSQL tool like Hadoop, one or two legacy databases, a data caching or message broker technology, and a modern relational tool as the new SQL standard. And the relational database tool of choice that we heard about time and time again was Postgres.
Greg Smith
Some people are obsessed with sports or cars. I follow computer hardware. The PC industry has overclocking instead of nitrous, plexi cases instead of chrome, and RGB lighting as its spinning wheels.
The core challenge I enjoy is cascading small improvements to see if I can move a bottleneck. The individual improvements are often just a few percent. Percentage gains can compound as you chain them together.
Today I'm changing the memory speed on my main test system, going from 2133 MHz to 3200 MHz, and measuring how that impacts PostgreSQL SELECT results. I'm seeing a 3% gain on this server, but as always with databases that's only on a narrow set of in-memory use cases.
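For context, the kind of in-memory SELECT workload that benchmark exercises can be reproduced with pgbench's built-in select-only mode; a sketch (database name, scale factor, and durations here are arbitrary, not the exact settings used):

createdb bench
pgbench -i -s 100 bench            # load sample data at scale factor 100
pgbench -S -c 8 -j 8 -T 60 bench   # -S select-only, 8 clients, 8 threads, 60 seconds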
Greg Smith
This week Apple started delivering Macs using their own Apple Silicon chips, starting with a Mac SoC named the M1. The M1 uses the ARM instruction set and claims some amazing acceleration for media workloads. I wanted to know how it would do running PostgreSQL, an app that's been running on various ARM systems for years. The results are great!
The OSS community around the Homebrew project already qualified their PostgreSQL package as working on M1, and with some recompiling work it all worked as expected:
$ /opt/homebrew/bin/psql -c "select version()"
PostgreSQL 13.0 on arm-apple-darwin20.1.0, compiled by
Apple clang version 12.0.0 (clang-1200.0.32.28), 64-bit
Yorvi Arias
The Postgres documentation covers streaming replication pretty comprehensively, but you may also need something more digestible for reference. In this blog post, we'll discuss how to set up streaming replication in Windows. Credit goes to my colleague Douglas Hunley, whose blog post on setting up streaming replication on Linux
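As a preview of the standby setup step, here is a hedged sketch of cloning the primary with pg_basebackup from a Windows command prompt. The host, user, and path are placeholders; with PostgreSQL 12+, -R writes primary_conninfo and standby.signal into the target directory for you:

pg_basebackup -h 192.168.1.10 -U replicator -D "C:\pgdata\standby" -X stream -P -R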
Greg Smith
Apple's Intel-based laptops are very popular among developers, and that's as true of people who work on PostgreSQL as of any other group. Tomorrow, the first shipping Apple laptops running on ARM CPUs instead of Intel are expected. That is likely to include at least a 13" MacBook Pro. I decided to prepare for that with a survey of PostgreSQL performance on my small herd of Apple laptops. Mine are all the 15" or newer 16" models.
Crunchy Data has already started digging into PostgreSQL on ARM performance as part of Crunchy Bridge