In the aftermath of this huge storm in the USA, a number of web sites I use on a daily basis are offline, threatening to go offline, or operating with reduced functionality. If your business relies on your site and your data being available, you need to make sure that every part of it can suffer a major event like this and yet continue. That means having failover systems that you can bring online rapidly. In the Postgres world, that means having at least one standby replica that you can activate on very short notice. This replica should be placed either in the cloud or in an alternative data centre, but in either case in a physical location far from your primary. For example, if your main data centre is on the east coast, the backup should probably be on the west coast.

There is no excuse these days for not having a good disaster recovery plan. Most large businesses I am familiar with learned this lesson long ago, but it's one I have seen ignored many times by smaller and even medium-sized businesses. The other thing I have often seen is a DR plan that has never been tested. That's as bad as having a backup mechanism that's never been tested. Just the other day I saw a business suffer 100% data loss because they had never tested their backups, which turned out to be useless. It's like running with scissors.
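To make that concrete, here is a minimal sketch of a streaming-replication standby and how it gets promoted during a failover. The hostnames, replication user and data directory are placeholders, and the exact parameter names vary between Postgres releases (recovery.conf was later replaced by standby.signal, for instance), so treat this as an outline rather than a recipe:

    # postgresql.conf on the primary: allow standbys to stream WAL
    wal_level = hot_standby        # 'replica' on newer releases
    max_wal_senders = 3

    # on the standby (recovery.conf on older releases): follow the primary
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com user=replicator'

    # when disaster strikes, promote the standby to become the new primary
    pg_ctl promote -D /var/lib/postgresql/data

The point of keeping the standby continuously streaming is that promotion is a matter of seconds, rather than the hours a full restore from backups can take.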
Yup, data recovery will be tested after the disaster :p
I think the cost of running a second data centre is bigger than the disaster risk justifies for a medium-sized business.
Better to do a daily encrypted backup to another continent with a simple .sh script.
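Something along these lines, say (the database name, GPG recipient and remote host are just placeholders):

    #!/bin/sh
    # nightly encrypted dump shipped off-site
    set -e
    STAMP=$(date +%Y%m%d)
    pg_dump -Fc mydb | gpg --encrypt --recipient backups@example.com > /tmp/mydb-$STAMP.dump.gpg
    scp /tmp/mydb-$STAMP.dump.gpg backup.example.net:/backups/
    rm /tmp/mydb-$STAMP.dump.gpg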
I agree with you 100%. We have many databases serving customers split between two separate but geographically close data centres, and we depend on WAL archiving for replication. Out of curiosity, though, what would be the preferred way to ship these WALs for extremely active databases between very distant data centres, given that the segments can still be large even after compression? Thanks!
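For what it's worth, the usual shape of compressed WAL shipping is an archive_command that compresses each segment before sending it and a matching restore_command on the standby; the hosts and paths below are placeholders, not our actual setup:

    # postgresql.conf on the primary: compress each WAL segment, then ship it
    archive_mode = on
    archive_command = 'gzip -c %p > /tmp/%f.gz && rsync -a /tmp/%f.gz standby.example.net:/wal_archive/ && rm /tmp/%f.gz'

    # on the standby: decompress segments as they are replayed
    restore_command = 'gunzip -c /wal_archive/%f.gz > %p'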