Some findings around replication following an earlier blog post.
I have a simple streaming replication setup with 1 primary and 1 standby DB server. My reference DB is about 42GB in size.
- A full vacuum will break the replication. Remember, PostgreSQL recreates every table in full mode, and that much WAL won't fit through any default-sized xlog retention.
A regular vacuum works fine though.
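As a rough sketch (dbname is a placeholder), the two commands differ by one keyword, but the WAL impact is very different. On releases that support it, raising wal_keep_segments on the primary gives the standby more headroom:

```shell
# Plain vacuum: reclaims space in place, replicates without trouble.
sudo -u postgres psql -d dbname -c "VACUUM ANALYZE;"

# Full vacuum: rewrites every table, pushing roughly the whole database
# size through the xlog stream - the standby can fall behind and the
# primary may recycle xlogs the standby still needs.
# sudo -u postgres psql -d dbname -c "VACUUM FULL;"
```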
- In a complete disaster scenario where both primary and standby are corrupted I have to rely on the nightly pg_dump.
The import on the primary DB server (AWS m2.4xl instance) takes about 3 hours to restore with the default parameters.
We can speed it up significantly by using more than the single default CPU for data loading and index creation via the -j parameter. Constraints cannot be created in parallel though.
sudo -u postgres pg_restore -j 8 -v -d dbname dumpfilename
This way I could reduce the restore time to 70 minutes.
I still have to get the standby in sync with pg_basebackup, which takes another 60 minutes. I could reduce this to 30 minutes by tarring and copying the complete DB files over to the standby.
(But pg_basebackup works while the primary server is in use, provided you keep enough xlogs to cover the time until the standby is linked up.)
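A minimal sketch of both resync options (host names and the data directory path are placeholders; pg_basebackup flags vary slightly between PostgreSQL versions):

```shell
# Option 1: pg_basebackup - works against the live primary (~60 min here).
sudo -u postgres pg_basebackup -h primary-host -D /var/lib/postgresql/data -P

# Option 2: tar and copy the data directory (~30 min here). Only safe
# with the primary stopped, or wrapped in a proper low-level backup
# (pg_start_backup/pg_stop_backup), since the files must be consistent.
tar -C /var/lib/postgresql -cf - data | ssh standby-host 'tar -C /var/lib/postgresql -xf -'
```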