The upcoming version 1.6.1 of Barman will introduce a few interesting
new features which consolidate its central role in business continuity
installations of PostgreSQL databases. Discover why.

Version 1.6.1 of Barman, the backup and recovery manager for PostgreSQL, is on the way (at the time of writing, we have just finished rolling out the first alpha version, which is available for public testing).
Barman 1.6.1 improves the tool's robustness and marks an important step towards full support for “streaming replication”-only backups (yes, this means that SSH connections to PostgreSQL servers won’t be necessary any more!). A few bugs have been fixed but, most importantly, the barman check command has been enhanced, making it easier to spot one of the most common problems in initial installations: the setup of continuous WAL archiving.
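For instance, here is an abridged, illustrative example of what the enhanced check reports when continuous WAL archiving is not working yet (angus is the server name used throughout this article):

$ barman check angus
Server angus:
        PostgreSQL: OK
        ...
        WAL archive: FAILED (please make sure WAL shipping is setup)
        ...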
Also, Barman consolidates its central role in modern installations of PostgreSQL database clusters running in business continuity environments. Two handy features stand out:
- the replication-status command;
- the --peek option for the get-wal command.
Monitoring streaming replication from Barman
The replication-status command is a clear example of an improvement in that direction, especially if you run several PostgreSQL clusters with one or more hot standby servers in streaming replication (with or without repmgr) and centrally manage backup and recovery procedures with a single Barman installation.
Many times I have wished I could monitor the replication status of a PostgreSQL server directly from the Barman server, without having to connect via psql and query the one and only authority on the matter: pg_stat_replication. That is why I came up with the idea of this tiny yet very useful enhancement (OK, I admit I am not objective here).
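For reference, this is the kind of manual check that replication-status saves you from, a psql query against each master (host and user names here are illustrative):

$ psql -h angus -U postgres -c \
    "SELECT application_name, client_addr, state, sync_state
       FROM pg_stat_replication;"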
The following example shows that the angus server has 3 standby servers:
- cliff: synchronous standby
- chris: potentially synchronous standby
- axl: asynchronous standby
It also has an async WAL streamer: Barman itself, with WAL streaming enabled.
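Such a layout would typically come from a synchronous_standby_names setting like the one below on angus (an assumption for this example; with PostgreSQL up to 9.5, the first connected standby in the list is synchronous, the other listed ones are potential, and unlisted standbys are asynchronous):

# postgresql.conf on angus (illustrative)
synchronous_standby_names = 'cliff, chris'  # cliff: sync, chris: potential
# axl is not listed, so it streams asynchronously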
$ barman replication-status angus
Status of streaming clients for server 'angus':
Current xlog location on master: 1/C10000C8
Number of streaming clients: 4
1. #1 Sync standby
Application name: cliff
Sync stage : 5/5 Hot standby (max)
Communication : TCP/IP
IP Address : 192.168.0.40 / Port: 57567 / Host: -
User name : streaming_barman
Current state : streaming (sync)
WAL sender PID : 10188
Started at : 2016-05-06 16:31:59.193649+02:00
Sent location : 1/C10000C8 (diff: 0 B)
Write location : 1/C10000C8 (diff: 0 B)
Flush location : 1/C10000C8 (diff: 0 B)
Replay location : 1/C10000C8 (diff: 0 B)
2. #2 Potential standby
Application name: chris
Sync stage : 5/5 Hot standby (max)
Communication : TCP/IP
IP Address : 192.168.0.41 / Port: 57568 / Host: -
User name : streaming_barman
Current state : streaming (potential)
WAL sender PID : 10205
Started at : 2016-05-06 16:32:03.160853+02:00
Sent location : 1/C10000C8 (diff: 0 B)
Write location : 1/C10000C8 (diff: 0 B)
Flush location : 1/C10000C8 (diff: 0 B)
Replay location : 1/C10000C8 (diff: 0 B)
3. Async standby
Application name: axl
Sync stage : 5/5 Hot standby (max)
Communication : TCP/IP
IP Address : 192.168.0.43 / Port: 57569 / Host: -
User name : streaming_barman
Current state : streaming (async)
WAL sender PID : 10223
Started at : 2016-05-06 16:32:06.307472+02:00
Sent location : 1/C10000C8 (diff: 0 B)
Write location : 1/C10000C8 (diff: 0 B)
Flush location : 1/C10000C8 (diff: 0 B)
Replay location : 1/C10000C8 (diff: 0 B)
4. Async WAL streamer
Application name: barman_receive_wal
Sync stage : 3/3 Remote write
Communication : TCP/IP
IP Address : 192.168.0.30 / Port: 57569 / Host: -
User name : streaming_barman
Current state : streaming (async)
WAL sender PID : 12160
Started at : 2016-05-06 16:37:16.112675+02:00
Sent location : 1/C10000C8 (diff: 0 B)
Write location : 1/C10000C8 (diff: 0 B)
As you can see, replication-status displays the current synchronisation stage of each streaming client, for example level 5 out of 5 (meaning that the standby is up to date with the master for read operations too).
Relevant information such as the ‘sent’, ‘write’, ‘flush’ and ‘replay’ locations of each standby server is also returned, highlighting the lag from the master (measured in bytes of WAL information).
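These diff values match what you would otherwise compute by hand on the master with pg_xlog_location_diff() (a sketch using the pre-9.6 function and column names current at the time of writing):

$ psql -h angus -U postgres -c \
    "SELECT application_name,
            pg_xlog_location_diff(pg_current_xlog_location(),
                                  replay_location) AS replay_lag_bytes
       FROM pg_stat_replication;"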
As with most server commands, you can get this information for every instance managed by Barman through the all alias:
barman replication-status all
For a list of options (including machine-readable output and filtering), type:
barman replication-status --help
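For instance, assuming your version exposes them as documented, a machine-readable listing and filtering by client type look like this:

barman replication-status --minimal angus
barman replication-status --target hot-standby angus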
Peeking the WAL hub
As you may know, version 1.5.0 of Barman introduced the get-wal command.
NOTE: For detailed information on get-wal and how to use Barman as an infinite basin of WAL files for your standby servers, read the dedicated article on our blog.
Version 1.6.1 adds the --peek N option, which asks Barman to return a list of up to N WAL file names, starting from the requested one.
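For example, a request like the following would return up to 8 WAL file names, starting from the requested segment (output abridged and illustrative):

$ barman get-wal --peek 8 angus 0000000100000001000000C0
0000000100000001000000C0
0000000100000001000000C1
...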
Thanks to this simple option, you can easily write a script that works in conjunction with a standby’s restore_command and fetches WAL files from Barman in a parallel fashion.
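Here is a minimal sketch of such a script (everything below is an assumption for illustration: the backup host and user, the angus server name, the spool directory and the prefetch depth); on the standby you would point restore_command at it, e.g. restore_command = '/usr/local/bin/barman_fetch_wal %f %p':

#!/bin/bash
# barman_fetch_wal - hypothetical restore_command wrapper for a standby.
# It fetches the requested WAL file from Barman and prefetches the
# following segments in the background, thanks to get-wal --peek.
WAL_NAME="$1"                       # %f: file name requested by PostgreSQL
WAL_DEST="$2"                       # %p: destination path on the standby
BARMAN="ssh barman@backup barman"   # assumption: Barman host and user
SPOOL=/var/tmp/barman-spool         # local cache for prefetched segments
mkdir -p "$SPOOL"

# Serve the segment from the local spool if a previous run prefetched it
if [ -f "$SPOOL/$WAL_NAME" ]; then
    mv "$SPOOL/$WAL_NAME" "$WAL_DEST"
    exit 0
fi

# Otherwise fetch it synchronously from Barman
if ! $BARMAN get-wal angus "$WAL_NAME" > "$WAL_DEST"; then
    rm -f "$WAL_DEST"   # do not leave a truncated file behind
    exit 1
fi

# In the background, ask Barman for up to 8 segment names starting from
# the current one (--peek) and prefetch the ones we do not have yet
(
    for SEG in $($BARMAN get-wal --peek 8 angus "$WAL_NAME"); do
        [ "$SEG" = "$WAL_NAME" ] && continue
        [ -f "$SPOOL/$SEG" ] || $BARMAN get-wal angus "$SEG" > "$SPOOL/$SEG"
    done
) &

exit 0

This is only a starting point: a real version should also handle errors, clean up stale spool files and cope with concurrent invocations.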
Stay tuned, you might hear more from us in the forthcoming weeks!
Conclusions
As Barman’s project leader, I am really happy about this release. Barman is getting more and more robust, thanks to feedback from users all over the world. It is a well-tested tool, thanks to the hard work that goes into our QA/UAT Kanban phase and to the continuous integration system that we have set up.
Recently, Giulio Calacoci and Francesco Canovai covered it extensively in a talk about Continuous Integration at Pycon 7 in Florence (in Italian).
Please participate in the pre-release testing program for Barman 1.6.1 and let us have your feedback through the mailing list or by opening issues on GitHub.
I can anticipate that version 1.6.2 will include a streaming-only backup method through pg_basebackup and support for replication slots. If you are interested in helping us by sponsoring the development, even partially, drop us a line (info at pgbarman.org). For now, you just have to wait a few more weeks.