The way I’ve solved this problem before 9.4 is with a command called 'pv' 
(pipe viewer).  Normally this command is used to watch the rate of data flow 
through a pipe, but it also has a rate-limiting capability.  The trick for me 
was running the output of pg_basebackup through pv (which emulates having a 
slow disk), throttling the copy without needing double the storage when 
building a new slave.

First, run 'pg_basebackup' to standard out in tar format.  Then pipe that to 
'pv' to quietly do the rate limiting.  Then pipe that to 'tar' to lay it out 
in directory format.  Tar will dump everything into the current directory, 
but the --transform option gives you the effect of having selected a target 
directory in the initial command.

The finished product looks something like:

pg_basebackup -U postgres -D - -F t -x -vP \
  | pv -q --rate-limit 100m \
  | tar -xf - --transform='s`^`./pgsql-data-backup/`'
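For comparison, 9.4 and later build this in: pg_basebackup grew a --max-rate 
(-r) option, so the pv step is no longer necessary.  A rough equivalent of the 
pipeline above (the target directory name here is just an illustration, and 
-X stream is one choice for WAL handling on newer versions):

```shell
# PostgreSQL 9.4+: throttle the base backup directly, writing plain files
# into the target directory instead of piping a tar stream through pv.
pg_basebackup -U postgres -D ./pgsql-data-backup -F p -X stream -vP \
  --max-rate=100M
```

One difference to keep in mind: --max-rate throttles the network transfer 
from the server, while the pv trick throttles the stream on the receiving 
side; for the purpose of building a slave gently, either works.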


