Time is not really a problem for me, if we talk about hours rather than days.
On a roughly comparable machine I've made backups of databases of less than 10 GB,
and it was a matter of minutes. But I know that there are scale problems:
sometimes programs just hang if the data are beyond some size. Is that likely
in Postgres if you go from ~10 GB to ~100 GB? There isn't any interdependence
among my tables beyond queries I construct on the fly, because I use the
database in a single-user environment.

From: "David G. Johnston" <david.g.johns...@gmail.com>
Date: Tuesday, December 5, 2017 at 3:59 PM
To: Martin Mueller <martinmuel...@northwestern.edu>
Cc: "pgsql-general@lists.postgresql.org" <pgsql-general@lists.postgresql.org>
Subject: Re: a back up question

On Tue, Dec 5, 2017 at 2:52 PM, Martin Mueller 
<martinmuel...@northwestern.edu> wrote:
Are there rules of thumb for deciding when you can dump a whole database and
when you’d be better off dumping groups of tables? I have a database that has
around 100 tables, some of them quite large, and right now the data directory
is well over 100GB. My hunch is that I should divide and conquer, but I don’t
have a clear sense of what counts as “too big” these days. Nor do I have a
clear sense of whether the constraints have to do with overall size, the number
of tables, or machine memory (my machine has 32GB of memory).

Is 10GB a good practical limit to keep in mind?


I'd say the rule of thumb is if you have to "divide-and-conquer" you should
use non-pg_dump based backup solutions.  Too big is usually measured in units
of time, not memory.

Any ability to partition your backups into discrete chunks is going to be very 
specific to your personal setup.  Restoring such a monster without constraint 
violations is something I'd be VERY worried about.
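For illustration only, a rough sketch of the options being compared here (the
database name "mydb", the table names, and the backup paths are placeholders,
not taken from this thread):

    # Single logical dump of the whole database in custom format
    # (can be restored selectively, and in parallel with pg_restore -j):
    pg_dump -Fc -f mydb.dump mydb

    # "Divide and conquer" with pg_dump: dump only selected tables.
    # Restoring such pieces independently is where constraint/ordering
    # problems can show up:
    pg_dump -Fc -t big_table_a -t big_table_b -f big_tables.dump mydb

    # A non-pg_dump alternative: a physical base backup of the whole cluster:
    pg_basebackup -D /backups/base -Ft -z -P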

David J.
