Just a couple of suggestions:
I think on the current server you're pretty much hosed, since it looks
like you are CPU bottlenecked. You should probably take a good look at
PITR and see if that meets your requirements. Also, you definitely want
to go to 8.1... it's faster, and every bit helps.
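Roughly, the moving parts for PITR on 8.1 look like this (just a sketch;
the paths and backup label are made up, the docs have the real procedure):

    # postgresql.conf -- ship each completed WAL segment somewhere safe
    archive_command = 'cp %p /mnt/backup/wal/%f'

    # base backup, taken while the server stays up
    psql -c "SELECT pg_start_backup('base');"
    tar cf /mnt/backup/base.tar $PGDATA
    psql -c "SELECT pg_stop_backup();"

Once the WAL archive and a base backup exist you can recover to an
arbitrary point in time without ever running pg_dump on the live box.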
G
On Wed, Jun 14, 2006 at 05:18:14PM -0400, John Vincent wrote:
> On 6/14/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> >
> >On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
> >> Out of curiosity, does anyone have any idea what the ratio of actual
> >> datasize to backup size is if I use
On 6/14/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
> Out of curiosity, does anyone have any idea what the ratio of actual
> datasize to backup size is if I use the custom format with -Z 0 compression
> or the tar format?
-Z 0 should mean no compression.
time gzip -6 claDW_PGSQL.test.bak

real    3m4.360s
user    1m22.090s
sys     0m6.050s

Which is still less time than it would take to do a compressed pg_dump.

On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
How long does gzip take to compress this backup?
On Wed, 2006-06-14 at 15:59, John Vincent w
On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
> Out of curiosity, does anyone have any idea what the ratio of actual
> datasize to backup size is if I use the custom format with -Z 0 compression
> or the tar format?
-Z 0 should mean no compression.
Something you can try is piping
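Presumably something along these lines, so at least the compression runs
in a separate process from pg_dump (untested sketch; database and file
names are invented):

    pg_dump -Fc -Z 0 dw | gzip --fast > /mnt/backup/dw.dump.gz

If the local CPUs are the real limit, you could also point the far end of
that pipe at another box over ssh instead.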
How long does gzip take to compress this backup?
On Wed, 2006-06-14 at 15:59, John Vincent wrote:
> Okay I did another test dumping using the uncompressed backup on the
> system unloaded and the time dropped down to 8m for the backup.
> There's still the size issue to contend with but as I said, I
On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
Description of "Queries gone wild" redacted. hehe.Yeah, I've seen those kinds of queries before too. you might be able tolimit your exposure by using alter user:alter user userwhoneedslotsofworkmem set work_mem=100;
Is this applicable on 8
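(For what it's worth, on 8.x the setting is work_mem and the value is in
kB, so spelled out it is something like this -- the role and database
names are only placeholders:

    # ~512 MB per sort/hash, picked up at the user's next login
    psql -d yourdb -c "ALTER USER etl_user SET work_mem = 524288;"

or the application can issue a plain SET work_mem = ... right before the
big query and reset it afterwards.)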
On Wed, 2006-06-14 at 12:04, John Vincent wrote:
>
> On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> > -- this is the third time I've tried sending this and I never saw it get
> > through to the list. So
Out of curiosity, does anyone have any idea what the ratio of actual
datasize to backup size is if I use the custom format with -Z 0
compression or the tar format?

Thanks.

On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is the third
On Wed, June 14, 2006 1:04 pm, John Vincent wrote:
> I know it is but that's what we need for some of our queries. Our ETL
> tool (informatica) and BI tool (actuate) won't let us set those things as
> part of our jobs. We need it for those purposes. We have some really nasty
> queries that will be
On 6/14/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is the third time I've tried sending this and I never saw it get
> through to the list. Sorry if multiple copies show up.
>
> Hi all,

BUNCHES SNIPPED

> work_mem = 1048576 ( I know this is h
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is the third time I've tried sending this and I never saw it get
> through to the list. Sorry if multiple copies show up.
>
> Hi all,
BUNCHES SNIPPED
> work_mem = 1048576 ( I know this is high but you should see some of our
> sorts
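To put that number in perspective (rough arithmetic, not a measurement):

    work_mem = 1048576 kB   ->  about 1 GB per sort or hash step
    one query with 3 sorts  ->  ~3 GB in a single backend
    ten of those at once    ->  ~30 GB, on top of shared_buffers and the OS cache

which is why the usual advice is a modest global value plus a per-user or
per-session bump only where it's actually needed.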
"John E. Vincent" <[EMAIL PROTECTED]> writes:
> I've watched the backup process and I/O is not a problem. Memory isn't a
> problem either. It seems that we're CPU bound but NOT in I/O wait.
Is it the pg_dump process, or the connected backend, that's chewing the
bulk of the CPU time? (This should
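A quick way to tell is to watch top with the command lines visible
(sketch, assuming a GNU/procps ps):

    top -c
    # or a one-shot snapshot sorted by CPU usage:
    ps -eo pid,pcpu,comm,args --sort=-pcpu | head

The pg_dump client and the backend feeding it are separate processes, so
whichever one is pegged answers the question.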