Re: [PERFORM] How do I bulk insert to a table without affecting read performance on that table?

2008-01-25 Thread Scott Marlowe
On Jan 25, 2008 5:27 PM, growse <[EMAIL PROTECTED]> wrote: Hi, I've got a pg database, and a batch process that generates some metadata to be inserted into one of the tables. Every 15 minutes or so, the batch script re-calculates the metadata (600,000 rows), dumps it to file, and then

[PERFORM] How do I bulk insert to a table without affecting read performance on that table?

2008-01-25 Thread growse
Hi, I've got a pg database, and a batch process that generates some metadata to be inserted into one of the tables. Every 15 minutes or so, the batch script re-calculates the metadata (600,000 rows), dumps it to file, and then does a TRUNCATE TABLE followed by a COPY to import that file into the
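A common way to keep readers unaffected during this kind of reload (a sketch only; the table, file, and column names below are invented for illustration, not taken from the thread) is to COPY into a freshly built staging table and swap it in with a rename, since TRUNCATE takes an ACCESS EXCLUSIVE lock for the duration of the load:

```sql
-- Hypothetical sketch: load into a staging table, then swap names.
-- Readers keep seeing the old data until the near-instant rename
-- becomes visible at COMMIT.
BEGIN;
CREATE TABLE metadata_new (LIKE metadata INCLUDING DEFAULTS);
COPY metadata_new FROM '/path/to/metadata.dump';
CREATE INDEX metadata_new_key_idx ON metadata_new (some_key);
DROP TABLE metadata;
ALTER TABLE metadata_new RENAME TO metadata;
COMMIT;
```

The trade-off is roughly double the disk space during the load, in exchange for readers never seeing an empty table.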

Re: [PERFORM] Linux/PostgreSQL scalability issue - problem with 8 cores

2008-01-25 Thread Simon Riggs
On Mon, 2008-01-07 at 19:54 -0500, Tom Lane wrote: Alvaro Herrera <[EMAIL PROTECTED]> writes: Perhaps it would make sense to try to take the "fast path" in SIDelExpiredDataEntries with only a shared lock rather than exclusive. I think the real problem here is that sinval catchup proc

Re: [PERFORM] 8.3rc1 Out of memory when performing update

2008-01-25 Thread Magnus Hagander
Roberts, Jon wrote: A simple update query, over roughly 17 million rows, populating a newly added column in a table, resulted in an out of memory error when the process memory usage reached 2GB. Could this be due to a poor choice

Re: [PERFORM] 8.3rc1 Out of memory when performing update

2008-01-25 Thread Roberts, Jon
A simple update query, over roughly 17 million rows, populating a newly added column in a table, resulted in an out of memory error when the process memory usage reached 2GB. Could this be due to a p

Re: [PERFORM] 8.3rc1 Out of memory when performing update

2008-01-25 Thread cgallant
A simple update query, over roughly 17 million rows, populating a newly added column in a table, resulted in an out of memory error when the process memory usage reached 2GB. Could this be due to a poor choice of some configuration parameter, or is there a limit on how many r
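A standard workaround for very large single-statement updates (a hedged sketch; the table and column names here are invented, and whether it helps depends on what is actually consuming the memory) is to populate the new column in key-range batches, each in its own transaction, instead of one 17-million-row UPDATE:

```sql
-- Hypothetical sketch: update in primary-key ranges, one transaction
-- per batch, so per-transaction bookkeeping stays bounded.
UPDATE big_table
   SET new_col = old_col * 2        -- placeholder expression
 WHERE id >= 1 AND id < 1000001;
-- COMMIT, then repeat with the next id range (1000001..2000001, ...)
-- until the table is covered. A VACUUM between batches helps contain
-- the bloat from the dead row versions each UPDATE leaves behind.
```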

Re: [PERFORM] Postgres 8.2 memory weirdness

2008-01-25 Thread Tory M Blue
On Jan 24, 2008 10:49 AM, Greg Smith <[EMAIL PROTECTED]> wrote: 8.2.1 has a nasty bug related to statistics collection that causes performance issues exactly in the kind of heavy update situation you're in. That's actually why I asked for the exact 8.2 version. You should plan an upgrade

Re: [PERFORM] 1 or 2 servers for large DB scenario.

2008-01-25 Thread Matthew
On Fri, 25 Jan 2008, Greg Smith wrote: If you're seeing <100TPS you should consider if it's because you're limited by how fast WAL commits can make it to disk. If you really want good insert performance, there is no substitute for getting a disk controller with a good battery-backed cache to w

Re: [PERFORM] 1 or 2 servers for large DB scenario.

2008-01-25 Thread Matthew
On Fri, 25 Jan 2008, David Brain wrote: We currently have one large DB (~1.2TB on disk), that essentially consists of 1 table with somewhere in the order of 500 million rows; this database has daily inserts as well as being used for some semi-data mining type operations, so there are a fairly

Re: [PERFORM] 1 or 2 servers for large DB scenario.

2008-01-25 Thread Greg Smith
On Fri, 25 Jan 2008, David Brain wrote: The hardware storing this DB (a software RAID6) array seems to be very IO bound for writes and this is restricting our insert performance to ~50TPS. If you're seeing <100TPS you should consider if it's because you're limited by how fast WAL commits can
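When per-commit WAL flushes are the bottleneck and a battery-backed write cache isn't an option, one software-side mitigation (a sketch only; whether it is acceptable depends entirely on durability requirements) is to relax or group commits in postgresql.conf:

```
# Hypothetical postgresql.conf fragment:
# available from 8.3 on: commits return before the WAL reaches disk,
# risking loss of the last few transactions on a crash (but no
# corruption).
synchronous_commit = off
# older alternative: let nearly-simultaneous commits share one fsync
commit_delay = 10        # microseconds to wait before flushing
commit_siblings = 5      # only delay if this many xacts are active
```

Batching many inserts into fewer, larger transactions attacks the same limit without giving up durability.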

[PERFORM] 1 or 2 servers for large DB scenario.

2008-01-25 Thread David Brain
Hi, I'd appreciate some assistance in working through what would be the optimal configuration for the following situation. We currently have one large DB (~1.2TB on disk), that essentially consists of 1 table with somewhere in the order of 500 million rows; this database has daily insert
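For a single append-mostly table of this size, one option often raised on this list (a hedged sketch; the parent table `events` and its date column are hypothetical, not from the thread) is inheritance-based partitioning, the usual approach in the 8.x era:

```sql
-- Hypothetical sketch: one child table per month, attached by
-- inheritance; the CHECK constraint lets the planner skip children.
CREATE TABLE events_2008_01 (
    CHECK (event_date >= DATE '2008-01-01'
       AND event_date <  DATE '2008-02-01')
) INHERITS (events);
-- Inserts are routed to the current child (by the application or a
-- trigger). With constraint_exclusion = on, queries restricted by
-- event_date scan only the matching children, and old months can be
-- detached or dropped cheaply instead of deleted row by row.
```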

Re: [PERFORM] 8.3rc1 Out of memory when performing update

2008-01-25 Thread Stephen Denne
I don't have a PostgreSQL build environment. It is now Friday night for me. I left the alternate query running, and will find out on Monday what happened. If I drop the fk constraint, and/or its index, would I still be affected by the leak you found? Regards, Stephen Denne.

Re: [PERFORM] Making the most of memory?

2008-01-25 Thread Florian Weimer
* Chris Browne: "A dedicated RAID controller with battery-backed cache of sufficient size and two mirrored disks should not perform that bad, and has the advantage of easy availability." That won't provide as "souped up" performance as "WAL on SSD," and it's from technical people wish