On Jan 25, 2008 5:27 PM, growse <[EMAIL PROTECTED]> wrote:
Hi,
I've got a pg database, and a batch process that generates some metadata to
be inserted into one of the tables. Every 15 minutes or so, the batch script
re-calculates the metadata (600,000 rows), dumps it to file, and then does
a TRUNCATE table followed by a COPY to import that file into the
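
For illustration, a minimal sketch of what that 15-minute reload might look like; the table name and file path here (metadata_table, /tmp/metadata.csv) are placeholders, not taken from the original post. Wrapping both statements in one transaction means concurrent readers never see the table half-loaded, since the old contents are only replaced at COMMIT:

    -- Hypothetical names: metadata_table and /tmp/metadata.csv are placeholders.
    BEGIN;
    TRUNCATE metadata_table;                            -- discard the previous batch
    COPY metadata_table FROM '/tmp/metadata.csv' CSV;   -- bulk-load the new dump
    COMMIT;
    ANALYZE metadata_table;                             -- refresh planner statistics

Note that a server-side COPY FROM 'file' has to run with superuser rights; a client-side \copy from psql is the usual alternative when the batch script connects as an ordinary user.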
On Mon, 2008-01-07 at 19:54 -0500, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Perhaps it would make sense to try to take the "fast path" in
> > SIDelExpiredDataEntries with only a shared lock rather than exclusive.
>
> I think the real problem here is that sinval catchup proc
Roberts, Jon wrote:
Subject: Re: [PERFORM] 8.3rc1 Out of memory when performing update
A simple update query, over roughly 17 million rows, populating a
newly added column in a table, resulted in an out of memory error
when the process memory usage reached 2GB. Could this be due to a
poor choice of some configuration parameter, or is there a limit on
how many r
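
Not necessarily what was recommended in this thread, but a common workaround sketch for an update of that size, with made-up names (big_table, id, new_col): splitting the work into key-range batches keeps each transaction's per-row bookkeeping (for example, the AFTER-trigger queue that foreign-key checks use) bounded, instead of letting it grow across all 17 million rows at once.

    -- Hypothetical names: big_table, id (integer key), new_col.
    -- Run outside an explicit transaction block, so each statement commits on
    -- its own and no single transaction has to track every modified row.
    UPDATE big_table SET new_col = DEFAULT WHERE id BETWEEN       1 AND 1000000;
    UPDATE big_table SET new_col = DEFAULT WHERE id BETWEEN 1000001 AND 2000000;
    -- ... continue range by range up to the maximum id ...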
On Jan 24, 2008 10:49 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> 8.2.1 has a nasty bug related to statistics collection that causes
> performance issues exactly in the kind of heavy update situation you're
> in. That's actually why I asked for the exact 8.2 version. You should
> plan an upgrade
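
For reference, the exact minor version can be confirmed from any session before planning that upgrade:

    SELECT version();      -- full version string, e.g. "PostgreSQL 8.2.x on ..."
    SHOW server_version;   -- just the version number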
On Fri, 25 Jan 2008, Greg Smith wrote:
If you're seeing <100TPS you should consider if it's because you're limited
by how fast WAL commits can make it to disk. If you really want good insert
performance, there is no substitute for getting a disk controller with a good
battery-backed cache to w
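
One quick way to check whether per-commit WAL flushes, rather than raw disk bandwidth, are the limit is to compare row-at-a-time autocommit inserts against the same rows wrapped in a single transaction. A sketch, with a made-up events table standing in for the real one:

    -- Autocommit: every row is its own transaction, so every row waits for a
    -- WAL flush to disk (roughly one fsync per row).
    INSERT INTO events VALUES (1, now());
    INSERT INTO events VALUES (2, now());

    -- Batched: thousands of rows share one WAL flush at COMMIT. If this is
    -- dramatically faster, commit latency is the bottleneck, and a controller
    -- with a battery-backed write cache (or grouping commits) will help.
    BEGIN;
    INSERT INTO events VALUES (1, now());
    INSERT INTO events VALUES (2, now());
    -- ... many more rows ...
    COMMIT;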
On Fri, 25 Jan 2008, David Brain wrote:
We currently have one large DB (~1.2TB on disk), that essentially consists of
1 table with somewhere in the order of 500 million rows, this database has
daily inserts as well as being used for some semi-data mining type
operations, so there are a fairly
On Fri, 25 Jan 2008, David Brain wrote:
The hardware storing this DB (a software RAID6 array) seems to be very
IO bound for writes and this is restricting our insert performance to
~50TPS.
Hi,
I'd appreciate some assistance in working through what would be the
optimal configuration for the following situation.
We currently have one large DB (~1.2TB on disk), that essentially
consists of 1 table with somewhere in the order of 500 million rows,
this database has daily inserts
I don't have a PostgreSQL build environment.
It is now Friday night for me. I left the alternate query running, and will
find out on Monday what happened.
If I drop the fk constraint, and/or its index, would I still be affected by the
leak you found?
Regards,
Stephen Denne.
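
For anyone following along, dropping and later restoring a foreign key plus its supporting index looks like this; the table, column, constraint, and index names are made up, and whether doing so sidesteps the leak is exactly the open question above.

    -- Hypothetical names: child_table(parent_id) referencing parent_table(id).
    ALTER TABLE child_table DROP CONSTRAINT child_parent_fk;
    DROP INDEX child_parent_idx;

    -- Recreate them once the long-running update has finished.
    CREATE INDEX child_parent_idx ON child_table (parent_id);
    ALTER TABLE child_table
      ADD CONSTRAINT child_parent_fk
      FOREIGN KEY (parent_id) REFERENCES parent_table (id);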
* Chris Browne:
>> A dedicated RAID controller with battery-backed cache of sufficient
>> size and two mirrored disks should not perform that bad, and has the
>> advantage of easy availability.
>
> That won't provide as "souped up" performance as "WAL on SSD," and
> it's from technical people wish