"Ralph Mason" <[EMAIL PROTECTED]> writes:
> Ralph Mason wrote:
>> We have a database running on a 4 processor machine. As time goes by
>> the IO gets worse and worse, peaking at about 200% as the machine
>> loads up.
>>
>> The weird thing is that if we restart postgres it’s fine for hours but
>> over time it goes bad again.
>
>number of
>concurrent connections are opened.
>
>-- Mark Lewis
No - no swap on this machine. The number of connections is stable.
Ralph
On Fri, 2007-05-18 at 10:45 +1200, Ralph Mason wrote:
> We have a database running on a 4 processor machine. As time goes by
> the IO gets worse and worse, peaking at about 200% as the machine loads up.
Ralph Mason wrote:
> We have a database running on a 4 processor machine. As time goes by
> the IO gets worse and worse, peaking at about 200% as the machine loads up.
>
> The weird thing is that if we restart postgres it’s fine for hours but
> over time it goes bad again.
>
Hi Josh - thanks for the thoughts.
>
> This is Postgres 8.1.4 64bit.
>1. Upgrade to 8.1.9. It fixes a pretty important autovac bug.
We don't use pg_autovac - we have our own process that runs very often,
vacuuming tables that are dirty. It works well and vacuums when ac
We have a database running on a 4 processor machine. As time goes by the IO
gets worse and worse, peaking at about 200% as the machine loads up.
The weird thing is that if we restart postgres it’s fine for hours but over
time it goes bad again.
(CPU usage graph here: http://www.fl
>To me, that means that his machine is allowing the new FD to be created,
>but then can't really support that many so it gives an error.
files-max is 297834
ulimit is 100
(doesn't make sense but there you go)
What I don’t really understand is that with max_files_per_process at 800 we don't
get th
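The mismatch discussed above between the kernel's `files-max`, the shell's `ulimit`, and PostgreSQL's `max_files_per_process` can be sanity-checked from inside a process. A minimal sketch using Python's standard `resource` module (not from the thread; the 800 figure is just the `max_files_per_process` value quoted above):

```python
import resource

# Soft and hard limits on open file descriptors for this process.
# `ulimit -n` in the shell reports the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# max_files_per_process should stay comfortably below the soft limit,
# or a backend can be allowed to open an FD and then hit
# "Too many open files" when the real ceiling is reached.
headroom = soft - 800  # 800 = max_files_per_process from the thread
print(f"headroom for 800 FDs per backend: {headroom}")
```

If `headroom` comes out negative, the per-process limit is the one that needs raising, regardless of how large the system-wide `files-max` is.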
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of CAJ CAJ
Sent: 10 May 2007 12:26
To: Ralph Mason
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Woes
Hi,
I have several databases. They are each about 35gb in size and have about
10.5K relations (count from pg_stat_all_tables) in them. Pg_class is about
26k rows and the data directory contains about 70k files. These are busy
machines, they run about 50 xactions per second, (approx. insert /
Hi,
I have 2 tables, both with an index on ID (both ID columns are oids).
I want to find only rows in one and not the other.
SELECT ID FROM TableA WHERE ID NOT IN (SELECT ID FROM TableB)
This always generates sequential scans.
Table A has about 250,000 rows. Table B has about 250,00
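A `NOT IN (subquery)` of that shape often forces a sequential scan (and silently returns no rows if the inner column can be NULL). The usual fix is to rewrite it as `NOT EXISTS` or a left anti-join, which planners handle with the index. A runnable sketch using sqlite3 as a stand-in for PostgreSQL, with the table names from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TableA (ID INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE TableB (ID INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO TableA VALUES (?)", [(i,) for i in range(1, 7)])
cur.executemany("INSERT INTO TableB VALUES (?)", [(i,) for i in range(1, 4)])

# Original form from the question:
not_in = cur.execute(
    "SELECT ID FROM TableA WHERE ID NOT IN (SELECT ID FROM TableB) "
    "ORDER BY ID"
).fetchall()

# Equivalent anti-join forms that typically use the index:
not_exists = cur.execute(
    "SELECT a.ID FROM TableA a "
    "WHERE NOT EXISTS (SELECT 1 FROM TableB b WHERE b.ID = a.ID) "
    "ORDER BY a.ID"
).fetchall()
left_join = cur.execute(
    "SELECT a.ID FROM TableA a LEFT JOIN TableB b ON b.ID = a.ID "
    "WHERE b.ID IS NULL ORDER BY a.ID"
).fetchall()

# All three return the IDs present in TableA but not TableB.
print(not_in, not_exists, left_join)
```

The three queries are equivalent here only because `TableB.ID` is declared NOT NULL (it is a primary key); with nullable IDs, `NOT IN` behaves differently.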
Hi,
I have a simple query that is running inside a plpgsql function.
SELECT INTO _point_id id FROM ot2.point WHERE unit_id = _unit_id AND
time > _last_status ORDER BY time LIMIT 1;
Both _unit_id and _last_status are variables in the function. The table has
an index on unit_id,point
When this r
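For a query shaped like `WHERE unit_id = ? AND time > ? ORDER BY time LIMIT 1`, the standard fix is a composite index on `(unit_id, time)`, so the scan can stop at the first qualifying row. A sketch using sqlite3 as a runnable stand-in (table and column names taken from the question; the same composite-index advice applies in PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE point (id INTEGER, unit_id INTEGER, time INTEGER)")
# Composite index covering both the equality filter and the ORDER BY column:
cur.execute("CREATE INDEX point_unit_time ON point (unit_id, time)")

cur.executemany("INSERT INTO point VALUES (?, ?, ?)",
                [(i, i % 10, i) for i in range(1000)])

query = ("SELECT id FROM point WHERE unit_id = ? AND time > ? "
         "ORDER BY time LIMIT 1")

# The plan should report an index search, not a full table scan.
plan = cur.execute("EXPLAIN QUERY PLAN " + query, (3, 500)).fetchall()
print(plan)

first = cur.execute(query, (3, 500)).fetchone()
print(first)  # earliest matching row for unit 3 after time 500
```

An index on `unit_id` alone still has to fetch and sort every matching row before the `LIMIT 1` applies; the `(unit_id, time)` ordering is what makes the query stop early.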
Tom Lane wrote:
Ralph Mason <[EMAIL PROTECTED]> writes:
Is there any way I can say make ret a binary cursor?
It's possible to determine that at the protocol level, if you're using
V3 protocol; but whether this is exposed to an application depends on
what client-side
Hi,
I am trying to get better performance reading data from postgres, so I
would like to return the data as binary rather than text as parsing it
is taking a considerable amount of processor.
However I can't figure out how to do that! I have functions like:
function my_func(ret refcursor) re
Alvaro Herrera wrote:
Christian Paul B. Cosinas wrote:
Does creating temporary tables in a function and NOT dropping them affect
the performance of the database?
The system will drop them automatically, so it shouldn't affect anything.
What _could_ be affecting you if you execute that function a
Hi,
I have a transaction that has multiple separate commands in it (nothing
unusual there).
However sometimes one of the sql statements will fail and so the whole
transaction fails.
In some cases I could fix the failing statement if only I knew which one
it was. Can anyone think of any way
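One standard answer to this is to wrap each statement in a savepoint: when a statement fails, roll back to its savepoint, record which one failed, and let the rest of the transaction proceed. PostgreSQL supports `SAVEPOINT` / `ROLLBACK TO SAVEPOINT` since 8.0; the sketch below uses sqlite3 (which has the same syntax) so it is runnable as-is:

```python
import sqlite3

# isolation_level=None: autocommit mode, we manage the transaction by hand.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

statements = [
    "INSERT INTO t VALUES (1)",
    "INSERT INTO t VALUES (1)",   # violates the primary key -> fails
    "INSERT INTO t VALUES (2)",
]

conn.execute("BEGIN")
failed = []
for i, sql in enumerate(statements):
    conn.execute(f"SAVEPOINT sp{i}")
    try:
        conn.execute(sql)
    except sqlite3.Error as exc:
        # Undo only this statement; the surrounding transaction survives.
        conn.execute(f"ROLLBACK TO SAVEPOINT sp{i}")
        failed.append((i, str(exc)))
    conn.execute(f"RELEASE SAVEPOINT sp{i}")
conn.execute("COMMIT")

print(failed)  # index and error of each failing statement
print(conn.execute("SELECT id FROM t ORDER BY id").fetchall())
```

The trade-off is that each savepoint has some cost, so this is worth doing only when statements are genuinely expected to fail occasionally.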
I want to use statement-level triggers for performance, but it seems
there are no 'updated', 'inserted', or 'deleted' tables inside the
trigger, and nothing I can find in the documentation offers similar
functionality.
Is there any way that I can access only those rows that were changed?
Th
[EMAIL PROTECTED] wrote:
But, yeah. It's probably not that easy, especially with really big
databases. Where is this free list stored? How efficient is it to keep
track of the lowest running transaction at all times? How does one
synchronize access to this free list, to ensure that processes don
This is a wild and crazy thought which I am sure is invalid for some
good reason.
But why can't postgres just vacuum itself as it goes along?
When a row is orphaned it's added to a list of possibly available rows.
When a new row is needed the list of possible rows is examined and the
first o
17 matches