On Aug 31, 2006, at 3:08 PM, Tom Lane wrote:
> Vivek Khera <[EMAIL PROTECTED]> writes:
>> Curious... See Message-ID: <[EMAIL PROTECTED]>
>> from the October 2003 archives. (I'd provide a full link to it, but
>> the http://archives.postgresql.org/pgsql-performance/ archives are
>> botched --
> Still? I fou
Vivek Khera <[EMAIL PROTECTED]> writes:
> Curious... See Message-ID: <[EMAIL PROTECTED]>
> from the October 2003 archives. (I'd provide a full link to it, but
> the http://archives.postgresql.org/pgsql-performance/ archives are
> botched --
Still? I found it easily enough with a search for
On 31-Aug-06, at 2:15 PM, Vivek Khera wrote:
> On Aug 30, 2006, at 7:48 PM, Dave Cramer wrote:
>> Actually unless you have a ram disk you should probably leave
>> random_page_cost at 4, shared buffers should be 2x what you have
>> here, maintenance work mem is pretty high
>> effective cache should be
> It will be very important to determine whether, as performance
> degrades, you are i/o bound, cpu bound, or hindered by some other
> contention (db locks, context switching, etc).
> Try turning on statement duration logging for all statements or "slow"
> statements (like those over 100ms or some arbit
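The slow-statement logging Vivek suggests maps onto a single postgresql.conf setting; a minimal sketch, using the 100 ms threshold from his message as the example value:

```
# postgresql.conf -- log any statement that runs longer than 100 ms
log_min_duration_statement = 100   # milliseconds; -1 disables, 0 logs every statement
```

Setting it to 0 gives the "all statements" variant, at the cost of much larger logs.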
On Aug 30, 2006, at 7:48 PM, Dave Cramer wrote:
> Actually unless you have a ram disk you should probably leave
> random_page_cost at 4, shared buffers should be 2x what you have
> here, maintenance work mem is pretty high
> effective cache should be much larger 3/4 of 4G or about 36
I've be
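Dave's numbers can be sketched as a postgresql.conf fragment. The shared_buffers value below is purely illustrative (his "2x what you have here" refers to a setting not visible in the snippet), and 8.0 expresses both it and effective_cache_size in 8 kB pages:

```
# postgresql.conf for 8.0 -- values in 8 kB pages
random_page_cost = 4             # default; leave it unless storage behaves like RAM
shared_buffers = 20000           # ~160 MB; illustrative only
effective_cache_size = 393216    # ~3 GB, i.e. 3/4 of a 4 GB machine
```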
On 31-Aug-06, at 11:45 AM, Cosimo Streppone wrote:
> Good morning,
> I'd like to ask you some advice on pg tuning in a high
> concurrency OLTP-like environment.
> The application I'm talking about is running on Pg 8.0.1.
> Under average user load, iostat and vmstat show that iowait stays
> well under 1%.
On 31-Aug-06, at 1:54 PM, Tom Lane wrote:
> "Indika Maligaspe" <[EMAIL PROTECTED]> writes:
>> The problem is when we are querying a specific set of tables (which
>> all have over 100K rows), the Postgres user process takes over or
>> close to 700MB of memory. This is just to return 3000 odd r
On Aug 30, 2006, at 12:26 PM, Jim C. Nasby wrote:
> You misunderstand how effective_cache_size is used. It's the *only*
> memory factor that plays a role in cost estimator functions. This
> means it should include the memory set aside for caching in
> shared_buffers. Also, hibufspace is only talkin
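Jim's point, that effective_cache_size feeds the cost estimator but allocates nothing, can be seen in a session like this (the table and values are hypothetical, for illustration only):

```
-- Raising effective_cache_size makes index scans look cheaper to the
-- planner; it does not reserve any memory.
SET effective_cache_size = 262144;   -- ~2 GB, in 8 kB pages
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```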
"Indika Maligaspe" <[EMAIL PROTECTED]> writes:
> The problem is when we are querying a specific set of tables (which all
> have over 100K rows), the Postgres user process takes over or close to
> 700MB of memory. This is just to return 3000 odd rows. Even though we
> have a lot of data we st
On 8/31/06, Cosimo Streppone <[EMAIL PROTECTED]> wrote:
> Good morning,
> - postgresql.conf, especially:
>   effective_cache_size (now 5000)
>   bgwriter_delay (500)
>   commit_delay/commit_siblings (default)
while these settings may help, don't expect too much. ditto shared
buffers. your fs
--On August 31, 2006 5:45:18 PM +0200 Cosimo Streppone
<[EMAIL PROTECTED]> wrote:
> Good morning,
> - postgresql.conf, especially:
>   effective_cache_size (now 5000)
>   bgwriter_delay (500)
>   commit_delay/commit_siblings (default)
commit delay and siblings should be turned up, als
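"Turned up" here would mean something like the following sketch (the values are illustrative, not a recommendation; the defaults are 0 and 5 respectively):

```
# postgresql.conf -- group-commit tuning, example values only
commit_delay = 10000    # microseconds to wait so concurrent commits can share one fsync
commit_siblings = 5     # only delay when at least this many other txns are active
```

commit_delay only kicks in when at least commit_siblings other transactions are active at commit time, so it matters mainly under exactly the high-concurrency load Cosimo describes.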
Hey guys,
We are running a Linux 2.4 enterprise edition box
with 6GB of RAM and Postgres 8.0.3. Our applications are running on JBoss 3.2.6.
We have a database of over 22GB in size.
The problem is when we are querying a specific set of tables
(which all have over 100K of row
Cosimo,
On 8/31/06, Cosimo Streppone <[EMAIL PROTECTED]> wrote:
> The problem is that under peak load, when the number of concurrent
> transactions rises, there is a noticeable performance degradation.
Could you give us more information about the performance degradation?
Especially cpu load/iostat/vmstat d
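A quick way to collect the cpu/iostat/vmstat picture being asked for, sampled during a degradation episode (standard Linux tools; iostat comes from the sysstat package):

```shell
# Three 5-second samples taken during peak load:
vmstat 5 3        # "wa" column = % CPU time stalled on I/O; "cs" = context switches
iostat -x 5 3     # extended per-device stats: utilisation and average wait times
```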
Good morning,
I'd like to ask you some advice on pg tuning in a high
concurrency OLTP-like environment.
The application I'm talking about is running on Pg 8.0.1.
Under average user load, iostat and vmstat show that iowait stays
well under 1%. Tables and indexes scan and seek times are also good.
14 matches