> That seems quite peculiar; AFAICS the pgstat code shouldn't be any
> slower than before. At first I thought it might be because we'd
> increased PGSTAT_ACTIVITY_SIZE, but actually that happened before
> 8.0 release, so it shouldn't be a factor in this comparison.
Just FYI the last time I looked
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> One thing that might interest you is that the penalty in 8.1 for
> stats_command_string=true in this type of access pattern is very high: I
> was experimenting to see if the new cpu efficiency gave me enough of a
> budget to start using this. This mor
> Bill of Materials Traversal ( ~ 62k records).
>
>                ISAM*     pg 8.0    pg 8.1 devel   delta 8.0->8.1
> running time   63 sec    90 secs   71 secs        21%
> cpu load       17%       45%       32%            29%
> loadsecs**     10.71     40.5      22.72          44%
> recs/sec
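For context, stats_command_string only controls whether each backend's
current query is reported in the pg_stat_activity view; a minimal sketch of
what it buys (procpid, usename and current_query are the 8.x column names):

    -- With stats_command_string = true, current_query shows what each
    -- backend is executing; with it off, the column only shows a placeholder.
    SELECT procpid, usename, current_query
    FROM pg_stat_activity;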
> Cool --- we've done a fair amount of work on squeezing out internal
> inefficiencies during this devel cycle, but it's always hard to predict
> just how much anyone will notice in the real world.
>
> Care to do some oprofile or gprof profiles to see where it's still bad?
>
Since the release of 8.0
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> Relative to 8.0, I am seeing a dramatic, almost miraculous reduction in
> CPU load times in 8.1devel. This is for ISAM style access patterns over
> the parse/bind interface. (IOW one record at a time, 90% read, 10%
> write).
> Relative to commercial
> Kari Lavikka <[EMAIL PROTECTED]> writes:
> > However, those configuration changes didn't have a significant effect on
> > oprofile results. AtEOXact_CatCache consumes even more cycles.
>
> I believe I've fixed that for 8.1.
Relative to 8.0, I am seeing a dramatic, almost miraculous reduction in
CPU load times in 8.1devel.
Kari Lavikka <[EMAIL PROTECTED]> writes:
> However, those configuration changes didn't have a significant effect on
> oprofile results. AtEOXact_CatCache consumes even more cycles.
I believe I've fixed that for 8.1.
regards, tom lane
On Mon, 8 Aug 2005, Tom Lane wrote:
What that sounds like to me is a machine with inadequate disk I/O bandwidth.
Your earlier comment that checkpoint drives the machine into the ground
fits right into that theory, too. You said there is "almost no IO-wait"
but are you sure you are measuring that
I think I have a solution for you.
You have posted that you presently have these RAID volumes and behaviors:
sda: data (10 spindles, raid10)
sdb: xlog & clog (2 spindles, raid1)
sdc: os and other stuff
Usually iostat (2 second interval) says:
avg-cpu: %user %nice %sys %iowait %idle
Kari Lavikka <[EMAIL PROTECTED]> writes:
> Disk configuration looks something like this:
> sda: data (10 spindles, raid10)
> sdb: xlog & clog (2 spindles, raid1)
> sdc: os and other stuff
That's definitely wrong. Put clog on the data disk. The entire point
of giving xlog its own spindl
On Mon, 8 Aug 2005, Tom Lane wrote:
What that sounds like to me is a machine with inadequate disk I/O bandwidth.
Your earlier comment that checkpoint drives the machine into the ground
fits right into that theory, too. You said there is "almost no IO-wait"
but are you sure you are measuring that
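If checkpoints really are driving the machine into the ground, the current
checkpoint spacing is easy to inspect from any session (8.0-era GUC names;
changing them means editing postgresql.conf and reloading or restarting):

    -- How many WAL segments and how much time may pass between checkpoints,
    -- and how often the background writer wakes to trickle out dirty buffers.
    SHOW checkpoint_segments;
    SHOW checkpoint_timeout;
    SHOW bgwriter_delay;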
Kari Lavikka <[EMAIL PROTECTED]> writes:
> We are having performance problems with some smaller tables and very
> simple queries. For example:
> SELECT u.uid, u.nick, extract(epoch from uc.stamp) AS stamp FROM
> user_channel uc INNER JOIN users u USING (uid) WHERE channel_id = 281321
> AND u.st
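The query above is cut off, but the visible part is enough to sanity-check
the plan; a hedged sketch that reproduces the WHERE clause only as far as it
appears above:

    -- Check that the channel_id restriction and the join on uid use indexes.
    EXPLAIN ANALYZE
    SELECT u.uid, u.nick, extract(epoch FROM uc.stamp) AS stamp
    FROM user_channel uc
    INNER JOIN users u USING (uid)
    WHERE channel_id = 281321;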
Actually I modified postgresql.conf a bit and there is no commit delay any
more. That didn't make a noticeable difference, though.
Workload is generated by a website with about 1000 dynamic page views a
second. Finland's biggest site among youths btw.
Anyway, there are about 70 tables and her
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
>> Kari Lavikka <[EMAIL PROTECTED]> writes:
>>> samples   %        symbol name
>>> 13513390  16.0074  AtEOXact_CatCache
>>
>> That seems quite odd --- I'm not used to seeing that function at the top
>> of a profile. What is the workload being profiled, exactly?
> Kari Lavikka <[EMAIL PROTECTED]> writes:
> > samples   %        symbol name
> > 13513390  16.0074  AtEOXact_CatCache
>
> That seems quite odd --- I'm not used to seeing that function at the top
> of a profile. What is the workload being profiled, exactly?
He is running a commit_delay of 8.
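commit_delay is an ordinary GUC, so the experiment of turning it off needs
no config edit; a small sketch (0 is the default and means no artificial
delay before flushing the commit record):

    SHOW commit_delay;      -- reportedly 8 (microseconds) in this setup
    SET commit_delay = 0;   -- back to the default for this session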
Kari Lavikka <[EMAIL PROTECTED]> writes:
> samples   %        symbol name
> 13513390  16.0074  AtEOXact_CatCache
That seems quite odd --- I'm not used to seeing that function at the top
of a profile. What is the workload being profiled, exactly?
regards, tom lane
Hi!
Oprofile looks quite interesting. I'm not very familiar with postgresql
internals, but here's some report output:
CPU: AMD64 processors, speed 2190.23 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit
mask of 0x00 (No unit mask) count 10
samp
On 7/28/05 2:21 AM, "Kari Lavikka" <[EMAIL PROTECTED]> wrote:
There's a new profiling tool called oprofile:
http://oprofile.sourceforge.net/download/
that can be run without instrumenting the binaries beforehand. To actually
find out what the code is doing during these stalls, oprofile can sho
Kari Lavikka wrote:
> shared_buffers = 15000
You can play around with this one, but in my experience it doesn't make
much difference anymore (it used to).
> work_mem = 1536 # min 64, size in KB
This seems low. Are you sure you are not getting sorts swapped to disk?
> fsy
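work_mem is also settable per session in 8.x (an integer number of
kilobytes in these releases), so a larger per-sort allowance can be tried on
the problem queries before touching postgresql.conf; values are illustrative:

    SET work_mem = 8192;  -- 8 MB per sort/hash operation, this session only
    SHOW work_mem;        -- confirm the new value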
> effective_cache_size = 1000000 # typically 8KB each
I have this setting on postgresql 7.4.8 on FreeBSD with 4 GB RAM:
effective_cache_size = 27462
So even though your machine runs Debian and you have four times as much
RAM as mine, your effective_cache_size is 36 times larger. You could
try low
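As a sanity check on the units: effective_cache_size is a planner hint
counted in 8 KB pages, so 27462 pages is only about 215 MB, while the value
the "36 times larger" comparison implies (around 1000000 pages) describes
roughly 7.6 GB of OS cache. Being a plain planner setting, it can also be
tried per session:

    -- Planner hint only; no memory is actually allocated.
    -- 400000 pages * 8 KB ~= 3 GB, an illustrative figure, not a recommendation.
    SET effective_cache_size = 400000;
    SHOW effective_cache_size;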
Hi,
On Thu, 28 Jul 2005, Kari Lavikka wrote:
> ->8 Relevant rows from postgresql.conf 8<-
>
> shared_buffers = 15000 # min 16, at least max_connections*2, 8KB each
> work_mem = 1536 # min 64, size in KB
As an aside, I'd increase work_mem -
Hello,
We recently upgraded our dual Xeon Dell to a brand new Sun v40z with four
Opterons, 16 GB of memory and MegaRAID with enough disks. The OS is Debian
Sarge amd64, PostgreSQL is 8.0.3. The database size is something like 80 GB
and our website performs about 600 selects and several updates/inserts a