Hi all,
2006/10/5, Tom Lane <[EMAIL PROTECTED]>:
"Peter Bauer" <[EMAIL PROTECTED]> writes:
> tps = 50.703609 (including connections establishing)
> tps = 50.709265 (excluding connections establishing)
That's about what you ought to expect for a single transaction stream
running on honest disk hardware (ie, disks that don't lie about write completion).
"Peter Bauer" <[EMAIL PROTECTED]> writes:
> tps = 50.703609 (including connections establishing)
> tps = 50.709265 (excluding connections establishing)
That's about what you ought to expect for a single transaction stream
running on honest disk hardware (ie, disks that don't lie about write
completion).
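(Rough arithmetic behind that expectation, assuming a single 7200 RPM drive with no write cache, which is not stated in the thread: the platter turns 120 times per second, and a synchronous commit cannot return until its WAL record has physically reached the disk, so one client committing serially is capped at roughly 120 transactions per second, and lands near 50 tps once a commit costs more than one revolution. A battery-backed controller cache, fsync=off, or many concurrent clients is what pushes pgbench well beyond that.)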
It seems that the machine doesn't really care about the pgbench run. I did a
pgbench -c 10 -t 1 -s 10 pgbench
and here is the output of vmstat 1 100, which was started a few
seconds before pgbench:
vmstat 1 100
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
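A simple way to line the two up, sketched here with the standard procps/sysstat tools and example pgbench parameters (the log file names are arbitrary):

vmstat 1 > vmstat.log &        # one sample per second
VMSTAT_PID=$!
iostat -x 1 > iostat.log &     # extended per-device I/O statistics
IOSTAT_PID=$!
pgbench -c 10 -t 1000 -s 10 pgbench
kill $VMSTAT_PID $IOSTAT_PID   # stop the samplers once the run is over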
If you are on Linux, I recommend iostat(1) and vmstat(8) over top.
Iostat will report I/O transfer statistics; it's how I discovered
that work_mem buffers were spilling over to disk files. For Vmstat,
look in particular at the load (i.e., how many processes are competing
for the scheduler).
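A concrete way to see that spill behaviour from the shell (a sketch: $PGDATA is assumed to point at the data directory, and the pgsql_tmp location below is where sort/hash spill files normally end up):

iostat -x 1                      # watch per-device write traffic during the workload
ls -lh $PGDATA/base/*/pgsql_tmp  # spill files created when work_mem is exceeded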
I forgot to mention that top does not show a noticeable increase of
CPU or system load during the pgbench runs (postmaster has 4-8% CPU).
Shouldn't the machine be busy during such a test?
thx,
Peter
2006/10/5, Peter Bauer <[EMAIL PROTECTED]>:
I finished the little benchmarking on our server and
I finished the little benchmarking on our server and the results are
quite curious.
With the numbers from http://sitening.com/tools/postgresql-benchmark/
in mind I did
./pgbench -i pgbench
and then performed some pgbench tests, for example
./pgbench -c 1 -t 1000 -s 1 pgbench
starting vacuum...end.
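For anyone reproducing this: -i initialises (and wipes) the pgbench tables, -c is the number of concurrent clients, -t the number of transactions per client, and -s the scale factor, which takes effect when initialising. A somewhat more demanding run along the same lines might look like this (parameters are just an example):

./pgbench -i -s 10 pgbench
./pgbench -c 10 -t 1000 pgbench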
It appears to me that work_mem is a more significant configuration
option than previously assumed by many PostgreSQL users, myself
included. As with many database optimizations, it's an obscure
problem to diagnose because you generally only observe it through I/O
activity.
One possibility
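For reference, the value in effect can be checked from the shell like this (psql and the pgbench database are simply the tools already used in this thread; any database works):

psql pgbench -c "SHOW work_mem"

On releases newer than the 8.1 discussed here, EXPLAIN ANALYZE also reports whether a sort stayed in memory or spilled to disk, which makes the effect visible per query.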
Hi all,
inspired by the last posting "Weird disk write load caused by
PostgreSQL?" i increased the work_mem from 1 to 7MB and did some
loadtest with vacuum every 10 minutes. The system load (harddisk) went
down and everything was very stable at 80% idle for nearly 24 hours!
I am currently perform
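The change itself is a one-line edit plus a reload; a sketch, assuming $PGDATA points at the data directory (on 8.1 the value is an integer in kilobytes, later releases also accept suffixes such as '7MB'):

# postgresql.conf
work_mem = 7168            # roughly 7 MB per sort/hash operation, per backend

# pick up the new value without a restart
pg_ctl reload -D $PGDATA

Worth keeping in mind: the limit applies per sort or hash operation in every backend, so with many concurrent connections the total memory footprint can be several times the configured value.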
Ray Stell <[EMAIL PROTECTED]> writes:
> How would one determine the lock situation definitively? Is there
> an internal mechanism that can be queried?
pg_locks view.
regards, tom lane
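A minimal way to look at that from the shell (the database name is a placeholder; the columns used here exist on 8.1 as well as current releases):

psql yourdb -c "SELECT locktype, relation::regclass, pid, mode, granted FROM pg_locks WHERE NOT granted"

Rows with granted = false are the waiters; joining pg_locks against pg_stat_activity on the backend PID shows what the blocking sessions are running.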
On Sun, Oct 01, 2006 at 12:55:51PM +0200, MaXX wrote:
>
> Pure speculation: are you sure you aren't vacuuming too aggressively?
> The DELETE waiting and SELECT waiting sound to me like they are waiting
> for a lock that another vacuum is holding.
How would one determine the lock situation definitively?
2006/10/2, Tom Lane <[EMAIL PROTECTED]>:
"Peter Bauer" <[EMAIL PROTECTED]> writes:
> Attached you can find the postgresql logfiles and a logfile which
> contains all SQL statements executed in the relevant time together
> with the exceptions thrown. I also attached a file with all used
> PL/pgSQL functions.
"Peter Bauer" <[EMAIL PROTECTED]> writes:
> Attached you can find the postgresql logfiles and a logfile which
> contains all SQL statements executed in the relevant time together
> with the exceptions thrown. I also attached a file with all used
> PL/pgSQL functions. Since we were not able to find
2006/10/1, Matthew T. O'Connor:
MaXX wrote:
>> There are 10-15 postmaster processes running which use all the CPU
>> power.
>> A restart of tomcat and then postgresql results in the same situation.
>> Some postgres processes are in DELETE waiting or SELECT waiting.
>> VACUUM runs through in just about 1-2 seconds
2006/10/1, MaXX <[EMAIL PROTECTED]>:
Peter Bauer wrote:
> 2006/10/1, MaXX <[EMAIL PROTECTED]>:
>> Peter Bauer wrote:
>> [...]
>> > There are 10-15 postmaster processes running which use all the CPU
>> power.
>> > A restart of tomcat and then postgresql results in the same situation.
>> > Some postgres processes are in DELETE waiting or SELECT waiting.
"Peter Bauer" <[EMAIL PROTECTED]> writes:
> yes, there are about 10 postmaster processes in top which eat up all
> of the CPU cycles at equal parts.
What are these processes doing exactly --- can you show us the queries
they're executing? It might be worth attaching to a few of them with
gdb to get stack traces.
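Two quick ways to do that, sketched with placeholder names and PIDs: pg_stat_activity shows each backend's current statement (the column names below are the 8.1 ones, and current_query is only populated when stats_command_string is enabled; newer releases call them pid and query), and gdb can give a backtrace of a spinning backend:

psql yourdb -c "SELECT procpid, current_query FROM pg_stat_activity"
gdb -p 12345        # attach to one of the busy postmaster processes
(gdb) bt            # print its stack trace, then detach with 'quit'

Attaching with gdb pauses that backend for as long as the debugger holds it, so keep the session short on a production system.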
MaXX wrote:
There are 10-15 postmaster processes running which use all the CPU
power.
A restart of tomcat and then postgresql results in the same situation.
Some postgres processes are in DELETE waiting or SELECT waiting.
VACUUM runs through in just about 1-2 seconds and is run via cron
every minute.
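For a cron-driven vacuum like that, the usual shape is a vacuumdb call in the crontab; a sketch with the schedule, system user, and database name as placeholders:

# /etc/crontab -- vacuum and analyze the application database every 10 minutes
*/10 * * * * postgres vacuumdb --analyze yourdb

8.1 also ships an integrated autovacuum daemon (disabled by default) that can replace a fixed schedule once its thresholds are tuned.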
2006/10/1, Chris Mair <[EMAIL PROTECTED]>:
Hi,
a few random questions...
> > I have a Tomcat application with PostgreSQL 8.1.4 running which
> > performs about 1 inserts/deletes every 2-4 minutes and updates on
> > a database and after some hours of loadtesting the top output says
> > 0.0% idle
Peter Bauer wrote:
[...]
There are 10-15 postmaster processes running which use all the CPU power.
A restart of tomcat and then postgresql results in the same situation.
Some postgres processes are in DELETE waiting or SELECT waiting.
VACUUM runs through in just about 1-2 seconds and is run via cron
Hi,
a few random questions...
> > I have a Tomcat application with PostgreSQL 8.1.4 running which
> > performs about 1 inserts/deletes every 2-4 minutes and updates on
> > a database and after some hours of loadtesting the top output says
> > 0.0% idle, 6-7% system load, load average 32, 31, 2
2006/10/1, Peter Bauer <[EMAIL PROTECTED]>:
Hi all,
I have a Tomcat application with PostgreSQL 8.1.4 running which
performs about 1 inserts/deletes every 2-4 minutes and updates on
a database and after some hours of loadtesting the top output says
0.0% idle, 6-7% system load, load average 3