I was waiting to digest what I saw before sending it to the group.
I am running the EAStress workload with wal_sync_method=open_datasync
(O_DSYNC), which should sync the WAL as soon as it is written. With
checkpoint_completion_target=0.9 and checkpoint_timeout=5min it seems to
be doing the right thing, judging from the logfile output shown after
the settings sketch below.
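For reference, a minimal sketch of the settings in play (a sketch, not my
exact postgresql.conf; only the values quoted above are from my setup, and
log_checkpoints is what produces the checkpoint starting/complete lines):

    # postgresql.conf excerpt (8.3-era settings)
    wal_sync_method              = open_datasync  # O_DSYNC: WAL synced at write time
    checkpoint_timeout           = 5min           # time-triggered checkpoints
    checkpoint_completion_target = 0.9            # spread writes over ~90% of the interval
    log_checkpoints              = on             # emits the checkpoint log lines below

The logfile output: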
2007-11-13 09:20:49.070 PST 9180 LOG: checkpoint starting: time
2007-11-13 09:21:13.808 PST 9458 LOG: automatic analyze of table "specdb.public.o_orderline" system usage: CPU 0.03s/0.50u sec elapsed 7.79 sec
2007-11-13 09:21:19.830 PST 9458 LOG: automatic vacuum of table "specdb.public.txn_log_table": index scans: 1
    pages: 11 removed, 105 remain
    tuples: 3147 removed, 40 remain
    system usage: CPU 0.11s/0.09u sec elapsed 6.02 sec
2007-11-13 09:22:12.112 PST 9462 LOG: automatic vacuum of table "specdb.public.txn_log_table": index scans: 1
    pages: 28 removed, 77 remain
    tuples: 1990 removed, 95 remain
    system usage: CPU 0.11s/0.09u sec elapsed 5.98 sec
2007-11-13 09:23:12.121 PST 9466 LOG: automatic vacuum of table "specdb.public.txn_log_table": index scans: 1
    pages: 0 removed, 77 remain
    tuples: 3178 removed, 128 remain
    system usage: CPU 0.11s/0.04u sec elapsed 5.87 sec
2007-11-13 09:24:12.220 PST 9470 LOG: automatic vacuum of table "specdb.public.txn_log_table": index scans: 1
    pages: 0 removed, 77 remain
    tuples: 3394 removed, 57 remain
    system usage: CPU 0.11s/0.04u sec elapsed 5.85 sec
2007-11-13 09:25:12.400 PST 9474 LOG: automatic vacuum of table "specdb.public.txn_log_table": index scans: 1
    pages: 0 removed, 77 remain
    tuples: 3137 removed, 1 remain
    system usage: CPU 0.11s/0.04u sec elapsed 5.93 sec
2007-11-13 09:25:18.723 PST 9180 LOG: checkpoint complete: wrote 33362 buffers (2.2%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=269.642 s, sync=0.003 s, total=269.653 s
2007-11-13 09:25:49.000 PST 9180 LOG: checkpoint starting: time
However, the actual iostat output still shows a non-uniform distribution.
I haven't put exact timestamps on the iostat outputs to correlate them
with the logfile entries; maybe I should do that.
So from the PostgreSQL view things look fine based on these outputs: the
269.6 s write phase is almost exactly 0.9 x the 300 s checkpoint
interval, which is what checkpoint_completion_target=0.9 asks for. I
need to figure out the Solaris view on it now.
Could it be related to the autovacuum activity happening at the same time?
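If autovacuum turns out to be contributing, one experiment (a sketch;
these values are guesses to make the effect visible, not tuned
recommendations) would be to throttle its I/O with the cost-based delay
settings and see whether the non-uniform iostat pattern changes:

    # postgresql.conf: slow autovacuum down so its writes are easier
    # to separate from the background checkpoint writes
    autovacuum_vacuum_cost_delay = 20ms   # sleep between cost-limited work chunks
    autovacuum_vacuum_cost_limit = 200    # cost budget per chunk before sleeping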
Regards,
Jignesh
Tom Lane wrote:
> "Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
>> I will turn on checkpoint_logging to get more of an idea, as Heikki suggested
>
> Did you find out anything?
>
> Did this happen on every checkpoint, or only some of them? The bug
> Itagaki-san pointed out today in IsCheckpointOnSchedule might account
> for some checkpoints being done at full speed, but not all ...
>
> regards, tom lane