Greg Stark wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I am not really aiming at removing sync() altogether. We know already
> > that open, fsync, close does not guarantee you flush dirty OS buffers for
> > which another process might so far only have done open, write. And you
So for what
Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I am not really aiming at removing sync() altogether. We know already
> > that open, fsync, close does not guarantee you flush dirty OS buffers for
> > which another process might so far only have done open, write. And you
So for what it's worth, tho
Jan Wieck wrote:
> Bruce Momjian wrote:
> > Jan Wieck wrote:
> >> Doing frequent fdatasync/fsync during a constant ongoing checkpoint
> >> will significantly lower the physical write storm happening at the
> >> sync(), which is causing huge problems right now.
> >
> > I do
Jan Wieck <[EMAIL PROTECTED]> writes:
> I am not really aiming at removing sync() altogether.
> ...
> Doing frequent fdatasync/fsync during a constant ongoing checkpoint
> will significantly lower the physical write storm happening at the
> sync(), which is causing huge problems
Bruce Momjian wrote:
> Jan Wieck wrote:
> > Doing frequent fdatasync/fsync during a constant ongoing checkpoint
> > will significantly lower the physical write storm happening at the
> > sync(), which is causing huge problems right now.
I don't see that, frankly, because sync() is syncing ever
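To make the tradeoff concrete, here is a minimal C sketch (not PostgreSQL source; the dirty-file list is a hypothetical stand-in) of the two flush strategies being compared: one global sync() that makes the kernel schedule everything at once, versus fsync()ing each modified file so the physical writes are spread over the checkpoint.

    /*
     * Minimal sketch, not PostgreSQL source.  dirty_files/n_dirty_files
     * stand in for the set of relation files written since the last
     * checkpoint.
     */
    #include <fcntl.h>
    #include <unistd.h>

    extern const char *dirty_files[];
    extern int n_dirty_files;

    /* Strategy 1: one global sync().  The kernel schedules every dirty
     * buffer in the system at once -- the "write storm". */
    static void
    checkpoint_via_sync(void)
    {
        sync();
    }

    /* Strategy 2: fsync() file by file, spreading the flushing over the
     * checkpoint.  Caveat from the thread: POSIX does not promise that an
     * open/fsync/close cycle flushes pages another process has so far
     * only done open, write on. */
    static void
    checkpoint_via_fsync(void)
    {
        int i;

        for (i = 0; i < n_dirty_files; i++)
        {
            int fd = open(dirty_files[i], O_WRONLY);

            if (fd >= 0)
            {
                fsync(fd);
                close(fd);
            }
        }
    }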
Jan Wieck wrote:
> Bruce Momjian wrote:
> > Jan Wieck wrote:
> >> If the system is write-bound, the checkpointer will find so many dirty
> >> blocks that it has no time to nap and will burst them out as fast as
> >> possible anyway. Well, at least that's the theory.
> >>
> >> PostgreSQL wit
Bruce Momjian wrote:
> Jan Wieck wrote:
> > If the system is write-bound, the checkpointer will find so many dirty
> > blocks that it has no time to nap and will burst them out as fast as
> > possible anyway. Well, at least that's the theory.
> >
> > PostgreSQL with the non-overwriting storage concept can never ha
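Jan's nap-versus-burst behaviour, sketched with invented helper names (count_dirty_buffers and write_one_dirty_buffer are not real PostgreSQL functions): the writer paces itself while the backlog is small and streams writes back to back once it is write-bound.

    #include <unistd.h>

    extern int count_dirty_buffers(void);      /* hypothetical */
    extern void write_one_dirty_buffer(void);  /* hypothetical */

    static void
    bg_writer_loop(int nap_usec, int burst_threshold)
    {
        for (;;)
        {
            write_one_dirty_buffer();

            /* Nap only while the backlog is small; once write-bound,
             * skip the nap and burst the dirty blocks out. */
            if (count_dirty_buffers() < burst_threshold)
                usleep(nap_usec);
        }
    }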
Bruce Momjian wrote:
> Greg Stark wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > Have you considered having the background writer check the pages it is
> > > about to write to see if they can be added to the FSM, thereby reducing
> > > the need for vacuum? Seems we would need to add a statistics param
Greg Stark wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Have you considered having the background writer check the pages it is
> > about to write to see if they can be added to the FSM, thereby reducing
> > the need for vacuum? Seems we would need to add a statistics parameter
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Have you considered having the background writer check the pages it is
> about to write to see if they can be added to the FSM, thereby reducing
> the need for vacuum? Seems we would need to add a statistics parameter
> so pg_autovacuum would know how
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Have you considered having the background writer check the pages it is
> > about to write to see if they can be added to the FSM, thereby reducing
> > the need for vacuum?
>
> The 7.4 rewrite of FSM depends on the assumption that all
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Have you considered having the background writer check the pages it is
> about to write to see if they can be added to the FSM, thereby reducing
> the need for vacuum?
The 7.4 rewrite of FSM depends on the assumption that all the free space
in a given re
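Bruce's suggestion, sketched with hypothetical names (the real 7.4 FSM API differs): piggy-back a free-space report on a write the background writer must do anyway. Per Tom's objection above, the 7.4 FSM rewrite appears to assume a relation's free space is submitted wholesale rather than page by page, so piecemeal reports like this would not drop in cleanly.

    #include <stddef.h>

    extern size_t page_free_space(const void *page);                 /* hypothetical */
    extern void fsm_record_free_space(unsigned blkno, size_t avail); /* hypothetical */
    extern void write_page(unsigned blkno, const void *page);        /* hypothetical */

    static void
    bg_writer_flush_page(unsigned blkno, const void *page)
    {
        size_t avail = page_free_space(page);

        /* Piggy-back a free-space report on a write we must do anyway,
         * so a later vacuum has less to discover. */
        if (avail > 0)
            fsm_record_free_space(blkno, avail);

        write_page(blkno, page);
    }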
Jan Wieck wrote:
> If the system is write-bound, the checkpointer will find so many dirty
> blocks that it has no time to nap and will burst them out as fast as
> possible anyway. Well, at least that's the theory.
>
> PostgreSQL with the non-overwriting storage concept can never have
> hot-wr
Jan Wieck wrote:
> It also contains the starting work of the discussed background buffer
> writer. Thus far, the BufferSync() done at a checkpoint only writes out
> all dirty blocks in their LRU order and over a configurable time
> (lazy_checkpoint_time in seconds). But that means at least, whil
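What "writes out all dirty blocks in their LRU order over a configurable time" could look like, as a sketch with invented helpers: given n dirty blocks and lazy_checkpoint_time seconds, nap roughly time/n between writes instead of bursting them all at once.

    #include <unistd.h>

    extern int lazy_checkpoint_time;                  /* seconds, per the patch */
    extern int collect_dirty_blocks_lru(int **list);  /* hypothetical */
    extern void write_block(int blkno);               /* hypothetical */

    static void
    lazy_buffer_sync(void)
    {
        int *blocks;
        int n = collect_dirty_blocks_lru(&blocks);
        long nap_usec;
        int i;

        if (n <= 0)
            return;

        /* Spread n writes evenly across the configured window. */
        nap_usec = (long) lazy_checkpoint_time * 1000000L / n;

        for (i = 0; i < n; i++)
        {
            write_block(blocks[i]);     /* LRU order, oldest first */
            usleep(nap_usec);
        }
    }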
Zeugswetter Andreas SB SD wrote:
> Why not use the checkpointer itself in between checkpoints?
> Use a min and a max dirty setting like Informix: start writing
> when more than max are dirty, stop when at min. This avoids writing
> single pages (which is slow, since it cannot be grouped together
> b
> > Why not use the checkpointer itself in between checkpoints?
> > Use a min and a max dirty setting like Informix: start writing
> > when more than max are dirty, stop when at min. This avoids writing
> > single pages (which is slow, since it cannot be grouped together
> > by the OS).
>
> Curren
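The Informix-style watermark scheme Andreas describes, sketched with hypothetical names: do nothing until the dirty count passes the high watermark, then write in one burst down to the low watermark so neighbouring pages reach the OS close together and can be grouped.

    extern int count_dirty_buffers(void);      /* hypothetical */
    extern void write_one_dirty_buffer(void);  /* hypothetical */

    static void
    watermark_writer(int min_dirty, int max_dirty)
    {
        /* Below the high watermark: accumulate, don't write. */
        if (count_dirty_buffers() <= max_dirty)
            return;

        /* Above it: burst down to the low watermark, handing the OS a
         * batch of writes it can sort and merge. */
        while (count_dirty_buffers() > min_dirty)
            write_one_dirty_buffer();
    }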
Zeugswetter Andreas SB SD wrote:
> My plan is to create another background process very similar to
> the checkpointer and to let that run forever, basically looping over that
> BufferSync() with a bool telling it that it's the bg_writer.
Why not use the checkpointer itself in between checkpoints?
Use a mi
> My plan is to create another background process very similar to
> the checkpointer and to let that run forever, basically looping over that
> BufferSync() with a bool telling it that it's the bg_writer.
Why not use the checkpointer itself in between checkpoints?
Use a min and a max dirty setting l
Jan Wieck wrote:
> Jan Wieck wrote:
> > Jan Wieck wrote:
> > > I will follow up shortly with an approach that integrates Tom's delay
> > > mechanism plus my first READ_BY_VACUUM hack into one combined experiment.
> >
> > Okay, the attached patch contains the 3 already discussed and one
> > additional change.
>
> Ooopsy
Ooops
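Tom's delay mechanism is only named here, not shown, but a throttle of this general shape (all names and parameters below are illustrative, not the actual patch's knobs) naps for a few milliseconds after every so many pages, turning vacuum's I/O into a trickle:

    #include <unistd.h>

    static int pages_since_nap = 0;

    /* Call once per page vacuum touches.  Both parameters are
     * illustrative. */
    static void
    vacuum_delay_point(int nap_pages, int nap_msec)
    {
        if (++pages_since_nap >= nap_pages)
        {
            usleep((useconds_t) nap_msec * 1000);
            pages_since_nap = 0;
        }
    }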
Jan Wieck wrote:
> Jan Wieck wrote:
> > I will follow up shortly with an approach that integrates Tom's delay
> > mechanism plus my first READ_BY_VACUUM hack into one combined experiment.
>
> Okay, the attached patch contains the 3 already discussed and one
> additional change.
>
> Ooopsy
the B1/B2 queue length
Jan Wieck wrote:
> I will follow up shortly with an approach that integrates Tom's delay
> mechanism plus my first READ_BY_VACUUM hack into one combined experiment.
Okay, the attached patch contains the 3 already discussed and one additional
change. I also made a few changes:
1) ARC policy. Has n
Attached is a first trial implementation of the Adaptive Replacement
Cache (ARC). The patch is against CVS HEAD of 7.4.
The algorithm is based on what's described in these papers:
http://www.almaden.ibm.com/StorageSystems/autonomic_storage/ARC/arcfast.pdf
http://www.almaden.ibm.com/StorageSystem
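For orientation, the bookkeeping described in the cited papers (this sketch follows the paper, not Jan's patch): resident pages live on two LRU lists, T1 (seen once) and T2 (seen at least twice), while B1 and B2 are ghost lists remembering only the identities of pages recently evicted from T1 and T2; the adaptive target p for T1's size grows on a B1 hit and shrinks on a B2 hit.

    typedef struct ArcState
    {
        int c;      /* cache size in pages */
        int p;      /* adaptive target length for T1 */
        int b1, b2; /* current lengths of ghost lists B1 and B2 */
    } ArcState;

    /* Called on a hit in ghost list B1 (so b1 >= 1): a page evicted from
     * T1 was wanted again, so grow T1's target toward recency. */
    static void
    arc_hit_in_b1(ArcState *s)
    {
        int delta = (s->b1 >= s->b2) ? 1 : s->b2 / s->b1;

        s->p = (s->p + delta < s->c) ? s->p + delta : s->c;
    }

    /* Called on a hit in ghost list B2 (so b2 >= 1): shrink T1's target
     * toward frequency. */
    static void
    arc_hit_in_b2(ArcState *s)
    {
        int delta = (s->b2 >= s->b1) ? 1 : s->b1 / s->b2;

        s->p = (s->p - delta > 0) ? s->p - delta : 0;
    }

This adaptation is what makes ARC scan-resistant: a one-pass sequential scan floods T1 and B1 without ever promoting pages into T2, so the frequently reused pages held in T2 survive.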