Florian G. Pflug wrote:
> Greg Stark wrote:
> > Florian G. Pflug wrote:
> >> The same holds true for index scans, though. Maybe we can find a
> >> solution that benefits both cases - something along the lines of a
> >> bgreader process
> > I posted a patch to do readahead for bitmap index scans using posix_fadvise…
I have added the following TODO:
* Speed WAL recovery by allowing more than one page to be prefetched
This involves having a separate process that can be told which pages
the recovery process will need in the near future.
http://archives.post
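A minimal sketch of what such a helper process could look like, purely for illustration: nothing below exists in PostgreSQL, and the PrefetchRequest struct, the pipe protocol and prefetch_helper_main() are hypothetical. The idea is that the startup process decodes a window of upcoming WAL records, writes one request per referenced data block into a pipe, and the helper's throwaway reads warm the OS cache before redo performs its synchronous read of the same pages.

/*
 * Hypothetical "bgreader" helper for WAL recovery (sketch only).
 * File descriptors are assumed to be inherited from the parent via fork().
 */
#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ 8192

typedef struct PrefetchRequest
{
    int          fd;            /* open data file a redo record will touch */
    unsigned int blkno;         /* block number within that file */
} PrefetchRequest;

static void
prefetch_helper_main(int request_pipe)
{
    PrefetchRequest req;
    char            page[BLCKSZ];

    while (read(request_pipe, &req, sizeof(req)) == sizeof(req))
    {
        /* Read and discard: the only goal is to pull the block into cache. */
        (void) pread(req.fd, page, sizeof(page),
                     (off_t) req.blkno * (off_t) BLCKSZ);
    }
}

On platforms with posix_fadvise(), a similar effect can be had without a
separate process by issuing POSIX_FADV_WILLNEED hints from the startup
process itself, as discussed further down in the thread.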
Aidan Van Dyk wrote:
How difficult is it to parse the WAL logs with enough knowledge to know
what heap page (file/offset) a WAL record contains (I haven't looked
into any WAL code)?
Unfortunately there's no common format for that. All the heap-related
WAL records (insert, update and delete) have…
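A rough sketch of what that per-record decoding involves for the heap resource manager alone; the type and macro names below are approximations recalled from the 8.3-era backend headers and may not be exact, and every other resource manager (btree, GIN, sequences, ...) would need its own variant, which is exactly the lack of a common format being described.

/*
 * Sketch only; assumes the relevant backend headers.  Fills in the relation
 * and block number that a heap insert/update/delete record touches; records
 * belonging to other resource managers would need their own decoding.
 */
static bool
wal_record_target_block(XLogRecord *record,
                        RelFileNode *node, BlockNumber *blkno)
{
    uint8       info = record->xl_info & ~XLR_INFO_MASK;

    if (record->xl_rmid != RM_HEAP_ID)
        return false;           /* not handled in this sketch */

    switch (info & XLOG_HEAP_OPMASK)
    {
        case XLOG_HEAP_INSERT:
        case XLOG_HEAP_UPDATE:
        case XLOG_HEAP_DELETE:
            {
                /* These record types all begin with a target (node + ctid). */
                xl_heaptid *target = (xl_heaptid *) XLogRecGetData(record);

                *node = target->node;
                *blkno = ItemPointerGetBlockNumber(&target->tid);
                return true;
            }
        default:
            return false;
    }
}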
On Sat, Mar 1, 2008 at 2:13 AM, Heikki Linnakangas
<[EMAIL PROTECTED]> wrote:
>
>
> I used to think it's a big problem, but I believe the full-page-write
> optimization in 8.3 made it much less so. Especially with the smoothed
> checkpoints: as checkpoints have less impact on response times, you…
On Fri, Feb 29, 2008 at 8:19 PM, Florian G. Pflug <[EMAIL PROTECTED]> wrote:
> Pavan Deolasee wrote:
> > What I am thinking is if we can read ahead these blocks in the shared
> > buffers and then apply redo changes to them, it can potentially
> > improve things a lot. If there are multiple read…
Greg Stark wrote:
> Florian G. Pflug wrote:
> > The same holds true for index scans, though. Maybe we can find a
> > solution that benefits both cases - something along the lines of a
> > bgreader process
> I posted a patch to do readahead for bitmap index scans using
> posix_fadvise. Experiments showed it works…
Florian G. Pflug wrote:
> The same holds true for index scans, though. Maybe we can find a
> solution that benefits both cases - something along the lines of a
> bgreader process
I posted a patch to do readahead for bitmap index scans using posix_fadvise.
Experiments showed it works great on RAID…
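The core of that readahead trick, sketched here from the public posix_fadvise() interface rather than from the patch itself: hint the kernel about every heap block the bitmap says will be fetched, then perform the ordinary reads, letting the kernel (and a multi-spindle array) overlap the I/O.

/* Sketch, not the actual patch.  Assumes a plain file descriptor for the
 * relation segment and 8192-byte blocks. */
#include <fcntl.h>

#define BLCKSZ 8192

static void
prefetch_heap_blocks(int fd, const unsigned int *blocks, int nblocks)
{
    int         i;

    for (i = 0; i < nblocks; i++)
        (void) posix_fadvise(fd,
                             (off_t) blocks[i] * (off_t) BLCKSZ,
                             BLCKSZ,
                             POSIX_FADV_WILLNEED);
    /* The block-by-block reads that follow should now find the data
     * already in flight or in cache. */
}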
* Tom Lane <[EMAIL PROTECTED]> [080229 15:49]:
>
> If that isn't entirely useless, you need a better kernel. The system
> should *certainly* be bright enough to do read-ahead for our reads of
> the source xlog file. The fetches that are likely to be problematic are
> the ones for pages in the da…
Decibel! <[EMAIL PROTECTED]> writes:
> Perhaps a good short-term measure would be to have recovery allocate
> a 16M buffer and read in entire xlog files at once.
If that isn't entirely useless, you need a better kernel. The system
should *certainly* be bright enough to do read-ahead for our reads of
the source xlog file…
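For reference, what the 16M suggestion amounts to as a sketch (16MB being the default WAL segment size); as noted above, an ordinary kernel's readahead already streams a strictly sequential reader, so any win would be small.

/* Sketch of the "slurp the whole segment" idea; error handling trimmed. */
#include <stdlib.h>
#include <unistd.h>

#define XLOG_SEG_SIZE (16 * 1024 * 1024)    /* default WAL segment size */

static char *
read_whole_xlog_segment(int fd)
{
    char       *buf = malloc(XLOG_SEG_SIZE);
    ssize_t     done = 0;

    if (buf == NULL)
        return NULL;

    while (done < XLOG_SEG_SIZE)
    {
        ssize_t     n = read(fd, buf + done, XLOG_SEG_SIZE - done);

        if (n <= 0)             /* error or short segment */
            break;
        done += n;
    }
    return buf;                 /* recovery would then parse records from buf */
}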
Decibel! wrote:
> On Feb 29, 2008, at 8:10 AM, Florian Weimer wrote:
> > In the end, I wouldn't be surprised if for most loads, cache warming
> > effects dominated recovery times, at least when the machine is not
> > starved on RAM.
> Uh... that's exactly what all the synchronous reads are doing... warming
> the cache…
On Feb 29, 2008, at 8:10 AM, Florian Weimer wrote:
> In the end, I wouldn't be surprised if for most loads, cache warming
> effects dominated recovery times, at least when the machine is not
> starved on RAM.
Uh... that's exactly what all the synchronous reads are doing...
warming the cache. And s…
On Fri, 2008-02-29 at 15:49 +0100, Florian G. Pflug wrote:
> I know that Simon has some ideas about parallel restore, though I don't
> know how he wants to solve the dependency issues involved. Perhaps by
> not parallelizing within one table or index...
Well, I think that problem is secondary to…
"Florian G. Pflug" <[EMAIL PROTECTED]> writes:
> I know that Simon has some ideas about parallel restore, though I don't
> know how he wants to solve the dependency issues involved. Perhaps by
> not parallelizing within one table or index...
I think we should be *extremely* cautious about introducing…
Pavan Deolasee wrote:
> What I am thinking is if we can read ahead these blocks in the shared
> buffers and then apply redo changes to them, it can potentially
> improve things a lot. If there are multiple read requests, kernel (or
> controller?) can probably schedule the reads more efficiently.
Th…
* Pavan Deolasee:
> The current redo-recovery is a single-threaded, synchronous process.
> The XLOG is read sequentially, each log record is examined and
> replayed if required. This requires reading disk blocks into the
> shared buffers and applying changes to the buffer. The reading
> happens synchronously…
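The loop being described, reduced to pseudo-C for orientation; every function name below is a stand-in rather than a real backend routine. The single blocking data-block read in the middle is the stall the rest of this thread is about hiding.

/* Illustrative pseudo-C of the single-threaded, synchronous redo loop;
 * all names are stand-ins. */
for (record = ReadNextWalRecord(); record != NULL; record = ReadNextWalRecord())
{
    if (!RecordNeedsRedo(record))
        continue;

    /*
     * Synchronous, one block at a time: unless the page is already cached
     * or the record carries a full-page image, recovery waits on the disk
     * right here.
     */
    buffer = ReadDataBlockForRecord(record);

    ApplyRedo(record, buffer);
    ReleaseDataBlock(buffer);
}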
I remember Heikki mentioned improving redo recovery in one of the
emails in the past, so I know people are already thinking about this.
I have some ideas and just wanted to get comments here.
ISTM that it's important to keep the redo recovery time as small as possible
in order to reduce the downtime…