Shridhar Daithankar wrote:
> Bruce Momjian wrote:
>
> > Shridhar Daithankar wrote:
> >>I cannot see why writing an 8K block is any safer than writing just
> >>the changes.
> >>
> >>I may be dead wrong, but I am just putting my thoughts together ...
> > The problem is that we need to record what was ...
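
The hazard behind Bruce's answer is the torn page: the OS commits an 8K
page as a series of smaller sector writes, so a crash mid-write can leave
some sectors new and some old. A WAL record that carries only the changed
bytes cannot repair sectors it never covered. The toy C program below,
entirely invented for illustration (it is not PostgreSQL code), simulates
that failure mode:

#include <stdio.h>
#include <string.h>

#define BLCKSZ   8192                /* one heap page                    */
#define SECTORSZ 512                 /* unit the disk writes atomically  */
#define NSECTORS (BLCKSZ / SECTORSZ)

int main(void)
{
    unsigned char old_page[BLCKSZ], new_page[BLCKSZ], on_disk[BLCKSZ];

    memset(old_page, 'A', BLCKSZ);   /* page contents before the update */
    memset(new_page, 'B', BLCKSZ);   /* page contents after the update  */
    memcpy(on_disk, old_page, BLCKSZ);

    /* Simulated crash: only 5 of the 16 sectors reach the platter. */
    memcpy(on_disk, new_page, 5 * SECTORSZ);

    int stale = 0;
    for (int s = 0; s < NSECTORS; s++)
        if (memcmp(on_disk + s * SECTORSZ,
                   new_page + s * SECTORSZ, SECTORSZ) != 0)
            stale++;

    /* The on-disk page is now neither the old version nor the new one.
     * Replaying a small delta from WAL cannot fix the stale sectors;
     * only a full page image saved in WAL can restore consistency. */
    printf("%d of %d sectors still hold stale data\n", stale, NSECTORS);
    return 0;
}
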
Bruce Momjian wrote:
Our current WAL implementation writes copies of full pages to WAL before
modifying the page on disk. This is done to prevent partial pages from
being corrupted in case the operating system crashes during a page
write.
InnoDB uses a doublewrite buffer system instead.
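
To make the mechanism concrete, here is a minimal self-contained C sketch
of the full-page-write rule as Bruce describes it. Every name in it
(wal_append, log_page_change, the toy LSN counters) is invented for
illustration; this is not PostgreSQL's actual code. The first modification
of a page after a checkpoint logs the whole 8K image, and later
modifications log only the change:

#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192                 /* PostgreSQL's default block size */

static unsigned long wal_insert_lsn = 0;       /* toy WAL position      */
static unsigned long last_checkpoint_lsn = 0;  /* toy checkpoint marker */

/* Toy WAL append: advances the log position by the record length. */
static unsigned long wal_append(const void *rec, size_t len)
{
    (void) rec;
    wal_insert_lsn += len;
    return wal_insert_lsn;
}

typedef struct {
    unsigned char data[BLCKSZ];
    unsigned long lsn;          /* LSN of last WAL record for this page */
} Page;

/* First modification of a page after a checkpoint logs the whole 8K
 * image, so a torn data-file write can be repaired from WAL during
 * recovery; later modifications log only the change itself. */
static void log_page_change(Page *page, const void *delta, size_t delta_len)
{
    if (page->lsn <= last_checkpoint_lsn)
        page->lsn = wal_append(page->data, BLCKSZ);  /* full page image */
    else
        page->lsn = wal_append(delta, delta_len);    /* delta only      */
}

int main(void)
{
    Page p = {{0}, 0};
    const char tup[] = "tuple bytes";

    log_page_change(&p, tup, sizeof tup);   /* logs 8192 bytes          */
    log_page_change(&p, tup, sizeof tup);   /* logs only the delta      */
    printf("WAL bytes written: %lu\n", wal_insert_lsn);
    return 0;
}

The cost of this rule is the extra 8K written for each page touched per
checkpoint cycle, which is exactly the overhead the rest of this thread is
debating.
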
Shridhar Daithankar wrote:
> Hi,
>
> I was thinking the other way round. What if we write to WAL only those
> portions of the pages that we need to modify, and let the kernel do the
> job the way it sees fit? What will happen if it fails?
So you are saying only write the part of the page that we modify? I ...

Marty Scholes wrote:
> 2. Put them on an actual (or mirrored actual) spindle
> Pros:
> * Keeps WAL and data file I/O separate
> Cons:
> * All of the non-array drives are still slower than the array
Are you sure this is a problem? The dbt-2 benchmarks from OSDL run on an
8-way Intel computer with several ...

Tom Lane wrote:
Your analysis is missing an important point, which is what happens when
multiple transactions successively modify the same page. With a
sync-the-data-files approach, we'd have to write the data page again for
each commit. With WAL, the data page will likely not get written at all ...
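
A back-of-the-envelope model of Tom's point, using made-up latencies
(8 ms for a random data-page write, 0.1 ms for a sequential WAL append),
shows how the two designs diverge when many commits touch the same page:

#include <stdio.h>

int main(void)
{
    const int    n_commits = 100;    /* commits hitting one page        */
    const double seek_ms   = 8.0;    /* assumed random-write cost       */
    const double seq_ms    = 0.1;    /* assumed sequential-append cost  */

    /* sync-the-data-files: every commit rewrites and syncs the page.  */
    double sync_cost = n_commits * seek_ms;

    /* WAL: every commit appends a small sequential record; the data
     * page itself is written once, at the next checkpoint.            */
    double wal_cost = n_commits * seq_ms + seek_ms;

    printf("sync-data-files: %.1f ms, WAL: %.1f ms\n", sync_cost, wal_cost);
    return 0;
}

With these assumed numbers, 100 commits cost 800 ms when each one must
sync the data page, versus about 18 ms when each one appends to WAL and
the page is written once at checkpoint.
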
Marty Scholes writes:
> I suspect (but cannot prove) that performance would jump for systems
> like ours if WAL was done away with entirely and the individual data
> files were synchronized on commit.
I rather doubt this, since we used to do things that way and we saw an ...

> I suspect (but cannot prove) that performance would jump for systems
> like ours if WAL was done away with entirely and the individual data
> files were synchronized on commit.
You know... that's exactly what WAL is designed to prevent? Grab a copy of
7.0 and 7.1. Do a benchmark between the 2 ...

If I understand WAL correctly (and I may not), it is essentially a write
cache for writes to the data files, because:
1. Data file writes are notoriously random, and writing the log is
sequential. Ironically, the sectors mapped by the OS to the disk are
likely not at all sequential, but they ...
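
The access-pattern difference described in point 1 can be sketched with
plain POSIX calls; the file names, record size, and counts below are
arbitrary. Committing through a log appends to one file tail, while
syncing data files directly scatters writes across random block offsets:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char rec[128];
    memset(rec, 'x', sizeof rec);

    /* Log-style commits: sequential appends to one file, fsync each.  */
    int wal = open("wal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (wal < 0) { perror("wal.log"); return 1; }
    for (int i = 0; i < 100; i++)
    {
        if (write(wal, rec, sizeof rec) != (ssize_t) sizeof rec) return 1;
        fsync(wal);                 /* head barely moves between writes */
    }
    close(wal);

    /* Sync-the-data-files commits: writes land at scattered offsets.  */
    int dat = open("data.file", O_WRONLY | O_CREAT, 0644);
    if (dat < 0) { perror("data.file"); return 1; }
    for (int i = 0; i < 100; i++)
    {
        off_t blk = (off_t) (rand() % 10000) * 8192;   /* random block */
        if (pwrite(dat, rec, sizeof rec, blk) != (ssize_t) sizeof rec)
            return 1;
        fsync(dat);                 /* forces a seek per commit         */
    }
    close(dat);
    return 0;
}
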
Our current WAL implementation writes copies of full pages to WAL before
modifying the page on disk. This is done to prevent partial pages from
being corrupted in case the operating system crashes during a page
write.
For example, suppose an 8k block is being written to a heap file.
First the ...
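
The recovery side is what the full page image pays for: on replay, a
record carrying a complete image overwrites whatever (possibly torn)
bytes are in the data file, after which smaller deltas are safe to apply.
A hypothetical sketch with an invented record layout (this is not
PostgreSQL's WAL format):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192

/* Invented WAL record layout, for illustration only. */
typedef struct {
    bool   has_full_image;          /* does payload hold a full image?  */
    size_t offset, length;          /* where a delta applies in the page */
    unsigned char payload[BLCKSZ];  /* image or delta bytes             */
} WalRecord;

/* Replay one record against a page as read from the (possibly torn)
 * data file: a full image overwrites everything, healing any torn
 * sectors; only then is it safe to apply byte-range deltas. */
static void replay(unsigned char page[BLCKSZ], const WalRecord *rec)
{
    if (rec->has_full_image)
        memcpy(page, rec->payload, BLCKSZ);
    else
        memcpy(page + rec->offset, rec->payload, rec->length);
}

int main(void)
{
    unsigned char page[BLCKSZ];
    static WalRecord image = { true, 0, 0, {0} };
    static WalRecord delta = { false, 100, 5, {0} };

    memset(page, '?', BLCKSZ);              /* torn page on disk       */
    memset(image.payload, 'A', BLCKSZ);     /* full image from WAL     */
    memcpy(delta.payload, "hello", 5);      /* later delta from WAL    */

    replay(page, &image);
    replay(page, &delta);
    printf("bytes 100..104 after replay: %.5s\n", page + 100);
    return 0;
}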