This question triggered some silly questions in my mind: lots of folks are convinced that the whole COW-to-different-locations behaviour is a Bad Thing(tm), and in some cases I guess it might actually be...
What if ZFS had a pool / filesystem property that caused it to do a journaled, but non-COW, update so that the on-disk location of database data always stays the same? Or - what if it did a double update: one write to a staging area, and another immediately after that to the 'old' data blocks (a rough user-space sketch of that idea is at the bottom of this mail)? You would still always have on-disk consistency etc., at a cost of double the I/Os... Of course, both of these would require non-sparse file creation for the DB, but would they be plausible?

For very read-intensive and position-sensitive applications, I guess this sort of capability might make a difference?

Just some stabs in the dark...

Cheers!
Nathan.

Louwtjie Burger wrote:
> Hi
>
> After a clean database load a database would (should?) look like this,
> if a random stab at the data is taken...
>
> [8KB-m][8KB-n][8KB-o][8KB-p]...
>
> The data should be fairly (100%) sequential in layout... after some
> days, though, that same spot (using ZFS) would probably look like:
>
> [8KB-m][    ][8KB-o][    ]
>
> Is this "pseudo logical-physical" view correct (if blocks n and p were
> updated and, with COW, relocated somewhere else)?
>
> Could a utility be constructed to show the level of "fragmentation"?
> (50% in the above example)
>
> IF the above theory is flawed... how would fragmentation "look/be
> observed/calculated" under ZFS with large Oracle tablespaces?
>
> Does it even matter what the "fragmentation" is from a performance
> perspective?
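Re: the double-update idea above, here is a minimal user-space sketch of the write ordering I have in mind, assuming a preallocated (non-sparse) data file plus a separate staging/journal file. The names (double_update, data_path, journal_path) are made up for illustration; this is not how ZFS itself would have to implement it, just the idea:

import os

BLOCK = 8192  # 8KB database blocks, as in the quoted example above

def double_update(data_path, journal_path, offset, block):
    """Sketch of the 'double update' idea: stage the new block and sync
    it, then overwrite the old block in place, then retire the staged
    copy.  On-disk consistency is kept at the cost of roughly 2x the
    I/Os, and the block never moves from its original location."""
    assert len(block) == BLOCK

    # 1. Stage: append (offset, block) to the journal and sync it.
    with open(journal_path, "ab") as j:
        j.write(offset.to_bytes(8, "little"))
        j.write(block)
        j.flush()
        os.fsync(j.fileno())

    # 2. Commit: overwrite the old block in place (the data file is
    #    assumed to exist already, non-sparse) and sync it.
    with open(data_path, "r+b") as d:
        d.seek(offset)
        d.write(block)
        d.flush()
        os.fsync(d.fileno())

    # 3. Retire the staged copy (a real implementation would mark the
    #    record committed rather than truncating the whole journal).
    with open(journal_path, "r+b") as j:
        j.truncate(0)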
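And on Louwtjie's question about a fragmentation utility: assuming you can get hold of the logical-to-physical block mapping for a file somehow (e.g. by digging it out of zdb output for the file's block pointers; that part is not shown here), a crude measure along the lines of the 50% example might look like the following. fragmentation() and phys are made-up names, and the layout at the end is just toy data:

BLOCK = 8192  # 8KB blocks, as in the quoted example

def fragmentation(phys):
    """phys[i] is the on-disk byte offset of logical block i, however
    that mapping was obtained.  Returns the fraction of blocks that are
    no longer where a purely sequential layout (anchored at block 0)
    would have put them.  For [8KB-m][  ][8KB-o][  ] with n and p
    relocated by COW, this gives 2/4 = 50%, matching the example."""
    if not phys:
        return 0.0
    base = phys[0]
    displaced = sum(1 for i, p in enumerate(phys) if p != base + i * BLOCK)
    return displaced / len(phys)

# Toy example: blocks 1 and 3 (n and p) have been COW-relocated far away.
layout = [0, 10_000_000, 2 * BLOCK, 12_000_000]
print(f"{fragmentation(layout):.0%}")   # -> 50%

A different (and arguably more performance-relevant) definition would count how many logically adjacent block pairs are not physically adjacent, since that is what costs seeks on a sequential table scan; either way, getting the mapping out of ZFS is the hard part.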