Ming, let's take a pro example with a minimal performance tradeoff.
All FSs that modify a disk block, IMO, do a full disk-block read before anything else. If you are doing an extending write and, with COW, move to a larger block size, you give yourself the ability to write a single block instead of having to fill the original block and also write the next one. The "performance loss" is the additional latency of transferring more bytes within the larger block on the next access.

This pro doesn't just apply at the end of the file, but also at both edges of a hole within the file. In addition, the next non-cached I/O op that accesses the disk block will be able to do so with a single seek. Also, if we allow ourselves to dynamically increase the block size while the file is still within direct access to its blocks, we can delay the additional latency of going to an indirect block.

So this has a performance benefit, in addition to removing the case where an OS panic occurs in the middle of the disk-block write and we lose both the original and the full next iteration of the file. After the write completes, we can update the FS's node data structure.

Mitchell Erblich
Ex-Sun kernel engineer who proposed and implemented this in a limited release of UFS many years ago.

------------------

Ming Zhang wrote:
>
> Hi All
>
> I wonder if anyone has an idea about the performance loss caused by COW
> in ZFS? If you have to read old data out before writing it to some other
> place, it involves a disk seek.
>
> Thanks
>
> Ming
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
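To make the tradeoff above concrete, here is a toy back-of-the-envelope model of the two write paths being compared. This is my own illustration, not code from UFS or ZFS; the function names and the 4 KiB block size are assumptions for the example.

```python
# Toy model: count block I/Os for a write that extends a file past the
# partially filled tail block. Names and sizes are illustrative only.

def io_ops_fixed(block_size, tail_bytes, write_bytes):
    """Extending write with fixed-size COW blocks: read the partial tail
    block once, then write every block the combined data now spans."""
    blocks_written = -(-(tail_bytes + write_bytes) // block_size)  # ceil div
    return 1 + blocks_written  # 1 read + N writes

def io_ops_growable(tail_bytes, write_bytes):
    """Extending write when the block may grow instead of spilling over:
    read the old tail block once, write one single (larger) block."""
    return 1 + 1

# A 4 KiB write landing after a half-full 4 KiB tail block:
# fixed blocks take 1 read + 2 writes; a growable block takes 1 read + 1 write.
print(io_ops_fixed(4096, 2048, 4096))   # 3
print(io_ops_growable(2048, 4096))      # 2
```

The single larger write also keeps the old and new data each whole on disk, which is the crash-consistency point made above: a panic mid-write can no longer leave both the original block and its replacement torn.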