> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
> 
> To put things in proper perspective, with 128K filesystem blocks, the
> worst case file fragmentation as a percentage is 0.39%
> (100*1/((128*1024)/512)).  On a Microsoft Windows system, the
> defragger might suggest that defragmentation is not warranted for this
> percentage level.
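
(For reference, my reading of the quoted figure: it assumes at worst one
discontinuity per 128K record, expressed as a fraction of that record's
512-byte sectors, i.e. 100 * 512 / (128 * 1024) = 0.39%.)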

I don't think that's correct...
Suppose you write a 1G file to disk.  It is a database store.  Now you start
running your db server.  It starts performing transactions all over the
place.  It overwrites 4k in the middle of the file, it overwrites 512 bytes
somewhere else, and so on.  Since this is COW, each of these little writes
in the middle of the file actually gets mapped to previously unused sectors
of disk.  Depending on how quickly they arrive, they may be aggregated into
larger sequential writes...  But that does nothing for the sequential read
speed of the file later, when you stop your db server and try to copy the
file sequentially for backup.
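
To make that concrete, here's a quick back-of-the-envelope simulation in
Python.  The numbers are picked arbitrarily -- the 128K recordsize, the 1G
file, and the "each dirtied record lands somewhere new" model are all
assumptions for illustration, not measured ZFS behavior:

# Rough sketch, not actual ZFS code: model a 1 GB file stored as 128K
# records, then apply a bunch of random small overwrites.  The assumption
# is that a sub-record write relocates the whole record (COW), so a later
# sequential read hits a discontiguity at every relocated record.

import random

RECORD_SIZE = 128 * 1024                  # assumed recordsize
FILE_SIZE   = 1024 ** 3                   # 1 GB file
NUM_RECORDS = FILE_SIZE // RECORD_SIZE    # 8192 records

random.seed(0)
# e.g. 2000 small (4k / 512-byte) updates scattered over the file
rewritten = {random.randrange(NUM_RECORDS) for _ in range(2000)}

# Count the contiguous extents a sequential reader now sees: each
# relocated record is assumed to sit in its own new location, and the
# untouched records between them form contiguous runs.
extents = 0
prev_was_rewritten = None
for i in range(NUM_RECORDS):
    if i in rewritten:
        extents += 1
        prev_was_rewritten = True
    else:
        if prev_was_rewritten is not False:
            extents += 1
        prev_was_rewritten = False

print(f"{len(rewritten)} of {NUM_RECORDS} records relocated; "
      f"a sequential read now spans roughly {extents} separate extents")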

In the pathological worst case, you would write a file that takes up half of
the disk.  Then you would snapshot it, and overwrite it in random order using
the smallest possible block size.  Because the snapshot pins the original
blocks, every overwrite has to be allocated from the other half of the disk,
so now your disk is 100% full, and if you read that file back, you will be
performing worst-case random IO spanning 50% of the total disk space.
Granted, this is not a very realistic scenario, but it is the worst case, and
it is really, really bad for read performance.
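
Just to put rough numbers on it (Python again; the disk size and the ~150
random IOPS figure for a single 7200rpm drive are assumptions, not
measurements):

# Back-of-the-envelope for the pathological case above: a file covering
# half of a 1 TB disk, rewritten entirely as 512-byte blocks in random
# order, so reading it back sequentially means one random seek per block.

DISK_SIZE   = 1024 ** 4          # 1 TB disk (assumed)
FILE_SIZE   = DISK_SIZE // 2     # file occupies half the disk
BLOCK_SIZE  = 512                # smallest possible block size
RANDOM_IOPS = 150                # rough figure for a single 7200rpm drive

blocks  = FILE_SIZE // BLOCK_SIZE
seconds = blocks / RANDOM_IOPS
print(f"{blocks:,} random reads at {RANDOM_IOPS} IOPS "
      f"is about {seconds / 86400:.0f} days to read the file back")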
