Well, I don't know about his particular case, but many QFS clients have found 
the separation of data and metadata to be invaluable. The primary reason is 
that it avoids disk seeks. We have QFS customers who are running at over 90% of 
theoretical bandwidth on a medium-sized set of FibreChannel controllers and 
need to maintain that streaming rate. Taking a seek to update the on-disk 
inodes once a minute or so slowed down transfers enough that QFS was invented.  
;-)

QFS uses an allocate-forward policy, which means that the disk head is always 
moving in one direction and, for new file creation (the data capture case), we 
issue large writes that are always sequential. (And when multiple files are 
being captured simultaneously, they can be directed onto different physical 
disk arrays within the same file system, to avoid interference.)
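To make the allocate-forward idea concrete, here's a minimal sketch (my own illustration, not QFS source -- all names are invented) of an allocator whose cursor only ever advances, so consecutive extents for a streaming write land at strictly increasing disk offsets and the head never seeks backward during capture:

```python
# Hypothetical sketch of an allocate-forward extent allocator.
# The allocation cursor only moves forward, so extents handed out for a
# streaming write are at strictly increasing offsets -- no backward seeks.

class ForwardAllocator:
    def __init__(self, device_blocks):
        self.device_blocks = device_blocks  # total blocks on the device
        self.cursor = 0                     # next free block; only increases

    def allocate(self, nblocks):
        """Return the starting block of a contiguous extent, or None if full."""
        if self.cursor + nblocks > self.device_blocks:
            return None  # no space ahead of the cursor
        start = self.cursor
        self.cursor += nblocks  # cursor advances, never rewinds
        return start

alloc = ForwardAllocator(device_blocks=1_000_000)
first = alloc.allocate(256)    # extent for the first large write
second = alloc.allocate(256)   # next write lands strictly after it
```

In a real implementation the multiple-stream case would be handled by giving each capture stream its own allocator over a different physical array, so the streams never share a head.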

ZFS will be a great file system for transactional work (small reads/writes) and 
its data integrity should be unmatched. But for large streaming, it's hard to 
beat QFS. (And it will take some cleverness to figure out a multi-host ZFS.)

(For what it's worth, the current 128K-per-I/O policy of ZFS really hurts its 
performance for large writes. I imagine this would not be too difficult to fix 
if we allowed multiple 128K blocks to be allocated as a group.)
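As a rough illustration of that suggested fix (a sketch under my own assumptions, not actual ZFS code), grouping blocks means a large write is split into a handful of multi-block I/Os instead of one I/O per 128K block:

```python
# Hypothetical illustration of coalescing 128K blocks into group allocations,
# so a large write is issued as a few big I/Os rather than many 128K ones.
# group_blocks and the function name are invented for this sketch.

BLOCK = 128 * 1024  # ZFS's current 128K per-I/O unit

def coalesce(write_bytes, group_blocks=8):
    """Split a write into I/Os of up to group_blocks * 128K each.

    Returns a list of (offset, size) tuples describing the issued I/Os.
    """
    ios = []
    offset = 0
    while offset < write_bytes:
        size = min(group_blocks * BLOCK, write_bytes - offset)
        ios.append((offset, size))
        offset += size
    return ios

# A 4 MB write becomes 4 one-megabyte I/Os instead of 32 separate 128K I/Os.
ios = coalesce(4 * 1024 * 1024)
```

With `group_blocks=8`, the per-I/O overhead (allocation, dispatch, per-request latency) is paid 8x less often for the same data, which is where the streaming-write win would come from.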
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss