On 24/02/2010 02:21, v wrote:
Hi,
Thanks for the reply.
So the sequential-read-after-random-write problem does exist in ZFS.
I wonder if it is a real problem in practice, e.g. causing longer backup
times, and whether it will be addressed in the future?


Once the famous bp rewrite is integrated and a defrag feature is built on top of it, you will be able to rearrange your data so that it is sequential again.

So I should ask another question: is ZFS suitable for an environment with 
lots of data changes? I think for random I/O there will be no such performance 
penalty, but when backing up a ZFS dataset, must the backup utility read the 
dataset's blocks sequentially? Is a ZFS dataset suitable for database temporary 
tablespaces or online redo logs?

Will a defrag utility be implemented?


In my own experience it is not an issue in most cases - environments with lots of random writes tend to do random reads as well anyway. Then when it comes to backup - yes, it might get slower over time, but in most cases it will still be faster than your network bandwidth, so it would not be a bottleneck. Additionally, if your environment does lots of random updates, ZFS (or a CoW filesystem in general) should be able to perform them much faster than other filesystems, because each random update becomes part of a new, mostly sequential write.
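To make the trade-off concrete, here is a toy model of copy-on-write allocation (a deliberately simplified sketch, not ZFS's actual allocator): every overwrite of a logical block is written to the next free physical location instead of in place, so a file that was laid out sequentially ends up scattered after random writes.

```python
import random

def cow_layout(num_blocks, overwrites):
    """Return the physical location of each logical block after CoW writes.

    Hypothetical model: the file starts fully sequential (logical block i
    at physical block i); each overwrite relocates the block to the next
    free physical address, as a copy-on-write filesystem would.
    """
    phys = list(range(num_blocks))
    next_free = num_blocks
    for logical in overwrites:
        phys[logical] = next_free  # CoW: new copy, new location
        next_free += 1
    return phys

def seeks_for_sequential_read(phys):
    # Count the non-contiguous jumps a sequential logical read must make.
    return sum(1 for a, b in zip(phys, phys[1:]) if b != a + 1)

random.seed(0)
fresh = cow_layout(1000, [])
dirty = cow_layout(1000, [random.randrange(1000) for _ in range(500)])
print(seeks_for_sequential_read(fresh))  # 0: still fully sequential
print(seeks_for_sequential_read(dirty))  # hundreds of seeks after random writes
```

This is the effect the thread describes: the random writes themselves are fast (each relocation is just an append in this model), but a later sequential scan of the logical file - a backup, for instance - turns into many small random reads until something like bp rewrite can re-lay the blocks out contiguously.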

But of course YMMV.

--
Robert Milkowski
http://milek.blogspot.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
