Absolutely, I have done hot-spot tests using a Poisson random
distribution.  With that pattern (where there are many cache hits), the
writes are 3 to 10 times faster than sequential speed.  My comment was
about purely random I/O across a large area, at least much larger than
the available memory cache.  A real workload is likely to have a
combination of patterns: some fairly random, some hot-spot, and some
sequential.
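
For illustration, a hot-spot pattern of that sort can be generated along
these lines.  This is only a minimal sketch, not the actual test harness;
the path, sizes, and Poisson mean are arbitrary illustration values:

    # hotspot_writes.py -- minimal sketch of a hot-spot random-write test.
    # 8 KB writes land at offsets drawn from a Poisson distribution, so a
    # narrow band of the file absorbs most of the I/O and its records tend
    # to stay cached.  Path, sizes, and the Poisson mean are illustration
    # values only, not the parameters of the actual test.
    import math
    import os
    import random

    PATH    = "/pool/fs/testfile"   # hypothetical file on a ZFS dataset
    IOSIZE  = 8 * 1024              # partial-record writes (recordsize=128k assumed)
    NBLKS   = 128 * 1024            # 1 GB worth of 8 KB slots
    HOTMEAN = 128                   # hot spot near the front of the file

    def poisson(lam):
        # Knuth's method; adequate for a sketch with a modest mean.
        limit, k, p = math.exp(-lam), 0, 1.0
        while p > limit:
            k += 1
            p *= random.random()
        return k - 1

    buf = os.urandom(IOSIZE)
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
    for _ in range(100000):
        blk = min(poisson(HOTMEAN), NBLKS - 1)
        os.pwrite(fd, buf, blk * IOSIZE)   # usually hits an already-cached record
    os.close(fd)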

        Chuck

-----Original Message-----
From: Roch Bourbonnais - Performance Engineering
[mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 11, 2006 1:18 AM
To: Gehr, Chuck R
Cc: [EMAIL PROTECTED]; Boyd Adamson; ZFS filesystem discussion
list
Subject: RE: [zfs-discuss] ZFS and databases


Gehr, Chuck R writes:
 > One word of caution about random writes.  From my experience, they
 > are not nearly as fast as sequential writes (like 10 to 20 times
 > slower) unless they are carefully aligned on the same boundary as
 > the file system record size.  Otherwise, there is a heavy read
 > penalty that you can easily observe by doing a zpool iostat.  So,
 > depending on the workload, it's really a stretch to say random
 > writes can be done at sequential speed.
 > 
 >      Chuck
 > 
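
That read penalty is easy to provoke and to watch with zpool iostat.  A
minimal sketch of the two cases follows; the path and sizes are made up,
recordsize=128k is assumed, and the file is assumed to already exist and
to be much larger than RAM:

    # aligned_vs_partial.py -- sketch contrasting full-record writes with
    # small misaligned writes on a dataset with recordsize=128k.  Run each
    # loop while watching `zpool iostat 1` in another terminal: the second
    # loop shows reads alongside the writes, because uncached records must
    # be read before they can be modified.  Path and sizes are
    # illustrative only.
    import os
    import random

    PATH    = "/pool/fs/bigfile"    # hypothetical pre-created large file
    RECSIZE = 128 * 1024
    FILESZ  = 64 * 1024**3          # 64 GB, assumed much larger than memory
    NRECS   = FILESZ // RECSIZE

    fd = os.open(PATH, os.O_WRONLY)

    # Case 1: random writes aligned to the record size, each covering a
    # whole record, so the old contents never need to be read back.
    full = os.urandom(RECSIZE)
    for _ in range(10000):
        os.pwrite(fd, full, random.randrange(NRECS) * RECSIZE)

    # Case 2: small writes at arbitrary offsets; each touches part of a
    # record that is almost certainly not cached and must be read first.
    part = os.urandom(8192)
    for _ in range(10000):
        os.pwrite(fd, part, random.randrange(FILESZ - len(part)))

    os.close(fd)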

Could we agree on saying that

        partial writes to blocks that are not in cache are much
        slower than writes to blocks that are.

Then, given that a sequential pattern can benefit from readahead,
sequential writes will fall into the fast category most of the time.
The performance of random writes will depend on the cached ratio.  For
DB working sets that greatly exceed system memory, which is common,
random writes fall into the slower case, and this stays true for any
filesystem.
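
A back-of-the-envelope way to see it (the service times below are
placeholders, not measurements):

    # Average cost of a partial random write as a blend of the cached and
    # uncached cases, weighted by the cache-hit ratio.  The two service
    # times are placeholders, not measurements.
    COST_CACHED_MS   = 0.1   # record already cached: no read needed
    COST_UNCACHED_MS = 8.0   # record must be read from disk first

    def avg_write_ms(hit_ratio):
        return hit_ratio * COST_CACHED_MS + (1 - hit_ratio) * COST_UNCACHED_MS

    for hit in (0.99, 0.9, 0.5, 0.1):
        print("hit ratio %.2f -> %.2f ms per write" % (hit, avg_write_ms(hit)))

    # A working set much larger than memory drives the hit ratio toward
    # zero, so the uncached term dominates -- on any filesystem.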

Or, said otherwise, there is no free lunch.


-r


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
