Anton B. Rang writes:
 > If your database performance is dominated by sequential reads, ZFS may
 > not be the best solution from a performance perspective. Because ZFS
 > uses a write-anywhere layout, any database table which is being
 > updated will quickly become scattered on the disk, so that sequential
 > read patterns become random reads. 

While for OLTP our best practice is to set the ZFS
recordsize to match the DB blocksize, for DSS we
advise running without such tuning.
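
For instance, for a DB using 8K blocks (the dataset name
tank/db below is just a placeholder; set the property before
loading the data, since recordsize only applies to files
written after the change):

    # match the ZFS recordsize to the DB block size (OLTP case)
    zfs set recordsize=8k tank/db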

True, the sequential reads become random reads, but
random reads of 128K records, and that should still
draw close to 20-25 MB/s per [modern] disk.

So to reach your goal of 500MB/s++ you would need 20++ disks.
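
Back of the envelope, assuming that 25 MB/s per-disk figure
holds and ignoring any RAID-Z/mirroring overhead:

    500 MB/s / 25 MB/s per disk = 20 disks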

-r


 > 
 > Of course, ZFS has other benefits, such as ease of use and protection
 > from many sources of data corruption; if you want to use ZFS in this
 > application, though, I'd expect that you will need substantially more
 > raw I/O bandwidth than UFS or QFS (which update in place) would
 > require. 
 > 
 > (If you have predictable access patterns to the tables, a QFS setup
 > which ties certain tables to particular LUNs using stripe groups might
 > work well, as you can guarantee that accesses to one table will not
 > interfere with accesses to another.) 
 > 
 > As always, your application is the real test.  ;-)
