You propose ((2-way mirror) x RAID-Z (3+1)). That gives you
3 data disks' worth of space, and you'd have to lose 2 disks
in each mirror half (4 in total) to lose data.

For the random read load you describe, I would expect the
per-device cache to work nicely; that is, file blocks stored
close together in time tend to be read back close together
in time. Say I updated page Foo and page Bar at about the
same time in the past because they share information or
reference one another; a client pulling one page hits the
other soon after. But if the file records are updated
(written) in a pattern completely independent of the read
(input) pattern, then you'd be at the low end of the range.

Best case, that layout would give you up to 6 disks' worth
of IOPS serving capacity (maybe even more). If the device
cache fails miserably, you'd have about 2 disks' worth of
input IOPS: for small random reads a RAID-Z group tends to
deliver roughly one disk's worth of IOPS, since each block
is spread across every drive in the group, and you have two
such groups to read from.
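
Once the workload is running, a quick way to see whether the
per-device cache is actually helping is to watch per-vdev and
per-disk rates (the pool name below is only an example):

    # per-vdev read/write operations for the pool
    zpool iostat -v tank 5

    # per-device IOPS and service times
    iostat -xn 5

If the reads served per data disk sit well above what a single
spindle can do on its own, the cache is earning its keep.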

Now if you buy one more disk, you could envision (3-way
mirror) x (3-disk dynamic stripe): the same amount of data
as before, but 9 disks' worth of IOPS. On the other hand,
some 3-disk failures could put data at risk.
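
For what it's worth, that 9-disk layout maps onto a single
pool made of three 3-way mirrors, with ZFS striping
dynamically across the top-level vdevs; pool and device names
here are only placeholders:

    zpool create tank \
        mirror c1t0d0 c1t1d0 c1t2d0 \
        mirror c1t3d0 c1t4d0 c1t5d0 \
        mirror c1t6d0 c1t7d0 c1t8d0

    zpool status tank

Reads can be served by any disk of a mirror, which is where
the 9-disks-of-IOPS figure comes from; each write still costs
one copy per side of its mirror.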


Client NFS for input traffic seems quite OK to me. It's
mostly for output that NFS can be an issue in general.

NFS causes individual client threads doing updates to
operate very much in synchronization with the storage
subsystem. This contrasts with a local FS, which can work
much more asynchronously. With a directly attached FS, we
can decouple application updates to memory from FS updates
to storage much more effectively.
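
One way to see how tightly the clients are coupled to the
storage is to watch the NFS write and commit counts on the
client while the update load runs (purely an observation aid,
not a tuning knob):

    # client-side RPC statistics, including write and commit
    nfsstat -c

A high rate of commits relative to writes suggests the
application threads are regularly waiting for the server to
push data to stable storage.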

-r


David J. Orman writes:
 > Just as a hypothetical (not looking for exact science here folks..),
 > how would ZFS fare (in your educated opinion) in this situation: 
 > 
 > 1 - Machine with 8 10k rpm SATA drives. High performance machine of
 > sorts (ie dual proc, etc..let's weed out cpu/memory/bus bandwidth as
 > much as possible from the equation). 
 > 
 > 2 - Workload is webserving, well - application serving. Java app
 > server 9, various java applications requiring database access (mostly
 > small tables/data elements, but millions and millions of rows). 
 > 
 > 3 - App server would be running in one zone, with a (NFS) mounted ZFS
 > filesystem as storage. 
 > 
 > 4 - DB server (PgSQL) would be running in another zone, with a (NFS)
 > mounted ZFS filesystem as storage. 
 > 
 > 5 - Multiple disk redundancy is needed. So, I'm assuming two raid-z
 > pools of 3 drives each, mirrored is the solution. If people have a
 > better suggestion, tell me! :P 
 > 
 > 6 - OS will be Sol10U2, OS/Root FS will be installed on mirrored
 > drives, using UFS (my only choice..) 
 > 
 > Now, please eliminate CPU/RAM from this equation, assume the server
 > has 4 cores of goodness powering it, and 32 gigs of ram. No, running
 > on a ram-disk isn't what I'm asking for. :P 
 > 
 > * NFS being optional, just curious what the difference would be, as
 > getting a T1000 + building an external storage box is an option. I
 > just can't justify Sun's crazy storage pricing at the moment. 
 > 
 > How would ZFS perform (educated opinions, I realize I won't be getting
 > exact answers) in this situation. I can't be more specific because I
 > don't have the HW in front of me, I'm trying to get a feel for the
 > "correct" solution before I make huge purchases. 
 > 
 > If anything else is needed, please feel free to ask!
 > 
 > Thanks,
 > David
 > _______________________________________________
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
