Our application Canary has approx 750 clients uploading to the server
every 10 mins; that's approx 108,000 gzip tarballs per day written to
the /upload directory.  The parser untars each tarball, which consists
of 8 ASCII files, into the /archives directory.  /app is our
application and tools (Apache, Tomcat, etc.) directory.  We also have
batch jobs that run throughout the day; I would say we read 2 to 3
times more than we write.
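
For reference, the parse step is roughly the following; this loop is a
simplified sketch, not our actual parser:

   # untar each upload into /archives, then remove the tarball
   for f in /upload/*.tar.gz; do
       ( cd /archives && gzcat "$f" | tar xf - ) && rm "$f"
   done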

Since we have an alternate server, downtime or data loss is somewhat
acceptable.  How can we best lay out our filesystems to get the most
performance?

directory info
--------------
/app      - 30G
/upload   - 10G
/archives - 35G

HW info
-------
System Configuration:  Sun Microsystems  sun4v Sun Fire T200
System clock frequency: 200 MHz
Memory size: 8184 Megabytes
CPU: 32 x 1000 MHz  SUNW,UltraSPARC-T1
Disks: 4x68G
  Vendor:   FUJITSU
  Product:  MAV2073RCSUN72G
  Revision: 0301


We plan on using 1 disk for the OS and the other 3 disks for the
Canary filesystems /app, /upload, and /archives.  Should I create 3
pools, i.e.
   zpool create canary_app c1t1d0
   zpool create canary_upload c1t2d0
   zpool create canary_archives c1t3d0

--OR--
create 1 pool using a dynamic stripe, i.e.
   zpool create canary c1t1d0 c1t2d0 c1t3d0

--OR--
create a single-parity RAID-Z pool, i.e.
   zpool create canary raidz c1t1d0 c1t2d0 c1t3d0

Which option gives us the best performance?  If there's another method
that's not mentioned, please let me know.
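
For what it's worth, with either single-pool option I assume we'd
still create one filesystem per directory, something like:

   zpool create canary c1t1d0 c1t2d0 c1t3d0
   zfs create canary/app
   zfs create canary/upload
   zfs create canary/archives
   zfs set mountpoint=/app canary/app
   zfs set mountpoint=/upload canary/upload
   zfs set mountpoint=/archives canary/archives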

Also, should we enable the read/write cache on the OS disk as well as
the other disks?
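
I assume the way to check/toggle the per-disk write cache is the
cache submenu of format -e, e.g. (a sketch; "display" shows the
current state, "enable"/"disable" change it):

   format -e c1t1d0
     format> cache
     cache> write_cache
     write_cache> display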

Is build 9 in S10U2 RR?  If not, please point me to the OS image on
nana.eng.


Thanks,
karen


