On Tue, Jul 25, 2006 at 03:39:11PM -0700, Karen Chau wrote:
> Our application Canary has approx 750 clients uploading to the server
> every 10 mins, that's approx 108,000 gzip tarballs per day writing to
> the /upload directory.  The parser untars the tarball which consists of
> 8 ascii files into the /archives directory.  /app is our application and
> tools (apache, tomcat, etc) directory.  We also have batch jobs that run
> throughout the day, I would say we read 2 to 3 times more than we write.
> 
> Since we have an alternate server, downtime or data loss is somewhat
> acceptable.  How can we best lay out our filesystems to get the most
> performance?
> 
> directory info
> --------------
> /app      - 30G
> /upload   - 10G
> /archives - 35G
> 
> HW info
> -------
> System Configuration:  Sun Microsystems  sun4v Sun Fire T200
> System clock frequency: 200 MHz
> Memory size: 8184 Megabytes
> CPU: 32 x 1000 MHz  SUNW,UltraSPARC-T1
> Disks: 4x68G
>   Vendor:   FUJITSU
>   Product:  MAV2073RCSUN72G
>   Revision: 0301
> 
> 
> We plan on using 1 disk for the OS, and the other 3 disks for the canary
> filesystems: /app, /upload, and /archives.  Should I create 3 pools, i.e.
>    zpool create canary_app c1t1d0
>    zpool create canary_upload c1t2d0
>    zpool create canary_archives c1t3d0
> 
> --OR--
> create 1 pool using a dynamic stripe, i.e.
>    zpool create canary c1t1d0 c1t2d0 c1t3d0
> 
> --OR--
> create a single-parity raid-z pool, i.e.
>    zpool create canary raidz c1t1d0 c1t2d0 c1t3d0
> 
> Which option gives us the best performance?  If there's another method
> that's not mentioned, please let me know.

You should create a single pool of a RAID-Z stripe.  This will give you
approximately 136G of usable space (two of the three disks' worth), and
if you turn on compression (on everything but /upload, since that data
is already gzipped) you'll get much more.  You'll also have some data
redundancy in case one of the disks fails.  Simply create 3 datasets,
along the lines of:

        # zpool create canary raidz c1t1d0 c1t2d0 c1t3d0
        # zfs set mountpoint=none canary
        # zfs set compression=on canary
        # zfs create canary/app
        # zfs set mountpoint=/app canary/app
        # zfs create canary/upload
        # zfs set mountpoint=/upload canary/upload
        # zfs set compression=off canary/upload
        # zfs create canary/archives
        # zfs set mountpoint=/archives canary/archives

This will give you reasonable performance.  If that isn't enough, you
could instead do a 3-way mirror (which gives you redundancy, but at 68G
it's smaller than the ~75G your directories already use) or a dynamic
stripe (which gives you better performance and ~204G, but no data
redundancy).  I would try both configurations, benchmark your app, and
see whether raidz is actually a bottleneck (my guess is it won't be).
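For the benchmarking, a crude sequential-write probe like the sketch
below is a reasonable starting point (the target directory is just an
illustration; point it at /upload or /archives on the pool under test,
and nothing here is Canary-specific).  One caveat: writing zeros onto a
compressed dataset will wildly overstate throughput, so on the
compression=on filesystems replay some of your real tarballs instead.

```shell
# Crude sequential-write probe: write 128 MB, report MB/s, clean up.
# TARGET defaults to a temp dir; pass the filesystem to test, e.g. /upload.
TARGET=${1:-${TMPDIR:-/tmp}}
MB=128

start=$(date +%s)
dd if=/dev/zero of="$TARGET/zfs_bench.tmp" bs=1048576 count=$MB 2>/dev/null
end=$(date +%s)

elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1     # avoid divide-by-zero on fast runs
echo "wrote ${MB} MB in ${elapsed}s (~$((MB / elapsed)) MB/s)"
rm -f "$TARGET/zfs_bench.tmp"
```

Run it once per candidate layout (destroying and re-creating the pool in
between) and compare the numbers under a read load as well, since you
read 2-3x more than you write.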

> Also, should we enable the read/write cache on the OS disk as well as
> the other disks?

If you give zpool(1M) whole disks (i.e. device names with no 's0' slice
suffix) and let it label and use the disks, it will automatically enable
the write cache for you.
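For example, using the device names from your mail (the s0 form is shown
only as the case to avoid):

```shell
# Whole disks: ZFS labels them itself and enables the write cache.
zpool create canary raidz c1t1d0 c1t2d0 c1t3d0

# Slices: ZFS uses the existing label and leaves the write cache alone.
zpool create canary raidz c1t1d0s0 c1t2d0s0 c1t3d0s0
```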

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
