Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from Niagara engineering after we did some performance tests for them. I am trying to get a Thumper to run this data set, but that could take 3-4 months. Today we are watching 750 Sun Ray servers and 30,000 employees. Let's see:
1) Solaris 10
2) ZFS version 6
3) T2000 32x1000 with the poorer-performing drives that come with the Niagara

We need a short-term solution. Niagara engineering has given us two more of the internal drives, so we can max out the Niagara at 4 internal drives. This is the hardware we need to use this week. When we get a new box with more drives, we will reconfigure.

Our graphs have 5000 data points per month, 140 data points per day. We can stand to lose data.

My suggestion was one drive as the system volume and the remaining three drives as one big ZFS pool, probably raidz.
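
If it helps, here is a rough sketch of that layout, assuming the three non-system drives show up as c1t1d0 through c1t3d0 (made-up device names; substitute whatever format reports on the box) and using "canary" as the pool name, with Karen's directories carved out as separate filesystems:

    # single-parity raidz across the three non-system drives
    zpool create canary raidz c1t1d0 c1t2d0 c1t3d0

    # one filesystem per workload area so each can be watched/tuned separately
    zfs create canary/upload
    zfs create canary/archives
    zfs create canary/app
    zfs set mountpoint=/upload canary/upload
    zfs set mountpoint=/archives canary/archives
    zfs set mountpoint=/app canary/app

raidz across three drives gives us roughly two drives of usable space and survives a single drive failure, which seems like a reasonable trade-off given that we can stand to lose data anyway.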

thanks
sean


Torrey McMahon wrote:
Given the amount of I/O, wouldn't it make sense to get more drives involved, or something that has cache on the front end, or both? If you're really pushing the amount of I/O you're alluding to - hard to tell without all the details - then you're probably going to hit a limitation on the drive IOPS. (Even with the cache on.)

Karen Chau wrote:
Our application Canary has approx 750 clients uploading to the server
every 10 mins; that's approx 108,000 gzip tarballs per day written to
the /upload directory.  The parser untars each tarball, which consists
of 8 ASCII files, into the /archives directory.  /app is our application
and tools (apache, tomcat, etc.) directory.  We also have batch jobs that
run throughout the day; I would say we read 2 to 3 times more than we write.
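
(Quick sanity check on Karen's numbers: 750 clients x 6 uploads/hour x 24 hours = 108,000 tarballs/day, which averages out to roughly 1.25 tarballs/sec; with 8 files in each tarball that's on the order of 10 file creates/sec into /archives, plus the 2-3x read load from the batch jobs on top. That's the rate we're asking these internal drives to keep up with.)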

--
Sean Meighan
Mgr ITSM Engineering

Sun Microsystems, Inc.
US
Phone x32329 / +1 408 850-9537
Mobile 303-520-2024
Fax 408 850-9537
Email [EMAIL PROTECTED]

