Sean,
This is looking better! Once you pick up the latest ZFS changes that we
just putback into s10, you will be able to upgrade to ZFS version 3, which
provides key features such as hot spares, double-parity RAID-Z (RAID-6),
clone promotion, and fast snapshots. Additionally, there are further
performance gains that will probably help you out.
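You can see exactly what each on-disk version adds with something like:
# zpool upgrade -v
(the feature list it prints depends on the bits you are running).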
Thanks,
George
Sean Meighan wrote:
Hi George; life is better for us now.
We upgraded to s10s_u3wos_01 last Friday on itsm-mpk-2.sfbay, the
production Canary server (http://canary.sfbay). What do we look like now?
# zpool upgrade
This system is currently running ZFS version 2.
All pools are formatted using this version.
We added two more lower-performance disk drives last Friday, going from
two mirrored drives to four. Now we look like this on our T2000:
(1) 68 GB drive running unmirrored for the system
(3) 68 GB drives set up as raidz
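(For reference, a pool laid out like this is typically created in one shot,
along the lines of:
# zpool create canary raidz c1t1d0 c1t2d0 c1t3d0
using the same device names that show up in the status output below.)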
# zpool status
  pool: canary
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        canary      ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
Our 100%-busy disk drive from previous weeks is now three drives. iostat now
shows that no single drive is reaching 100%. Here is an "iostat -xn 1 99":
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
4.0 0.0 136.0 0.0 0.0 0.0 0.0 5.3 0 2 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0
0.0 288.9 0.0 939.3 0.0 7.0 0.0 24.1 1 74 c1t1d0
0.0 300.9 0.0 940.8 0.0 6.2 0.0 20.7 1 72 c1t2d0
0.0 323.9 0.0 927.8 0.0 5.3 0.0 16.5 1 63 c1t3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 itsm-mpk-2:vold(pid334)
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0
0.0 70.9 0.0 118.8 0.0 0.5 0.0 7.6 0 28 c1t1d0
0.0 74.9 0.0 124.3 0.0 0.5 0.0 6.1 0 26 c1t2d0
0.0 75.8 0.0 120.3 0.0 0.5 0.0 7.2 0 27 c1t3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 itsm-mpk-2:vold(pid334)
Here is our old box:
# more /etc/release
Solaris 10 6/06 s10s_u2wos_06 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 30 March 2006
# pkginfo -l SUNWzfsr
PKGINST: SUNWzfsr
NAME: ZFS (Root)
CATEGORY: system
ARCH: sparc
VERSION: 11.10.0,REV=2006.03.22.02.15
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: ZFS root components
PSTAMP: on10-patch20060322021857
INSTDATE: Apr 04 2006 13:52
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 18 installed pathnames
5 shared pathnames
7 directories
4 executables
1811 blocks used (approx)
Here is the current version:
# more /etc/release
Solaris 10 11/06 s10s_u3wos_01 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 June 2006
# pkginfo -l SUNWzfsr
PKGINST: SUNWzfsr
NAME: ZFS (Root)
CATEGORY: system
ARCH: sparc
VERSION: 11.10.0,REV=2006.05.18.02.15
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: ZFS root components
PSTAMP: on10-patch20060315140831
INSTDATE: Jul 27 2006 12:10
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 18 installed pathnames
5 shared pathnames
7 directories
4 executables
1831 blocks used (approx)
In my opinion the 2.5" disk drives in the Niagara box were not
designed to receive one million files per day. These two extra drives
(thanks, Denis!) have given us acceptable performance. I still want a
Thumper *smile*. It is pretty amazing that we have 800 servers, 30,000
users, and 140 million lines of ASCII per day all fitting in a 2U T2000 box!
thanks
sean
George Wilson wrote:
Sean,
Sorry for the delay getting back to you.
You can do a 'zpool upgrade' to see what version of the on-disk format
your pool is currently running. The latest version is 3. You can then
issue 'zpool upgrade <pool>' to upgrade. Keep in mind that the
upgrade is a one-way ticket and can't be rolled back.
ZFS can be upgraded just by applying patches. So if you were running
Solaris 10 6/06 (a.k.a. u2), you could apply the patches that will come
out when u3 ships, then issue the 'zpool upgrade' command to get the
functionality you need.
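In other words, roughly:
# zpool upgrade          (reports the on-disk version your pools are using)
# zpool upgrade <pool>   (moves that pool to the latest version; irreversible)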
Does this help? Can you send me the output of 'zpool upgrade' on your
system?
Thanks,
George
Sean Meighan wrote:
Hi George; we are trying to build our server today. We should have
the four disk drives mounted by this afternoon.
Separate question: we were on an old ZFS version; how could we have
upgraded to a newer version? Do we really have to reinstall Solaris to
upgrade ZFS?
thanks
sean
George Wilson wrote:
Sean,
The gate for s10u3_03 closed yesterday and I think the DVD image
will be available early next week. I'll keep you posted. If you want
to try this out before then, I can provide you the binaries to run
on top of s10u3_02.
Thanks,
George
Sean Meighan wrote:
George; is there a link to s10u3_03? My team would be happy to put
the latest in.
thanks
sean
George Wilson wrote:
Karen and Sean,
You mention ZFS version 6; do you mean that you are running
s10u2_06? If so, then you definitely want to upgrade to the RR
version of s10u2, which is s10u2_09a.
Additionally, I've just putback the latest feature set and
bug fixes, which will be part of s10u3_03. There are some
additional performance fixes that may really benefit you, plus it
will provide hot spares support. Once this build is available I would
highly recommend that you take it for a spin (it works great on
Thumper).
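Once you are on bits with hot spare support, adding a spare to a pool is a
one-liner, along the lines of (c1t4d0 here is just a placeholder device):
# zpool add <pool> spare c1t4d0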
Thanks,
George
Sean Meighan wrote:
Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from
Niagara engineering after we did some performance tests for them.
I am trying to get a Thumper to run this data set. This could
take up to 3-4 months. Today we are watching 750 Sun Ray servers
and 30,000 employees. Let's see:
1) Solaris 10
2) ZFS version 6
3) T2000 32x1000 with the poorer-performing drives that come
with the Niagara
We need a short-term solution. Niagara engineering has given us
two more of the internal drives so we can max out the Niagara
with 4 internal drives. This is the hardware we need to use this
week. When we get a new box and more drives we will reconfigure.
Our graphs have 5000 data points per month, 140 data points per
day; we can stand to lose data.
My suggestion was one drive as the system volume and the
remaining three drives as one big ZFS volume, probably raidz.
thanks
sean
Torrey McMahon wrote:
Given the amount of I/O, wouldn't it make sense to get more
drives involved, or something that has cache on the front end, or
both? If you're really pushing the amount of I/O you're alluding
to (hard to tell without all the details), then you're probably
going to hit a limitation on the drive IOPS, even with the
cache on.
Karen Chau wrote:
Our application Canary has approximately 750 clients uploading to the
server every 10 minutes; that's approximately 108,000 gzip tarballs per
day written to the /upload directory. The parser untars each tarball,
which consists of 8 ASCII files, into the /archives directory. /app is
our application and tools (Apache, Tomcat, etc.) directory. We also
have batch jobs that run throughout the day; I would say we read 2 to 3
times more than we write.
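For scale: 750 clients x 6 uploads per hour x 24 hours = 108,000 tarballs
per day, and at 8 ASCII files per tarball that is roughly 864,000 small
files landing in /archives every day.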
--
Sean Meighan
Mgr ITSM Engineering
Sun Microsystems, Inc.
US
Phone x32329 / +1 408 850-9537
Mobile 303-520-2024
Fax 408 850-9537
Email [EMAIL PROTECTED]
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss