Ben Rockwood wrote:
Eric Kustarz wrote:
Ben Rockwood wrote:
> I've got a Thumper doing nothing but serving NFS. It's using B43 with
> zil_disabled. The system is being consumed in waves, but by what I
> don't know. Notice vmstat:
We made several performance fixes in the NFS/ZFS area in recent
Ben Rockwood wrote:
I wanted to add one more piece of information to this problem that may
or may not be helpful.
On an NFS client, if we just run "ls" commands over and over and over, we
can snoop the wire and see TCP retransmits whenever the CPU is burned
up. nfsstat doesn't record these retransmits.
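For reference, a rough way to cross-check this from the client side (the interface name below is just a placeholder):
   # nfsstat -cr                          # client RPC stats; "retrans" here counts RPC-level retries only
   # netstat -s -P tcp | grep -i retrans  # kernel TCP counters such as tcpRetransSegs
   # snoop -d <interface> port 2049       # watch the NFS traffic itself on the wire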
Hi Luke,
I wonder if it is the HBA. We had issues with Solaris and LSI HBAs
back when we were using an Xserve RAID.
Haven't had any of the issues you're describing between our LSI array
and the Qlogic HBAs we're using now.
If you have another type of HBA I'd try it. MPXIO and ZFS haven't ever
c
Jason,
I am no longer considering running without STMS multipathing, because without STMS you
lose the binding to the array and I lose all transmissions between the server
and the array. The binding does come back after a few minutes, but this is not
acceptable in our environment.
Load times vary depe
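For anyone following along, enabling STMS (MPxIO) on Solaris 10 of this vintage is roughly the following; it prompts for a reboot and applies to the fibre channel (fp) ports:
   # stmsboot -e    # enable MPxIO; you'll be asked to reboot
   # stmsboot -L    # after the reboot, list non-STMS to STMS device name mappings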
Hi Dale,
For what it's worth, the SX releases tend to be pretty stable. I'm not
sure if snv_52 has made an SX release yet. We ran for over 6 months on
SX 10/05 (snv_23) with no downtime.
Best Regards,
Jason
On 12/7/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Dec 7, 2006, at 6:14 PM, Anton B. Rang wrote:
> Be careful here. If you are using files that have no
> data in them yet
> you will get much better compression than later in
> life. Judging by
> the fact that you got only 12.5x, I suspect that your
> files are at
> least partially populated. Expect the compression to
> get worse over
> time.
On Dec 7, 2006, at 6:14 PM, Anton B. Rang wrote:
This does look like the ATA driver bug rather than a ZFS issue per se.
Yes indeed. Well, that answers that. FWIW, I'm at hour 2 of a mysql
configure script run. Yow!
(For the curious, the reason ZFS triggers this when UFS doesn't is
because Z
On Dec 7, 2006, at 5:22 PM, Nicholas Senedzuk wrote:
You said you are running Solaris 10 FCS, but ZFS was not released
until Solaris 10 6/06, which is Solaris 10 U2.
Look at a Solaris 10 6/06 CD/DVD. Check out the Solaris_10/
UpgradePatches directory.
Ah! Well, whaddya know...
Yes, apply those
That's gotta be what it is. All our MySQL IOP issues have gone away
once we moved to RAID-1 from RAID-Z.
-J
On 12/7/06, Anton B. Rang <[EMAIL PROTECTED]> wrote:
This does look like the ATA driver bug rather than a ZFS issue per se.
(For the curious, the reason ZFS triggers this when UFS doesn't
On 12/7/06, Andrew Miller <[EMAIL PROTECTED]> wrote:
Quick question about the interaction of ZFS filesystem compression and the
filesystem cache. We have an Opensolaris (actually Nexenta alpha-6) box
running RRD collection. These files seem to be quite compressible. A test
filesystem conta
This does look like the ATA driver bug rather than a ZFS issue per se.
(For the curious, the reason ZFS triggers this when UFS doesn't is because ZFS
sends a synchronize cache command to the disk, which is not handled in DMA mode
by the controller; and for this particular controller, switching b
On Dec 7, 2006, at 1:46 PM, Jason J. W. Williams wrote:
Hi Dale,
Are you using MyISAM or InnoDB?
InnoDB.
Also, what's your zpool configuration?
A basic mirror:
[EMAIL PROTECTED]>zpool status
  pool: local
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ W
> I'm still confused though, I believe that locking an adaptive mutex will spin
> for a short
> period then context switch and so they shouldn't be burning CPU - at least
> not .4s worth!
An adaptive mutex will spin as long as the thread which holds the mutex is on
CPU. If the lock is moderate
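A quick way to see whether adaptive mutexes are mostly spinning or mostly blocking (and which caller is involved) is lockstat contention profiling, something like:
   # lockstat -C -D 15 sleep 10
Spins show up as "adaptive mutex spin" events and blocks as "adaptive mutex block"; if os_obj_lock really is the hot lock, the caller column should point back at dmu_object_alloc.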
> Looking at the source code overview, it looks like
> the compression happens "underneath" the ARC layer,
> so by that I am assuming the uncompressed blocks are
> cached, but I wanted to ask to be sure.
>
> Thanks!
> -Andy
>
> Yup, your assumption is correct. We currently do
> compression
On 12/8/06, Mark Maybee <[EMAIL PROTECTED]> wrote:
Yup, your assumption is correct. We currently do compression below the
ARC. We have contemplated caching data in compressed form, but have not
really explored the idea fully yet.
Hmm... interesting idea.
That will incur CPU to do a decompres
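Since compression happens below the ARC, the cache holds uncompressed blocks, so the ARC footprint reflects the uncompressed data size. A rough way to watch it (kstat names as of these builds):
   # kstat -p zfs:0:arcstats:size zfs:0:arcstats:c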
Andrew Miller wrote:
Quick question about the interaction of ZFS filesystem compression and the filesystem cache. We have an Opensolaris (actually Nexenta alpha-6) box running RRD collection. These files seem to be quite compressible. A test filesystem containing about 3,000 of these files sho
You said you are running Solaris 10 FCS, but ZFS was not released until
Solaris 10 6/06, which is Solaris 10 U2.
On 12/7/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
Hi Dale,
Are you using MyISAM or InnoDB? Also, what's your zpool configuration?
Best Regards,
Jason
On 12/7/06, Dale Ghent
Hi Luke,
That's terrific!
You know, you might be able to tell ZFS which disks to look at. I'm not
sure. It would be interesting if anyone with a Thumper could comment
on whether or not they see the import time issue. What are your load
times now with MPXIO?
Best Regards,
Jason
On 12/7/06, Lu
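For what it's worth, the mechanism for pointing ZFS at specific disks during import is the -d option to zpool import: point it at a directory containing (or symlinking to) just the devices in the pool and it should skip probing everything else. A sketch, with illustrative device names:
   # mkdir /tmp/pooldevs && cd /tmp/pooldevs
   # ln -s /dev/dsk/c0t1d0s0 .       # repeat for each device in the pool
   # zpool import -d /tmp/pooldevs poolname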
0.4 seconds of CPU on average to do a dmu_object_alloc is a wee bit sluggish!
I suspect, however, it's contention on osi->os_obj_lock, as we don't seem
to be looping in dmu_object_alloc. I'm still confused though; I believe
that locking an adaptive mutex will spin for a short period and then context
switch
Quick question about the interaction of ZFS filesystem compression and the
filesystem cache. We have an Opensolaris (actually Nexenta alpha-6) box
running RRD collection. These files seem to be quite compressible. A test
filesystem containing about 3,000 of these files shows a compressratio
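In case it helps anyone reproduce this, the savings are easy to confirm per dataset and per file (the dataset and file names below are just examples):
   # zfs get compression,compressratio pool/rrd
   # ls -l somefile.rrd    # logical file size
   # du -h somefile.rrd    # blocks actually allocated after compression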
Luke Schwab wrote:
Hi,
I am running Solaris 10 ZFS and I do not have STMS multipathing enabled. I have dual FC connections to storage using two ports on an Emulex HBA.
The Solaris ZFS admin guide says that a ZFS file system monitors disks by their path and their device ID. If a disk is
Hi Dale,
Are you using MyISAM or InnoDB? Also, what's your zpool configuration?
Best Regards,
Jason
On 12/7/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
Hey all, I run a netra X1 as the mysql db server for my small
personal web site. This X1 has two drives in it with SVM-mirrored UFS
slices for
Ben,
The attached dscript might help determining the zfs_create issue.
It prints:
- a count of all functions called from zfs_create
- average wall clock time of the 30 highest functions
- average cpu time of the 30 highest functions
Note, please ignore warnings of the fol
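For anyone who can't grab the attachment, a rough sketch of that kind of script is below; it is only an approximation of what the attached dscript does, and fbt:::entry over the whole kernel is heavyweight, so run it briefly:
   # dtrace -qn '
       fbt::zfs_create:entry  { self->in = vtimestamp; }
       fbt:::entry /self->in/ { @calls[probefunc] = count(); }
       fbt::zfs_create:return /self->in/ {
           @cpu["avg zfs_create cpu (ns)"] = avg(vtimestamp - self->in);
           self->in = 0;
       }
       END { trunc(@calls, 30); }'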
Ben Rockwood wrote:
> I've got a Thumper doing nothing but serving NFS. It's using B43 with
> zil_disabled. The system is being consumed in waves, but by what I
> don't know. Notice vmstat:
We made several performance fixes in the NFS/ZFS area in recent builds,
so if possible it would be great
Hi Ben,
Your sar output shows one core pegged pretty much constantly! From the Solaris
Performance and Tools book, the SLP state value covers "the remainder of important
events such as disk and network waits, along with other kernel wait
events... kernel locks or condition variables also a
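If that SLP figure is coming from microstate accounting, it's worth watching it per thread while the spikes are happening, along with the per-CPU picture; roughly:
   # prstat -mL 5    # per-LWP microstates (USR SYS ... LCK SLP LAT)
   # mpstat 5        # per-CPU breakdown, to confirm which core is pegged and on what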
> I am about to plan an upgrade of about 500 systems (sparc) to Solaris 10 and
> would like to go for ZFS to manage the rootdisk. But what timeframe are we
> looking at?
I've heard update 5, so several months at least.
> and what should we take into account to be able to migrate to it
> later on?
Hey all, I run a netra X1 as the mysql db server for my small
personal web site. This X1 has two drives in it with SVM-mirrored UFS
slices for / and /var, a swap slice, and slice 7 is zfs. There is one
zfs mirror pool called "local" on which there are a few file systems,
one of which is f
> Why is everyone strongly recommending using a whole disk (not part
> of a disk) when creating zpools / ZFS file systems?
One thing is performance; ZFS can enable/disable the write cache on the disk
at will if it has full control over the entire disk.
ZFS will also flush the WC when necessary
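A concrete way to see the difference (device names are just examples): give ZFS the whole disk and it labels it EFI and manages the write cache itself; give it only a slice and the cache setting is left alone. The current setting is visible in format's expert mode, on disks that support it.
   # zpool create tank c1t2d0      # whole disk: ZFS can toggle the write cache
   # zpool create tank c1t2d0s0    # slice only (alternative): write cache left as-is
   # format -e                     # select the disk, then: cache -> write_cache -> display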
Hi
I am about to plan an upgrade of about 500 systems (sparc) to Solaris 10 and
would like to go for ZFS to manage the rootdisk. But what timeframe are we
looking at? And what should we take into account to be able to migrate to it
later on?
--
// Flemming Danielsen
Hey Ben - I need more time to look at this and connect some dots,
but real quick:
Some nfsstat data that we could use to potentially correlate to the local
server activity would be interesting. zfs_create() seems to be the
heavy hitter, but a periodic kernel profile (especially if we can catc
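If it helps, the usual quick-and-dirty kernel profile is a profile-provider one-liner like the one below (sample rate and duration are arbitrary), run on the server while the CPU burn is happening, alongside nfsstat -s for the NFS-side numbers:
   # dtrace -qn 'profile-997 /arg0/ { @[stack()] = count(); }
                 tick-30s { trunc(@, 20); exit(0); }'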
On 07 December, 2006 - dudekula mastan sent me these 2,9K bytes:
> Hi Folks,
>
> The man pages for ZFS and ZPOOL clearly say that it is not recommended
> to use only a portion of a device when creating a ZFS file system.
>
> What exactly are the problems if we use only some portion
I've got a Thumper doing nothing but serving NFS. It's using B43 with
zil_disabled. The system is being consumed in waves, but by what I don't know.
Notice vmstat:
3 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 0 0 926 91 703 0 25 75
21 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 13
Hi Folks,
The man pages for ZFS and ZPOOL clearly say that it is not recommended to use
only a portion of a device when creating a ZFS file system.
What exactly are the problems if we use only some portion of the disk space for
a ZFS FS?
or
The whole RAID does not fail -- we are talking about corruption
here. If you lose some inodes, your whole partition is not gone.
My ZFS pool could not be salvaged -- poof, the whole thing was gone (granted
it was a test pool and not a raidz or mirror yet). But still, for
what happened, I cannot believe t