On Thu, 2011-03-03 at 21:02 +0100, Roy Sigurd Karlsbakk wrote:
> > > Last I checked, it didn't help much. IMHO we need a driver that can
> > > display the drives in the order they're plugged in. Like Windoze.
> > > Like Linux. Like FreeBSD. I really don't understand what should be
> > > so hard to
Hi,
Has anyone run into this?
I hit this every time; it feels like somebody suddenly stepping on the brake
pedal in a car and then releasing it.
Thanks.
Fred
On Thu, Mar 3, 2011 at 4:28 PM, Roy Sigurd Karlsbakk wrote:
> Just a shot in the dark, but could this possibly be related to my issue as
> posted with the subject "Nasty zfs issue"?
I do not think they are directly related. I have seen some odd
behavior when I replace a failed drive before the resilver completes,
but nothing as dramatic as what
Just a shot in the dark, but could this possibly be related to my issue as
posted with the subject "Nasty zfs issue"?
roy
- Original Message -
> Apologies in advance as this is a Solaris 10 question and not an
> OpenSolaris issue (well, OK, it *may* also be an OpenSolaris issue).
> Syste
On Thu, Mar 3, 2011 at 2:08 PM, Cindy Swearingen wrote:
> I've seen some spare stickiness too and it's generally when I'm trying to
> simulate a drive failure (like you are below) without actually
> physically replacing the device.
>
> If I actually physically replace the failed drive, the spare i
On Thu, Mar 3, 2011 at 11:48 AM, Richard Elling wrote:
>> 1) zpool with multiple vdevs and hot spares
>> 2) multiple drive failures at once
>
> In my experience, hot spares do not help with the case where the failures
> are not explicitly drive failures. In those cases where I see multiple
> fai
> > Last I checked, it didn't help much. IMHO we need a driver that can
> > display the drives in the order they're plugged in. Like Windoze.
> > Like Linux. Like FreeBSD. I really don't understand what should be
> > so hard to do it like the others. As one said "I don't have their
> > sources", bo
Hi all
I have this pool with 11 7-drive RAIDz2 VDEVs, all WD Black 2TB (FASS) drives.
Another drive died recently, and I went to replace it. zpool offline, cfgadm -c
unconfigure, unplug, devfsadm, zpool replace. Now, after this, I realized the
resilver to a spare hadn't finished, so now it's tel
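[For reference, spelled out as commands, that replacement sequence is roughly
the following; "tank" and the device/attachment-point names are placeholders,
not the actual pool or disks from this report:]

  zpool offline tank c5t3d0          # take the failing disk out of service
  cfgadm -al                         # find the disk's attachment point (ap_id)
  cfgadm -c unconfigure <ap_id>      # unconfigure it so it can be pulled
  (physically swap the drive)
  devfsadm                           # rebuild the /dev links for the new disk
  zpool replace tank c5t3d0          # start the resilver onto the replacement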
> Date: Mon, 28 Feb 2011 22:02:37 +0100 (CET)
> From: Roy Sigurd Karlsbakk
>
> > > I cannot but agree. On Linux and Windoze (haven't tested FreeBSD),
> > > drives connected to an LSI9211 show up in the correct order, but not
> > > on OI/osol/S11ex (IIRC), and fmtopo doesn't always show a mapping
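[Side note: fmtopo is not on the default PATH; on Solaris-derived systems it
normally lives under /usr/lib/fm/fmd, so one way to inspect the disk topology
it reports is something like:]

  /usr/lib/fm/fmd/fmtopo | grep -i disk   # list topology nodes mentioning disks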
> On 02/28/11 22:39, Dave Pooser wrote:
> > On 2/28/11 4:23 PM, "Garrett D'Amore" wrote:
> >
> >> Drives are ordered in the order they are *enumerated* when they *first*
> >> show up in the system. *Ever*.
> >
> > Is the same true of controllers? That is, will c12 remain c12 or
> > /pci@0,0/pci
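[One way to check what a given controller number is bound to at any moment is
to follow the /dev/dsk symlinks; the device name below is only an example:]

  ls -l /dev/dsk/c12t0d0s0    # the symlink target is the /devices physical
                              # path (pci@.../disk@...) behind that c# name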
Hi Paul,
I've seen some spare stickiness too and it's generally when I'm trying to
simulate a drive failure (like you are below) without actually
physically replacing the device.
If I actually physically replace the failed drive, the spare is
detached automatically after the new device is resilve
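[When a spare does stay stuck in use after a simulated failure, the usual way
to hand it back to the available list is zpool detach; the pool and device
names here are made up for illustration:]

  zpool status tank           # the spare shows up as INUSE
  zpool detach tank c4t9d0    # detach the spare so it returns to AVAIL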
On Mar 3, 2011, at 6:45 AM, Paul Kraus wrote:
> Apologies in advance as this is a Solaris 10 question and not an
> OpenSolaris issue (well, OK, it *may* also be an OpenSolaris issue).
> System is a T2000 running Solaris 10U9 with latest ZFS patches (zpool
> version 22). Storage is a pile of J4400
Apologies in advance as this is a Solaris 10 question and not an
OpenSolaris issue (well, OK, it *may* also be an OpenSolaris issue).
System is a T2000 running Solaris 10U9 with latest ZFS patches (zpool
version 22). Storage is a pile of J4400 (5 of them).
I have run into what appears to be (Sun)
Hi,
This turned out to be a scheduler issue. The system was still running the
default TS scheduler. By switching to the FSS scheduler the performance was
back to what it was before the system was reinstalled.
When using the TS scheduler the writes would not evenly spread across the
drives. We
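[For anyone hitting the same thing on Solaris 10, the switch described above
is typically done with dispadmin and priocntl, roughly as follows; check your
own documentation before changing a production system:]

  dispadmin -d FSS            # make FSS the default scheduling class at boot
  priocntl -s -c FSS -i all   # move already-running processes into FSS now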