On Thu, Oct 27, 2011 at 10:49:22AM +1100, afree...@mac.com wrote:
> Hi all,
>
> I'm seeing some puzzling behaviour with my RAID-Z.
>
Indeed. Start with zdb -l on each of the disks to look at the labels in more
detail.
--
Dan.
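For instance, dumping the labels of one pool member looks like this (the
device path is hypothetical; substitute your actual disks):

# zdb -l /dev/ad4

Each healthy member should print four identical labels (LABEL 0-3) carrying
the pool name, pool_guid, and that disk's own guid; a disk whose labels are
missing or disagree is the one to look at more closely.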
Hi all,
I'm seeing some puzzling behaviour with my RAID-Z.
loki# uname -a
FreeBSD loki.local 8.2-RELEASE-p1 FreeBSD 8.2-RELEASE-p1 #4: Sat Apr 30
10:39:46 PDT 2011
jpaet...@servant.ixsystems.com:/usr/home/jpaetzel/freenas/obj.amd64/usr/home/jpaetzel/freenas/FreeBSD/src/sys/FREENAS.amd64
Hi Doug,
The "vms" pool was created in a non-redundant way, so there is no way to
get the data off of it unless you can put back the original c0t3d0 disk.
If you can still plug in the disk, you can always do a zpool replace on it
afterwards.
If not, you'll need to restore from backup, pref
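A sketch of that last zpool replace step, assuming the pool imports again once
the original disk is back (names taken from this thread): physically swap a
new disk into the same location later, then run

# zpool replace vms c0t3d0

The single-device form tells ZFS a new disk now sits where c0t3d0 was and
resilvers onto it.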
Help - I've got a bad disk in a zpool and need to replace it. I've got an
extra drive that's not being used, although it's still marked like it's in a
pool. So I need to get the "xvm" pool destroyed, c0t5d0 marked as available,
and replace c0t3d0 with c0t5d0.
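A sketch of the sequence being asked for (note zpool destroy is destructive -
anything still on "xvm" is gone; "vms" as the degraded pool's name is taken
from the reply above):

# zpool destroy xvm
# zpool replace vms c0t3d0 c0t5d0

Destroying "xvm" releases c0t5d0, and the replace then resilvers the pool's
data onto it before detaching c0t3d0.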
Hello,
I have an EON server installed on which a drive mysteriously went offline a few
times, so I decided to replace the drive with one I connected to another port.
Unfortunately the replace operation failed, I think because of hardware issues
with the replacement drive. I bought a new replacement…
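When a replace is stuck on a bad replacement drive, one option (device names
here are placeholders, not from the original post) is to cancel it by
detaching the new half of the replacing vdev and starting over:

# zpool detach <pool> <bad-replacement-disk>
# zpool replace <pool> <old-disk> <new-disk>

Only a sketch - whether the detach succeeds depends on the state the failed
replace left behind.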
For the record, in case anyone else experiences this behaviour: I tried
various things, which failed, and finally, as a last-ditch effort, upgraded my
FreeBSD, giving me zpool v14 rather than v13 - and now it's resilvering as it
should.
Michael
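For anyone hitting the same thing, checking and raising the pool version
looks roughly like this (pool name hypothetical):

# zpool upgrade
# zpool upgrade tank

The bare command lists what version the system supports and which pools lag
behind; naming a pool upgrades it. Note this is one-way: older kernels can no
longer import an upgraded pool.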
On Monday 17 May 2010 09:26:23 Michael Donaghy wrote:
Hi,
I recently moved to a freebsd/zfs system for the sake of data integrity, after
losing my data on linux. I've now had my first hard disk failure; the BIOS
refused to even boot with the failed drive (ad18) connected, so I removed it.
I have another drive, ad16, which had enough space to replace…
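Assuming the pool still imports without ad18, the swap itself would be a
one-liner (pool name hypothetical):

# zpool replace <pool> ad18 ad16

ZFS then resilvers the missing data onto ad16, and zpool status shows the
progress.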
$ cat /etc/release
Solaris Express Community Edition snv_114 X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 04 May 2009
I recently replaced two drives in a…
2009.06 is v111b, but you're running v111a. I don't know, but perhaps the a->b
transition addressed this issue, among others?
I forgot to mention this is a
SunOS biscotto 5.11 snv_111a i86pc i386 i86pc
version.
Maurilio.
Hi,
I have a PC where a pool suffered a disk failure. I replaced the failed disk
and the pool resilvered but, after resilvering, it was in this state:
mauri...@biscotto:~# zpool status iscsi
  pool: iscsi
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
…
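When a pool stays DEGRADED after a resilver because the old half of a
replacing vdev never got detached, the usual nudge is (old device name
hypothetical, since the status output above is cut off):

# zpool detach iscsi <old-disk>

followed by zpool status to confirm the pool comes back ONLINE.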
[EMAIL PROTECTED] said:
Thanks for the tips. I'm not sure if they will be relevant, though. We don't
talk directly with the AMS1000. We are using a USP-VM to virtualize all of our
storage and we didn't have to add anything to the drv configuration files to
see the new disk (mpxio was already turned on). We are using…
[EMAIL PROTECTED] said:
I think we found the choke point. The silver lining is that it isn't the T2000
or ZFS. We think it is the new SAN, a Hitachi AMS1000, which has 7200RPM SATA
disks with the cache turned off. This system has a very small cache, and when
we did turn it on for one of the replacement LUNs we saw…
It's something we've considered here as well.
Would any of this have to do with the system being a T2000? Would ZFS
resilvering be affected by single-threadedness, the slowish UltraSPARC T1
clock speed, or the lack of strong FPU performance?
On 12/1/08, Alan Rubin <[EMAIL PROTECTED]> wrote:
We will be considering it in the new year, but that will not happen in time to
affect our current SAN migration.
Have you considered moving to 10/08? ZFS resilver performance is
much improved in this release, and I suspect that code might help you.
You can easily test upgrading with Live Upgrade. I did the transition
using LU and was very happy with the results.
For example, I added a disk to a mirror and…
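That mirror-attach test is a one-liner (pool and device names hypothetical):

# zpool attach tank c0t0d0 c0t1d0

which adds c0t1d0 as a mirror of c0t0d0 and kicks off a resilver - a handy
way to compare resilver speed before and after the upgrade.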
I had posted at the Sun forums, but it was recommended to me to try here as
well. For reference, please see
http://forums.sun.com/thread.jspa?threadID=5351916&tstart=0.
In the process of a large SAN migration project we are moving many large
volumes from the old SAN to the new. We are making use of…
I have been trying to replace a disk in a raidz1 zpool for a few days
now; whatever I try, ZFS keeps using the original disk rather than the
replacement.
I'm running snv_95
-
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        …
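One thing worth checking in this situation: if the replacement disk ever
belonged to a pool, stale vdev labels on it can confuse the replace. A hedged
sketch ("tank" is from the status above; device names are hypothetical):

# zdb -l /dev/rdsk/<new-disk>s0
# zpool replace -f tank <old-disk> <new-disk>

The zdb pass shows any leftover pool_guid on the new disk; -f forces the
replace even if the disk looks like it is in use by another pool.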
Hi,
on a Solaris 10u5 box (X4500) with latest patches (Oct 8) one disk was
marked as failed. We replaced it yesterday, I configured it via cfgadm
and told ZFS to replace it with the replacement:
cfgadm -c configure sata1/4
zpool replace atlashome c1t4d0
Initially it looked well; resilvering started…
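The resilver's progress, and any errors that interrupt it, can be watched
with (pool name taken from the commands above):

# zpool status -v atlashome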
Marc,
Thanks - you were right - I had two identical drives and I mixed them
up. It's going through the resilver process now... I expect it will
run all night.
Breandan
On Jul 27, 2008, at 11:20 PM, Marc Bevand wrote:
It looks like you *think* you are trying to add the new drive, when you are in
fact re-adding the old (failing) one. A new drive should never show up as
ONLINE in a pool with no action on your part, if only because it contains no
partition and no vdev label with the right pool GUID.
If I am right…
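A quick way to check which physical disk is actually attached (device path
hypothetical):

# zdb -l /dev/rdsk/c1t5d0s0 | grep -i guid

A genuinely new, unlabeled drive prints no labels at all, while the old
failing disk still carries the pool's GUID in all four labels.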
I had a drive fail in my home fileserver - I've replaced the drive,
but I can't make the system see it properly. I'm running nevada B85,
with 5 750GB drives in a raidz1 pool named "tank" and booting off a
separate 80 GB SATA drive I had laying around.
Without the new drive attached, I simply…
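On Nevada, after physically swapping the drive, a rough sequence ("tank" is
the pool named above; device names are hypothetical) is to rescan for the new
disk and then point the pool at it:

# devfsadm -c disk
# zpool replace tank <old-device> <new-device>

devfsadm rebuilds the /dev disk links so the new drive gets a device node,
and the replace then resilvers the raidz1 onto it.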