Okay, I fixed it - I did an export/import under 111b and that seemed to work. I'm
not sure why I didn't try that before!
On further examination, it seems that my /etc/zfs/zpool.cache didn't survive
the pkg image-update properly - those files were different in my two rpools
whereas I'd have expecte
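For anyone hitting the same thing, a rough sketch of that recovery, assuming a
data pool named tank (a stand-in name; the root pool itself can't be exported
while you're booted from it). Exporting and re-importing rewrites
/etc/zfs/zpool.cache with the pool's current device paths:
# zpool export tank
# zpool import tank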
I am having a strange problem with Live Upgrade of a ZFS boot environment. I found
a similar discussion on zones-discuss, but this happens for me on installs
with and without zones, so I do not think it is related to zones. I have been
able to reproduce this on both sparc (ldom) and x86 (phys
More info - my device ids changed recently when I changed sata controllers (and
motherboards). My root pool is fine on a different disk.
The mystifying thing to me is that my raidz pool works great under 2008.11 but
not under 2009.06. Was there a known change in functionality which could lead to the
Cindy
How does the SS7000 do it?
Today I demoed pulling a disk and the spare just automatically became
part of the pool. After it was resilvered, I then pulled three more
(latest Q3 version with triple RAID-Z). I then plugged all the drives
back in (different slots) and everything was back t
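On a plain OpenSolaris pool, the closest equivalent is a hot spare plus the
autoreplace property; a rough sketch, with tank and c0t7d0 as stand-in names:
# zpool add tank spare c0t7d0
# zpool set autoreplace=on tank
With autoreplace=on, a new disk found in the same physical slot as a failed
pool disk is formatted and replaces it automatically.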
> You should just be able to detach 'c0t6d0' in the config below. The
> spare (c0t7d0) will assume its place and be removed from the idle spare
> list, becoming a "normal" vdev in the process.
Yes, that's what I thought too. This is build 124 bfu'd.
See the output below when I just detach the s
On 10/14/09 14:33, Cindy Swearingen wrote:
Hi Eric,
I tried that and found that I needed to detach and remove
the spare before replacing the failed disk with the spare
disk.
You should just be able to detach 'c0t6d0' in the config below. The
spare (c0t7d0) will assume its place and be remove
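So the suggested sequence would be a single detach, using the device names from
the test pool shown below:
# zpool detach test c0t6d0
# zpool status test
After the detach, c0t7d0 should show up as a regular vdev rather than an in-use
spare, though as this thread shows, that is not quite what happened in practice.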
> "cs" == Cindy Swearingen writes:
cs> # zpool detach test c0t7d0
cs> # zpool remove test c0t7d0
cs> # zpool replace test c0t6d0 c0t7d0
This is less than ideal because it unnecessarily leaves the pool's
redundancy reduced while the replacement resilver is happening.
During this
I think it is difficult to cover all the possible ways to replace
a disk with a spare.
This example in the ZFS Admin Guide didn't work for me:
http://docs.sun.com/app/docs/doc/819-5461/gcvcw?a=view
See the manual replacement example. After the zpool detach and
zpool replace operations, the spar
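For comparison, the manual replacement flow is, roughly (with hypothetical
device names), to replace the failed disk with a new one and then detach the
spare so it returns to the available list:
# zpool replace tank c0t6d0 c0t8d0
# zpool detach tank c0t7d0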
Hi Eric,
I tried that and found that I needed to detach and remove
the spare before replacing the failed disk with the spare
disk.
What actually worked is below.
Thanks,
Cindy
# zpool status test
pool: test
state: DEGRADED
status: One or more devices could not be opened. Sufficient replic
On 10/14/09 14:26, Jason Frank wrote:
Thank you, that did the trick. That's not terribly obvious from the
man page though. The man page says it detaches the devices from a
mirror, and I had a raidz2. Since I'm messing with production data, I
decided I wasn't going to chance it when I was readi
On 10/14/09 14:17, Cindy Swearingen wrote:
Hi Jason,
I think you are asking how do you tell ZFS that you want to replace the
failed disk c8t7d0 with the spare, c8t11d0?
I just tried doing this on my Nevada build 124 lab system, simulating a
disk failure and using zpool replace to replace the faile
Hi Jason,
I think you are asking how do you tell ZFS that you want to replace the
failed disk c8t7d0 with the spare, c8t11d0?
I just tried doing this on my Nevada build 124 lab system, simulating a
disk failure and using zpool replace to replace the failed disk with
the spare. The spare is now busy
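A rough sketch of that step, using the device names from this thread and tank
as a stand-in pool name:
# zpool replace tank c8t7d0 c8t11d0
# zpool status tank
In the status output the spare shows up as INUSE while it resilvers, which is
the "busy" state mentioned above.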
So, my Areca controller has been complaining via email of read errors for a
couple of days on SATA channel 8. The disk finally gave up last night at 17:40.
I have to say I really appreciate the Areca controller taking such good care of
me.
For some reason, I wasn't able to log into the server las
Where is the original post? I was wondering what the steps would be to replace
with larger drives.
Thanks,
Greg
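For the larger-drive question, the usual sketch (with stand-in pool and device
names) is to replace one disk at a time and let each resilver finish before
moving on:
# zpool replace tank c0t1d0 c2t1d0
# zpool status tank
On builds that have the autoexpand property, setting it to on lets the pool
grow into the extra space once every disk has been replaced:
# zpool set autoexpand=on tank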
Hi Rodney,
I've not seen this problem.
Did you install using LiveCD or the automated installer?
Here are some things to try/think about:
1. After a reboot with no swap or dump devices, run this command:
# zfs volinit
If this works, then this command isn't getting run on boot.
Let me know th
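A couple of follow-up checks, assuming the default rpool/swap volume name:
# ls -l /dev/zvol/dsk/rpool/swap
# swap -l
# swap -a /dev/zvol/dsk/rpool/swap
The first shows whether the zvol device link exists, swap -l lists the swap
devices that are actually active, and swap -a adds the volume manually if it
is missing.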
On Tue, Oct 13, 2009 at 10:59:37PM -0600, Drew Balfour wrote:
...
> For Opensolaris, Solaris CIFS != samba. Solaris now has a native in kernel
> CIFS server which has nothing to do with samba. Apart from having its
> commands start with "smb", which can be confusing.
>
> http://www.opensolaris.
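A minimal sketch of the native server, with a hypothetical dataset name; the
smb/server service and the sharesmb property are the in-kernel CIFS pieces,
not samba:
# svcadm enable -r smb/server
# zfs set sharesmb=name=export tank/export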
Julio wrote:
Hi,
I have the following partitions on my laptop, an Inspiron 6000, from fdisk:
Partition   Status    Type        Start    End   Length    %
    1                 Other OS        0     11       12     0
    2                 EXT LBA        12   2561     2550    26
    3       Active    Solaris2     2562   9728     7167    74
What are you running there? snv or OpenSolaris?
Could you try an OpenSolaris 2009.06 live disc and boot directly from that?
Once I was running that build, every single hot plug I tried worked flawlessly.
I tried for several hours to replicate the problems that caused me to log that
bug report
> "sj" == Shawn Joy writes:
sj> "ZFS will handle the drive failures gracefully as part of the
sj> BUG 6322646 fix in the case of non-redundant configurations by
sj> degrading the pool instead of initiating a system panic with
sj> the help of Solaris[TM] FMA
The problem was n
Just to put closure to this discussion about how CRs 6565042 and 6322646 change
how ZFS functions in the scenario below.
>ZFS no longer has the issue where loss of a single device (even
>intermittently) causes pool corruption. That's been fixed.
>
>That is, there used to be an issue in this
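Related to this, the failmode pool property (on builds that have it) controls
what the pool does when its devices are lost; a quick sketch against a
hypothetical pool name:
# zpool get failmode tank
# zpool set failmode=continue tank
The default is wait, continue returns EIO to new writes instead of blocking,
and panic restores the old crash-on-failure behaviour.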
Well, I upgraded to b124, disabling ACPI because of [1], and I get exactly the
same behaviour. I've removed the device from the zpool, and tried dd-ing from
the device while I remove it; it still hangs all IO on the system until the
disk is re-inserted.
I'm running the kernel with -v (from diag
Rodney wrote:
Needing a larger swap than the default, I followed the steps at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Swap_and_Dump_Devices
and at:
http://docs.sun.com/app/docs/doc/819-5461/ggvlr?a=view
Namely:
zfs create -V 2G -b 4k rpool/swap (I've also tr
Needing a larger swap than the default, I followed the steps at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Swap_and_Dump_Devices
and at:
http://docs.sun.com/app/docs/doc/819-5461/ggvlr?a=view
Namely:
zfs create -V 2G -b 4k rpool/swap (I've also tried just zfs cre
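For reference, a sketch of the full sequence those pages describe, assuming the
default rpool/swap volume and that the system can temporarily run without that
swap device:
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create -V 2G -b 4k rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
(-b 4k matches the x86 page size; the guide uses -b 8k on sparc.)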
I can imagine what it would have been like... did you have any luck after all?
Hi Bob,
Regarding my bonus question: I haven't yet found a definitive answer on whether
there is a way to read the currently active controller setting. I
still assume that the nvsram settings, which can be read with
service -d -c read -q nvsram region=0xf2 host=0x00
do not necessarily reflect the