Actually, there's still the primary issue of this post - the apparent hang. At
the moment, I have 3 zpool commands running, all apparently hung and doing
nothing:
bleon...@opensolaris:~$ ps -ef | grep zpool
root 20465 20411 0 18:10:44 pts/4 0:00 zpool clear r5pool
root 20408 2040
Hi Cindy,
I'm trying to demonstrate how ZFS behaves when a disk fails. The drive
enclosure I'm using (http://www.icydock.com/product/mb561us-4s-1.html) says it
supports hot swap, but that's not what I'm experiencing. When I plug the disk
back in, all 4 disks are no longer recognizable until I r
Hi,
I'm currently trying to work with a quad-bay USB drive enclosure. I've created
a raidz pool as follows:
bleon...@opensolaris:~# zpool status r5pool
pool: r5pool
state: ONLINE
scrub: none requested
config:
NAME      STATE     READ WRITE CKSUM
r5pool    ONLINE
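For reference, a four-disk raidz pool like the one above would typically have been created with a single command along these lines; the device names here are only placeholders for however the enclosure's disks enumerate on a given system, not the actual ones from this pool:

bleon...@opensolaris:~# zpool create r5pool raidz c6t0d0 c7t0d0 c8t0d0 c9t0d0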
ems when moving/changing/re-inserting but without
more info, it's hard to tell what happened.
cs
On 06/29/10 14:13, W Brian Leonard wrote:
Interesting, this time it worked! Does specifying the device to clear
cause the command to behave differently? I had assumed w/out the
device specification, the c
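Both invocations are legitimate: with no device argument, zpool clear resets the error counts on every device in the pool, while naming a device restricts the clear to that device alone. As a sketch, with an illustrative device name:

zpool clear r5pool
zpool clear r5pool c6t0d0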
laris release and what
events preceded this problem.
Thanks,
Cindy
On 06/29/10 11:15, W Brian Leonard wrote:
Hi Cindy,
The scrub didn't help and yes, this is an external USB device.
Thanks,
Brian
Cindy Swearingen wrote:
Hi Brian,
You might try running a scrub on this pool.
Is this a
Hi Cindy,
The scrub didn't help and yes, this is an external USB device.
Thanks,
Brian
Cindy Swearingen wrote:
Hi Brian,
You might try running a scrub on this pool.
Is this an external USB device?
Thanks,
Cindy
On 06/29/10 09:16, Brian Leonard wrote:
Hi,
I have a zpool whi
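For reference, Cindy's suggestion amounts to something like the following, using the pool name from this thread; status can then be checked to watch the scrub's progress and see whether the corrupt entry clears:

bleon...@opensolaris:~$ pfexec zpool scrub external
bleon...@opensolaris:~$ pfexec zpool status -v external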
Hi,
I have a zpool which is currently reporting that the ":<0x13>" file
is corrupt:
bleon...@opensolaris:~$ pfexec zpool status -xv external
pool: external
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
a
> Check contents of /dev/dsk and /dev/rdsk to see if
> there are some
> missing links there for devices in question. You may
> want to run
>
> devfsadm -c disk -sv
> devfsadm -c disk -Csv
>
> and see if it reports anything.
There were quite a few links it removed, all on c0.
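A quick sanity check after running devfsadm is to list the links for the controller the pool's disks sit on, for example (the controller number is only an example, borrowed from the c6 devices discussed in this thread):

bleon...@opensolaris:~$ ls -l /dev/dsk/c6* /dev/rdsk/c6*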
> Try to move c6d
> h... export the pool again. Then try simply "zpool import"
> and it should show the way it sees vault. Reply with that output.
zpool export vault
cannot open 'vault': no such pool
zpool import
pool: vault
id: 196786381623412270
state: UNAVAIL
action: The pool cannot be imported d
> Since you did not export the pool, it may be looking for the wrong
> devices. Try this:
> zpool export vault
> zpool import vault
That was the first thing I tried, with no luck.
> Above, I used slice 0 as an example, your system may use a
> different slice. But you can run zdb -l on all o
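As a sketch of that, a small loop can dump the labels from every slice of one disk so the configuration each label records can be compared; the disk name below is illustrative:

for s in 0 1 2 3 4 5 6 7; do
  echo "== /dev/dsk/c6d0s$s =="
  zdb -l /dev/dsk/c6d0s$s
done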
I had a machine die the other day and take one of its zfs pools with it. I
booted the new machine, with the same disks but a different SATA controller,
and the rpool was mounted but another pool "vault" was not. If I try to import
it I get "invalid vdev configuration". fmdump shows zfs.vdev.ba
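To see the full detail behind those events, the FMA error log can be dumped verbosely, for instance:

bleon...@opensolaris:~$ pfexec fmdump -eV | more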
Karthik, did you ever file a bug for this? I'm experiencing the same hang and
wondering how to recover.
/Brian