Re: [zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-06 Thread Brian Kolaci
Well, I see no takers or even a hint... I've been playing with zdb to try to examine the pool, but I get:

# zdb -b pool4_green
zdb: can't open pool4_green: Bad exchange descriptor

# zdb -d pool4_green
zdb: can't open pool4_green: Bad exchange descriptor

So I'm not sure how to debug using zdb.
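
For anyone following along: when a pool cannot be opened through the normal import path, zdb can often still read it from the on-disk labels. A sketch, assuming the pool has been exported first (the device path below is hypothetical):

  # zdb -e pool4_green          (open via label scan, bypassing zpool.cache)
  # zdb -e -bb pool4_green      (traverse and tally allocated blocks)
  # zdb -l /dev/dsk/c0t0d0s0    (dump the four ZFS labels on one device)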

Re: [zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-06 Thread Victor Latushkin
On Jul 6, 2010, at 6:30 PM, Brian Kolaci wrote:
> Well, I see no takers or even a hint...
>
> I've been playing with zdb to try to examine the pool, but I get:
>
> # zdb -b pool4_green
> zdb: can't open pool4_green: Bad exchange descriptor
>
> # zdb -d pool4_green
> zdb: can't open pool4_green

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-07-06 Thread Victor Latushkin
On Jul 4, 2010, at 4:58 AM, Andrew Jones wrote:
> Victor,
>
> The zpool import succeeded on the next attempt following the crash that I reported to you by private e-mail!

From the threadlist it looked like the system was pretty low on memory with stacks of userland stuff swapped out, hence s

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-07-06 Thread Victor Latushkin
On Jul 3, 2010, at 1:20 PM, George wrote:
>> Because of that I'm thinking that I should try to change the hostid when booted from the CD to be the same as the previously installed system to see if that helps - unless that's likely to confuse it at all...?
>
> I've now tried changing

Re: [zfs-discuss] ZFS recovery tools

2010-07-06 Thread Victor Latushkin
On Jul 4, 2010, at 1:33 AM, R. Eulenberg wrote:
> R. Eulenberg web.de> writes:
>> I was setting up a new system (osol 2009.06 and updating to the latest version of osol/dev - snv_134 - with deduplication) and then I tried to import my backup zpoo

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-07-06 Thread Roy Sigurd Karlsbakk
> I think it is quite likely to be possible to get readonly access to your data, but this requires modified ZFS binaries. What is your pool version? What build do you have installed on your system disk or available as LiveCD?

Sorry, but does this mean if ZFS can't write to the drives, access
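
To answer Victor's version questions, both numbers can be read off directly; a sketch (the pool name storage2 comes from later in this thread):

  # zpool get version storage2   (on-disk pool version)
  # zpool upgrade -v             (versions this build supports)
  # cat /etc/release             (which build the system disk runs)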

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-07-06 Thread Victor Latushkin
On Jun 28, 2010, at 11:27 PM, George wrote:
> Again this core dumps when I try to do "zpool clear storage2"
>
> Does anyone have any suggestions what would be the best course of action now?

Do you have any crashdumps saved? The first one is the most interesting one...
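
For readers wondering where to look: whether crashdumps are being captured, and where savecore puts them, can be checked along these lines (default paths assumed):

  # dumpadm                      (dump device and savecore directory)
  # ls /var/crash/`hostname`     (look for unix.N / vmcore.N pairs)
  # savecore -L                  (capture a dump of the live, running system)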

Re: [zfs-discuss] Hashing files rapidly on ZFS

2010-07-06 Thread Arne Jansen
Daniel Carosone wrote:
> Something similar would be useful, and much more readily achievable, from ZFS from such an application, and many others. Rather than a way to compare reliably between two files for identity, I'd like a way to compare identity of a single file between two points in t
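
One way to approximate "identity of a single file between two points in time" with what ZFS already offers is snapshots plus a userland hash; a sketch (dataset and file names are made up):

  # zfs snapshot tank/data@t1
  ... time passes ...
  # zfs snapshot tank/data@t2
  # digest -a sha256 /tank/data/.zfs/snapshot/t1/big.file
  # digest -a sha256 /tank/data/.zfs/snapshot/t2/big.file

Equal digests strongly suggest the file is unchanged; the snapshots pin both states in place while the hashes are computed.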

[zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Cassandra Pugh
Hello list, This has probably been discussed; however, I would like to bring it up again so that the powers that be know someone else is looking for this feature. I would like to be able to shrink a pool and remove a non-redundant disk. Is this something that is in the works? It would be fanta
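
For the archives: builds of this era cannot shrink a pool, but much later OpenZFS releases did grow top-level device removal (pool and device names illustrative):

  # zpool remove tank c1t2d0     (evacuates the vdev's data, then removes it)
  # zpool status tank            (shows an indirect mapping afterwards)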

Re: [zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Roy Sigurd Karlsbakk
- Original Message -
> Hello list,
>
> This has probably been discussed; however, I would like to bring it up again so that the powers that be know someone else is looking for this feature.
>
> I would like to be able to shrink a pool and remove a non-redundant disk.
>
> Is this s

Re: [zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Cassandra Pugh
The pool is not redundant, so I would suppose, yes, it is Raid-1 on the software level. I have a few drives, which are on a specific array, which I would like to remove from this pool. I have discovered the "replace" command, and I am going to try to replace, 1 for 1, the drives I would like to
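
The replace command Cassandra mentions takes the pool, the old device, and the new device; a sketch with hypothetical names:

  # zpool replace tank c1t2d0 c2t0d0   (resilver old onto new, then detach old)
  # zpool status tank                  (watch resilver progress)

One caveat worth knowing in advance: the new device must be at least as large as the one it replaces, which becomes relevant later in this thread.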

[zfs-discuss] Consequences of resilvering failure

2010-07-06 Thread Michael Johnson
I'm just about to start using ZFS in a RAIDZ configuration for a home file server (mostly holding backups), and I wasn't clear on what happens if data corruption is detected while resilvering. For example: let's say I'm using RAIDZ1 and a drive fails. I pull it and put in a new one. While res
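
If corruption does turn up mid-resilver, the pool reports it rather than silently propagating it; which files were hit can be listed afterwards (pool name illustrative):

  # zpool status -v tank        (permanent errors, with affected file paths)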

[zfs-discuss] Legacy MountPoint for /rpool/ROOT

2010-07-06 Thread Ketan
I have two different servers with ZFS root, but both of them have different mountpoints for rpool/ROOT: one is /rpool/ROOT and the other is legacy. What's the difference between the two and which is the one we should keep? And why are there 3 different zfs datasets rpool, rpool/ROOT and rpool/ROOT/zfs

[zfs-discuss] Help with Faulted Zpool Call for Help(Cross post)

2010-07-06 Thread Sam Fourman Jr.
Hello list, I posted this a few days ago on the opensolaris-discuss@ list. I am posting here because there may be too much noise on other lists. I have been without this zfs set for a week now. My main concern at this point: is it even possible to recover this zpool? How does the metadata work? What to

Re: [zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Roy Sigurd Karlsbakk
- Original Message -
> The pool is not redundant, so I would suppose, yes, it is Raid-1 on the software level.
>
> I have a few drives, which are on a specific array, which I would like to remove from this pool.
>
> I have discovered the "replace" command, and I am going to try to

Re: [zfs-discuss] Legacy MountPoint for /rpool/ROOT

2010-07-06 Thread Lori Alt
On 07/06/10 10:56 AM, Ketan wrote:
> I have two different servers with ZFS root, but both of them have different mountpoints for rpool/ROOT: one is /rpool/ROOT and the other is legacy.

It should be legacy.

> What's the difference between the two and which is the one we should keep. And why are there 3
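
For readers wanting to check or fix this on their own systems, a sketch (be careful doing this on a live boot environment):

  # zfs get mountpoint rpool/ROOT
  # zfs set mountpoint=legacy rpool/ROOT
  # zfs list -r rpool            (shows rpool, rpool/ROOT, and the BE datasets)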

Re: [zfs-discuss] Consequences of resilvering failure

2010-07-06 Thread Roy Sigurd Karlsbakk
- Original Message -
> I'm just about to start using ZFS in a RAIDZ configuration for a home file server (mostly holding backups), and I wasn't clear on what happens if data corruption is detected while resilvering. For example: let's say I'm using RAIDZ1 and a drive fails. I pull it

Re: [zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Cassandra Pugh
I tried zfs replace; however, the new drive is slightly smaller, and even with a -f, it refuses to replace the drive. I guess I will have to export the pool and destroy this one to get my drives back. Still would like the ability to shrink a pool. - Cassandra (609) 243-2413 Unix Administrator "F

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-07-06 Thread Andrew Jones
> Good. Run 'zpool scrub' to make sure there are no other errors.
>
> regards
> victor

Yes, scrubbed successfully with no errors. Thanks again for all of your generous assistance. /AJ

-- This message posted from opensolaris.org

[zfs-discuss] ZFS fsck?

2010-07-06 Thread Roy Sigurd Karlsbakk
Hi all With several messages in here about troublesome zpools, would there be a good reason to be able to fsck a pool? As in, check the whole thing instead of having to boot into live CDs and whatnot? Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 r...@karlsbakk.n

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread iMx
- Original Message -
> From: "Roy Sigurd Karlsbakk"
> To: "OpenSolaris ZFS discuss"
> Sent: Tuesday, 6 July, 2010 6:35:51 PM
> Subject: [zfs-discuss] ZFS fsck?

> Hi all
>
> With several messages in here about troublesome zpools, would there be a good reason to be able to fsck a pool? A

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread Mark J Musante
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> With several messages in here about troublesome zpools, would there be a good reason to be able to fsck a pool? As in, check the whole thing instead of having to boot into live CDs and whatnot?

You can do this with "zpool scrub". It vi
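
In command form, the scrub Mark describes is simply (pool name illustrative):

  # zpool scrub tank
  # zpool status tank            (progress, plus any errors found and repaired)
  # zpool scrub -s tank          (stop a scrub that is still running)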

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread Roy Sigurd Karlsbakk
> You can do this with "zpool scrub". It visits every allocated block and verifies that everything is correct. It's not the same as fsck in that scrub can detect and repair problems with the pool still online and all datasets mounted, whereas fsck cannot handle mounted filesystems.

I

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread Mark J Musante
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
> what I'm saying is that there are several posts in here where the only solution is to boot onto a live cd and then do an import, due to metadata corruption. This should be doable from the installed system

Ah, I understand now. A couple of thing
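
Related to the live-CD complaint: builds with pool recovery support (circa snv_128 and later, if memory serves) grew import-time recovery flags that work from the installed system, e.g.:

  # zpool import -nF tank        (dry run: report what a rewind would discard)
  # zpool import -F tank         (rewind to a consistent txg and import)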

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-06 Thread Spandana Goli
Release Notes information: If there are new features, each release is added to http://www.nexenta.com/corp/documentation/release-notes-support. If just bug fixes, then the Changelog listing is updated: http://www.nexenta.com/corp/documentation/nexentastor-changelog Regards, Spandana

Re: [zfs-discuss] Help with Faulted Zpool Call for Help(Cross post)

2010-07-06 Thread Cindy Swearingen
Hi Sam, In general, FreeBSD uses different device naming conventions and power failures seem to clobber disk labeling. The "I/O error" message also points to problems accessing these disks. I'm not sure if this helps, but I see that the 6 disks from the zdb -e output are indicated as c7t0d0p0 --
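
When device naming is the suspect, the import scan directory and the labels themselves can both be checked; a sketch using the p0 name from Cindy's mail:

  # zpool import -d /dev/dsk     (rescan devices in an explicit directory)
  # zdb -l /dev/dsk/c7t0d0p0     (verify the ZFS labels are intact on one disk)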

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-06 Thread Giovanni Tirloni
On Tue, Jul 6, 2010 at 4:06 PM, Spandana Goli wrote:
> Release Notes information:
> If there are new features, each release is added to http://www.nexenta.com/corp/documentation/release-notes-support.
>
> If just bug fixes, then the Changelog listing is updated:
> http://www.nexenta.com/corp/doc

[zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-06 Thread Chad Cantwell
Hi all, I've noticed something strange in the throughput in my zpool between different snv builds, and I'm not sure if it's an inherent difference in the build or a kernel parameter that is different in the builds. I've set up two similar machines and this happens with both of them. Each system ha

Re: [zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-06 Thread Richard Elling
On Jul 6, 2010, at 7:30 AM, Brian Kolaci wrote:
> Well, I see no takers or even a hint...
>
> I've been playing with zdb to try to examine the pool, but I get:
>
> # zdb -b pool4_green
> zdb: can't open pool4_green: Bad exchange descriptor

For the archives, EBADE "Bad exchange descriptor" was r
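
For anyone wanting to confirm the errno on their own box, the symbol lives in the system headers (Solaris-ish path assumed):

  # grep -w EBADE /usr/include/sys/errno.h    (EBADE is errno 50 on Solaris)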

Re: [zfs-discuss] Help with Faulted Zpool Call for Help(Cross post)

2010-07-06 Thread Richard Elling
On Jul 6, 2010, at 10:02 AM, Sam Fourman Jr. wrote:
> Hello list,
>
> I posted this a few days ago on the opensolaris-discuss@ list. I am posting here because there may be too much noise on other lists.
>
> I have been without this zfs set for a week now. My main concern at this point: is it even

[zfs-discuss] Lost ZIL Device

2010-07-06 Thread Andrew Kener
Hello All, I've recently run into an issue I can't seem to resolve. I have been running a zpool populated with two RAID-Z1 VDEVs and a file on the (separate) OS drive for the ZIL:

  raidz1-0   ONLINE
    c12t0d0  ONLINE
    c12t1d0  ONLINE
    c12t2d0  ONLINE
    c12t3d0  ONLINE
  raidz1-

Re: [zfs-discuss] Remove non-redundant disk

2010-07-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Cassandra Pugh
>
> I would like to be able to shrink a pool and remove a non-redundant disk.
>
> Is this something that is in the works?

I think the request is to remove vdevs from a pool.

Re: [zfs-discuss] Lost ZIL Device

2010-07-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Andrew Kener
>
> the OS hard drive crashed [and log device]

Here's what I know: In zpool >= 19, if you import this, it will prompt you to confirm the loss of the log device, and then it will
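
In practice that import looks like the following; later builds also grew an explicit flag for missing log devices (pool name illustrative):

  # zpool import tank            (v19+: prompts to confirm discarding the log)
  # zpool import -m tank         (later builds: allow import with a missing log)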

Re: [zfs-discuss] Expected throughput

2010-07-06 Thread James Van Artsdalen
Under FreeBSD I've seen zpool scrub sustain nearly 500 MB/s in pools with large files (a pool with eight MIRROR vdevs on two Silicon Image 3124 controllers). You need to carefully look for bottlenecks in the hardware. You don't indicate how the disks are attached. I would measure the total ban
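
On the bottleneck hunt James suggests, per-vdev and per-device views usually localize it quickly; a sketch:

  # zpool iostat -v tank 5       (bandwidth and IOPS broken out per vdev)
  # iostat -xnz 5                (per-device busy and service times)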