Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread Derek E. Lewis
On Thu, 28 Dec 2006, George Wilson wrote:
> Have you tried doing a 'zpool replace poolname c1t53d0 c2t53d0'? I'm
> not sure if this will work, but it's worth a shot. You may still end
> up with a complete resilver.

George,

Just tried it with a '-f' and I received the following error:

# zpool replace -f
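For reference, the full command presumably took this form (pool and
device names as used elsewhere in the thread; the error output itself is
truncated above):

  # zpool replace -f poolname c1t53d0 c2t53d0

Here c1t53d0 is the current device and c2t53d0 is the second path to the
same disk on the other controller; '-f' forces the replacement even if
the new device appears to be in use.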

Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread George Wilson
Derek,

Have you tried doing a 'zpool replace poolname c1t53d0 c2t53d0'? I'm not
sure if this will work, but it's worth a shot. You may still end up with
a complete resilver.

Thanks,
George

Derek E. Lewis wrote:
> On Thu, 28 Dec 2006, George Wilson wrote:
> > Your best bet is to export and re-import

Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread Derek E. Lewis
On Thu, 28 Dec 2006, George Wilson wrote:
> Your best bet is to export and re-import the pool after moving devices.
> You might also try to 'zpool offline' the device, move it and then
> 'zpool online' it. This should force a reopen of the device and then it
> would only have to resilver the transac
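Spelled out, the offline/move/online sequence would be something like
this (device and pool names per the thread; whether the reopen actually
picks the disk up on its new path is exactly what is being tested here):

  # zpool offline poolname c1t53d0
  (make the disk visible on the second controller)
  # zpool online poolname c1t53d0

ZFS identifies vdevs by GUID rather than by path, so on the reopen it
may find the disk at its new c2 path and only resilver what changed
while it was offline.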

Re: [zfs-discuss] Saving scrub results before scrub completes

2006-12-27 Thread George Wilson
Siegfried,

Can you provide the panic string that you are seeing? We should be able
to pull out the persistent error log information from the corefile. You
can take a look at the spa_get_errlog() function as a starting point.
Additionally, you can look at the corefile using mdb and take a look at
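A minimal mdb session against the crash dump would look something like
this (the unix.0/vmcore.0 names are the conventional savecore output and
are an assumption here):

  # mdb unix.0 vmcore.0
  > ::status
  > $C

::status prints the panic string and dump details; $C prints the stack
backtrace of the panicking thread.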

Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread George Wilson
Derek,

I don't think 'zpool attach/detach' is what you want, as it will always
result in a complete resilver. Your best bet is to export and re-import
the pool after moving devices. You might also try to 'zpool offline' the
device, move it and then 'zpool online' it. This should force a reopen
of the device and then it would only have to resilver the transac
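In command form, the export/re-import approach would be (pool name per
the thread):

  # zpool export poolname
  (move the disks to the second controller)
  # zpool import poolname

On import, ZFS scans the attached devices for the pool's on-disk labels,
so it should find the disks at their new c2 paths without a resilver.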

Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread Derek E. Lewis
On Wed, 27 Dec 2006, Torrey McMahon wrote:
> One, use mpxio from now on.

'socal' HBAs are not supported under MPxIO, which is what I have in the
attached host (an E4500).

> Two, I thought you could export the pool, move the LUNs to the new
> controller, and import the pool?

Like I said, I have

Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread Torrey McMahon
Derek E. Lewis wrote:
> Greetings,
>
> I'm trying to move some of my mirrored pool devices to another
> controller. I have a StorEdge A5200 (Photon) with two physical paths
> to it, and originally, when I created the storage pool, I threw all of
> the drives on c1. Several days after realizing this,

[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread Derek E. Lewis
Greetings,

I'm trying to move some of my mirrored pool devices to another
controller. I have a StorEdge A5200 (Photon) with two physical paths to
it, and originally, when I created the storage pool, I threw all of the
drives on c1. Several days after realizing this, I'm trying to change th
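The attach/detach migration named in the subject line would, for one
mirror side, nominally look like this (device names per Derek's later
messages):

  # zpool attach poolname c1t53d0 c2t53d0
  (wait for the resilver to finish; watch 'zpool status')
  # zpool detach poolname c1t53d0

As George points out above, though, attach always triggers a complete
resilver, and with a dual-pathed array like the A5200 the "new" device
is a second path to the same physical disk, which complicates matters.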

Re: [zfs-discuss] Very strange performance patterns

2006-12-27 Thread Peter Schuller
> Short version: Pool A is fast, pool B is slow. Writing to pool A is
> fast. Writing to pool B is slow. Writing to pool B WHILE writing to
> pool A is fast on both pools. Explanation?

[snip]

For the archives, it is interesting to note that when I do not perform a
local "dd" to the device, but i
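The local test referred to would be something along these lines (path
and sizes are illustrative; it is not clear from the truncated text
whether the writes went to a file in the pool or to the raw device):

  # dd if=/dev/zero of=/poolA/testfile bs=1024k count=1024

i.e. a sequential 1 GB write, repeated against each pool separately and
then concurrently, to reproduce the pattern described above.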

Re: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN froma SAN

2006-12-27 Thread Torrey McMahon
A LUN "going away" should not cause a panic. (The obvious exception
being the boot LUN.) If mpxio saw the LUN move and everything moved ...
then it's a bug. The panic backtrace will point to the guilty party in
any case.

Jason J. W. Williams wrote:
> Hi Robert,
>
> MPxIO had correctly moved the pat

Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-27 Thread Nicolas Williams
On Wed, Dec 27, 2006 at 08:45:23AM -0500, Bill Sommerfeld wrote:
> I think your paranoia is indeed running a bit high if the alternative is
> that some blocks escape bleaching "forever" when they were freed shortly
> before a crash.

Lazy bg bleaching of freed blocks is not enough if you're really

[zfs-discuss] Re: RE: What SATA controllers are people using for ZFS?

2006-12-27 Thread Dennis
What I don't understand: if the marvell driver is in OpenSolaris, but
closed source, why isn't it in this list?
http://www.opensolaris.org/os/about/no_source/

Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-27 Thread Bill Sommerfeld
On Tue, 2006-12-26 at 13:59 -0500, Torrey McMahon wrote:
> > clearly you'd need to store the "unbleached" list persistently in the
> > pool.
>
> Which could then be easily referenced to find all the blocks that were
> recently deleted but not yet bleached? Is my paranoia running a bit too
> high