OK, the "logfix" program compiled for snv_111 does run, and it lets me swap the 32GB HDD slog for the new (~29GB) SSD slog. The pool comes up with the slog FAULTED, but I can replace it with itself and everything is fine after that. I can then attach the second SSD without issues.


Assuming it never tries to write the full 32GB, it should be OK. I don't know whether ZFS stores the physical size in the vdev label, or just reads it at import time.
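One way to check would be to dump the vdev labels, since zdb -l prints the label nvlist, which includes an asize field (a sketch; device path and output abbreviated, exact fields from memory):

# zdb -l /dev/rdsk/c10t4d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    ...
    asize: 31138512896
    ...

If asize in the label still reflects the old 32GB device after the logfix swap, that would suggest the size is recorded on disk rather than probed at import.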

# zpool export zpool1
# ./logfix /dev/rdsk/c5t1d0s0 /dev/rdsk/c10t4d0s0 13049515403703921770
# zpool import zpool1
# zpool status

        logs
          13049515403703921770  FAULTED      0     0     0  was /dev/dsk/c10t4d0s0

# zpool replace -f zpool1 13049515403703921770 c10t4d0
# zpool status

        logs
          c10t4d0    ONLINE       0     0     0

# zpool attach zpool1 c10t4d0 c9t4d0

         logs
           mirror-1   ONLINE       0     0     0
             c10t4d0  ONLINE       0     0     0
             c9t4d0   ONLINE       0     0     0

And back in Solaris 10 u8:

# zpool import zpool1
# zpool status

        logs
          mirror    ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0


So it does at least have a solution, even if it is rather unattractive. With 12 servers, and work that has to be done at 2am, I will be testy for a while.

Lund


Jorgen Lundman wrote:

Interesting. Unfortunately, I cannot "zpool offline", "zpool
detach", or "zpool remove" the existing c6t4d0s0 device.


I thought perhaps we could boot something newer than b125 [*1] and I
would be able to remove the slog device that is too big.

The dev-127.iso does not boot [*2] because of the splashimage entry, so I
had to edit the ISO to remove it before it would boot.
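For reference, the workaround was just commenting out the splashimage line in the ISO's GRUB config before rebuilding it (path and exact line from memory, so treat this as a sketch):

# in boot/grub/menu.lst on the ISO:
#splashimage=/boot/grub/splash.xpm.gz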

After booting with "-B console=ttya", I find that the live environment can
not create the /dev/dsk entries for the 24 HDDs, since "/" is on a
too-small ramdisk. Disk-full messages ensue. Yay!

After I have finally imported the pools, without upgrading (since I have
to boot back to Sol 10 u8 for production), I attempt to remove the
"slog" that is no longer needed:


# zpool remove zpool1 c6t4d0s0
cannot remove c6t4d0s0: pool must be upgrade to support log removal


Sigh.
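For anyone hitting the same wall: if I remember right, log device removal needs zpool version 19, and you can see where a pool stands without committing to the upgrade (version numbers and output here are from memory, so verify on your own build):

# zpool get version zpool1
NAME    PROPERTY  VALUE    SOURCE
zpool1  version   15       local
# zpool upgrade -v | grep -i log
 7   Separate intent log devices
 19  Log device removal

Since upgrading past the Sol 10 u8 pool version would make the pool unimportable back in production, that path was closed to me.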


Lund



[*1]
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6574286

[*2]
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6739497





--
Jorgen Lundman       | <lund...@lundman.net>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
Japan                | +81 (0)3 -3375-1767          (home)
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
