This is pretty OT, but a while ago there was some discussion of Mac OS
X's multipathing support (or the lack thereof). According to this
technote, multipathing support has been included in Mac OS X since
10.3.5, but there are some particular requirements on the target
devices & HBAs.
http://de
Luke Scharf wrote:
This is also OT -- but what is the boot-archive, really?
Is it analogous to the initrd on Linux?
precisely.
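For anyone wondering what is actually in it: the boot archive is a ramdisk
image of the files the kernel needs before the root filesystem is mounted
(kernel modules, driver configuration and so on), so the initrd comparison
is a fair one. A rough way to poke at it on Solaris, using the stock
bootadm(1M) tooling:

  # bootadm list-archive
        (lists the files bundled into the boot archive)

The Linux-side equivalent would be unpacking the initrd image with gzip
and cpio and looking at what is inside.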
Frank Cusack wrote:
On March 1, 2007 12:19:22 AM -0800 Jeff Bonwick <[EMAIL PROTECTED]>
wrote:
import it. Assuming this works, you can fix the stupid boot archive
thank you. i hate the boot archive. i have just spent MANY unnecessary
hours on some machines thanks to the stupid boot archive.
On March 1, 2007 12:19:22 AM -0800 Jeff Bonwick <[EMAIL PROTECTED]>
wrote:
import it. Assuming this works, you can fix the stupid boot archive
thank you. i hate the boot archive. i have just spent MANY unnecessary
hours on some machines thanks to the stupid boot archive.
(sorry, OT)
-frank
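In case it saves someone else those hours, the sequence Jeff is describing
is, I believe, roughly this (pool name made up; adjust to taste):

  # zpool import tank
        (get the pool back first)
  # bootadm update-archive
        (regenerate the archive so it matches the current configuration)

If you are doing the repair from a failsafe boot with the real root
mounted at /a, it is 'bootadm update-archive -R /a' instead, then reboot.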
It should not matter if the controller numbers have changed. There are
a few scenarios:
1. If the pool is active, and the underlying driver supports devids,
then ZFS will still be able to open the correct devices. If the
underlying driver doesn't support devids, or if the devids have
al
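If you are curious what ZFS recorded for a given disk, you can dump the
on-disk vdev labels; they carry the pool GUID along with the last known
path and devid, which is what lets a scan match a disk even after it has
moved from c5 to c6. The device name below is just an example:

  # zdb -l /dev/dsk/c6t0d0s0
        (prints the four labels, including the guid, path and devid fields)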
Hi Jim,
here are the answers to your questions:
>
> What size and type of server?
SUNW,Sun-Fire-V240, Memory size: 2048 Megabytes
> What size and type of storage?
SAN-attached storage array, dual-path 2 Gb FC connection
4 LUNs of 96 GB each:
# mpathadm list lu
/dev/rdsk/c3t001738010140003
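In case it helps anyone setting up something similar: before putting a
pool on LUNs like these it is worth confirming that both FC paths really
are there for each one. A quick sketch, with a placeholder LUN name since
the real ones are long:

  # mpathadm list lu
        (enumerates the multipathed logical units)
  # mpathadm show lu /dev/rdsk/<lun>s2
        (per-LUN detail; the operational path count should show both paths)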
On Thu, Mar 01, 2007 at 11:05:44AM -0500, ozan s. yigit wrote:
> i am forced to reinstall s10u3 on my x4500. SP 1.1.1. exported zpool,
> and discovered during the reinstall that the controller numbers have
> changed. what used to be c5t0d0 is now c6t0d0. as it happens the exported
> zpool is using
[EMAIL PROTECTED] wrote on 03/01/2007 10:05:44 AM:
> i am forced to reinstall s10u3 on my x4500. SP 1.1.1. exported zpool,
> and discovered during the reinstall that the controller numbers have
> changed. what used to be c5t0d0 is now c6t0d0. as it happens the exported
> zpool is using only h
i am forced to reinstall s10u3 on my x4500. SP 1.1.1. exported zpool,
and discovered during the reinstall that the controller numbers have
changed. what used to be c5t0d0 is now c6t0d0. as it happens the exported
zpool is using only half the disks, and has no reference to a c6t0d0, but
still a d
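For the archives, since the pool was cleanly exported before the
reinstall, the usual way out is to let import re-scan the devices and
match the disks by their labels rather than by the old cNtNdN names.
The pool name below is a placeholder:

  # zpool import
        (scans /dev/dsk and lists any pools available for import)
  # zpool import tank
        (imports by name; 'zpool import -d /dev/dsk tank' if you need to
         point the scan at a particular directory)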
Hello Stuart,
Thursday, March 1, 2007, 2:02:26 PM, you wrote:
>
Heya,
> SL> 1) Doing a zpool destroy on the volume
> SL> 2) Doing a zpool import -D on the volume
> SL> It would appear to me that primarily what has occurred is one or all of
> SL> the metadata stores ZFS has created have
Hi,
my main interest is sharing a zpool between machines, so the ZFS filesystems on
different hosts can share a single LUN. When you run several applications, each
in a different zone, and allow the zones to be run on one of several hosts
individually (!), this currently means at least one separat
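Worth keeping in mind for this kind of setup: a zpool can only be imported
on one host at a time (it is not a cluster filesystem), so sharing a LUN
between machines in practice means an export/import handover, either by
hand or driven by cluster software. A minimal sketch, pool name made up:

  hostA# zpool export tank
  hostB# zpool import tank

Forcing an import while the other host still has the pool open is a good
way to corrupt it, so the handover has to be strict.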
Heya,
> SL> 1) Doing a zpool destroy on the volume
> SL> 2) Doing a zpool import -D on the volume
> SL> It would appear to me that primarily what has occurred is one or all of
> SL> the metadata stores ZFS has created have become corrupt? Will a zpool
> SL> import -D ignore metadata and rebuild us
Hello Stuart,
Thursday, March 1, 2007, 4:25:14 AM, you wrote:
SL> Further to this, I've considered doing the following:
SL> 1) Doing a zpool destroy on the volume
SL> 2) Doing a zpool import -D on the volume
SL> It would appear to me that primarily what has occurred is one or all of
SL> the met
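For reference, the destroy / import -D round trip looks roughly like this
(pool name is a placeholder). Note that 'zpool import -D' does not ignore
or rebuild metadata; it simply re-reads the labels of a pool that was
marked destroyed, so it only helps if those labels are still intact:

  # zpool destroy tank
  # zpool import -D
        (lists destroyed pools whose labels can still be found)
  # zpool import -D tank
        (re-imports the destroyed pool by name; add -f if it complains)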
Hi Jeff,
> One possibility: I've seen this happen when a system doesn't shut down
> cleanly after the last change to the pool configuration. In this case,
> what can happen is that the boot archive (an annoying implementation
> detail of the new boot architecture) can be out of date relative to
>
> However, I logged in this morning to discover that the ZFS volume could
> not be read. In addition, it appears to have marked all drives, mirrors
> & the volume itself as 'corrupted'.
One possibility: I've seen this happen when a system doesn't shut down
cleanly after the last change to the pool
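A detail that makes the failure mode clearer: the cached pool
configuration lives in /etc/zfs/zpool.cache, and (if I remember the
filelist right) that file is carried inside the boot archive, so a stale
archive can hand the kernel an out-of-date picture of the pools. A quick
sanity check after reconfiguring pools:

  # bootadm list-archive | grep zpool.cache
        (confirm the cache file is included in the archive at all)
  # bootadm update-archive
        (bring the archive up to date before the next boot)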