Hello all,

  The question of how to switch rpool HDDs between AHCI and IDE
mode (and back) has been asked and discussed on this list many
times. The usual workaround involves reconfiguring the BIOS,
booting from separate live media, doing a simple import and export
of the rpool, and then booting from the rpool again. The documented
way is to reinstall the OS upon such HW changes. Both are
inconvenient, to say the least.
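
  For reference, the live-media workaround boils down to roughly
the following (just a sketch - "rpool" and the /a altroot are the
usual defaults, your device layout may differ); the forced import
records the new device paths in the vdev labels, and the export
leaves the pool clean for the next boot:

    # booted from the live media:
    zpool import -f -R /a rpool    # forced import under an alternate root
    zpool export rpool             # labels now carry the new device paths
    # reboot from the rpool disks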

  Linux and recent Windows are much more tolerant of wholesale
hardware changes underneath the OS image between boots: they just
boot up and work. Why do we shoot ourselves in the foot with this
boot-up problem?

  Now that I'm trying to dual-boot my OI-based system (the same
image on physical hardware and under VirtualBox), I've hit the
problem hard: I have either a HW SATA controller (the AMD Hudson,
often not recognized upon bootup, but that's another story) and a
VirtualBox SATA controller with different PCI vendor/device IDs,
or physical and virtual IDE controllers which result in the same
pci-ide/cmdk device path - so I'm stuck with IDE mode, at least
for these compatibility reasons.

  So the basic question is: WHY does the OS insist on using the
device path (the /pci... string) coded into the rpool's vdev
labels midway through bootup, in the VFS root-import routine, and
panic if the device naming has changed - when the loader (GRUB,
for example) had no problem reading the very same rpool moments
earlier? Is there a rationale or some historical baggage behind
this situation? Is it a design error or an oversight?
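
  To illustrate (a trimmed sketch with placeholder values - run
zdb -l against your own rpool devices for the real output), the
vdev label already carries both the hardcoded physical path and
the GUIDs that a scan-based import could match on instead:

    # zdb -l /dev/rdsk/c3t0d0s0        (device name is an example)
        ...
        name: 'rpool'
        pool_guid: <pool GUID>
        ...
        vdev_tree:
            ...
            guid: <vdev GUID>
            path: '/dev/dsk/c3t0d0s0'
            phys_path: '/pci@.../disk@0,0:a'
            ...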

  Isn't it possible to use the same routine as for other pool
imports, including the import of this very rpool from a live-media
boot: just find the component devices (starting with the one
passed in by the loader, and/or matching by pool name and/or GUID)
and import the resulting pool? Perhaps this could at least be
attempted when the current method fails, before resorting to a
kernel panic - try another method first.
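
  After all, the userland import already does this sort of
discovery today - a plain scan finds pools by name and GUID no
matter how the device names have been shuffled around, roughly
like this (pool name and GUID below are placeholders):

    zpool import                 # scan /dev/dsk, list importable pools
    zpool import rpool           # import by pool name
    zpool import <numeric GUID>  # or by pool GUID if names are ambiguous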

  Would this be a sane thing to change, or are there known
beasts lurking in the dark?

Thanks,
//Jim Klimov