Wow! Thanks for the information, James. After consulting with my manager,
we're going to install the text-install version.

I'm going to try that, as we're installing it on a new disk. Just curious:
if I export about 3 zvols and reimport them, the mounts will be there, but
will I have to reconfigure CIFS, permissions, users, etc.? I've put what I
think the commands are below - please correct me if I'm wrong!

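Just to check my understanding, I'm guessing the export/import is
something like this (pool name made up - ours is different):

  zpool export tank    # unmount datasets and mark the pool as exported
  zpool import tank    # after reinstall: scan disks, import, remount
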
Sorry, I'm but a n00b.

Thanks,

Em
 
> Date: Tue, 3 Aug 2010 22:48:36 +1000
> From: j...@opensolaris.org
> To: emilygrettelis...@hotmail.com
> CC: carls...@workingcode.com; install-disc...@opensolaris.org
> Subject: Re: [install-discuss] Installing on alternate hardware
> 
> On 3/08/10 10:20 PM, Emily Grettel wrote:
> > Thanks for the reply James,
> >
> > > If it were my system, I'd export the ZFS volumes containing my data,
> > > reinstall on the new motherboard, and then reimport ZFS.
> >
> > I was thinking that too, but unfortunately I've created quite a few
> > zones and there are quite a few users on the system.
> >
> > Redoing the entire server will take a week :(
> >
> > Thanks though, I shall try driver-discuss too!
> 
> The essential problem is that your new motherboard will have
> different paths to each device.
> 
> As James mentioned, you could change the first line of
> /etc/path_to_inst, or...
> 
> here's the _unsupported_ totally ugly hack way of getting a
> new motherboard up and running.
> 
> Before you start, BE VERY GRATEFUL you're running ZFS. (I'll
> explain why a little later).
> 
> 
> 
> * touch /reconfigure
> * poweroff
> * replace motherboard
> 
> * turn system on
> * do whatever bios futzing is needed in order to find your
> primary boot device
> 
> * at the grub boot menu, select your desired BE, navigate to
> the kernel$ line and hit 'e'
> 
> * go to the end of this line, then add " -arvs" (ie, a space,
> then -arvs) and hit enter to accept the edit. (-a prompts for
> system configuration files, including /etc/path_to_inst; -r
> forces a reconfiguration boot; -v gives verbose output; -s
> boots to single-user mode.)
> 
> * hit 'b' to boot
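> 
> For example, the edited kernel$ line would end up looking
> something like this (exact paths vary by release):
> 
> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -arvs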
> 
> * Hit enter to accept the default at each prompt during the
> boot process, unless the prompt is asking where
> /etc/path_to_inst is.
> 
> * When you're asked for a username for single-user mode, type
> root and enter your root password.
> 
> * Run these operations to test:
> 
> format < /dev/null
> zpool status -v
> zpool import -a
> zfs list
> dladm show-link
> dladm show-ether
> 
> 
> 
> The format test will print out the device paths for the
> devices which the kernel has probed. Note these for later.
> 
> The zpool status -v test will show you the paths to each
> vdev in your pools.
> 
> The zpool import -a test will attempt to import as many
> pools as can be found. This should work seamlessly, and
> you should then see all your datasets in the zfs list test.
> 
> The dladm tests will show you what NICs you have installed.
> Note the instance numbers - they almost certainly will have
> changed from what you have configured with /etc/hostname.$nic$inst.
> Change the /etc/hostname.... file to reflect the new instance
> number(s).
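> 
> For example (driver names made up), if dladm show-link now
> reports e1000g0 where you previously had rge0:
> 
> mv /etc/hostname.rge0 /etc/hostname.e1000g0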
> 
> Also, if you are running a graphics head on this system and
> you've got a customised /etc/X11/xorg.conf, check that the
> BusID settings are still correct. Use the /usr/bin/scanpci
> utility for this.
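> 
> For example (identifier and bus numbers made up), if scanpci
> reports your video card at bus 1, device 0, function 0, the
> matching Device section should contain:
> 
> Section "Device"
>     Identifier "card0"
>     BusID      "PCI:1:0:0"
> EndSection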
> 
> 
> Now, why should you be grateful for ZFS? Because ZFS uses the
> cXtYdZ number as a fallback for detecting and opening
> devices. What it uses as a primary method is the device id,
> or devid. This is closely related to the GUID aka Globally
> Unique IDentifier. If you want more info about devids and
> guids, you can review a presentation I wrote about them a
> while back:
> 
> http://www.slideshare.net/JamesCMcPherson/what-is-a-guid
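> 
> If you're curious, you can see the devid that ZFS stores in
> each vdev label with zdb, something like this (device path
> made up):
> 
> zdb -l /dev/rdsk/c0t0d0s0 | grep devid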
> 
> 
> James C. McPherson
> --
> Oracle
> http://www.jmcp.homeunix.com/blog