To do step 4, you need to log in as root, or create a new user whose
home directory is not under /export.
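
For example, a rough sketch (the "tmpadmin" name and home directory
path are just placeholders; anything outside /export works):

   # as root
   useradd -m -d /var/tmp/tmpadmin tmpadmin   # placeholder name and path
   passwd tmpadmin

Then log in on the console as tmpadmin and run the zfs commands with
su - (or pfexec, if the account has the right profile).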

Sent from my iPhone

On Jan 9, 2009, at 10:10 AM, noz <sf2...@gmail.com> wrote:

> Kyle wrote:
>> So if preserving the home filesystem through re-installs is really
>> important, putting the home filesystem in a separate pool may be in
>> order.
>
> My problem is similar to the original thread author's, and this
> scenario is exactly the one I had in mind. I figured out a workable
> solution from the ZFS admin guide, but I've only tested it in
> VirtualBox. I have no idea how well it would work if I actually had
> hundreds of gigabytes of data. I also don't know whether my solution
> is the recommended way to do this, so please let me know if anyone
> has a better method.
>
> Here's my solution:
> (1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                     69K  15.6G    18K  /epool
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/export             632K  11.9G    19K  /export
> rpool/export/home        612K  11.9G    19K  /export/home
> rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
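
Before moving anything onto the new pool, it is cheap to check that
the three-way mirror came up the way you intended:

   zpool status epool
   zpool list epool

zpool status should show epool ONLINE with all three disks under one
mirror vdev.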
>
> (2) n...@holodeck:~# zfs snapshot -r rpool/export@now
> (3) n...@holodeck:~# zfs send -R rpool/export@now > /tmp/export_now
> (4) n...@holodeck:~# zfs destroy -r -f rpool/export
> (5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                    756K  15.6G    18K  /epool
> epool/export             630K  15.6G    19K  /export
> epool/export/home        612K  15.6G    19K  /export/home
> epool/export/home/noz    592K  15.6G   592K  /export/home/noz
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
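
Since you mention hundreds of gigabytes: you probably don't want to
stage the stream in /tmp at all, because /tmp on Solaris is
swap-backed tmpfs, so a stream file that size can eat your swap.
zfs send can be piped straight into zfs recv instead. A sketch of
steps 2-5 done that way (dataset names taken from your listing above):

   zfs snapshot -r rpool/export@now
   # stream straight into the new pool, no intermediate file
   zfs send -R rpool/export@now | zfs recv -d epool
   # only now retire the old copy; destroying it first would also
   # destroy the snapshot being sent
   zfs destroy -r -f rpool/export
   zfs mount -a

The receive may complain that it cannot mount the new datasets while
rpool/export still owns /export; that should resolve itself at the
final zfs mount -a.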
>
> (6) n...@holodeck:~# zfs mount -a
>
> or
>
> (6) reboot
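
Either way, before trusting it I'd double-check that the home data
really lives on the new pool:

   zfs list -r epool
   df -h /export/home
   ls /export/home/noz

df should now report epool/export/home, not rpool/export/home, for
/export/home.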
>
> The only part I'm uncomfortable with is having to destroy rpool's
> export filesystem (step 4), because trying to destroy it without the
> -f switch results in a "filesystem is active" error.
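
That "filesystem is active" error usually just means something still
has open files or a current working directory under /export, which is
why step 4 wants a root login or an account whose home is outside
/export (see above). If you'd rather not force it blindly, fuser can
show what is holding the filesystem busy, for example:

   fuser -cu /export/home/noz
   fuser -cu /export

Once nothing is using it, the destroy should go through without -f.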