$ sudo -s
# zfs send -R rpool@snap01 | zfs recv -Fdu rpool2

?
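(A sketch, untested here: -R turns the send into a replication stream
that includes every descendant dataset and snapshot, and -d/-u on the
receive side map the names under rpool2 and keep them unmounted so they
don't collide with the live mountpoints. To actually boot from rpool2
you'd still need a boot dataset and boot blocks; the device name below
is a hypothetical placeholder, assuming x86:)

# zpool set bootfs=rpool2/ROOT/openindiana rpool2
# installboot -m /boot/pmbr /boot/gptzfsboot /dev/rdsk/c2t0d0s0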

On Sun, 21 Jun 2020, 00:25 Judah Richardson, <[email protected]>
wrote:

> I am absolutely thoroughly confused here. It seems a lot of details are
> being left out of the docs? Here are my current rpool filesystems:
>
> # zfs list
> NAME                                               USED  AVAIL  REFER  MOUNTPOINT
> rpool                                             27.0G  1.53G  33.5K  /rpool
> rpool/ROOT                                        17.9G  1.53G    24K  legacy
> rpool/ROOT/openindiana                            15.2M  1.53G  6.09G  /
> rpool/ROOT/openindiana-2019:12:01                  970M  1.53G  7.32G  /
> rpool/ROOT/openindiana-2019:12:02                 48.0M  1.53G  7.32G  /
> rpool/ROOT/openindiana-2019:12:10                  813K  1.53G  8.34G  /
> rpool/ROOT/openindiana-2020:01:14                 15.7M  1.53G  7.88G  /
> rpool/ROOT/openindiana-2020:02:12                  858K  1.53G  7.82G  /
> rpool/ROOT/openindiana-2020:02:27                  650K  1.53G  7.92G  /
> rpool/ROOT/openindiana-2020:03:10                  656K  1.53G  8.23G  /
> rpool/ROOT/openindiana-2020:03:26                 16.8G  1.53G  8.85G  /
> rpool/ROOT/pre_activate_18.12_1575387063           239K  1.53G  7.31G  /
> rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G  /
> rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G  /
> rpool/ROOT/pre_download_19.12.homeuse_1581739687     1K  1.53G  7.68G  /
> rpool/ROOT/pre_napp-it-18.12                       273K  1.53G  6.63G  /
> rpool/dump                                        3.95G  1.53G  3.95G  -
> rpool/export                                       987M  1.53G    24K  /export
> rpool/export/home                                  987M  1.53G    24K  /export/home
> rpool/export/home/judah                            987M  1.53G   270M  /export/home/judah
> rpool/swap                                        4.20G  4.51G  1.22G  -
>
> I then ran
>
> # zfs snapshot -r rpool@snap01
>
> and then
>
> sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
>
> where rpool2 is the ZFS pool on the new SSD.
>
> It doesn't seem anything was copied over.
>
> What am I doing wrong?
>
>
> On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller <[email protected]>
> wrote:
>
> > Have a look at zfs-send(1m).
> >
> > The replication flag there is -R. You must have a snapshot to send; you
> > cannot send datasets directly. The snapshots must be named exactly the
> > same across the whole pool, which you can achieve with zfs snapshot -r
> > very easily.
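> >
> > For example (the snapshot name is arbitrary):
> >
> > # zfs snapshot -r rpool@snap01
> > # zfs list -t snapshot -r rpool
> >
> > Every dataset in the pool should then show an @snap01 snapshot, and a
> > single zfs send -R rpool@snap01 will pick them all up.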
> >
> > Hope this helps
> > Greetings
> > Till
> >
> > On 21.06.20 01:03, Judah Richardson wrote:
> > > I can't seem to find any command that recursively sends all the
> > > datasets on 1 zpool to another ...
> > >
> > > This is all very confusing and frustrating. Disk upgrades must have been
> > > a considered user operation, no? Why not make it intuitive and simple? :(
> > >
> > > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams <[email protected]>
> > > wrote:
> > >
> > >> I've never done that, but it must be worth a go, unless you want to just
> > >> install a new system on the new disk and copy over the files you want to
> > >> change afterwards ...
> > >>
> > >> On Sat, 20 Jun 2020, 23:46 Judah Richardson,
> > >> <[email protected]> wrote:
> > >>
> > >>> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka <[email protected]>
> > >>> wrote:
> > >>>
> > >>>> Another option is to back up the current BE to the datapool via zfs
> > >>>> send.
> > >>>>
> > >>> Would it be possible to just zfs send everything from the current SSD
> > >>> to the new one, then enable autoexpand on the new SSD and make it
> > >>> bootable?
> > >>>
> > >>>> This can be done continuously via incremental send for ongoing backups.
> > >>>> If the system disk fails (or you want to replace it), add a new disk,
> > >>>> install a default OS, import the datapool and restore the BE via zfs
> > >>>> send. Then activate this BE and reboot to have the exact former OS
> > >>>> installation restored.
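> > >>>>
> > >>>> A sketch of one incremental cycle (the BE, snapshot and target names
> > >>>> here are only placeholders):
> > >>>>
> > >>>> # zfs snapshot -r rpool/ROOT/openindiana@backup2
> > >>>> # zfs send -R -i @backup1 rpool/ROOT/openindiana@backup2 | zfs recv -Fu datapool/backup/openindiana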
> > >>>>
> > >>>>   Gea
> > >>>> @napp-it.org
> > >>>>
> > >>>> Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > >>>>> On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> > >>>>>> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like to move
> > >>>>>> that installation to a 128 GB SSD. What's the easiest way to do this?
> > >>>>> The easiest way is to use zpool commands.  First, add the large SSD as
> > >>>>> half a mirror to the smaller one.  Then, detach the smaller one.
> > >>>>> These options are all described in the zpool man page.
> > >>>>>
> > >>>>> You will likely need to use the installboot command on the large SSD
> > >>>>> to make it bootable before you do the detach.  This operation is
> > >>>>> described in the installboot man page.
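> > >>>>>
> > >>>>> A sketch of the whole sequence, with hypothetical device names (check
> > >>>>> yours with zpool status) and x86 boot blocks assumed:
> > >>>>>
> > >>>>> # zpool attach rpool c1t0d0s0 c2t1d0s0
> > >>>>> # zpool status rpool      (wait for the resilver to complete)
> > >>>>> # installboot -m /boot/pmbr /boot/gptzfsboot /dev/rdsk/c2t1d0s0
> > >>>>> # zpool detach rpool c1t0d0s0
> > >>>>> # zpool set autoexpand=on rpool
> > >>>>>
> > >>>>> With autoexpand on (or after a zpool online -e on the new device), the
> > >>>>> pool grows to the larger SSD's size.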
> > >>>>>
> > >>>>>> I was thinking of using Clonezilla, but I'm not sure if that's the
> > >>>>>> way to go here.
> > >>>>> I'd recommend using native illumos commands instead.
> > >>>>>
> > >>>>