On February 28, 2021 9:18:09 PM UTC, Stephan Althaus 
<stephan.alth...@duedinghausen.eu> wrote:
>On 02/26/21 09:07 PM, Andreas Wacknitz wrote:
>> Am 23.02.21 um 08:00 schrieb Stephan Althaus:
>>> On 02/23/21 12:13 AM, Tim Mooney via openindiana-discuss wrote:
>>>> In regard to: Re: [OpenIndiana-discuss] safely cleanup pkg cache?, 
>>>> Andreas...:
>>>>
>>>>> Am 21.02.21 um 22:42 schrieb Stephan Althaus:
>>>>>> Hello!
>>>>>>
>>>>>> The "-s" option does the minimal obvious remove of the
>corresponding
>>>>>> snapshot:
>>>>
>>>> My experience seems to match what Andreas and Toomas are saying:
>>>> -s isn't doing what it's supposed to be doing (?).
>>>>
>>>> After using
>>>>
>>>>     sudo beadm destroy -F -s -v <bename>
>>>>
>>>> to destroy a dozen or so boot environments, I'm down to just this
>>>> for boot environments:
>>>>
>>>> $ beadm list
>>>> BE                                Active Mountpoint Space  Policy Created
>>>> openindiana                       -      -          12.05M static 2019-05-17 10:37
>>>> openindiana-2021:02:07            -      -          27.27M static 2021-02-07 01:01
>>>> openindiana-2021:02:07-backup-1   -      -          117K   static 2021-02-07 13:06
>>>> openindiana-2021:02:07-backup-2   -      -          117K   static 2021-02-07 13:08
>>>> openindiana-2021:02:07-1          NR     /          51.90G static 2021-02-07 17:23
>>>> openindiana-2021:02:07-1-backup-1 -      -          186K   static 2021-02-07 17:48
>>>> openindiana-2021:02:07-1-backup-2 -      -          665K   static 2021-02-07 17:58
>>>> openindiana-2021:02:07-1-backup-3 -      -          666K   static 2021-02-07 18:02
>>>>
>>>>
>>>> However, zfs list still shows (I think) snapshots for some of the
>>>> intermediate boot environments that I destroyed:
>>>>
>>>> $ zfs list -t snapshot
>>>> NAME                                                      USED  AVAIL  REFER  MOUNTPOINT
>>>> rpool/ROOT/openindiana-2021:02:07-1@install               559M      -  5.94G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:34:55   472M      -  6.28G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:46:32   555K      -  6.28G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:48:56  2.18M      -  6.45G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-06-13-22:13:18  1015M      -  9.74G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-06-21-16:25:04  1.21G      -  9.85G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-08-23-16:17:28   833M      -  9.74G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-08-28-21:51:55  1.40G      -  10.8G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-09-12-23:35:08   643M      -  11.7G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-10-02-22:55:57   660M      -  12.0G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-11-09-00:04:17   736M      -  12.4G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-12-05-01:02:10  1.02G      -  12.7G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2019-12-20-19:55:51   788M      -  12.9G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2020-02-13-23:17:35   918M      -  13.3G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-01-21-02:27:31  1.74G      -  13.9G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-06-22:47:15  1.71G      -  18.8G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-06:59:02  1.22G      -  19.1G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:06:07   280M      -  19.3G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:08:29   280M      -  19.3G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:21:52   640K      -  19.1G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:23:46   868K      -  19.2G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:48:07   294M      -  19.3G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:58:44   280M      -  19.3G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-08-00:02:17   280M      -  19.3G  -
>>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-21-06:24:56  3.49M      -  19.4G  -
>>>>
>>>> Now I have to figure out how to map the zfs snapshots to the boot
>>>> environments that I kept, so that I can "weed out" the zfs snapshots
>>>> that I don't need.
>>>>
>>>> I appreciate all the discussion and info my question has spawned! I
>>>> didn't anticipate the issue being as complicated as it appears it is.
>>>>
>>>> Tim
>>>
>>> Hello!
>>>
>>> "beadm -s " destroys snapshots.
>>>
>>> "rpool/ROOT/openindiana-2021:02:07-1" is the filesystem of the 
>>> current BE.
>>>
>>> I don't know why these snapshots are in there,
>>> but they were left there from the "pkg upgrade" somehow.
>>>
>>> I don't think that "beadm destroy -s" is to blame here.
>>>
>>> Maybe an additional parameter would be nice to get rid of old 
>>> snapshots within the BE filesystem(s).
>>>
>>> Greetings,
>>>
>>> Stephan
>>>
>>>
>> Hi,
>>
>> I think I hit the bug again, even when using beadm destroy -s
>>
>> ╰─➤  zfs list -t snapshot
>> NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
>> rpool1/ROOT/openindiana-2021:02:26@2021-02-22-16:33:39      489M      -  26.5G  -
>> rpool1/ROOT/openindiana-2021:02:26@2021-02-24-12:32:24      472M      -  26.5G  -   <- only one snapshot here from Feb. 24th
>> rpool1/ROOT/openindiana-2021:02:26@2021-02-25-13:03:15         0      -  26.5G  -
>> rpool1/ROOT/openindiana-2021:02:26@2021-02-25-13:03:50         0      -  26.5G  -
>> rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:10         0      -  26.5G  -
>> rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:57         0      -  26.5G  -
>> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-22-16:33:39  682M      -  1.99G  -
>> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-24-12:32:24  653M      -  1.99G  -
>> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-25-13:03:15  632K      -  2.00G  -
>> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-25-13:03:50  130M      -  2.12G  -
>> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:10  691K      -  2.07G  -
>> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:57  178M      -  2.25G  -
>> ╭─andreas@skoll ~
>> ╰─➤  pfexec zfs destroy rpool1/ROOT/openindiana-2021:02:26@2021-02-22-16:33:39
>> ╭─andreas@skoll ~
>> ╰─➤  pfexec zfs destroy rpool1/ROOT/openindiana-2021:02:26/var@2021-02-22-16:33:39
>> ╭─andreas@skoll ~               <- Two older snapshots removed
>> ╰─➤  beadm list
>> BE                     Active Mountpoint Space  Policy Created
>> openindiana-2021:02:24 -      -          23.70M static 2021-02-24 13:33
>> openindiana-2021:02:25 -      -          14.08M static 2021-02-25 14:03
>> openindiana-2021:02:26 NR     /          32.54G static 2021-02-26 09:35   <- Three BE's, let's remove the oldest
>> ╭─andreas@skoll ~
>> ╰─➤  pfexec beadm destroy -s openindiana-2021:02:24   <- See, used with -s!
>> Are you sure you want to destroy openindiana-2021:02:24?
>> This action cannot be undone (y/[n]): y
>> Destroyed successfully
>> ╭─andreas@skoll ~
>> ╰─➤  beadm list
>> BE                     Active Mountpoint Space  Policy Created
>> openindiana-2021:02:25 -      -          14.08M static 2021-02-25 14:03   <- BE removed
>> openindiana-2021:02:26 NR     /          32.41G static 2021-02-26 09:35
>> ╭─andreas@skoll ~
>> ╰─➤  beadm list -a
>> BE/Dataset/Snapshot Active Mountpoint Space   Policy Created
>> openindiana-2021:02:25
>>    rpool1/ROOT/openindiana-2021:02:25                         -  -  14.08M  static 2021-02-25 14:03
>> openindiana-2021:02:26
>>    rpool1/ROOT/openindiana-2021:02:26                         NR /  32.41G  static 2021-02-26 09:35
>>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-24-12:32:24 -  -  685.24M static 2021-02-24 13:32   <- This snapshot also survived the beadm destroy -s command
>>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-25-13:03:15 -  -  654.72M static 2021-02-25 14:03
>>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:10 -  -  691K    static 2021-02-26 09:35
>>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:57 -  -  177.52M static 2021-02-26 09:35
>>    rpool1/ROOT/openindiana-2021:02:26@2021-02-24-12:32:24     -  -  502.54M static 2021-02-24 13:32   <- Snapshot still there
>>    rpool1/ROOT/openindiana-2021:02:26@2021-02-25-13:03:15     -  -  479.87M static 2021-02-25 14:03
>>    rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:10     -  -  0       static 2021-02-26 09:35
>>    rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:57     -  -  0       static 2021-02-26 09:35
>>
>> Andreas
>
>Hi,
>
>now I think we are (or better: I am) confusing snapshots with 
>filesystems in this case.
>
>Reading the following command outputs, I interpret that there is always
>a filesystem corresponding to a BE; maybe the snapshots in the ZFS
>dataset of the current BE have nothing to do with older BEs.
>
>$ beadm list
>BE                                Active Mountpoint Space   Policy Created
>openindiana-2020:11:26            -      -          40.50M  static 2020-11-26 13:52
>openindiana-2020:11:26-backup-1   -      -          263K    static 2020-12-11 22:27
>openindiana-2020:12:29            -      -          34.60M  static 2020-12-29 22:07
>openindiana-2021:01:13            -      -          34.68M  static 2021-01-13 21:57
>openindiana-2021:02:18            -      -          409.54M static 2021-02-18 22:31
>openindiana-2021:02:18-backup-1   -      -          42.21M  static 2021-02-19 13:35
>openindiana-2021:02:20            -      -          42.67M  static 2021-02-20 20:52
>openindiana-2021:02:20-1          NR     /          168.06G static 2021-02-20 21:22
>steven@dell6510:~$ zfs list -t all -r rpool
>NAME USED  AVAIL  REFER  MOUNTPOINT
>rpool 207G  4.34G    33K  /rpool
>rpool/ROOT 169G  4.34G    23K  legacy
>rpool/ROOT/openindiana-2020:11:26 40.5M  4.34G  37.7G  /
>rpool/ROOT/openindiana-2020:11:26-backup-1 263K  4.34G  37.2G  /
>rpool/ROOT/openindiana-2020:12:29 34.6M  4.34G  38.4G  /
>rpool/ROOT/openindiana-2021:01:13 34.7M  4.34G  41.9G  /
>rpool/ROOT/openindiana-2021:02:18 410M  4.34G  41.9G  /
>rpool/ROOT/openindiana-2021:02:18-backup-1 42.2M  4.34G  42.2G  /
>rpool/ROOT/openindiana-2021:02:20 42.7M  4.34G  42.6G  /
>rpool/ROOT/openindiana-2021:02:20-1 168G  4.34G  42.7G  /
>
>Now to check if "beadm destroy -s" works:
>
># zfs snapshot rpool/ROOT/openindiana-2020:11:26@test
>
>$ zfs list -t all -r rpool
>NAME USED  AVAIL  REFER  MOUNTPOINT
>rpool 207G  4.34G    33K  /rpool
>rpool/ROOT 169G  4.34G    23K  legacy
>rpool/ROOT/openindiana-2020:11:26 40.5M  4.34G  37.7G  /
>rpool/ROOT/openindiana-2020:11:26@test 0      -  37.7G  -
>rpool/ROOT/openindiana-2020:11:26-backup-1 263K  4.34G  37.2G  /
><snip>
>
># beadm destroy -s openindiana-2020:11:26
>Are you sure you want to destroy openindiana-2020:11:26?
>This action cannot be undone (y/[n]): y
>Destroyed successfully
>$ zfs list -t all -r rpool
>NAME USED  AVAIL  REFER  MOUNTPOINT
>rpool 207G  4.38G    34K  /rpool
>rpool/ROOT 169G  4.38G    23K  legacy
>rpool/ROOT/openindiana-2020:11:26-backup-1 263K  4.38G  37.2G  /
>rpool/ROOT/openindiana-2020:12:29 34.6M  4.38G  38.4G  /
><snip>
>
>
>This is what I personally expect to happen with "beadm destroy -s <bename>".
>But maybe I am confusing things, as I am relatively new to all this..
>
>Greetings,
>
>Stephan
>
>
>

Well, snapshots in the current BE do relate to older BEs: when you update the OS, 
a new BE is cloned from the current one, and you end the ritual with 
`beadm activate NEWBE && init 6` or similar.
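
(Spelled out, the ritual typically looks something like the sketch below; pkg 
update usually creates the new BE clone for you, and the BE name here is just 
an example, not anything from this thread:)

  $ pfexec pkg update          # clones the current BE and installs updates into the clone
  $ beadm list                 # note the name of the freshly created BE
  $ pfexec beadm activate openindiana-2021:03:01   # hypothetical BE name
  $ pfexec init 6              # reboot into the activated BE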

The activation in particular "zfs promotes" the new root dataset (and its 
children, if any, and zone roots that refer to it as their current "parentbe"), 
which makes it own the whole linear history of snapshots back to the oldest 
known one; any older surviving BEs, including the one you began the upgrade 
from, "become" clones whose origin is some snapshot on the chain leading to the 
new active boot environment. It is just a matter of hanging the same graph from 
a different root node.
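
(If you want to see that graph for yourself, the ZFS "origin" and "clones" 
properties show it; the dataset names below are just taken from the listings 
earlier in this thread, so adjust them to your pool. zfs promote is the 
primitive behind that re-hang:)

  $ zfs get -r -o name,value origin rpool/ROOT      # older BEs show an origin like rpool/ROOT/<newbe>@<snap>
  $ zfs list -t snapshot -o name,clones rpool/ROOT/openindiana-2021:02:07-1   # which snapshots still back clones
  # zfs promote rpool/ROOT/<somebe>                 # hang the shared snapshot history under <somebe> instead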

As a corollary, until you destroy those older BEs (after you have made sure you 
won't want to roll back to them), the var/pkg/... directory or sub-dataset in 
each of them holds blocks for the files of packages that were relevant back then.
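
(To see where that space actually sits, something like the following helps; 
again the pool name is from the examples above, and usedbysnapshots / 
usedbydataset are standard ZFS properties:)

  $ zfs list -r -o name,used,usedbysnapshots,usedbydataset,origin rpool/ROOT
  $ zfs list -t snapshot -r -o name,used,referenced rpool/ROOT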

Being a rolling release with a large userland, OI in particular has a lot of 
package versions to juggle (hence much metadata, and a hungry, slow pkg(5) that 
may be dealing with millions of items to calculate your upgrade path on every 
run). That's why every year or two the public OI pkg repository is snapshotted 
(so antiquated installations can hop across a few such snapshots, going through 
a few historic "publisher" URLs, until they reach the modern age) and restarts 
with a small scope of clean history. That's also why your current BE's package 
cache might not reference many, or any, of the files used by earlier ones. Also 
note that the cache is kept in part to serve synced and quick, cheap zone 
installations, not just as trash left over from installing the latest BE version.
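
(If you just want a feel for how big that catalog/cache data is on your box: 
the usual IPS image layout keeps it under /var/pkg, so adjust the path if your 
image lives elsewhere:)

  $ du -sh /var/pkg/publisher/*    # per-publisher catalogs and cached package files
  $ pkg publisher                  # which publishers/origins this image currently knows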

For comparison, OmniOS Bloody probably does not suffer that effect visibly, 
because by design they ship (and version-juggle) just a bare minimum of 
packages, and the LTS releases track even fewer iterations of that.

With my split-root ZFS scripts (see github) I did split var/pkg/publisher off 
into its own dataset, among my other splits, so at least that separation is 
known to be viable :) I don't remember wiping it recently, so I can't say 
quickly whether that is still safe, but at worst you can snapshot it, wipe the 
live directory, try to update, and if all hell breaks loose, roll back to the 
snapshot taken before the wipe.
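
(Roughly, and only as a sketch: the dataset name below is made up for 
illustration, use whatever actually holds your /var/pkg/publisher, or the 
dataset containing /var/pkg if you have no such split:)

  # zfs snapshot rpool/ROOT/mybe/var_pkg_publisher@before-wipe
  # rm -rf /var/pkg/publisher/*            # wipe the live directory contents
  # pkg refresh --full && pkg update -nv   # see whether pkg can still refresh catalogs and plan an update
  # zfs rollback rpool/ROOT/mybe/var_pkg_publisher@before-wipe   # if things break, restore the snapshot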

Hope that helps,
Jim Klimov

--
Typos courtesy of K-9 Mail on my Android

_______________________________________________
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss
