Pieter Bowman <bow...@math.utah.edu> wrote:

> The problem seems to be caused by the changing of the inode of the
> root of that filesystem.  The inode for the test filesystem's root
> directory is 3; the inodes for various snapshots are numbers like:
>
> 281474976666177
> 281474976671479
> 281474976673971

So it seems that there is a bug in your specific ZFS snapshot
implementation: the snapshot root should keep the inode number of the
dataset root.

The original ZFS snapshot implementation behaves as expected:

stat /pool/home/joerg /pool/home/joerg/.zfs/snapshot/snap

  File: `/pool/home/joerg'
  Size: 149             Blocks: 12         IO Block: 9728   directory
Device: 2d90007h/47775751d      Inode: 4           Links: 62
Access: (0755/drwxr-xr-x)  Uid: (  100/   joerg)   Gid: (    0/    root)
Access: 2018-06-22 17:53:19.060329104 +0200
Modify: 2018-05-18 11:24:02.368669378 +0200
Change: 2018-05-18 11:24:02.368669378 +0200

  File: `/pool/home/joerg/.zfs/snapshot/snap'
  Size: 149             Blocks: 12         IO Block: 9728   directory
Device: 2d90013h/47775763d      Inode: 4           Links: 62
Access: (0755/drwxr-xr-x)  Uid: (  100/   joerg)   Gid: (    0/    root)
Access: 2018-06-18 11:09:11.710687095 +0200
Modify: 2018-05-18 11:24:02.368669378 +0200
Change: 2018-05-18 11:24:02.368669378 +0200

You should make a bug report against your ZFS integration.
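A quick way to check any ZFS installation for this regression is to
compare the two inode numbers directly.  A minimal POSIX-shell sketch
(the function name is mine, and the GNU `stat -c` format is an
assumption -- on Solaris use GNU stat or `ls -di` instead):

```shell
#!/bin/sh
# check_snap_inode LIVE_DIR SNAP_DIR
# Reports whether the snapshot root keeps the inode of the live root,
# as the original ZFS snapshot implementation does.
check_snap_inode() {
    live_ino=$(stat -c %i "$1") || return 1
    snap_ino=$(stat -c %i "$2") || return 1
    if [ "$live_ino" = "$snap_ino" ]; then
        echo "same inode ($live_ino): matches the original ZFS behaviour"
    else
        echo "inode changed ($live_ino -> $snap_ino): worth a bug report"
    fi
}
```

Run it against a dataset root and one of its snapshots, e.g.
`check_snap_inode /pool/home/joerg /pool/home/joerg/.zfs/snapshot/snap`;
the device numbers are expected to differ, the inode numbers are not.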

Jörg

-- 
 EMail:jo...@schily.net                    (home) Jörg Schilling D-13353 Berlin
    joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/
