Hello,
Triggered by an IRC discussion, I had a look at this old mail still in
my mbox:
Justus Winter, on Mon, 08 Feb 2016 15:04:14 +0100, wrote:
> Quoting Svante Signell (2016-02-08 14:44:09)
> > Where can I find the code that sets the flags of the null
> > translator's underlying node to zero?
>
> It starts with /dev/null having a passive translator record. When it
> is first accessed, diskfs_S_dir_lookup will start the translator on
> demand. Somewhere between diskfs_S_dir_lookup, fshelp_fetch_root and
> fshelp_start_translator_long, there is a callback that opens the
> underlying node.
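
For reference, the callback in question is of type
fshelp_fetch_root_callback2_t from <hurd/fshelp.h> (in libdiskfs it is
_diskfs_translator_callback2, if I remember correctly). Here is a rough
sketch of its shape, with hypothetical helper names:

  #include <errno.h>
  #include <mach.h>
  #include <hurd/fshelp.h>

  /* Hypothetical stand-in for however the filesystem makes an
     unauthenticated port to the node itself; libdiskfs creates a
     protid opened with FLAGS at this point.  */
  static mach_port_t
  open_underlying_node (void *node, int flags)
  {
    (void) node;
    (void) flags;
    return MACH_PORT_NULL;  /* placeholder */
  }

  /* Matches fshelp_fetch_root_callback2_t: return a port to the
     underlying node, opened with FLAGS, in *UNDERLYING.  */
  static error_t
  example_callback2 (void *cookie1, void *cookie2, int flags,
                     mach_port_t *underlying,
                     mach_msg_type_name_t *underlying_type)
  {
    (void) cookie2;
    *underlying = open_underlying_node (cookie1, flags);
    *underlying_type = MACH_MSG_TYPE_MAKE_SEND;
    return *underlying == MACH_PORT_NULL ? EOPNOTSUPP : 0;
  }
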
More precisely, it's trivfs_startup which calls fsys_startup to get the
underlying node. It just passes its flags parameter along, which happens
to be 0 in basically all translators, except the term translator in some
cases. I don't really know why, other than probably to avoid potential
spurious side effects.
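
To make that concrete, here is roughly what the startup of a trivial
translator looks like (a sketch modeled on the translators in the tree,
e.g. trans/hello.c, with details from memory); note the 0 passed as the
flags argument of trivfs_startup:

  #include <error.h>
  #include <fcntl.h>
  #include <stdlib.h>
  #include <mach.h>
  #include <hurd/trivfs.h>

  /* The usual libtrivfs knobs a translator has to provide.  */
  int trivfs_fstype = FSTYPE_MISC;
  int trivfs_fsid = 0;
  int trivfs_support_read = 1;
  int trivfs_support_write = 1;
  int trivfs_support_exec = 0;
  int trivfs_allow_open = O_READ | O_WRITE;

  void
  trivfs_modify_stat (struct trivfs_protid *cred, io_statbuf_t *st)
  {
    /* Nothing to override in this sketch.  */
  }

  error_t
  trivfs_goaway (struct trivfs_control *cntl, int flags)
  {
    exit (0);
  }

  int
  main (void)
  {
    error_t err;
    mach_port_t bootstrap;
    struct trivfs_control *control;

    task_get_bootstrap_port (mach_task_self (), &bootstrap);
    if (bootstrap == MACH_PORT_NULL)
      error (1, 0, "must be started as a translator");

    /* The 0 here is the flags parameter that trivfs_startup forwards
       to fsys_startup, i.e. the open flags for the underlying node.  */
    err = trivfs_startup (bootstrap, 0, NULL, NULL, NULL, NULL, &control);
    if (err)
      error (1, err, "trivfs_startup");

    ports_manage_port_operations_one_thread (control->pi.bucket,
                                             trivfs_demuxer, 0);
    return 0;
  }
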
Quoting Svante Signell (2016-02-08 12:53:56)
> - I have a working solution, which adds two new members to struct
>   trivfs_peropen:
>
>     struct rlock_peropen lock_status;
>     struct trivfs_node *tp;
>
> This solution was not accepted.
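
If I read the proposal right, it amounts to extending struct
trivfs_peropen along these lines (a sketch: the existing members are
quoted from <hurd/trivfs.h> from memory, and struct trivfs_node is part
of the proposal, not an existing type):

  #include <refcount.h>    /* refcount_t */
  #include <hurd/fshelp.h> /* struct rlock_peropen, if I recall correctly */

  struct trivfs_control;
  struct trivfs_node;      /* proposed type, not an existing one */

  struct trivfs_peropen
  {
    void *hook;            /* for user use */
    int openmodes;
    refcount_t refcnt;
    struct trivfs_control *cntl;

    /* Proposed additions: keep per-open record-lock state in trivfs
       itself instead of proxying it to the underlying node.  */
    struct rlock_peropen lock_status;
    struct trivfs_node *tp;
  };
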
I don't remember the discussion that rejected this solution. I guess
your working solution implements the record lock in trivfs itself? That
actually sort of makes sense to me: the data of the underlying node and
the data of the node exposed by the translator are not really related,
so I don't see why we should necessarily proxy the lock. Apparently the
only parts that are proxied are the file access permissions and times,
i.e. information from the inode itself.
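
To illustrate what I mean by proxying: libtrivfs' io_stat implementation
forwards the RPC to the underlying node and only then lets the
translator adjust the result. Condensed from memory (not the verbatim
source):

  #include <errno.h>
  #include <hurd/trivfs.h>
  #include <hurd/io.h>

  /* Condensed sketch of libtrivfs' stat proxying.  The stat data,
     including permissions and times, comes from the underlying node
     via CRED->realnode; the translator only gets to adjust the result
     afterwards through trivfs_modify_stat.  */
  error_t
  trivfs_S_io_stat (struct trivfs_protid *cred,
                    mach_port_t reply, mach_msg_type_name_t reply_type,
                    io_statbuf_t *st)
  {
    error_t err;

    if (!cred)
      return EOPNOTSUPP;

    err = io_stat (cred->realnode, st);
    if (!err)
      trivfs_modify_stat (cred, st);

    return err;
  }
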
Samuel