On Sun, 2018-11-18 at 13:50 +0100, Samuel Thibault wrote:
> Samuel Thibault, le dim. 18 nov. 2018 13:46:22 +0100, a ecrit:
> > Svante Signell, le dim. 18 nov. 2018 13:44:40 +0100, a ecrit:
> > > On Sun, 2018-11-18 at 01:13 +0100, Samuel Thibault wrote:
> > > > I don't remember the discussion which refused this solution.
Found some stuff in IRC logs:

2015-12-20:
14:43:53< gnu_srs1> youpi: So it's OK to add struct rlock_peropen lock_status to struct trivfs_peropen?
14:47:07< youpi> gnu_srs1: rlock: it should be fine, yes

2016-01-07:
11:13:35< gnu_srs> The only translator calling trivfs_open is magic. Maybe the creation of trivfs_node should be added directly to file_record_lock, file_lock and/or file_lock_stat.
11:14:04< gnu_srs> There is a call to trivfs_open also in term/users.c
11:16:56< teythoon> that might be b/c magic is the only trivfs translator exposing a directory
11:21:33< gnu_srs> What about adding the creation of trivfs_node directly?
11:45:33< gnu_srs> In case of single file translators, like null, is struct protid/struct peropen not needed?
11:47:48< teythoon> gnu_srs: yes it is
11:48:22< teythoon> a peropen object is needed for each file handle that is opened, hence the name
11:48:29< teythoon> but there is only one node object
11:48:34< teythoon> that must be initialized somewhere
11:52:47< gnu_srs> since initialisation cannot be placed in trivfs_open, can it be put directly in the routines: file_record_lock, file_lock and/or file_lock_stat?
12:09:01< teythoon> the object should be initialized where it is created
12:09:37< teythoon> if it is static (e.g. a global variable), then it should be initialized at program start
12:13:46< teythoon> gnu_srs: actually, libtrivfs doesn't do the actual node handling, it proxies everything to the underlying node
12:14:10< teythoon> you should make sure that the locking rpc gets proxied, and then libtrivfs is fine i guess
12:15:17< gnu_srs> How to do that?
12:16:26< teythoon> what is the name of the rpc call ?
12:16:51< teythoon> see e.g. libtrivfs/file-chmod.c
12:19:01< gnu_srs> The new RPC is file_record_lock, two others are file_lock and file_lock_stat
12:20:46< teythoon> try to forward them like file_chmod is forwarded
12:20:54< gnu_srs> So the proxy returns the call to the RPC?
12:21:28< teythoon> don't worry about that, the messy underlying details are hidden thanks to MIG
12:21:44< teythoon> you just call a function
12:22:11< gnu_srs> OK, I'll try. HTH :-D
14:22:34< gnu_srs> teythoon: ./checklock /dev/null: ./checklock: file_record_lock: Permission denied also as root. Is this due to setnullauth()?
14:43:16< teythoon> gnu_srs: hum, possibly, i don't know
14:43:32< teythoon> you can easily check this though, just don't call setnullauth
14:49:48< gnu_srs> No, it was not due to nullauth :(
14:53:48< gnu_srs> Back to the more complex patch then, or?
15:04:27< gnu_srs> How does this proxy stuff work, e.g. libtrivfs/file-chmod.c (trivfs_S_file_chmod) ?
15:43:33< teythoon> gnu_srs: i'm still pretty sure that proxying is the right thing to do (if it is for file_chmod, then also for this)
15:49:42< gnu_srs> How does this proxying work: Which implementation is executed?
16:21:24< gnu_srs> Ah I think I'm getting closer with the proxy solution. In trivfs.h the new struct trivfs_node has to be defined. How should it be defined: Like in diskfs.h or netfs.h, or?
16:21:45< teythoon> no
16:22:03< teythoon> libtrivfs has no node type
16:22:21< teythoon> instead, it delegates the file_* rpcs to the underlying node
16:22:25< teythoon> like file_chmod
16:43:03< gnu_srs> The function file_record_lock calls fshelp_rlock_tweak which has arguments: struct rlock_box *box,..., struct rlock_peropen *po,...
16:45:59< gnu_srs> Here is the implementation of trivfs_S_file_record_lock: return cred ? file_record_lock (cred->realnode, cmd, lock) : EOPNOTSUPP;
16:46:17< teythoon> yes
16:49:35< gnu_srs> See the line above too.
17:28:47< teythoon> gnu_srs: what about it ?
17:30:09< gnu_srs> How are these structs created and accessed?
17:30:47< gnu_srs> with the underlying trick
17:31:52< gnu_srs> Which implementation of file_record_lock is called?
17:34:42< teythoon> well, if you do settrans -a /dev/null /hurd/null, then /dev/null (the node on some ext2fs) is the underlying node handed to /hurd/null
17:35:28< teythoon> so, if /hurd/null relays the request as it does for file_chmod, then the ext2fs that /dev/null is on is the receiver
17:36:19< teythoon> for details how this works, see hurd/fsys.defs
17:37:20< teythoon> see fsys_startup in particular
17:37:56< gnu_srs> k!
17:40:15< gnu_srs> But /dev/null is not a regular file?
17:41:59< gnu_srs> The error I get is EACCES, not EOPNOTSUPP.
17:43:14< gnu_srs> file /dev/null: /dev/null: character special (0/0) (when translated by /hurd/null)
18:41:58< gnu_srs> (18:36:49) teythoon: so, if /hurd/null relays the request as it does for file_chmod, then the ext2fs that /dev/null is on is the receiver
18:41:59< gnu_srs> If you mean the device file /dev/null itself should it fall back to the libtrivfs implementation of file_record_lock, or?
18:42:35< gnu_srs> s/libtrivfs/libdiskfs/g
18:50:59< teythoon> yes

2016-01-08:
11:07:22< gnu_srs> Regarding the proxy solution, I'll look into that later on. Now I have a working solution by adding a node struct to trivfs.h
11:12:48< gnu_srs> I did implement the proxy version too, and obtain EACCES as a response.
12:00:44< teythoon> the other "solution" is no solution b/c it is likely just wrong

2016-01-13:
11:37:35< gnu_srs> teythoon: I've traced the EACCES to fsServer.c: file = diskfs_begin_using_protid_payload(In0P->Head.msgh_protected_payload)
11:37:35< gnu_srs> for checklock foo file->po->openstat=3 while for checklock /dev/null it is zero
11:42:42< gnu_srs> and setting it to 3 (O_RDWR) in gdb makes the test finish OK
11:59:49< teythoon> gnu_srs: i guess the issue is that the underlying file is not writable, and hence the new locking rpc returns EACCES when used on the underlying file

> > > > I guess your working solution is to implement the record-lock in trivfs
> > > > itself?
> > > > It sort of makes sense to me actually: the data of the
> > > > underlying node and the data of the node exposed by the translator
> > > > are not really related, I don't see why we should necessarily proxy
> > > > the lock. Apparently the only parts which are proxied are file access
> > > > permissions and time, i.e. information of the inode itself.
> > >
> > > Do you say that you are suddenly interested in the proposal I made in
> > > 2016?
> >
> > Well, yes?
>
> Just to be clear: it's not "sudden", I had that couple of mails between
> Justus and you in my inbox since then, buried behind hundreds of other
> mails. The IRC discussion just raised that something was blocked there.

The updated (non-proxy) implementation now works fine. It no longer crashes the null translator :)