I hope you don't mind the top post, but... Here's a snippet of code from the krpc (I wasn't the author):

    if (stat == RPC_TIMEDOUT) {
            /*
             * Check for async send misfeature for NLM
             * protocol.
             */
            if ((rc->rc_timeout.tv_sec == 0 &&
                rc->rc_timeout.tv_usec == 0) ||
                (rc->rc_timeout.tv_sec == -1 &&
                utimeout.tv_sec == 0 &&
                utimeout.tv_usec == 0)) {
                    CLNT_RELEASE(client);
                    break;
            }
    }

This causes the xid to be reinitialized when a timeout occurs. The reinitialization uses __RPC_GETXID(&now), which does an exclusive-or of pid ^ time.sec ^ time.usec, so the new xid shouldn't end up the same anyhow. (Normally this initialization only occurs once, but because of the above, it could happen multiple times for the NLM.) What does "async send misfeature" mean? I have no idea.
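For reference, here is a minimal userland sketch of that seeding, i.e. the pid ^ time.sec ^ time.usec exclusive-or described above. The function name xid_seed_sketch() is my own; this is an illustration, not the kernel's __RPC_GETXID() macro:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Hypothetical stand-in for the krpc's xid seeding. */
    static uint32_t
    xid_seed_sketch(void)
    {
            struct timeval now;

            gettimeofday(&now, NULL);
            /* Exclusive-or of pid, seconds and microseconds. */
            return ((uint32_t)getpid() ^ (uint32_t)now.tv_sec ^
                (uint32_t)now.tv_usec);
    }

    int
    main(void)
    {
            /*
             * Seeds taken moments apart differ in the tv_usec bits, so
             * a re-seed after a timeout will usually, but not provably
             * always, produce an xid the server has not seen recently.
             */
            printf("xid seed: 0x%08x\n", xid_seed_sketch());
            return (0);
    }

Note that nothing in this scheme guarantees the re-seeded value differs from an xid the server still holds in its reply cache.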
If by "transaction id" they are referring to the svid in the lock RPC message, I have no idea if it should be unique for lock ops on different files. What does the spec. say? No idea, since there is no such thing. Anyhow, using TCP will avoid the DRC and whatever the Netapp filer thinks w.r.t. the uniqueness of this field. rick ________________________________________ From: Daniel Braniss <da...@cs.huji.ac.il> Sent: Wednesday, January 8, 2020 12:08 PM To: Rick Macklem Cc: Richard P Mackerras; Adam McDougall; freebsd-stable@freebsd.org Subject: Re: nfs lockd errors after NetApp software upgrade. top posting NetAPP reply: … Here you can see transaction ID (0x5e15f77a) being used over port 886 and the NFS server successfully responds. 4480695 2020-01-08 12:20:54 132.65.116.111 132.65.60.56 NLM 0x5e15f77a (1578497914) 886 V4 UNLOCK Call (Reply In 4480696) FH:0x54b075a0 svid:13629 pos:0-0 4480696 2020-01-08 12:20:54 132.65.60.56 132.65.116.111 NLM 0x5e15f77a (1578497914) 4045 V4 UNLOCK Reply (Call In 4480695) Here you see that 2 minutes later the client uses the same transaction ID (0x5e15f77a) and the same port again, but the file handle is different, so the client is unlocking a different file. 4591136 2020-01-08 12:22:54 132.65.116.111 132.65.60.56 NLM 0x5e15f77a (1578497914) 886 [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0 4592588 2020-01-08 12:22:57 132.65.116.111 132.65.60.56 NLM 0x5e15f77a (1578497914) 886 [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0 4598862 2020-01-08 12:23:03 132.65.116.111 132.65.60.56 NLM 0x5e15f77a (1578497914) 886 [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0 4608871 2020-01-08 12:23:21 132.65.116.111 132.65.60.56 NLM 0x5e15f77a (1578497914) 886 [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0 4635984 2020-01-08 12:23:59 132.65.116.111 132.65.60.56 NLM 0x5e15f77a (1578497914) 886 [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0 transaction ID reuse is also seen for a number of other transaction IDs starting at the same time. Withing ONTAP 9.3 we have changed the way our Replay-Cache tracks requests by including a checksum of the RPC request. Both in in this and earlier releases ONTAP would cache the call in frame 4480695, but starintg in 9.3 we then cache the checksum as part of that. When the client sends the request in frame 4591136 it uses the same transaction ID (0x5e15f77a) and same port again. Here the problem is that we already hold a checksum in cache for the “same transaction” … this seems to be happening after the client did not receive the response and re-transmits the request. danny On 24 Dec 2019, at 5:02, Rick Macklem <rmack...@uoguelph.ca<mailto:rmack...@uoguelph.ca>> wrote: Richard P Mackerras wrote: Hi, We had some bully type workloads emerge when we moved a lot of block storage from old XIV to new all flash 3PAR. I wonder if your IMAP issue might have emerged just because suddenly there was the opportunity with all flash. QOS is good on 9.x ONTAP. If anyone says it’s not then they last looked on 8.x. So I suggest you QOS the IMAP workload. Nobody should be using UDP with NFS unless they have a very specific set of circumstances. TCP was a real step forward. Well, I can't argue with this, considering I did the first working implementation of NFS over TCP. 
On 24 Dec 2019, at 5:02, Rick Macklem <rmack...@uoguelph.ca> wrote:

Richard P Mackerras wrote:
Hi,
We had some bully-type workloads emerge when we moved a lot of block storage from old XIV to new all-flash 3PAR. I wonder if your IMAP issue might have emerged just because suddenly there was the opportunity with all flash. QOS is good on 9.x ONTAP. If anyone says it’s not then they last looked at 8.x. So I suggest you QOS the IMAP workload. Nobody should be using UDP with NFS unless they have a very specific set of circumstances. TCP was a real step forward.

Well, I can't argue with this, considering I did the first working implementation of NFS over TCP. It was actually Mike Karels who suggested I try doing so. There's a paper in a very old Usenix Conference Proceedings, but it is so old that it isn't on the Usenix web page (around 1988 in Denver, if I recall); I don't even have a copy myself, although I was the author.

Now, having said that, I must note that the Network Lock Manager (NLM) and Network Status Monitor (NSM) were not NFS. They were separate stateful protocols (poorly designed, imho) that Sun never published. NFS as Sun designed it (NFSv2 and NFSv3) consisted of "stateless server" protocols, so that they could work reliably without server crash recovery. However, the NLM was inherently stateful, since it was dealing with file locks. So, you can't really lump the NLM in with NFS (and you should avoid use of the NLM over any transport, imho). NFSv4 tackled the difficult problem of having a "stateful server" and crash recovery, which resulted in a much more complex protocol (compare the size of RFC-1813 vs RFC-5661 to get some idea of this).

rick

Cheers
Richard

_______________________________________________
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"