With the patch applied, I cannot reproduce the bug; in my failure
scenarios, it seems that completing the request on errors in
nvme_rdma_send_done unblocks __nvme_submit_sync_cmd. Also, I think this
is safe from double completions.
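To make that concrete, here is a rough sketch of what I am describing,
on top of the v4.20-era nvme_rdma_send_done(); the error-branch
completion is my illustration (mirroring the nvme_cancel_request()
pattern), not the actual patch:

static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
{
        struct nvme_rdma_qe *qe =
                container_of(wc->wr_cqe, struct nvme_rdma_qe, cqe);
        struct nvme_rdma_request *req =
                container_of(qe, struct nvme_rdma_request, sqe);
        struct request *rq = blk_mq_rq_from_pdu(req);

        if (unlikely(wc->status != IB_WC_SUCCESS)) {
                nvme_rdma_wr_error(cq, wc, "SEND");
                /* Illustration only: failing the request here wakes the
                 * waiter in __nvme_submit_sync_cmd() instead of leaving
                 * it blocked until (or past) the timeout handler. */
                nvme_req(rq)->status = NVME_SC_ABORT_REQ;
                blk_mq_complete_request(rq);
                return;
        }

        if (refcount_dec_and_test(&req->ref))
                nvme_end_request(rq, req->status, req->result);
}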
However, it seems that the nvme_rdma_timeout code is still not free from
the double-completion problem. So it looks promising to me if you could
separate out the nvme_rdma_wr_error handling code as a new patch.

On Tue, Dec 11, 2018 at 1:14 AM Nitzan Carmi <nitz...@mellanox.com> wrote:
>
> I was just in the middle of sending this upstream when I saw your
> mail, and thought too that it addresses the same bug, although I see a
> slightly different call trace than yours.
>
> I would be happy if you could verify that this patch works for you too,
> so we can push it upstream.
>
> On 11/12/2018 01:40, Jaesoo Lee wrote:
> > It seems that your patch addresses the same bug. I will see if
> > it works for our failure scenarios.
> >
> > Why don't you push it upstream?
> >
> > On Sun, Dec 9, 2018 at 6:22 AM Nitzan Carmi <nitz...@mellanox.com> wrote:
> >>
> >> Hi,
> >> We encountered a similar issue.
> >> I think the problem is that error_recovery might not even be
> >> queued when we are in the DELETING state (or the CONNECTING state,
> >> for that matter), because we cannot move from those states to
> >> RESETTING.
> >>
> >> We prepared some patches which handle completions in case such a
> >> scenario happens (which, in fact, might happen in numerous error
> >> flows).
> >>
> >> Does it solve your problem?
> >> Nitzan.
> >>
> >>
> >> On 30/11/2018 03:30, Sagi Grimberg wrote:
> >>>
> >>>> This does not hold, at least for the NVMe RDMA host driver. An
> >>>> example scenario is when the RDMA connection is gone while the
> >>>> controller is being deleted. In this case, the nvmf_reg_write32()
> >>>> for sending the shutdown admin command from the delete_work could
> >>>> hang forever if the command is not completed by the timeout
> >>>> handler.
> >>>
> >>> If the queue is gone, this means that the queue has already been
> >>> flushed and any commands that were inflight have completed with a
> >>> flush error completion...
> >>>
> >>> Can you describe the scenario that caused this hang? When did the
> >>> queue become "gone", and when did the shutdown command execute?
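For reference, the state-machine guard Nitzan describes is the check at
the top of nvme_rdma_error_recovery(); roughly, from a v4.20-era tree
(the comment is mine, sketching the failure mode):

static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
{
        /* From DELETING (or CONNECTING) the transition to RESETTING is
         * not allowed, so err_work is never queued and nothing is left
         * to complete inflight requests -- e.g. the shutdown register
         * write issued through nvmf_reg_write32() waits forever. */
        if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
                return;

        queue_work(nvme_wq, &ctrl->err_work);
}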
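And the timeout handler I am worried about at the top, again roughly as
in v4.20 (a sketch; the comments are mine):

static enum blk_eh_timer_return
nvme_rdma_timeout(struct request *rq, bool reserved)
{
        struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);

        dev_warn(req->queue->ctrl->ctrl.device,
                 "I/O %d QID %d timeout, reset controller\n",
                 rq->tag, nvme_rdma_queue_idx(req->queue));

        /* Queues err_work, whose teardown completes inflight requests
         * via nvme_cancel_request(). If another path (e.g. completing
         * on WR errors, as in the proposed patch) completes the same
         * request concurrently, we can get a double completion. */
        nvme_rdma_error_recovery(req->queue->ctrl);

        /* fail with DNR on cmd timeout */
        nvme_req(rq)->status = NVME_SC_ABORT_REQ | NVME_SC_DNR;

        return BLK_EH_DONE;
}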