-----"Jason Gunthorpe" <j...@ziepe.ca> wrote: -----

>To: "Bernard Metzler" <b...@zurich.ibm.com>
>From: "Jason Gunthorpe" <j...@ziepe.ca>
>Date: 07/12/2019 03:53PM
>Cc: "Arnd Bergmann" <a...@arndb.de>, "Doug Ledford"
><dledf...@redhat.com>, "Peter Zijlstra" <pet...@infradead.org>,
>linux-r...@vger.kernel.org, linux-kernel@vger.kernel.org
>Subject: [EXTERNAL] Re: Re: [PATCH] rdma/siw: avoid smp_store_mb() on
>a u64
>
>On Fri, Jul 12, 2019 at 01:05:14PM +0000, Bernard Metzler wrote:
>> 
>> >To: "Bernard Metzler" <b...@zurich.ibm.com>
>> >From: "Jason Gunthorpe" <j...@ziepe.ca>
>> >Date: 07/12/2019 02:03PM
>> >Cc: "Arnd Bergmann" <a...@arndb.de>, "Doug Ledford"
>> ><dledf...@redhat.com>, "Peter Zijlstra" <pet...@infradead.org>,
>> >linux-r...@vger.kernel.org, linux-kernel@vger.kernel.org
>> >Subject: [EXTERNAL] Re: [PATCH] rdma/siw: avoid smp_store_mb() on
>> >a u64
>> >
>> >On Fri, Jul 12, 2019 at 11:33:46AM +0000, Bernard Metzler wrote:
>> >> >diff --git a/drivers/infiniband/sw/siw/siw_verbs.c
>> >> >b/drivers/infiniband/sw/siw/siw_verbs.c
>> >> >index 32dc79d0e898..41c5ab293fe1 100644
>> >> >+++ b/drivers/infiniband/sw/siw/siw_verbs.c
>> >> >@@ -1142,10 +1142,11 @@ int siw_req_notify_cq(struct ib_cq *base_cq, enum ib_cq_notify_flags flags)
>> >> > 
>> >> >         if ((flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED)
>> >> >                 /* CQ event for next solicited completion */
>> >> >-                smp_store_mb(*cq->notify, SIW_NOTIFY_SOLICITED);
>> >> >+                WRITE_ONCE(*cq->notify, SIW_NOTIFY_SOLICITED);
>> >> >         else
>> >> >                 /* CQ event for any signalled completion */
>> >> >-                smp_store_mb(*cq->notify, SIW_NOTIFY_ALL);
>> >> >+                WRITE_ONCE(*cq->notify, SIW_NOTIFY_ALL);
>> >> >+        smp_wmb();
>> >> > 
>> >> >         if (flags & IB_CQ_REPORT_MISSED_EVENTS)
>> >> >                 return cq->cq_put - cq->cq_get;
>> >> 
>> >> 
>> >> Hi Arnd,
>> >> Many thanks for pointing that out! Indeed, this CQ notification
>> >> mechanism does not take 32 bit architectures into account.
>> >> Since we have only three flags to hold here, it's probably better
>> >> to make it a 32-bit value. That would remove the issue w/o
>> >> introducing an extra smp_wmb().
>> >
>> >I also prefer not to see smp_wmb() in drivers..
>> >
>> >> I'd prefer smp_store_mb(), since on some architectures it shall be
>> >> more efficient. That would also make it sufficient to use
>> >> READ_ONCE.
>> >
>> >The READ_ONCE is confusing to me too; if you need store_release
>> >semantics then the reader also needs to pair with load_acquire -
>> >otherwise it doesn't work.
>> >
>> >Still, we need to do something rapidly to fix the i386 build,
>> >please revise right away..
>> >
>> >Jason
>> >
>> >
>> 
>> We share CQ (completion queue) notification flags between the
>> application (which may live in user land) and the producers (kernel
>> QPs (queue pairs)). Those flags can be written by both the
>> application and the QPs. The application writes those flags to let
>> the driver know whether it shall inform about new work completions.
>> It can write those flags at any time.
>> Only a kernel producer reads those flags to decide whether the CQ
>> notification handler shall be kicked when a new CQ element gets
>> added to the CQ. When kicking the completion handler, the driver
>> resets the notification flag, which must then get re-armed by the
>> application.
>
>This looks wrong to me.. a userspace notification re-arm cannot be
>lost, so a split READ/TEST/WRITE sequence can't possibly work?
>
>I'd expect an atomic test and clear here?

We cannot avoid the case where the application re-arms the
CQ only after a CQE has already been placed. That is why applications
poll the CQ once more right after re-arming it - to make sure they do
not miss the one last CQE which would have produced a CQ event.
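
Just to illustrate the pattern I mean, the consumer side typically does
something like the following (plain libibverbs, nothing siw specific;
handle_completion() is a placeholder for the application's own logic):

/*
 * Sketch of the usual re-arm-then-drain pattern: re-arm the CQ, then
 * poll it once more so a CQE that slipped in just before the re-arm
 * cannot be missed, and only then block on the completion channel.
 */
#include <infiniband/verbs.h>

static void handle_completion(struct ibv_wc *wc);	/* app-specific placeholder */

static int rearm_and_wait(struct ibv_cq *cq, struct ibv_comp_channel *ch)
{
	struct ibv_cq *ev_cq;
	struct ibv_wc wc;
	void *ev_ctx;
	int n;

	if (ibv_req_notify_cq(cq, 0))		/* re-arm: any completion */
		return -1;

	while ((n = ibv_poll_cq(cq, 1, &wc)) > 0)
		handle_completion(&wc);		/* drain what raced with the re-arm */
	if (n < 0)
		return -1;

	if (ibv_get_cq_event(ch, &ev_cq, &ev_ctx))	/* now it is safe to block */
		return -1;
	ibv_ack_cq_events(ev_cq, 1);
	return 0;
}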

I do not think an atomic test-and-clear would change that picture.

Also, per RDMA verbs semantics, if the user re-arms the CQ
more than once before it gets a CQ event, it will still get only one
CQ event when a new CQ element becomes ready.
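
For completeness, the atomic test-and-clear Jason refers to would, on
the producer side, look roughly like the sketch below. It assumes the
notify flags shrink to a 32-bit field (as discussed above) so that
cmpxchg() applies; cq_notify_test_and_clear() is a made-up helper name,
not driver code:

/*
 * Atomically test whether the CQ is armed for this kind of completion
 * and, if so, disarm it. A concurrent re-arm from the application makes
 * the cmpxchg() fail and we simply re-evaluate, so the re-arm is never
 * lost.
 */
static bool cq_notify_test_and_clear(u32 *notify, bool solicited_cqe)
{
	u32 want = SIW_NOTIFY_ALL | (solicited_cqe ? SIW_NOTIFY_SOLICITED : 0);
	u32 old = READ_ONCE(*notify);

	while (old & want) {
		u32 prev = cmpxchg(notify, old, SIW_NOTIFY_NOT);

		if (prev == old)
			return true;	/* disarmed; caller kicks the handler */
		old = prev;		/* raced with a re-arm; check again */
	}
	return false;
}
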
>
>
>> @@ -1141,11 +1145,17 @@ int siw_req_notify_cq(struct ib_cq *base_cq, enum ib_cq_notify_flags flags)
>>      siw_dbg_cq(cq, "flags: 0x%02x\n", flags);
>>  
>>      if ((flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED)
>> -            /* CQ event for next solicited completion */
>> -            smp_store_mb(*cq->notify, SIW_NOTIFY_SOLICITED);
>> +            /*
>> +             * Enable CQ event for next solicited completion
>> +             * and make it visible to all associated producers.
>> +             */
>> +            smp_store_mb(cq->notify->flags, SIW_NOTIFY_SOLICITED);
>
>But what is the 2nd piece of data to motivate the smp_store_mb?

Another core (for instance one handling a concurrent RX operation)
shall see this CQ being re-armed as soon as possible.
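
To spell out the pairing I have in mind: the producer (e.g. the RX
path) first makes the new CQE and the cq_put update visible, then looks
at the notify flags, which pairs with the smp_store_mb() above. A
sketch only, not verbatim driver code, assuming the cq->notify->flags
layout from the patch above:

/*
 * Producer side, after a new CQE has been stored and cq->cq_put has
 * been advanced: order those stores against reading the notify flags,
 * then disarm and kick the completion handler if the CQ is armed.
 */
static void siw_cq_kick_sketch(struct siw_cq *cq, bool solicited_cqe)
{
	u32 armed;

	smp_mb();	/* pairs with smp_store_mb() in siw_req_notify_cq() */

	armed = READ_ONCE(cq->notify->flags);

	if (!(armed & SIW_NOTIFY_ALL) &&
	    !(solicited_cqe && (armed & SIW_NOTIFY_SOLICITED)))
		return;

	WRITE_ONCE(cq->notify->flags, SIW_NOTIFY_NOT);	/* disarm */
	if (cq->base_cq.comp_handler)
		cq->base_cq.comp_handler(&cq->base_cq, cq->base_cq.cq_context);
}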

Best,
Bernard.
>
>Jason
>
>
