On Thu, Oct 02, 2014 at 02:07:09AM +0000, Hiroshi Shimamoto wrote:
> > Subject: Re: [dpdk-dev] [memnic PATCH v2 6/7] pmd: add branch hint in 
> > recv/xmit
> > 
> > On Wed, Oct 01, 2014 at 11:33:23PM +0000, Hiroshi Shimamoto wrote:
> > > > Subject: Re: [dpdk-dev] [memnic PATCH v2 6/7] pmd: add branch hint in 
> > > > recv/xmit
> > > >
> > > > On Wed, Oct 01, 2014 at 09:12:44AM +0000, Hiroshi Shimamoto wrote:
> > > > > > Subject: Re: [dpdk-dev] [memnic PATCH v2 6/7] pmd: add branch hint 
> > > > > > in recv/xmit
> > > > > >
> > > > > > On Tue, Sep 30, 2014 at 11:52:00PM +0000, Hiroshi Shimamoto wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > > Subject: Re: [dpdk-dev] [memnic PATCH v2 6/7] pmd: add branch 
> > > > > > > > hint in recv/xmit
> > > > > > > >
> > > > > > > > On Tue, Sep 30, 2014 at 11:14:40AM +0000, Hiroshi Shimamoto 
> > > > > > > > wrote:
> > > > > > > > > From: Hiroshi Shimamoto <h-shimamoto at ct.jp.nec.com>
> > > > > > > > >
> > > > > > > > > To reduce instruction cache misses, add branch condition hints into
> > > > > > > > > the recv/xmit functions. This improves performance a bit.
> > > > > > > > >
> > > > > > > > > We can see performance improvements with memnic-tester.
> > > > > > > > > Using Xeon E5-2697 v2 @ 2.70GHz, 4 vCPU.
> > > > > > > > >  size |  before  |  after
> > > > > > > > >    64 | 5.54Mpps | 5.55Mpps
> > > > > > > > >   128 | 5.46Mpps | 5.44Mpps
> > > > > > > > >   256 | 5.21Mpps | 5.22Mpps
> > > > > > > > >   512 | 4.50Mpps | 4.52Mpps
> > > > > > > > >  1024 | 3.71Mpps | 3.73Mpps
> > > > > > > > >  1280 | 3.21Mpps | 3.22Mpps
> > > > > > > > >  1518 | 2.92Mpps | 2.93Mpps
> > > > > > > > >
> > > > > > > > > Signed-off-by: Hiroshi Shimamoto <h-shimamoto at ct.jp.nec.com>
> > > > > > > > > Reviewed-by: Hayato Momma <h-momma at ce.jp.nec.com>
> > > > > > > > > ---
> > > > > > > > >  pmd/pmd_memnic.c | 18 +++++++++---------
> > > > > > > > >  1 file changed, 9 insertions(+), 9 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/pmd/pmd_memnic.c b/pmd/pmd_memnic.c
> > > > > > > > > index 7fc3093..875d3ea 100644
> > > > > > > > > --- a/pmd/pmd_memnic.c
> > > > > > > > > +++ b/pmd/pmd_memnic.c
> > > > > > > > > @@ -289,26 +289,26 @@ static uint16_t memnic_recv_pkts(void *rx_queue,
> > > > > > > > >       int idx, next;
> > > > > > > > >       struct rte_eth_stats *st = &adapter->stats[rte_lcore_id()];
> > > > > > > > >
> > > > > > > > > -     if (!adapter->nic->hdr.valid)
> > > > > > > > > +     if (unlikely(!adapter->nic->hdr.valid))
> > > > > > > > >               return 0;
> > > > > > > > >
> > > > > > > > >       pkts = bytes = errs = 0;
> > > > > > > > >       idx = adapter->up_idx;
> > > > > > > > >       for (nr = 0; nr < nb_pkts; nr++) {
> > > > > > > > >               p = &data->packets[idx];
> > > > > > > > > -             if (p->status != MEMNIC_PKT_ST_FILLED)
> > > > > > > > > +             if (unlikely(p->status != MEMNIC_PKT_ST_FILLED))
> > > > > > > > >                       break;
> > > > > > > > >               /* prefetch the next area */
> > > > > > > > >               next = idx;
> > > > > > > > > -             if (++next >= MEMNIC_NR_PACKET)
> > > > > > > > > +             if (unlikely(++next >= MEMNIC_NR_PACKET))
> > > > > > > > >                       next = 0;
> > > > > > > > >               rte_prefetch0(&data->packets[next]);
> > > > > > > > > -             if (p->len > framesz) {
> > > > > > > > > +             if (unlikely(p->len > framesz)) {
> > > > > > > > >                       errs++;
> > > > > > > > >                       goto drop;
> > > > > > > > >               }
> > > > > > > > >               mb = rte_pktmbuf_alloc(adapter->mp);
> > > > > > > > > -             if (!mb)
> > > > > > > > > +             if (unlikely(!mb))
> > > > > > > > >                       break;
> > > > > > > > >
> > > > > > > > >               rte_memcpy(rte_pktmbuf_mtod(mb, void *), p->data, p->len);
> > > > > > > > > @@ -350,7 +350,7 @@ static uint16_t memnic_xmit_pkts(void *tx_queue,
> > > > > > > > >       uint64_t pkts, bytes, errs;
> > > > > > > > >       uint32_t framesz = adapter->framesz;
> > > > > > > > >
> > > > > > > > > -     if (!adapter->nic->hdr.valid)
> > > > > > > > > +     if (unlikely(!adapter->nic->hdr.valid))
> > > > > > > > >               return 0;
> > > > > > > > >
> > > > > > > > >       pkts = bytes = errs = 0;
> > > > > > > > > @@ -360,7 +360,7 @@ static uint16_t memnic_xmit_pkts(void *tx_queue,
> > > > > > > > >               struct rte_mbuf *sg;
> > > > > > > > >               void *ptr;
> > > > > > > > >
> > > > > > > > > -             if (pkt_len > framesz) {
> > > > > > > > > +             if (unlikely(pkt_len > framesz)) {
> > > > > > > > >                       errs++;
> > > > > > > > >                       break;
> > > > > > > > >               }
> > > > > > > > > @@ -379,7 +379,7 @@ retry:
> > > > > > > > >                       goto retry;
> > > > > > > > >               }
> > > > > > > > >
> > > > > > > > > -             if (idx != ACCESS_ONCE(adapter->down_idx)) {
> > > > > > > > > +             if (unlikely(idx != ACCESS_ONCE(adapter->down_idx))) {
> > > > > > > > Why are you using ACCESS_ONCE here?  Or for that matter, 
> > > > > > > > anywhere else in this
> > > > > > > > PMD?  The whole idea of the ACCESS_ONCE macro is to assign a
> > > > > > > > value to a variable once and prevent it from getting reloaded
> > > > > > > > from memory at a later time; this is exactly contrary to that,
> > > > > > > > both in the sense that you're explicitly reloading the same
> > > > > > > > variable multiple times, and in that you're using it as part of
> > > > > > > > a comparison operation, rather than an assignment operation.
> > > > > > >
> > > > > > > ACCESS_ONCE prevents compiler optimization and ensures a load
> > > > > > > from memory.
> > > > > > > There could be multiple threads which read/write that index.
> > > > > > > We should compare the previously read value with the current
> > > > > > > value in memory.
> > > > > > > For that reason, I use the ACCESS_ONCE macro to read the value
> > > > > > > from memory.
> > > > > >
> > > > > > Should you not just make the variable volatile? That's the normal 
> > > > > > way to
> > > > > > guarantee reads from memory and prevent the compiler caching things 
> > > > > > in
> > > > > > registers.
> > > > >
> > > > > We don't want to always access memory; that could cause performance
> > > > > degradation.
> > > > > Like the Linux kernel, I use it only in the places where we really
> > > > > need to load from memory.
> > > > >
> > > > That's not true at all.  Every single read of adapter->down_idx in
> > > > memnic_xmit_pkts() is wrapped in an ACCESS_ONCE call.  There's no
> > > > difference between doing that and just declaring a volatile variable
> > > > and pointing it to &adapter->down_idx (save for the increased
> > > > legibility of the code).
> > >
> > > You're right; at this moment there is no reference without ACCESS_ONCE.
> > > I'm not sure whether code accessing that variable will be added in the
> > > future, but I would like to avoid accidentally introducing code that
> > > causes a performance issue, so I think the declaration in the structure
> > > should stay non-volatile.
> > > As you mentioned, using a local variable which points to down_idx will
> > > be fine.
> > So you would like to continue using a macro incorrectly to avoid a possible
> > performance issue with code that hasn't been written yet?  That's
> > nonsensical.
> 
> No, I will replace the ACCESS_ONCE macro with a local volatile variable,
> and then the ACCESS_ONCE macro will disappear.
> 
Ah, sorry, misunderstood your intentions.

Thanks
Neil

> thanks,
> Hiroshi
> 
> > What performance issue do you see occurring if you created a volatile
> > variable and then used it in conjunction with ACCESS_ONCE?
> > 
> > Neil
> > 
> > >
> > > I will submit a cleanup patch before starting the next development for 
> > > DPDK v1.8.
> > >
> > > thanks,
> > > Hiroshi
> > >
> > > >
> > > > Neil
> > > >
> > > > > thanks,
> > > > > Hiroshi
> > > > >
> > > > > >
> > > > > > /Bruce
> > > > > >
> > > > > > >
> > > > > > > thanks,
> > > > > > > Hiroshi
> > > > > > >
> > > > > > > >
> > > > > > > > Neil
> > > > > > >
> > > > >
> > >
> 
