On Wed, 2017-09-06 at 14:53 +0000, Ananyev, Konstantin wrote:
> 
> > -----Original Message-----
> > From: Chas Williams [mailto:3ch...@gmail.com]
> > Sent: Wednesday, September 6, 2017 2:56 PM
> > To: Ananyev, Konstantin <konstantin.anan...@intel.com>; Nicolau, Radu
> > <radu.nico...@intel.com>; dev@dpdk.org
> > Cc: olivier.m...@6wind.com; cw8...@att.com
> > Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> > 
> > On Wed, 2017-09-06 at 11:58 +0000, Ananyev, Konstantin wrote:
> > > 
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Chas Williams
> > > > Sent: Wednesday, September 6, 2017 11:46 AM
> > > > To: Nicolau, Radu <radu.nico...@intel.com>; dev@dpdk.org
> > > > Cc: olivier.m...@6wind.com; cw8...@att.com
> > > > Subject: Re: [dpdk-dev] [PATCH v2] mbuf: use refcnt = 0 when debugging
> > > > 
> > > > [Note: My former email address is going away eventually. I am moving
> > > > the conversation to my other email address which is a bit more
> > > > permanent.]
> > > > 
> > > > On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> > > > > 
> > > > > On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > > > > > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool")
> > > > > > it is much harder to detect a "double free". If the developer
> > > > > > makes a copy of an mbuf pointer and frees it twice, this condition
> > > > > > is never detected and the mbuf gets returned to the pool twice.
> > > > > > 
> > > > > > Since this requires extra work to track, make this behavior
> > > > > > conditional on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> > > > > > 
> > > > > > Signed-off-by: Chas Williams <ciwil...@brocade.com>
> > > > > > ---
> > > > > > 
> > > > > > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > >  		m->next = NULL;
> > > > > >  		m->nb_segs = 1;
> > > > > >  	}
> > > > > > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > > > > > +	rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > > > > +#endif
> > > > > > 
> > > > > >  	return m;
> > > > > > 
> > > > > > -	} else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > > > > > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> > > > > Why replace the use of atomic operation?
> > > > 
> > > > It doesn't. rte_mbuf_refcnt_update() is also atomic(ish) but it is
> > > > slightly more optimal. This whole section is a little hazy actually.
> > > > It looks like rte_pktmbuf_prefree_seg() unwraps
> > > > rte_mbuf_refcnt_update() so they can avoid setting the refcnt when the
> > > > refcnt is already the 'correct' value.
> > > 
> > > You don't need to use refcnt_update() here - if you take that path it
> > > already means that m->refcnt_atomic != 1.
> > > In fact, I think using refcnt_update() here might be a bit slower - as
> > > it means an extra read.
> > > Konstantin
> > 
> > Yes, that is somewhat the point. If an mbuf can have a refcnt of 0,
> > then we want to go into rte_mbuf_refcnt_update() which detects 0 -> -1.
> 
> Wouldn't __rte_mbuf_sanity_check(m, 0) at the start of prefree_seg()
> already catch it?
> Konstantin
Yes! I didn't notice that, so I will just drop the change and issue a v3
sometime today. Thanks!

> > I could explicitly check for refcnt = 0 in prefree_seg(), but I just
> > restored the previous call into refcnt_update() since we already have a
> > routine for this...
> > 
> > > > > > 
> > > > > >  	if (RTE_MBUF_INDIRECT(m))
> > > > > > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > >  		m->next = NULL;
> > > > > >  		m->nb_segs = 1;
> > > > > >  	}
> > > > > > -	rte_mbuf_refcnt_set(m, 1);
> > > > > > +	rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > > > > > 
> > > > > >  	return m;
> > > > > > }
> > > > > Reviewed-by: Radu Nicolau <radu.nico...@intel.com>
> > > > 
> > > > Thanks for the review.