> -----Original Message-----
> From: Robin Murphy <robin.mur...@arm.com>
> Sent: Tuesday, December 10, 2019 3:34 AM
> To: Yin, Xiaotao <xiaotao....@windriver.com>; j...@8bytes.org;
> iommu@lists.linux-foundation.org
> Cc: linux-ker...@vger.kernel.org; Hao, Kexin <kexin....@windriver.com>
> Subject: Re: [PATCH V2] iommu/iova: Init the struct iova to fix the possible
> memleak
>
> On 09/12/2019 8:24 am, Xiaotao Yin wrote:
> > During an ethernet (Marvell octeontx2) set-ring-buffer test:
> > ethtool -G eth1 rx <rx ring size> tx <tx ring size>
> > the following kmemleak report sometimes appears:
> >
> > unreferenced object 0xffff000b85421340 (size 64):
> > comm "ethtool", pid 867, jiffies 4295323539 (age 550.500s)
> > hex dump (first 64 bytes):
> > 80 13 42 85 0b 00 ff ff ff ff ff ff ff ff ff ff ..B.............
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> > ff ff ff ff ff ff ff ff 00 00 00 00 00 00 00 00 ................
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> > backtrace:
> > [<000000001b204ddf>] kmem_cache_alloc+0x1b0/0x350
> > [<00000000d9ef2e50>] alloc_iova+0x3c/0x168
> > [<00000000ea30f99d>] alloc_iova_fast+0x7c/0x2d8
> > [<00000000b8bb2f1f>] iommu_dma_alloc_iova.isra.0+0x12c/0x138
> > [<000000002f1a43b5>] __iommu_dma_map+0x8c/0xf8
> > [<00000000ecde7899>] iommu_dma_map_page+0x98/0xf8
> > [<0000000082004e59>] otx2_alloc_rbuf+0xf4/0x158
> > [<000000002b107f6b>] otx2_rq_aura_pool_init+0x110/0x270
> > [<00000000c3d563c7>] otx2_open+0x15c/0x734
> > [<00000000a2f5f3a8>] otx2_dev_open+0x3c/0x68
> > [<00000000456a98b5>] otx2_set_ringparam+0x1ac/0x1d4
> > [<00000000f2fbb819>] dev_ethtool+0xb84/0x2028
> > [<0000000069b67c5a>] dev_ioctl+0x248/0x3a0
> > [<00000000af38663a>] sock_ioctl+0x280/0x638
> > [<000000002582384c>] do_vfs_ioctl+0x8b0/0xa80
> > [<000000004e1a2c02>] ksys_ioctl+0x84/0xb8
> >
> > The reason:
> > Since alloc_iova_mem() does not zero-initialise the allocation, pfn_lo
> > can happen to equal IOVA_ANCHOR by chance, so when
> > __alloc_and_insert_iova_range() returns -ENOMEM (iova32_full), the
> > new_iova is not freed in free_iova_mem().
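
To spell it out, the relevant path (paraphrased and abbreviated from
drivers/iommu/iova.c as of this patch, so please check the tree for the
exact code) is:

        struct iova *alloc_iova_mem(void)
        {
                /* Plain GFP_ATOMIC: pfn_lo/pfn_hi are left uninitialised */
                return kmem_cache_alloc(iova_cache, GFP_ATOMIC);
        }

        void free_iova_mem(struct iova *iova)
        {
                /*
                 * The static anchor node must never reach kmem_cache_free(),
                 * and the check keys off pfn_lo == IOVA_ANCHOR (~0UL).  An
                 * uninitialised heap iova can match that by accident, so the
                 * error path in alloc_iova() -- which calls free_iova_mem()
                 * when __alloc_and_insert_iova_range() fails -- silently
                 * leaks it.
                 */
                if (iova->pfn_lo != IOVA_ANCHOR)
                        kmem_cache_free(iova_cache, iova);
        }
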
>
> Ooh, subtle... nice catch!
>
> I suppose we could also open-code the kmem_cache_free() call in
> alloc_iova() to bypass the check entirely because "we know what we're
> doing", but only if the zeroing proves to have a measurable overhead.
>
> > Fixes: bb68b2fbfbd6 ("iommu/iova: Add rbtree anchor node")
> > Signed-off-by: Xiaotao Yin <xiaotao....@windriver.com>
> > ---
> > drivers/iommu/iova.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> > index 41c605b0058f..2c27a661632c 100644
> > --- a/drivers/iommu/iova.c
> > +++ b/drivers/iommu/iova.c
> > @@ -233,7 +233,7 @@ static DEFINE_MUTEX(iova_cache_mutex);
> >
> > struct iova *alloc_iova_mem(void)
> > {
> > - return kmem_cache_alloc(iova_cache, GFP_ATOMIC);
> > + return kmem_cache_alloc(iova_cache, GFP_ATOMIC | __GFP_ZERO);
>
> FWIW there is a kmem_cache_zalloc() helper, which seems fairly
> well-established. Either way, though,
Yes, kmem_cache_zalloc() is neater than this change; I'd like to use it
instead, thank you~
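
i.e. the respin would just be something like:

        struct iova *alloc_iova_mem(void)
        {
                return kmem_cache_zalloc(iova_cache, GFP_ATOMIC);
        }
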
>
> Reviewed-by: Robin Murphy <robin.mur...@arm.com>
>
> > }
> > EXPORT_SYMBOL(alloc_iova_mem);
> >
> >