On Wed, Sep 07, 2022 at 07:18:06PM +0000, Parav Pandit wrote:
> > From: Michael S. Tsirkin <m...@redhat.com>
> > Sent: Wednesday, September 7, 2022 3:12 PM
> >
> > > Because of the shallow queue of 16 entries deep.
> >
> > but why is the queue just 16 entries?
>
> I explained the calculation of the 16 entries in [1].
>
> [1]
> https://lore.kernel.org/netdev/ph0pr12mb54812ec7f4711c1ea4caa119dc...@ph0pr12mb5481.namprd12.prod.outlook.com/
>
> > does the device not support indirect?
> >
> Yes, the indirect feature bit is disabled on the device.
OK that explains it.

> > because with indirect you get 256 entries, with 16 s/g each.
> >
> Sure. I explained below that indirect comes with a 7x memory cost, which is
> not desired.
> (Ignoring the indirect table allocation cost and the extra latency.)

Oh sure, it's a waste. I wonder what effect the patch has on bandwidth with
indirect enabled, though.

> Hence we don't want to enable indirect in this scenario.
> This optimization also works with indirect, using a smaller indirect table.
>
> > > With the driver turnaround time to repost buffers, the device is idle
> > > without any RQ buffers.
> > > With this improvement, the device has 85 buffers instead of 16 to
> > > receive packets.
> > >
> > > Enabling indirect in the device can help, at the cost of 7x higher
> > > memory per VQ in the guest VM.

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization