10/12/2019 03:41, Tiwei Bie:
> On Mon, Dec 09, 2019 at 02:22:27PM +0300, Andrew Rybchenko wrote:
> > On 12/9/19 1:41 PM, Ori Kam wrote:
> > > From: Andrew Rybchenko
> > >> On 12/8/19 10:06 AM, Matan Azrad wrote:
> > >>> From: Andrew Rybchenko
> > >>>> On 12/6/19 8:32 AM, Liang, Cunming wrote:
> > >>>>> From: Bie, Tiwei <tiwei....@intel.com>
> > >>>>>> On Thu, Dec 05, 2019 at 01:26:36PM +0000, Matan Azrad wrote:
> > >>>>>>> Hi all
> > >>>>>>>
> > >>>>>>> As described in RFC “[RFC] net: new vdpa PMD for Mellanox devices”,
> > >>>>>>> a new vdpa driver is going to be added for Mellanox devices –
> > >>>>>>> mlx5_vdpa.
> > >>>>>>>
> > >>>>>>> The only vdpa driver now is the IFC driver, which is located in
> > >>>>>>> the net directory.
> > >>>>>>>
> > >>>>>>> The IFC driver and the new mlx5_vdpa driver provide the vdpa ops
> > >>>>>>> and not the eth_dev ops.
> > >>>>>>>
> > >>>>>>> All the other drivers in net provide the eth_dev ops.
> > >>>>>>>
> > >>>>>>> I suggest creating a new class for vdpa drivers, moving IFC to
> > >>>>>>> this class, and adding mlx5_vdpa to this class too.
> > >>>>>>>
> > >>>>>>> Later, all the new drivers that implement the vdpa ops will be
> > >>>>>>> added to the vdpa class.
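> > >>>>>>>
> > >>>>>>> Roughly, the proposed layout would be (a sketch; the exact
> > >>>>>>> sub-directory names are up for discussion):
> > >>>>>>>
> > >>>>>>>   drivers/net/ifc/  ->  drivers/vdpa/ifc/
> > >>>>>>>                         drivers/vdpa/mlx5/  (new mlx5_vdpa)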
> > >>>>>>
> > >>>>>> +1. Sounds like a good idea to me.
> > >>>>> +1
> > >>>>
> > >>>> vDPA drivers are vendor-specific and expected to talk to the
> > >>>> vendor's NIC, i.e. there are significant chances to share code with
> > >>>> network drivers (e.g. a base driver). Should the base driver be
> > >>>> moved to drivers/common in this case, or is it still allowed to
> > >>>> have the vdpa driver in drivers/net together with the ethdev
> > >>>> driver?
> > >>>
> > >>> Yes, I think this should be the method: shared code should be moved
> > >>> to the drivers/common directory.
> > >>> I think there is a precedent with shared code in common, which
> > >>> shares vendor-specific code between crypto and net.
> > >>
> > >> I see the motivation behind drivers/vdpa. However, vdpa and net
> > >> drivers are tightly related, and I would prefer to avoid extra
> > >> complexity here. Right now simplicity outweighs it for me.
> > >> No strong opinion on the topic, but it definitely requires better
> > >> and clearer motivation for why a new class should be introduced
> > >> and existing drivers restructured.
> > >>
> > > 
> > > Why do you think there is extra complexity?
> > 
> > Even grep becomes a bit more complicated :-)
> > 
> > > I think that from a design-correctness standpoint it is more correct
> > > to create a dedicated class, for the following reasons:
> > > 1. Each class implements a different set of ops. For example,
> > > cryptodev has a defined set of ops, and the same goes for the
> > > compress driver and the ethdev driver; each one of them has
> > > different ops. By this definition, since vDPA has a different set
> > > of ops, it makes sense for it to be in a different class.
> > > 
> > > 2. Even though both drivers (eth/vdpa) handle traffic from the NIC,
> > > most of the code is different (the difference also depends on the
> > > manufacturer). So the shared code base is not large and can be
> > > shared using the common directory. For example, ifc doesn't have
> > > any shared code.
> > > 
> > > What do you think?
> > 
> > The real question is whether the difference in implemented ops is the
> > key distinction that should enforce placement in different
> > directories, or whether the underlying device class is the key.
> > Roughly:
> >  - a net driver is control + data path
> >  - a vdpa driver is control path only
> > My fear is that the control path will grow more and more
> > (Rx mode, RSS, filters and so on).
> 
> I think this is a reasonable concern.
> 
> One thing that needs to be clarified is that the control
> path (ops) in a vdpa driver is something very different
> from the control path in a net driver. vdpa is very
> generic (or, more precisely, vhost-related) instead
> of network-related:
> 
> https://github.com/DPDK/dpdk/blob/aef1d0733179/lib/librte_vhost/rte_vdpa.h#L40-L78
> 
> It's built on top of the vhost-user protocol and manipulates
> virtqueues, virtio/vhost features, the memory table, ...
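> 
> Roughly, the ops table at that commit looks like the following
> (paraphrased and trimmed, not verbatim; see the header linked
> above for the exact definition):
> 
>   struct rte_vdpa_dev_ops {
>           /* Device capabilities. */
>           int (*get_queue_num)(int did, uint32_t *queue_num);
>           int (*get_features)(int did, uint64_t *features);
>           int (*get_protocol_features)(int did, uint64_t *features);
> 
>           /* Driven by vhost-user events, not called by the app. */
>           int (*dev_conf)(int vid);
>           int (*dev_close)(int vid);
>           int (*set_vring_state)(int vid, int vring, int state);
>           int (*set_features)(int vid);
>           int (*migration_done)(int vid);
> 
>           /* Resources for passthrough and queue notification. */
>           int (*get_vfio_group_fd)(int vid);
>           int (*get_vfio_device_fd)(int vid);
>           int (*get_notify_area)(int vid, int qid,
>                                  uint64_t *offset, uint64_t *size);
>   };
> 
> Note there is no Rx/Tx burst and no MAC/RSS/filter ops; nothing
> in there is ethdev-specific.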
> 
> Technically, it's possible to have blk, crypto, ...
> vdpa devices as well (we already have vhost-user-blk,
> vhost-user-crypto, ...).
> 
> But network-specific features will come eventually,
> e.g. RSS. One possible way to solve this is to define a
> generic event callback in the vdpa ops, and the vdpa driver
> can request the corresponding info from vhost based on the
> event received, for example along these lines:
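> 
>   /* Hypothetical sketch only: neither this enum nor the
>    * callback exists in DPDK; the names are invented purely
>    * to illustrate the idea. */
>   enum rte_vdpa_event_type {
>           RTE_VDPA_EVENT_RSS_UPDATE,  /* guest reconfigured RSS */
>           RTE_VDPA_EVENT_MTU_UPDATE,  /* guest changed the MTU */
>   };
> 
>   /* Would be added to struct rte_vdpa_dev_ops: vhost signals a
>    * generic event, and the driver pulls whatever device-class
>    * specific state it needs back from the vhost library. */
>   int (*dev_event)(int vid, enum rte_vdpa_event_type event);
> 
> This way the vdpa ops themselves stay device-class-agnostic.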
> 
> Another thing that needs to be clarified is that the control
> path in vdpa that is supposed to be used by DPDK apps directly
> should always be generic; it should just be something like:
> 
> int rte_vdpa_find_device_id(struct rte_vdpa_dev_addr *addr);
> int rte_vhost_driver_attach_vdpa_device(const char *path, int did);
> int rte_vhost_driver_detach_vdpa_device(const char *path);
> ...
> 
> That is to say, users just need to bind the vdpa device
> to the vhost connection; see the sketch below. The control
> path ops in vdpa are supposed to be called by the vhost
> library transparently, based on the events on the vhost-user
> connection, i.e. the vdpa device will be configured (including
> RSS) by the guest driver in QEMU "directly" via the vhost-user
> protocol instead of by the DPDK app in the host.
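> 
> To make that concrete, here is a minimal sketch of the app side
> (error handling trimmed; the socket path and device address are
> supplied by the caller):
> 
>   #include <rte_vdpa.h>
>   #include <rte_vhost.h>
> 
>   static int
>   attach_vdpa(const char *path, struct rte_vdpa_dev_addr *addr)
>   {
>           /* Look up the registered vdpa device by its address. */
>           int did = rte_vdpa_find_device_id(addr);
>           if (did < 0)
>                   return -1;
> 
>           /* Create the vhost-user socket and bind the device to it. */
>           if (rte_vhost_driver_register(path, 0) < 0)
>                   return -1;
>           if (rte_vhost_driver_attach_vdpa_device(path, did) < 0)
>                   return -1;
> 
>           /* From here on, the vhost library drives the vdpa ops in
>            * response to vhost-user messages from QEMU. */
>           return rte_vhost_driver_start(path);
>   }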

Tiwei, just to be clear:
do you think vDPA drivers should be in the drivers/vdpa directory?

