> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, November 14, 2018 3:08 PM
> To: Zhang, Qi Z <qi.z.zh...@intel.com>
> Cc: tho...@monjalon.net; dev@dpdk.org; Lin, Xueqin <xueqin....@intel.com>
> Subject: Re: [PATCH v3 2/2] net/pcap: enable data path for secondary
> 
> On 11/14/2018 7:56 PM, Qi Zhang wrote:
> > A private vdev was the previous model, dating from when pdump was
> > developed; now, with shared device mode on virtual devices, the pcap
> > data path in a secondary process is not working.
> >
> > When a secondary process adds a virtual device, the related data is
> > transferred to the primary, which creates the device and shares it
> > back with the secondary.
> > When the pcap device is created in the primary, the pcap handlers
> > (pointers) are process-local and are not valid in the secondary
> > process. This breaks the secondary.
> >
> > So we can't directly share the pcap handlers; instead we need to
> > create a new set of handlers for the secondary, which is what this
> > patch does.
> >
> > Signed-off-by: Ferruh Yigit <ferruh.yi...@intel.com>
> > Signed-off-by: Qi Zhang <qi.z.zh...@intel.com>
> 
> <...>
> 
> > @@ -1155,16 +1157,18 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
> >                     PMD_LOG(ERR, "Failed to probe %s", name);
> >                     return -1;
> >             }
> > -           /* TODO: request info from primary to set up Rx and Tx */
> > -           eth_dev->dev_ops = &ops;
> > -           eth_dev->device = &dev->device;
> > -           rte_eth_dev_probing_finish(eth_dev);
> > -           return 0;
> > -   }
> >
> > -   kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid_arguments);
> > -   if (kvlist == NULL)
> > -           return -1;
> > +           internal = eth_dev->data->dev_private;
> > +
> > +           kvlist = rte_kvargs_parse(internal->devargs, valid_arguments);
> > +           if (kvlist == NULL)
> > +                   return -1;
> 
> Copying devargs to internal->devargs seems to be missing; it is still needed, right?

Yes, it is missing. I just noticed that I forgot to run git format-patch
again after adding this...
