> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>
> Sent: Friday, October 28, 2022 11:14 PM
> To: Guo, Junfeng <junfeng....@intel.com>; Zhang, Qi Z
> <qi.z.zh...@intel.com>; Wu, Jingjing <jingjing...@intel.com>; Xing, Beilei
> <beilei.x...@intel.com>
> Cc: dev@dpdk.org; Li, Xiaoyun <xiaoyun...@intel.com>; Wang, Xiao W
> <xiao.w.w...@intel.com>
> Subject: Re: [PATCH v11 02/18] net/idpf: add support for device initialization
> 
> On 10/25/22 11:57, Andrew Rybchenko wrote:
> > On 10/24/22 16:12, Junfeng Guo wrote:
> >> Support device init and add the following dev ops:
> >>   - dev_configure
> >>   - dev_close
> >>   - dev_infos_get
> >>
> >> Signed-off-by: Beilei Xing <beilei.x...@intel.com>
> >> Signed-off-by: Xiaoyun Li <xiaoyun...@intel.com>
> >> Signed-off-by: Xiao Wang <xiao.w.w...@intel.com>
> >> Signed-off-by: Junfeng Guo <junfeng....@intel.com>
> 
> [snip]
> 
> >> +struct idpf_adapter *
> >> +idpf_find_adapter(struct rte_pci_device *pci_dev)
> >
> > It looks like the function requires corresponding lock to be held. If
> > yes, it should be documented and code fixed. If no, it should be
> > explaiend why.
> 
> I still don't understand it in the new patch. It is hardly safe to return
> a pointer to a list element when you drop the lock.

Sorry, I misunderstood your last comment; I thought you meant a lock for
adapter_list.
I don't think we need a lock for the adapter: one adapter here doesn't map
to one ethdev but to one PCI device. We can create several vports on one
adapter, and each vport maps to one ethdev.
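
To make that ownership model concrete, here is a rough sketch of the
relationship described above (struct and field names are illustrative
placeholders, not the patch's exact layout): one adapter per PCI device, a
set of vports hanging off that adapter, and each vport backing exactly one
ethdev.

    #include <stdint.h>
    #include <sys/queue.h>

    struct idpf_vport_sketch;                  /* one instance per rte_eth_dev */

    struct idpf_adapter_sketch {
            TAILQ_ENTRY(idpf_adapter_sketch) next; /* entry in the global adapter list */
            char name[64];                         /* keyed by the PCI device name */
            struct idpf_vport_sketch **vports;     /* vports created on this adapter */
            uint16_t cur_vport_nb;                 /* number of vports in use */
    };

So idpf_find_adapter() only resolves a PCI device to its adapter; all
per-ethdev state lives in the vport.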
  
> 
> >> +    /* valid only if rxq_model is split Q */
> >> +    uint16_t num_rx_bufq;
> >> +
> >> +    uint16_t max_mtu;
> >
> > unused
> 
> Comments? It is still in place in a new version.

All the above info is returned by the backend when creating a vport, so we
save it right after the vport is created.
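
For illustration, a minimal sketch of "save it after creating the vport",
assuming the CREATE_VPORT response is available as a struct
virtchnl2_create_vport; the helper name and the exact field names are
assumptions based on the virtchnl2 definitions referenced in the patch:

    /* Sketch only: copy the parameters reported by the backend in the
     * CREATE_VPORT response into the driver's per-vport state. */
    static void
    idpf_save_vport_info(struct idpf_vport *vport,
                         const struct virtchnl2_create_vport *info)
    {
            vport->vport_id    = info->vport_id;
            vport->rxq_model   = info->rxq_model;
            vport->num_rx_q    = info->num_rx_q;
            /* valid only when rxq_model is the split queue model */
            vport->num_rx_bufq = info->num_rx_bufq;
            /* kept for later MTU validation */
            vport->max_mtu     = info->max_mtu;
    }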

> 
> >> +int
> >> +idpf_vc_get_caps(struct idpf_adapter *adapter)
> >> +{
> >> +    struct virtchnl2_get_capabilities caps_msg;
> >> +    struct idpf_cmd_info args;
> >> +    int err;
> >> +
> >> +    memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
> >> +     caps_msg.csum_caps =
> >> +         VIRTCHNL2_CAP_TX_CSUM_L3_IPV4        |
> >> +         VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP    |
> >> +         VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP    |
> >> +         VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP    |
> >> +         VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP    |
> >> +         VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP    |
> >> +         VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP    |
> >> +         VIRTCHNL2_CAP_TX_CSUM_GENERIC        |
> >> +         VIRTCHNL2_CAP_RX_CSUM_L3_IPV4        |
> >> +         VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP    |
> >> +         VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP    |
> >> +         VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP    |
> >> +         VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP    |
> >> +         VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP    |
> >> +         VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP    |
> >> +         VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> >> +
> >> +     caps_msg.seg_caps =
> >> +         VIRTCHNL2_CAP_SEG_IPV4_TCP        |
> >> +         VIRTCHNL2_CAP_SEG_IPV4_UDP        |
> >> +         VIRTCHNL2_CAP_SEG_IPV4_SCTP        |
> >> +         VIRTCHNL2_CAP_SEG_IPV6_TCP        |
> >> +         VIRTCHNL2_CAP_SEG_IPV6_UDP        |
> >> +         VIRTCHNL2_CAP_SEG_IPV6_SCTP        |
> >> +         VIRTCHNL2_CAP_SEG_GENERIC;
> >> +
> >> +     caps_msg.rss_caps =
> >> +         VIRTCHNL2_CAP_RSS_IPV4_TCP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV4_UDP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV4_SCTP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV4_OTHER        |
> >> +         VIRTCHNL2_CAP_RSS_IPV6_TCP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV6_UDP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV6_SCTP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV6_OTHER        |
> >> +         VIRTCHNL2_CAP_RSS_IPV4_AH        |
> >> +         VIRTCHNL2_CAP_RSS_IPV4_ESP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV4_AH_ESP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV6_AH        |
> >> +         VIRTCHNL2_CAP_RSS_IPV6_ESP        |
> >> +         VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> >> +
> >> +     caps_msg.hsplit_caps =
> >> +         VIRTCHNL2_CAP_RX_HSPLIT_AT_L2        |
> >> +         VIRTCHNL2_CAP_RX_HSPLIT_AT_L3        |
> >> +         VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4    |
> >> +         VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6;
> >> +
> >> +     caps_msg.rsc_caps =
> >> +         VIRTCHNL2_CAP_RSC_IPV4_TCP        |
> >> +         VIRTCHNL2_CAP_RSC_IPV4_SCTP        |
> >> +         VIRTCHNL2_CAP_RSC_IPV6_TCP        |
> >> +         VIRTCHNL2_CAP_RSC_IPV6_SCTP;
> >> +
> >> +     caps_msg.other_caps =
> >> +         VIRTCHNL2_CAP_RDMA            |
> >> +         VIRTCHNL2_CAP_SRIOV            |
> >> +         VIRTCHNL2_CAP_MACFILTER        |
> >> +         VIRTCHNL2_CAP_FLOW_DIRECTOR        |
> >> +         VIRTCHNL2_CAP_SPLITQ_QSCHED        |
> >> +         VIRTCHNL2_CAP_CRC            |
> >> +         VIRTCHNL2_CAP_WB_ON_ITR        |
> >> +         VIRTCHNL2_CAP_PROMISC            |
> >> +         VIRTCHNL2_CAP_LINK_SPEED        |
> >> +         VIRTCHNL2_CAP_VLAN;
> >
> > I'm wondering why all the above capabilities are mentioned in the patch.
> > What does the API do? Does it request them? Negotiate?
> 
> Can I have an answer to my question?

I removed some caps in v14, and I think it makes sense to add a cap only
when the corresponding offload is enabled.
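
As a concrete example of that approach, here is a hedged sketch of a trimmed
request: start from an empty caps_msg and OR in only the capability bits
whose offloads the PMD actually implements, then send it over the mailbox.
The idpf_execute_vc_cmd() helper, the mailbox buffer fields and the exact
flag selection are assumptions based on the quoted code above, not the final
v14 contents.

    memset(&caps_msg, 0, sizeof(caps_msg));

    /* Only the checksum offloads the PMD actually wires up. */
    caps_msg.csum_caps = VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
                         VIRTCHNL2_CAP_RX_CSUM_L3_IPV4;

    /* RSS bits would be added together with the RSS configuration patch. */
    caps_msg.rss_caps = VIRTCHNL2_CAP_RSS_IPV4_TCP |
                        VIRTCHNL2_CAP_RSS_IPV6_TCP;

    args.ops = VIRTCHNL2_OP_GET_CAPS;
    args.in_args = (uint8_t *)&caps_msg;
    args.in_args_size = sizeof(caps_msg);
    args.out_buffer = adapter->mbx_resp;
    args.out_size = IDPF_DFLT_MBX_BUF_SIZE;

    /* Ask the control plane which of the requested caps it grants. */
    err = idpf_execute_vc_cmd(adapter, &args);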
