> When using tc qdisc to configure DCB parameter, dcb_ops->setup_tc
> is used to tell hclge_dcb module to do the setup.
While this might be a step in the right direction, it causes an
inconsistency in user experience - some [well, most] vendors didn't
allow the mqprio priority mapping to affect DCB, instead relying on
the dcbnl functionality to control that configuration.

A couple of options to consider:

- Perhaps said logic shouldn't be contained inside the driver but
  rather in the mqprio logic itself, i.e., rely on DCBNL functionality
  [if available] from within mqprio and try changing the configuration.

- Add a new TC_MQPRIO_HW_OFFLOAD_ value to explicitly reflect the
  user's request to allow this configuration to affect DCB.

> When using lldptool to configure DCB parameter, hclge_dcb module
> call the client_ops->setup_tc to tell network stack which queue
> and priority is using for specific tc.

You're basically bypassing the mqprio logic. Since you're configuring
the prio->queue mapping from the DCB flow, you'll get mqprio-like
behavior [meaning a transmitted packet would reach a transmission
queue associated with its priority] even if the device wasn't granted
an mqprio qdisc. Why should your user even use mqprio? What benefit
does he get from it?

...

> +static int hns3_nic_set_real_num_queue(struct net_device *netdev)
> +{
> +	struct hns3_nic_priv *priv = netdev_priv(netdev);
> +	struct hnae3_handle *h = priv->ae_handle;
> +	struct hnae3_knic_private_info *kinfo = &h->kinfo;
> +	unsigned int queue_size = kinfo->rss_size * kinfo->num_tc;
> +	int ret;
> +
> +	ret = netif_set_real_num_tx_queues(netdev, queue_size);
> +	if (ret) {
> +		netdev_err(netdev,
> +			   "netif_set_real_num_tx_queues fail, ret=%d!\n",
> +			   ret);
> +		return ret;
> +	}
> +
> +	ret = netif_set_real_num_rx_queues(netdev, queue_size);

I don't think you're changing the driver behavior, but why are you
setting the real number of rx queues based on the number of TCs? Do
you actually open (TC x RSS) Rx queues?