-----Original Message-----
From: Ferruh Yigit <ferruh.yi...@amd.com>
Sent: Wednesday, October 19, 2022 14:46
To: Guo, Junfeng <junfeng....@intel.com>; Zhang, Qi Z
<qi.z.zh...@intel.com>; Wu, Jingjing <jingjing...@intel.com>
Cc: ferruh.yi...@xilinx.com; dev@dpdk.org; Li, Xiaoyun
<xiaoyun...@intel.com>; awogbem...@google.com; Richardson, Bruce
<bruce.richard...@intel.com>; Lin, Xueqin <xueqin....@intel.com>; Wang,
Haiyue <haiyue.w...@intel.com>
Subject: Re: [PATCH v5 3/8] net/gve: add support for device initialization
On 10/10/2022 11:17 AM, Junfeng Guo wrote:
Support device init and add following devops skeleton:
- dev_configure
- dev_start
- dev_stop
- dev_close
Note that build system (including doc) is also added in this patch.
Signed-off-by: Haiyue Wang <haiyue.w...@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun...@intel.com>
Signed-off-by: Junfeng Guo <junfeng....@intel.com>
<...>
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index fbb575255f..c1162ea1a4 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -200,6 +200,11 @@ New Features
into single event containing ``rte_event_vector``
whose event type is ``RTE_EVENT_TYPE_CRYPTODEV_VECTOR``.
+* **Added GVE net PMD**
+
+ * Added the new ``gve`` net driver for Google Virtual Ethernet devices.
+ * See the :doc:`../nics/gve` NIC guide for more details on this new driver.
+
Can you please move the block among the other ethdev drivers, keeping
them alphabetically sorted?
<...>
+static int
+gve_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct gve_priv *priv = eth_dev->data->dev_private;
+ int max_tx_queues, max_rx_queues;
+ struct rte_pci_device *pci_dev;
+ struct gve_registers *reg_bar;
+ rte_be32_t *db_bar;
+ int err;
+
+ eth_dev->dev_ops = &gve_eth_dev_ops;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
+
+ reg_bar = pci_dev->mem_resource[GVE_REG_BAR].addr;
+ if (!reg_bar) {
+ PMD_DRV_LOG(ERR, "Failed to map pci bar!");
+ return -ENOMEM;
+ }
+
+ db_bar = pci_dev->mem_resource[GVE_DB_BAR].addr;
+ if (!db_bar) {
+ PMD_DRV_LOG(ERR, "Failed to map doorbell bar!");
+ return -ENOMEM;
+ }
+
+ gve_write_version(&reg_bar->driver_version);
+ /* Get max queues to alloc etherdev */
+ max_tx_queues = ioread32be(&reg_bar->max_tx_queues);
+ max_rx_queues = ioread32be(&reg_bar->max_rx_queues);
+
+ priv->reg_bar0 = reg_bar;
+ priv->db_bar2 = db_bar;
+ priv->pci_dev = pci_dev;
+ priv->state_flags = 0x0;
+
+ priv->max_nb_txq = max_tx_queues;
+ priv->max_nb_rxq = max_rx_queues;
+
+ err = gve_init_priv(priv, false);
+ if (err)
+ return err;
+
+ eth_dev->data->mac_addrs = rte_zmalloc("gve_mac", sizeof(struct rte_ether_addr), 0);
+ if (!eth_dev->data->mac_addrs) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory to store mac address");
+ return -ENOMEM;
+ }
+ rte_ether_addr_copy(&priv->dev_addr,
+ eth_dev->data->mac_addrs);
+
Is anything assigned to 'priv->dev_addr' to copy?

Also, since there is a 'priv->dev_addr' field, why not use it directly
instead of allocating memory for 'eth_dev->data->mac_addrs'?
I mean, why not "eth_dev->data->mac_addrs = &priv->dev_addr"?