Hi Xiaolong,

> -----Original Message-----
> From: Wang, Haiyue
> Sent: Friday, July 12, 2019 10:12
> To: Ye, Xiaolong <xiaolong...@intel.com>
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v1] net/ice: use rx/tx DMA iova instead of
> phys_addr which is deprecated
>
> > -----Original Message-----
> > From: Ye, Xiaolong
> > Sent: Friday, July 12, 2019 16:30
> > To: Wang, Haiyue <haiyue.w...@intel.com>
> > Cc: dev@dpdk.org; sta...@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v1] net/ice: use rx/tx DMA iova instead of
> > phys_addr which is deprecated
> >
> > Hi, Haiyue
> >
> > On 07/12, Haiyue Wang wrote:
> > >The phys_addr concept is deprecated in rte_memzone; change the code to
> > >access the iova member, and use the type 'rte_iova_t'.
> > >
> >
> > It seems this issue also exists in other PMDs, like ixgbe, i40e, iavf...; do
> > you have a plan to fix them all?
>
> I will check them later and submit a series of patches. ;)
I found that ixgbe, i40e and iavf already use 'rz->iova' instead of 'rz->phys_addr'; this was the main reason that I changed it for ice as well. As for the field names and types in those drivers, I think they may be acceptable as they are, so no more patches are needed to change them. (A minimal usage sketch of 'rz->iova' follows the quoted patch below.)

> > For the patch, Reviewed-by: Xiaolong Ye <xiaolong...@intel.com>
> >
> > Thanks,
> > Xiaolong
> >
> > >Also rename the rx/tx_ring_phys_addr definitions to rx/tx_ring_dma to
> > >match the IOVA concept design.
> > >
> > >Fixes: 50370662b727 ("net/ice: support device and queue ops")
> > >Cc: sta...@dpdk.org
> > >
> > >Signed-off-by: Haiyue Wang <haiyue.w...@intel.com>
> > >---
> > > drivers/net/ice/ice_rxtx.c | 8 ++++----
> > > drivers/net/ice/ice_rxtx.h | 4 ++--
> > > 2 files changed, 6 insertions(+), 6 deletions(-)
> > >
> > >diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> > >index 035ed84..3353f23 100644
> > >--- a/drivers/net/ice/ice_rxtx.c
> > >+++ b/drivers/net/ice/ice_rxtx.c
> > >@@ -70,7 +70,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> > >
> > > memset(&rx_ctx, 0, sizeof(rx_ctx));
> > >
> > >- rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
> > >+ rx_ctx.base = rxq->rx_ring_dma / ICE_QUEUE_BASE_ADDR_UNIT;
> > > rx_ctx.qlen = rxq->nb_rx_desc;
> > > rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
> > > rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
> > >@@ -442,7 +442,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> > > txq_elem.num_txqs = 1;
> > > txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
> > >
> > >- tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
> > >+ tx_ctx.base = txq->tx_ring_dma / ICE_QUEUE_BASE_ADDR_UNIT;
> > > tx_ctx.qlen = txq->nb_tx_desc;
> > > tx_ctx.pf_num = hw->pf_id;
> > > tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
> > >@@ -663,7 +663,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
> > > /* Zero all the descriptors in the ring. */
> > > memset(rz->addr, 0, ring_size);
> > >
> > >- rxq->rx_ring_phys_addr = rz->phys_addr;
> > >+ rxq->rx_ring_dma = rz->iova;
> > > rxq->rx_ring = (union ice_rx_desc *)rz->addr;
> > >
> > > #ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
> > >@@ -881,7 +881,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev,
> > > txq->vsi = vsi;
> > > txq->tx_deferred_start = tx_conf->tx_deferred_start;
> > >
> > >- txq->tx_ring_phys_addr = tz->phys_addr;
> > >+ txq->tx_ring_dma = tz->iova;
> > > txq->tx_ring = (struct ice_tx_desc *)tz->addr;
> > >
> > > /* Allocate software ring */
> > >diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
> > >index 9040e3f..e921411 100644
> > >--- a/drivers/net/ice/ice_rxtx.h
> > >+++ b/drivers/net/ice/ice_rxtx.h
> > >@@ -46,7 +46,7 @@ struct ice_rx_entry {
> > > struct ice_rx_queue {
> > > struct rte_mempool *mp; /* mbuf pool to populate RX ring */
> > > volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
> > >- uint64_t rx_ring_phys_addr; /* RX ring DMA address */
> > >+ rte_iova_t rx_ring_dma; /* RX ring DMA address */
> > > struct ice_rx_entry *sw_ring; /* address of RX soft ring */
> > > uint16_t nb_rx_desc; /* number of RX descriptors */
> > > uint16_t rx_free_thresh; /* max free RX desc to hold */
> > >@@ -87,7 +87,7 @@ struct ice_tx_entry {
> > >
> > > struct ice_tx_queue {
> > > uint16_t nb_tx_desc; /* number of TX descriptors */
> > >- uint64_t tx_ring_phys_addr; /* TX ring DMA address */
> > >+ rte_iova_t tx_ring_dma; /* TX ring DMA address */
> > > volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
> > > struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
> > > uint16_t tx_tail; /* current value of tail register */
> > >--
> > >2.7.4
> > >
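
For reference, here is a minimal sketch of the pattern the patch follows: reserve the descriptor ring memory and record its bus address through the non-deprecated 'iova' member of struct rte_memzone, stored in an 'rte_iova_t' field. The structure and function names below are illustrative only (they are not the ice driver's), and the sketch uses the generic rte_memzone_reserve_aligned() API rather than the rte_eth_dma_zone_reserve() helper the driver calls; the ring size and 128-byte alignment are placeholder values.

#include <errno.h>
#include <string.h>

#include <rte_memory.h>  /* rte_iova_t */
#include <rte_memzone.h> /* struct rte_memzone, rte_memzone_reserve_aligned() */

/* Illustrative queue structure: only the fields needed for the example. */
struct my_rx_queue {
	void *rx_ring;          /* descriptor ring virtual address */
	rte_iova_t rx_ring_dma; /* descriptor ring DMA (IOVA) address */
};

/*
 * Reserve an IOVA-contiguous descriptor ring and record both its virtual
 * address and its IO virtual address.
 */
static int
my_rx_ring_alloc(struct my_rx_queue *rxq, size_t ring_size, int socket_id)
{
	const struct rte_memzone *rz;

	rz = rte_memzone_reserve_aligned("my_rx_ring", ring_size, socket_id,
					 RTE_MEMZONE_IOVA_CONTIG, 128);
	if (rz == NULL)
		return -ENOMEM;

	/* Zero all the descriptors in the ring. */
	memset(rz->addr, 0, ring_size);

	rxq->rx_ring = rz->addr;
	/* Read 'iova', not the deprecated 'phys_addr' alias. */
	rxq->rx_ring_dma = rz->iova;

	return 0;
}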