On 7/1/22 18:32, skotesh...@marvell.com wrote:
From: Satha Rao <skotesh...@marvell.com>
The rate argument of rte_eth_set_queue_rate_limit will be modified to
uint64_t to support more than 64 Gbps.
Signed-off-by: Satha Rao <skotesh...@marvell.com>
---
doc/guides/rel_notes/deprecation.rst | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c..5bf2b72 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,3 +125,8 @@ Deprecation Notices
applications should be updated to use the ``dmadev`` library instead,
with the underlying HW-functionality being provided by the ``ioat`` or
``idxd`` dma drivers
+
+* ethdev: The function ``rte_eth_set_queue_rate_limit`` takes ``rate`` in Mbps.
+ This parameter is declared as ``uint16_t``, which limits the queue rate to
+ 64 Gbps. The ``rate`` parameter will be changed to ``uint64_t`` in DPDK 22.11
+ so that rates above 64 Gbps can be configured.
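
For context, a minimal sketch of the change the notice describes. The current
prototype is taken from rte_ethdev.h; the widened form is only an illustration
of the announced direction, not the final declaration.

#include <stdint.h>

/*
 * Current prototype: tx_rate is expressed in Mbps, so the uint16_t type
 * caps the per-queue rate at 65535 Mbps, i.e. roughly 64 Gbps.
 */
int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
                                 uint16_t tx_rate);

/*
 * Direction announced for DPDK 22.11 (sketch only):
 *
 *   int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
 *                                    uint64_t tx_rate);
 */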
I fully agree that uint16_t is not enough, but I'd like to understand
the reason behind uint64_t vs uint32_t. It looks like uint32_t is more
than enough.
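
A quick back-of-the-envelope check of the ranges involved (illustrative only,
not part of the patch), given that the rate is expressed in Mbps:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* uint16_t ceiling: 65535 Mbps, i.e. the ~64 Gbps limit hit today. */
    uint64_t u16_max_mbps = UINT16_MAX;
    /* uint32_t ceiling: 4294967295 Mbps, i.e. roughly 4295 Tbps. */
    uint64_t u32_max_mbps = UINT32_MAX;

    printf("uint16_t: %" PRIu64 " Mbps (~%" PRIu64 " Gbps)\n",
           u16_max_mbps, u16_max_mbps / 1000);
    printf("uint32_t: %" PRIu64 " Mbps (~%" PRIu64 " Tbps)\n",
           u32_max_mbps, u32_max_mbps / 1000000);
    return 0;
}

That is, uint32_t already allows per-queue rates several orders of magnitude
above anything current hardware offers, which is the basis of the question.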