On 07-08-2024 21:09, Ferruh Yigit wrote:
On 8/1/2024 11:53 AM, Hemant Agrawal wrote:
This patch enhances the received packet debugging capability.
It helps display the full packet parsing output.

Signed-off-by: Jun Yang <jun.y...@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agra...@nxp.com>
---
  doc/guides/nics/dpaa.rst       |   5 ++
  drivers/net/dpaa/dpaa_ethdev.c |   6 ++
  drivers/net/dpaa/dpaa_rxtx.c   | 138 +++++++++++++++++++++++++++------
  drivers/net/dpaa/dpaa_rxtx.h   |   5 ++
  4 files changed, 130 insertions(+), 24 deletions(-)

diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 580edd9327..448607e9ac 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -227,6 +227,11 @@ state during application initialization:
    application want to use eventdev with DPAA device.
    Currently these queues are not used for LS1023/LS1043 platform by default.
+- ``DPAA_DISPLAY_FRAME_AND_PARSER_RESULT`` (default 0)
+
+  This defines the debug flag controlling whether to dump the detailed frame
+  and packet parsing results for incoming packets.
+

If this is for debug, why not implement it as a devarg?

Environment variables are supported by a few drivers, and as DPDK
drivers are userspace applications we can benefit from them, true.

But that is yet another configuration mechanism, and it feels more
suitable for applications than for drivers. For drivers we already
have the devargs mechanism; sticking to the same configuration method
provides more consistency for users.

For the debugging flags, devargs don't work well, at least for us. Customers use EAL arguments embedded in their application code, so changing devargs means they have to recompile their code.

In production, the environment variables work better for us. Customers just need to set the variable and can re-use their existing production builds to collect more logs.


Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index e92f1c25b2..979220a700 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -2094,6 +2094,12 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
                        td_tx_threshold = CGR_RX_PERFQ_THRESH;
        }
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+       penv = getenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT");
+       if (penv)
+               dpaa_force_display_frame_set(atoi(penv));
+#endif

'penv' is not defined, so the build fails when the macro is enabled.

fixed


It seems even you are not testing with the macro enabled; this is why
compile-time macros are bad in the long term: it is hard to detect when
they are broken.

Can't you convert this macro to some runtime configuration,
enabled/disabled via devargs, so it is easier to test and less likely
to break? Another option is to switch to the ETHDEV debug macro, which
is used in more places and is therefore more likely to be tested.

Runtime checks affect the datapath. We usually ask customers to re-compile only the net_dpaa library and replace it in their setup for debugging purposes.


<...>
