Hi Thomas,

snipped
> 
> I feel this doc will be updated to provide a complete debug checklist,
This is an attempt to capture commonly reported field issues. That said, I am
aware that I will not be able to identify every debug checklist item. As time,
experience and sharing (from the community) increase, I am certain this will
grow.

snipped
> One general comment about documentation, It is better to wrap lines
> logically, for example, always start sentences at the beginning of a new 
> line. It
> will make further update patches simpler to review.
> 
> Few more nits below,
> 
> 21/01/2019 11:41, Vipin Varghese:
> > +.. _debug_troubleshoot_via_pmd:
I need to cross-check with John or Marko on this, as the PDF generator tool
checks for anchor and figure names.

> 
> No need of such anchor
Please give me time to cross-check.

> 
> > +
> > +Debug & Troubleshoot guide via PMD
> > +==================================
> 
> Why "via PMD"? Do we use PMD for troubleshooting?
I believe yes, we do collect information with the enhanced procinfo tool.

> Or is it dedicated to troubleshoot the PMD behaviour?
I am not clear on this statement. Hence the query: 'Is this dedicated to
troubleshooting application, PMD and library use cases?'

> 
> > +
> > +DPDK applications can be designed to run as single thread simple
> > +stage to multiple threads with complex pipeline stages. These
> > +application can use poll
> 
> applications
Ok

> 
> > +mode devices which helps in offloading CPU cycles. A few models are
> 
> help
Ok

> 
> A colon would be nice at the end of the line before the list.
> 
> > +
> > +  *  single primary
> > +  *  multiple primary
> > +  *  single primary single secondary
> > +  *  single primary multiple secondary
> > +
> > +In all the above cases, it is a tedious task to isolate, debug and
> > +understand odd behaviour which occurs randomly or periodically. The
> > +goal of guide is to share and explore a few commonly seen patterns
> > +and behaviour. Then, isolate and identify the root cause via step by
> > +step debug at various processing stages.
> 
> I don't understand how this introduction is related to "via PMD" in the title.
I believe the information is shared in ```The goal of guide is to share and
explore a few commonly seen patterns and behaviour. Then, isolate and identify
the root cause via step by step debug at various processing stages.```

There would be multiple ways to design an application for solving the same
problem. These depend on the user, platform, scaling factor and target. These
various combinations make use of PMDs and libraries. Misconfiguration, or not
taking care of the platform, will cause throttling and even drops.

Example: an application designed to run on a single NUMA node has now been
deployed to run on a multi NUMA model.
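
For what it is worth, a minimal sketch of the kind of check I have in mind for
that example (check_numa_affinity, port_id and lcore_id are placeholders, not
part of the patch):

```
#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Hypothetical helper: warn when the RX lcore and the port's NIC are on
 * different NUMA nodes, a common cause of throttling after redeployment. */
static void
check_numa_affinity(uint16_t port_id, unsigned int lcore_id)
{
	int port_socket = rte_eth_dev_socket_id(port_id);
	unsigned int lcore_socket = rte_lcore_to_socket_id(lcore_id);

	if (port_socket >= 0 && (unsigned int)port_socket != lcore_socket)
		printf("port %u (socket %d) polled from lcore %u (socket %u): "
		       "cross-NUMA access\n",
		       port_id, port_socket, lcore_id, lcore_socket);
}
```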

snipped
> 
> "pkt mbuf" can be called simply mbuf, but event, crypto and eth should be
> eventdev, cryptodev and ethdev.
Ok. I can make this change.

> 
snipped
> > +To debug the bottleneck and performance issues the desired
> > +application
> 
> missing comma after "issues"?
Ok

> 
> > +is made to run in an environment matching as below
> 
> colon missing
Ok

> 
> > +
> > +#. Linux 64-bit|32-bit
> > +#. DPDK PMD and libraries are used
> 
> Isn't it always the case with DPDK?
> 
> > +#. Libraries and PMD are either static or shared. But not both
> 
> Strange assumption. Why would it be both?
If applications are built only with DPDK libraries, then yes, the assumption is
correct. But when applications are built using DPDK as one of the software
layers (for example a DPDK network stack, DPDK Suricata, DPDK Hyperscan), as
per my understanding this is not true.

> 
> > +#. Machine flag optimizations of gcc or compiler are made constant
> 
> What do you mean?
I can reword as ```DPDK and the application libraries are built with the same
flags.```

> 
snipped
> > +
> > +   RX send rate compared against Received rate
> 
> RX send ?
Thanks, I will correct this.

> 
> > +
> > +#. Are generic configuration correct?
> 
> Are -> Is
> 
> > +    -  What is port Speed, Duplex? rte_eth_link_get()
> > +    -  Are packets of higher sizes are dropped? rte_eth_get_mtu()
> 
> are dropped -> dropped
Ok 
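
To illustrate the generic configuration checks above, a rough sketch of how
they could be queried from an already started port (check_generic_config is a
hypothetical helper; note the ethdev call for MTU is rte_eth_dev_get_mtu()):

```
#include <stdio.h>

#include <rte_ethdev.h>

/* Hypothetical helper: dump speed, duplex, link status and MTU for a port. */
static void
check_generic_config(uint16_t port_id)
{
	struct rte_eth_link link;
	uint16_t mtu;

	rte_eth_link_get(port_id, &link); /* may wait until link is resolved */
	printf("port %u: speed %u Mbps, %s duplex, link %s\n",
	       port_id, link.link_speed,
	       link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half",
	       link.link_status ? "up" : "down");

	if (rte_eth_dev_get_mtu(port_id, &mtu) == 0)
		printf("port %u: MTU %u, larger frames are dropped\n",
		       port_id, mtu);
}
```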

snipped
> > +    -  Is the application is build using processing pipeline with RX
> > +stage? If
> 
> is build -> built
Ok

> 
> > +       there are multiple port-pair tied to a single RX core, try to debug 
> > by
> > +       using rte_prefetch_non_temporal(). This will intimate the mbuf in 
> > cache
> > +       is temporary.
> 
> I stop nit-picking review here.
Thanks, any form of correction is always good.
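
On the rte_prefetch_non_temporal() suggestion, this is the kind of usage I had
in mind, only as a sketch (rx_stage, bufs and BURST_SIZE are placeholders for
an RX stage shared by several port-pairs, not text from the patch):

```
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_prefetch.h>

#define BURST_SIZE 32

/* Hypothetical RX stage shared by several port-pairs: the non-temporal
 * prefetch hints that the mbufs will not stay in this core's cache, since
 * they are handed to a later pipeline stage. */
static uint16_t
rx_stage(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **bufs)
{
	uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
	uint16_t i;

	for (i = 0; i < nb_rx; i++)
		rte_prefetch_non_temporal(rte_pktmbuf_mtod(bufs[i], void *));

	return nb_rx;
}
```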

> Marko, John, please could you check english grammar?
> Thanks
> 
> 
