On 7/24/2020 9:23 PM, Patrick Keroulas wrote:
The intention is to produce a pcap with nanosecond precision when
Rx timestamp offloading is activated on mlx5 NIC.

The packets forwarded by testpmd hold the raw counter value, but a pcap
requires a time unit. Assuming that the NIC clock is already synced
with an external master clock, this patchset simply integrates a
nanosecond converter derived from the device frequency and a start time.

v2 -> v3:
     - replace ib_verbs nanosecond converter with more generic method
       based on device frequency and start time.

Patrick Keroulas (3):
   net/mlx5: query device frequency
   ethdev: add API to query device frequency
   pdump: convert timestamp to nanoseconds on Rx path

Vivien Didelot (1):
   net/pcap: support hardware Tx timestamps


We have three patches/patchsets for the same issue:

1) Current one, https://patches.dpdk.org/user/todo/dpdk/?series=11294
2) Vivien's series: https://patches.dpdk.org/user/todo/dpdk/?series=10678
3) Vivien's pcap patch: https://patches.dpdk.org/user/todo/dpdk/?series=10403

And one related one from Slava,
4) 
https://patchwork.dpdk.org/project/dpdk/list/?series=10948&state=%2A&archive=both

I am replying to this one since it is the latest, but first, can we clarify whether all of them are still valid, and can we combine the effort?


Second, the problems to solve:
1) The device-provided timestamp value has no unit; it needs to be converted to nanoseconds.
The above patches take different approaches:
- One adds a '.convert_ts_to_ns' dev_ops to make the PMD convert the timestamp.
- Another adds an '.eth_get_clock_freq' dev_ops to get the frequency from the device clock,
  so that the conversion can be done within the app.
- I wonder why the existing 'rte_eth_read_clock()' isn't enough for the conversion,
  as described in its documentation:
  https://doc.dpdk.org/api/rte__ethdev_8h.html#a4346bf07a0d302c9ba4fe06baffd3196
    uint64_t start, end;
    rte_eth_read_clock(port, &start);
    rte_delay_ms(100);
    rte_eth_read_clock(port, &end);
    double freq = (end - start) * 10;  /* ticks per second */

2) Where should the timestamp data go: 'mbuf->timestamp' or the dynamic
   field 'RTE_MBUF_DYNFIELD_TIMESTAMP_NAME'? Using a dynamic field requires
   more work to register and look up the field.

3) The calculation in the datapath should be done in a performance-optimized way, avoiding per-packet division.

4) Should the timestamp value provided by the Rx device be used, or the time when the packet is actually transmitted? The current use case seems to be the first, but there may be cases where we would like to record exactly when the packet was sent.

5) Should we create a 'PKT_TX_TIMESTAMP' flag, instead of re-using the Rx one, to let the application explicitly mark which packets to timestamp?

Please add anything I am missing.
