From: Paul Greenwalt <paul.greenw...@intel.com>

Earliest TxTime First Offload traffic results in an MDD event and a Tx
hang due to a HW issue where the TS descriptor fetch logic does not wrap
around the tstamp ring properly. This occurs when the tail wraps around
the ring but the head has not, causing HW to fetch descriptors below the
head, leading to an MDD event.

To prevent this, the driver creates additional TS descriptors when wrapping
the tstamp ring, equal to the fetch TS descriptors value stored in the
GLTXTIME_FETCH_PROFILE register. The additional TS descriptors will
reference the same Tx descriptor and contain the same timestamp, and HW
will merge the TS descriptors with the same timestamp into a single
descriptor.

The tstamp ring length will be increased to account for the additional TS
descriptors. The tstamp ring length is calculated as the Tx ring length
plus the fetch TS descriptors value, ensuring the same number of available
descriptors for both the Tx and tstamp rings.

Signed-off-by: Soumyadeep Hore <soumyadeep.h...@intel.com>
Signed-off-by: Paul Greenwalt <paul.greenw...@intel.com>
---
 drivers/net/intel/ice/base/ice_lan_tx_rx.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/intel/ice/base/ice_lan_tx_rx.h 
b/drivers/net/intel/ice/base/ice_lan_tx_rx.h
index f92382346f..15aabf321d 100644
--- a/drivers/net/intel/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/intel/ice/base/ice_lan_tx_rx.h
@@ -1278,6 +1278,8 @@ struct ice_ts_desc {
 #define ICE_TXTIME_MAX_QUEUE           2047
 #define ICE_SET_TXTIME_MAX_Q_AMOUNT    127
 #define ICE_OP_TXTIME_MAX_Q_AMOUNT     2047
+#define ICE_TXTIME_FETCH_TS_DESC_DFLT  8
+
 /* Tx Time queue context data
  *
  * The sizes of the variables may be larger than needed due to crossing byte
@@ -1303,6 +1305,7 @@ struct ice_txtime_ctx {
        u8 drbell_mode_32;
 #define ICE_TXTIME_CTX_DRBELL_MODE_32  1
        u8 ts_res;
+#define ICE_TXTIME_CTX_FETCH_PROF_ID_0 0
        u8 ts_round_type;
        u8 ts_pacing_slot;
        u8 merging_ena;
-- 
2.43.0
