Add a read memory barrier to ensure the order of operations when accessing control queue descriptors. Specifically, we want to avoid cases where loads can be reordered:
1. Load #1 is dispatched to read descriptor flags.
2. Load #2 is dispatched to read some other field from the descriptor.
3. Load #2 completes, accessing memory/cache at a point in time when the DD flag is zero.
4. NIC DMA overwrites the descriptor; now the DD flag is one.
5. Any fields loaded before step 4 are now inconsistent with the actual descriptor state.

Add a read memory barrier between steps 1 and 2, so that load #2 is not executed until load #1 has completed.

Fixes: 8077c727561a ("idpf: add controlq init and reset checks")
Reviewed-by: Przemek Kitszel <przemyslaw.kits...@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudr...@intel.com>
Suggested-by: Lance Richardson <rla...@google.com>
Signed-off-by: Emil Tantilov <emil.s.tanti...@intel.com>
---
 drivers/net/ethernet/intel/idpf/idpf_controlq.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
index 4849590a5591..61c7fafa54a1 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_controlq.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
@@ -375,6 +375,11 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 		desc = IDPF_CTLQ_DESC(cq, ntc);
 		if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
 			break;
+		/*
+		 * This barrier is needed to ensure that no other fields
+		 * are read until we check the DD flag.
+		 */
+		dma_rmb();

 		/* strip off FW internal code */
 		desc_err = le16_to_cpu(desc->ret_val) & 0xff;
@@ -562,6 +567,11 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 		if (!(flags & IDPF_CTLQ_FLAG_DD))
 			break;
+		/*
+		 * This barrier is needed to ensure that no other fields
+		 * are read until we check the DD flag.
+		 */
+		dma_rmb();

 		q_msg[i].vmvf_type = (flags &
 				      (IDPF_CTLQ_FLAG_FTYPE_VM |
-- 
2.17.2
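As an aside, the ordering pattern this patch enforces can be sketched in portable user-space C. This is an analogue for illustration only, not the kernel code: `atomic_thread_fence(memory_order_acquire)` stands in for `dma_rmb()`, and `struct desc`, `DD_FLAG`, and `read_desc()` are hypothetical names invented for the sketch.

```c
/* User-space sketch of the pattern: check the DD (descriptor done)
 * flag first, then issue a read barrier, and only then load the
 * descriptor payload.  Assumptions: atomic_thread_fence() stands in
 * for the kernel's dma_rmb(); struct desc and DD_FLAG are made up.
 */
#include <stdatomic.h>
#include <stdint.h>

#define DD_FLAG 0x1

struct desc {
	_Atomic uint16_t flags;	/* written last by the producer (NIC DMA) */
	uint16_t ret_val;	/* payload, written before flags is set */
};

/* Returns 1 and fills *out only when the descriptor is complete. */
static int read_desc(struct desc *d, uint16_t *out)
{
	if (!(atomic_load_explicit(&d->flags, memory_order_relaxed) & DD_FLAG))
		return 0;
	/* Without this fence, the payload load below could be reordered
	 * before the flag check and observe stale data (steps 2-5 in
	 * the commit message above). */
	atomic_thread_fence(memory_order_acquire);
	*out = d->ret_val;
	return 1;
}
```

A consumer polls `read_desc()` until it returns 1; the producer must publish with the mirror-image release ordering (write payload, then set the flag with `memory_order_release`), which is the role DMA-coherent writes play in the driver.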