Forking the thread to discuss a memory consistency/ordering model.

AFAICT, a dmadev can be anything from a part of the CPU to a completely
separate PCI device.  However, I don't see any memory ordering being
enforced, or even described, in the dmadev API or documentation.
Please point me to the correct documentation if I somehow missed it.

We have a DMA device (A) and a CPU core (B) writing, respectively,
the data and the descriptor info.  CPU core (C) is reading the
descriptor and the data it points to.

A few things about that process:

1. There is no memory barrier between writes A and B (did I miss
   it?).  This means that those operations can be seen by C in a
   different order, regardless of any barriers issued by C and
   regardless of the nature of devices A and B.

2. Even if there is a write barrier between A and B, there is
   no guarantee that C will see these writes in the same order,
   as C doesn't use real memory barriers, because vhost advertises
   VIRTIO_F_ORDER_PLATFORM.

So I'm coming to the conclusion that there is a missing write barrier
on the vhost side, and that vhost itself must not advertise
VIRTIO_F_ORDER_PLATFORM, so that the virtio driver can use actual
memory barriers.

I would like to hear some thoughts on this topic.  Is it a real issue?
Is it an issue when considering all possible CPU architectures and DMA
HW variants?

Best regards, Ilya Maximets.
