> From: Jason Wang <[email protected]>
> Sent: Tuesday, April 25, 2023 2:31 AM

> >> 2, Typical virtualization environment. The workloads run in a guest,
> >> QEMU handles virtio-pci (or MMIO) and forwards requests to the target.
> >>          +----------+    +----------+       +----------+
> >>          |Map-Reduce|    |   nginx  |  ...  | processes|
> >>          +----------+    +----------+       +----------+
> >> ------------------------------------------------------------
> >> Guest        |               |                  | Kernel
> >>           +-------+       +-------+          +-------+
> >>           | ext4  |       | LKCF  |          | HWRNG |
> >>           +-------+       +-------+          +-------+
> >>               |               |                  |
> >>           +-------+       +-------+          +-------+
> >>           |  vdx  |       |vCrypto|          | vRNG  |
> >>           +-------+       +-------+          +-------+
> >>               |               |                  | PCI
> >> --------------------------------------------------------
> >>                               |
> >> QEMU                  +--------------+
> >>                       |virtio backend|
> >>                       +--------------+
> >>                               |
> >>                           +------+
> >>                           |NIC/IB|
> >>                           +------+
> >>                               |                      +-------------+
> >>                               +--------------------->|virtio target|
> >>                                                      +-------------+
> >>

> > Use case 1 and 2 can be achieved directly without involving any
> > mediation layer or any other translation layer (for example virtio to
> > nfs).
> 
> 
> Not for use case 2, at least? It says there is a virtio backend in QEMU. Or is
> the only possible way to have virtio-of in the guest?
>
The front end and the back end are both virtio, so there is some layer of
mediation/translation from PCI to fabric commands, but it is not as radical as
virtio-blk to NFS or virtio-blk to NVMe.
 
