On Mon, 26 Feb 2024, Peng Fan wrote:
> Hi Stefano,
> 
> > Subject: Re: question about virtio-vsock on xen
> > 
> > Hi Peng,
> > 
> > We haven't tried to setup virtio-vsock yet.
> > 
> > In general, I am very supportive of using QEMU for virtio backends. We use
> > QEMU to provide virtio-net, virtio-block, virtio-console and more.
> 
> Would you mind sharing how to set up virtio-console using QEMU + Xen?

Vikram (CCed) has been working on it and should be able to share more
details.

 
> > However, typically virtio-vsock comes into play for VM-to-VM
> > communication, which is different. Going via QEMU in Dom0 just to have
> > one VM communicate with another VM is not an ideal design: it adds
> > latency and uses resources in Dom0 when we could actually do without it.
> > 
> > A better model for VM-to-VM communication would be to have the VMs talk
> > to each other directly via the grant table or pre-shared memory (see the
> > static shared memory feature) or via Xen hypercalls (see Argo).
> 
> The goal is to make the Android Trout VM run with Xen + i.MX95, so vsock is needed.

I am not familiar with the details of Android Trout... Where is vsock
used? Just asking for my own understanding.


> > For a good Xen design, I think the virtio-vsock backend would need to be in
> > Xen itself (the hypervisor).
> > 
> > Of course that is more work and it doesn't help you with the specific 
> > question
> > you had below :-)
> > 
> > For that, I don't have a pointer to help you but maybe others in CC have.
> > 
> > Cheers,
> > 
> > Stefano
> > 
> > 
> > On Fri, 23 Feb 2024, Peng Fan wrote:
> > > Hi All,
> > >
> > > Has anyone made virtio-vsock on Xen work? My device model args are as below:
> > >
> > > virtio = [
> > > 'backend=0,type=virtio,device,transport=pci,bdf=05:00.0,backend_type=qemu,grant_usage=true'
> > > ]
> > > device_model_args = [
> > > '-D', '/home/root/qemu_log.txt',
> > > '-d',
> > > 'trace:*vsock*,trace:*vhost*,trace:*virtio*,trace:*pci_update*,trace:*pci_route*,trace:*handle_ioreq*,trace:*xen*',
> > > '-device',
> > > 'vhost-vsock-pci,iommu_platform=false,id=vhost-vsock-pci0,bus=pcie.0,addr=5.0,guest-cid=3']
> > >
> > > During my test, it always returns a failure in the dom0 kernel in the code below:
> > >
> > > vhost_transport_do_send_pkt {
> > > ...
> > >         nbytes = copy_to_iter(hdr, sizeof(*hdr), &iov_iter);
> > >         if (nbytes != sizeof(*hdr)) {
> > >                 vq_err(vq, "Faulted on copying pkt hdr %x %x %x %px\n",
> > >                        nbytes, sizeof(*hdr),
> > >                        __builtin_object_size(hdr, 0), &iov_iter);
> > >                 kfree_skb(skb);
> > >                 break;
> > >         }
> > > }
> > >
> > > I checked copy_to_iter: it copies data to a __user address, but it
> > > never succeeds; the copy to the __user address always returns 0 bytes
> > > copied.
> > >
> > > The asm instruction "sttr x7, [x6]" triggers a data abort and the
> > > kernel runs into do_page_fault, but lock_mm_and_find_vma reports
> > > VM_FAULT_BADMAP, which means the __user address is not mapped: no
> > > vma covers this address.
> > >
> > > I am not sure what may cause this. I would appreciate any comments.
> > >
> > > BTW: I tested virtio-blk PCI and it works, so virtio PCI should work
> > > on my setup.
> > >
> > > Thanks,
> > > Peng.
> > >
> 
