On 07/07/2017 01:53 PM, Dr. David Alan Gilbert wrote:
* Maxime Coquelin (maxime.coque...@redhat.com) wrote:
On 06/28/2017 09:00 PM, Dr. David Alan Gilbert (git) wrote:
From: "Dr. David Alan Gilbert"<dgilb...@redhat.com>
**HACK - better solution needed**
We have the situation where:

     qemu                     bridge

     send set_mem_table
                              map memory
  a)                          mark area with UFD
                              send reply with map addresses
  b)                          start using
  c) receive reply
As soon as (a) happens, qemu might start seeing faults
from memory accesses (though in practice none arrive until (b));
but it can't process those faults until (c), when it has received
the mmap addresses.
Make the fault handler spin until it gets the reply in (c).
At the very least this needs some proper locks, but preferably
we need to split the message.
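
(As a rough illustration of what "proper locks" could look like here,
the fault handler could block on a condition variable until the reply
in (c) arrives, instead of spinning. The state struct and names below
are invented for the sketch, not the actual patch code:)

/* Minimal sketch; hypothetical postcopy state, not QEMU's actual API. */
#include <pthread.h>
#include <stdbool.h>

struct pc_state {
    pthread_mutex_t lock;
    pthread_cond_t  map_reply_cond;
    bool            have_map_reply;   /* set once (c) arrives */
};

/* Fault handler thread: block until the set_mem_table reply carrying
 * the bridge's mmap addresses has been received, rather than spinning. */
static void wait_for_map_reply(struct pc_state *s)
{
    pthread_mutex_lock(&s->lock);
    while (!s->have_map_reply) {
        pthread_cond_wait(&s->map_reply_cond, &s->lock);
    }
    pthread_mutex_unlock(&s->lock);
}

/* Main thread: called at (c), when the reply with the map addresses
 * is received; wakes any fault handler that blocked above. */
static void map_reply_received(struct pc_state *s)
{
    pthread_mutex_lock(&s->lock);
    s->have_map_reply = true;
    pthread_cond_broadcast(&s->map_reply_cond);
    pthread_mutex_unlock(&s->lock);
}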
Yes, maybe the slave channel could be used to send the ufds with
a dedicated request? The backend would set the reply-ack flag, so that
it starts accessing guest memory only once Qemu is ready to handle
faults.
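
(A rough sketch of what that flow might look like on the backend side.
The request id and message layout below are invented for illustration;
only the need-reply bit, bit 3 of the flags, matches the vhost-user
spec, and the ufds themselves would travel as SCM_RIGHTS ancillary
data, omitted here for brevity:)

#include <stdint.h>
#include <unistd.h>

#define VHOST_USER_SLAVE_UFD        100        /* hypothetical request id */
#define VHOST_USER_NEED_REPLY_MASK  (1u << 3)  /* need_reply flag bit */

typedef struct {
    uint32_t request;
    uint32_t flags;
    uint32_t size;
    uint64_t payload;   /* carries the ack value in the reply */
} SlaveMsg;

/* Backend side: send the request, then block until Qemu acks. */
static int send_ufds_and_wait(int slave_fd, int *ufds, int nufds)
{
    (void)ufds; (void)nufds;   /* would be attached as SCM_RIGHTS */

    SlaveMsg msg = {
        .request = VHOST_USER_SLAVE_UFD,
        .flags   = VHOST_USER_NEED_REPLY_MASK,
        .size    = 0,
    };
    if (write(slave_fd, &msg, sizeof(msg)) != sizeof(msg)) {
        return -1;
    }

    /* Don't touch guest memory until this returns: the ack means Qemu
     * has registered the areas with userfaultfd and can handle faults. */
    SlaveMsg ack;
    if (read(slave_fd, &ack, sizeof(ack)) != sizeof(ack)) {
        return -1;
    }
    return ack.payload == 0 ? 0 : -1;   /* 0 means success */
}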
Yes, that would make life a lot easier.
Note that slave channel support has not been implemented in Qemu's
libvhost-user yet, but this is something I can do if we feel the need.
Can you tell me a bit about how the slave channel works?
When the backend advertises the VHOST_USER_PROTOCOL_F_SLAVE_REQ protocol
feature, Qemu creates a new channel using socketpair() and passes one of
the file descriptors to the backend using a dedicated request.
Then the backend can send requests to Qemu, using the same protocol
Qemu uses to send requests to the backend. So, as on the "master"
channel, the backend can set the VHOST_USER_F_NEED_REPLY flag on a
request, so that it can wait for Qemu to ack (or nack) that the request
has been handled.
It is currently only used by the backend to send IOTLB miss requests.
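
(For illustration, the channel setup boils down to something like the
sketch below: socketpair() plus SCM_RIGHTS fd-passing over the existing
vhost-user socket. This is illustrative only, not Qemu's actual code:)

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Qemu side: create the pair, keep sv[0] for itself, and hand sv[1]
 * to the backend as ancillary data on the vhost-user socket chan_fd
 * (in reality attached to the dedicated request message). */
static int create_and_send_slave_fd(int chan_fd)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        return -1;
    }

    char dummy = 0;   /* stands in for the real request header */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    memset(ctrl, 0, sizeof(ctrl));

    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = ctrl,
        .msg_controllen = sizeof(ctrl),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &sv[1], sizeof(int));

    if (sendmsg(chan_fd, &msg, 0) < 0) {
        close(sv[0]);
        close(sv[1]);
        return -1;
    }
    close(sv[1]);   /* the backend now owns its end */
    return sv[0];   /* Qemu keeps this end to receive slave requests */
}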
Note that you need to be careful regarding deadlocks, as libvhost-user
is single-threaded.
More info may be found in docs/interop/vhost-user.txt
(docs/specs/vhost-user.txt in older versions)
Maxime
Dave
Maxime
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK