For a live migration my understanding is that there is a suspend/resume operation:
- The VM image is regularly copied from the old host to the new one
  (pages modified by the running VM can be copied several times).
- As soon as only a few pages remain to copy, the VM is suspended on
  the old host, the last pages are copied and the VM is resumed on the
  new host.

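As a rough illustration of this pre-copy scheme, here is a minimal sketch; every helper name and number below is a stand-in for the example, not QEMU code:

/* A minimal sketch of the pre-copy loop described above; every helper and
 * number here is a stand-in for illustration, not QEMU code. */
#include <stdio.h>
#include <stddef.h>

static size_t pending = 1000;                 /* pretend dirty-page counter */

static size_t copy_dirty_pages(void)          /* copy pages dirtied since the last pass */
{
    pending /= 4;                             /* the working set shrinks each pass */
    return pending;
}

static void suspend_vm_on_source(void)      { puts("suspend VM on the old host"); }
static void resume_vm_on_destination(void)  { puts("resume VM on the new host"); }

int main(void)
{
    /* Iterative pre-copy: pages modified by the running VM are copied again. */
    while (copy_dirty_pages() > 16)
        ;
    /* Stop-and-copy: only a few pages are left, so the downtime stays short. */
    suspend_vm_on_source();
    copy_dirty_pages();
    resume_vm_on_destination();
    return 0;
}

The step that matters for this thread is the last one: the resume on the new host is what makes the guest kick its virtual queues again.
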
If my understanding is correct, on a resume operation we have the
following callback trace:
1. virtio_pci_restore, which calls the restore callback of every
   virtio device
2. virtnet_restore, which calls the try_fill_recv function for each
   virtual queue
3. try_fill_recv, which kicks the virtual queue

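A simplified sketch of that call chain, just to make the sequence concrete; the types and bodies below are placeholders, not the Linux implementation:

/* Simplified sketch of the call chain above (virtio_pci_restore ->
 * virtnet_restore -> try_fill_recv -> kick).  Types and bodies are
 * placeholders, not the Linux implementation. */
#include <stdbool.h>
#include <stdio.h>

struct virtqueue { const char *name; };

static bool try_fill_recv(struct virtqueue *vq)
{
    /* Refill the RX ring with empty buffers for the device to use. */
    (void)vq;
    return true;                               /* buffers were added */
}

static void virtqueue_kick(struct virtqueue *vq)
{
    /* The kick is what reaches the backend (for vhost-user, through the
     * queue's kick eventfd). */
    printf("kick %s\n", vq->name);
}

/* 2. Per-device restore: refill and kick each virtual queue. */
static void virtnet_restore(struct virtqueue *rxq, int n)
{
    for (int i = 0; i < n; i++)
        if (try_fill_recv(&rxq[i]))            /* 3. kick if buffers were added */
            virtqueue_kick(&rxq[i]);
}

/* 1. Bus-level restore: call the restore callback of every virtio device. */
static void virtio_pci_restore(void)
{
    struct virtqueue rx[2] = { { "rx0" }, { "rx1" } };
    virtnet_restore(rx, 2);
}

int main(void) { virtio_pci_restore(); return 0; }

So the backend can count on receiving at least one kick per RX queue once the resumed guest's driver is running again.
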
If we use the DRIVER_OK status bit as the trigger for the backend to
send the RARP, I am afraid that some legacy guests are not supported.
Moreover, the vhost-user backend is not aware of the change of the
DRIVER_OK status bit. If this solution is chosen as the event to send
the RARP, a message between QEMU and the vhost-user backend is needed
anyway.

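To make the concern concrete: the status register is written by the guest and emulated in QEMU, so only QEMU observes the DRIVER_OK transition. The sketch below is purely illustrative; the notification it sends is hypothetical and not an existing vhost-user message:

/* Sketch of the concern above: the guest writes the virtio status register,
 * which is emulated in QEMU, so only QEMU sees the DRIVER_OK transition.
 * The notification below is purely hypothetical, not an existing
 * vhost-user message. */
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_CONFIG_S_DRIVER_OK 4          /* standard virtio status bit */

static void notify_backend_driver_ok(void)
{
    puts("QEMU -> backend: guest set DRIVER_OK (hypothetical message)");
}

/* QEMU-side view of the guest updating the device status register. */
static void virtio_set_status(uint8_t old_status, uint8_t new_status)
{
    if (!(old_status & VIRTIO_CONFIG_S_DRIVER_OK) &&
         (new_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
        /* Without something like this, the backend never learns about it. */
        notify_backend_driver_ok();
    }
}

int main(void) { virtio_set_status(0, VIRTIO_CONFIG_S_DRIVER_OK); return 0; }
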
After a resume operation the guest always kicks the backend for each
virtual queue.
A live migration does a suspend operation on the old host and a resume
operation on the new host, so the backend gets a kick after the migration.
I have checked this point with a legacy guest (Red Hat 6.5 with kernel
ve...).

If I correctly understand how vhost-user / virtio works, the solution
proposed by Michael is OK:
- The rings used to exchange data between host and guest are allocated
  by the guest.
- As soon as the guest adds entries to a queue's ring (RX or TX), a kick
  is done on the eventfd associated with that queue.
- On a live migration the guest does this again on the new host, so the
  backend receives a kick once the migrated guest is running.

Ok.
The backend is able to know when the eventfd is kicked for the first time
and can then send a RARP for legacy guests.
With this backend modification the issue pointed out by Jason is no longer
a QEMU problem.
Full support of live migration for vhost-user:
- Needs my first patch.
- For legacy guests, the backend sends the RARP itself when it sees the
  first kick.

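For what it is worth, the backend side of that idea can be sketched in a few lines; eventfd(2)/read(2) are the real kernel interfaces, but send_rarp() and the surrounding structure are placeholders, not code from any existing backend:

/* Minimal sketch of the backend behaviour described above: watch a queue's
 * kick eventfd and send a RARP on the first kick after the rings are set up.
 * send_rarp() and the structure around it are placeholders. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

static void send_rarp(void)
{
    /* Placeholder: broadcast a RARP carrying the guest MAC address. */
    puts("sending RARP for legacy guest");
}

static void poll_kickfd(int kickfd)
{
    bool first_kick = true;
    uint64_t cnt;

    /* The guest signals this eventfd whenever it adds buffers to the ring,
     * so the first successful read means the (possibly migrated) guest is
     * up and driving the queue. */
    while (read(kickfd, &cnt, sizeof(cnt)) == sizeof(cnt)) {
        if (first_kick) {
            first_kick = false;
            send_rarp();
        }
        /* ... process the ring as usual ... */
    }
}

int main(void)
{
    /* Stand-in for the vring kick fd received via VHOST_USER_SET_VRING_KICK. */
    int fd = eventfd(0, EFD_NONBLOCK);
    uint64_t one = 1;

    if (write(fd, &one, sizeof(one)) != sizeof(one))  /* pretend one guest kick */
        return 1;
    poll_kickfd(fd);                /* exits once the counter is drained */
    close(fd);
    return 0;
}
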
I am not sure I understand your remark:
> It needs to be sent when the backend is activated by a guest kick
> (in case of virtio 1, it's possible to use DRIVER_OK for this).
> This does not happen while the VM still runs on the source.
Could you confirm the RARP can be sent by the backend when the
VHOST_USER_SET_VRING... message is received?

> Yes, but we still need a mechanism to notify the backend of migration
> completion from the QEMU side if GUEST_ANNOUNCE is not negotiated.
The backend is aware of a connection with the guest (through the feature
negotiation) and can send a RARP. This RARP will always be sent by the
backend when a VM is launched.

On 06/11/2015 04:25 AM, Thibaut Collet wrote:
> Yes, the backend can save everything needed to send the RARP on its own
> after a live migration.
Yes, but we still need a mechanism to notify the backend of migration
completion from the QEMU side if GUEST_ANNOUNCE is not negotiated.
> Main purpose of this patch is to answer the point raised by Jason on the
> previous version of my patch.

The warning message is not really necessary; it is just a reminder.

Yes, the backend can save everything needed to send the RARP on its own
after a live migration.
The main purpose of this patch is to answer the point raised by Jason on
the previous version of my patch:
> Yes, your patch works well for recent drivers. But the problem is legacy
> guests/drivers without VIRTIO_NET_F_GUEST_ANNOUNCE support.

On Wed, Jun 10, 2015 at 05:48:47PM +0200, Thibaut Collet wrote:
> I have involved QEMU because QEMU prepares the RARP. I agree that the
> backend probably has all the information to do that.
> But the backend does not know if the guest supports
> VIRTIO_NET_F_GUEST_ANNOUNCE.
Why not? The backend has the negotiated features...

I have involved QEMU because QEMU prepares the RARP. I agree that the
backend probably has all the information to do that.
But the backend does not know if the guest supports
VIRTIO_NET_F_GUEST_ANNOUNCE and will send a useless RARP.
Maybe this duplication of requests is not very important, and in that
case the backend could simply always send the RARP.

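If the backend does track the features it negotiated with the guest, the check is tiny; this is only a sketch, the helper name is made up, and it assumes VIRTIO_NET_F_GUEST_ANNOUNCE is feature bit 21 as in virtio-net:

/* Sketch of that check: if the backend keeps the features it negotiated
 * with the guest, it can skip the RARP when the guest will announce itself.
 * VIRTIO_NET_F_GUEST_ANNOUNCE is feature bit 21; the helper name is made up. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define VIRTIO_NET_F_GUEST_ANNOUNCE 21

static bool backend_should_send_rarp(uint64_t negotiated_features)
{
    /* Only legacy guests without GUEST_ANNOUNCE need the backend to do it. */
    return !(negotiated_features & (1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE));
}

int main(void)
{
    printf("legacy guest: %d\n", backend_should_send_rarp(0));
    printf("recent guest: %d\n",
           backend_should_send_rarp(1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE));
    return 0;
}
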
In case of live migration with a legacy guest (without
VIRTIO_NET_F_GUEST_ANNOUNCE),
a message is added between QEMU and the vhost client/backend.
This message provides the RARP content, prepared by QEMU, to the vhost
client/backend.
The vhost client/backend is responsible for sending the RARP.
Signed-off-by: Thibaut Collet

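For reference, a RARP frame of the kind QEMU could prepare here can be built as below. This follows the classic RARP layout (EtherType 0x8035, opcode 3, sender and target hardware addresses set to the guest MAC); it is only an illustration, not the code from this patch, and the example MAC is arbitrary:

/* Sketch of a RARP frame of the kind QEMU could prepare here: classic RARP
 * layout (EtherType 0x8035, opcode 3 = reverse request) with the guest MAC
 * as sender and target hardware address.  Illustration only, not the code
 * from this patch; the example MAC below is arbitrary. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_ALEN        6
#define ETH_P_RARP      0x8035
#define RARP_OP_REQUEST 3

/* Build a 42-byte Ethernet + RARP frame in network byte order. */
static size_t make_rarp(uint8_t *buf, const uint8_t mac[ETH_ALEN])
{
    uint8_t *p = buf;

    memset(p, 0xff, ETH_ALEN);  p += ETH_ALEN;            /* dst: broadcast     */
    memcpy(p, mac, ETH_ALEN);   p += ETH_ALEN;            /* src: guest MAC     */
    *p++ = ETH_P_RARP >> 8;     *p++ = ETH_P_RARP & 0xff; /* EtherType          */

    *p++ = 0x00; *p++ = 0x01;                             /* htype: Ethernet    */
    *p++ = 0x08; *p++ = 0x00;                             /* ptype: IPv4        */
    *p++ = ETH_ALEN;                                      /* hlen               */
    *p++ = 4;                                             /* plen               */
    *p++ = 0x00; *p++ = RARP_OP_REQUEST;                  /* opcode             */

    memcpy(p, mac, ETH_ALEN);   p += ETH_ALEN;            /* sender hw addr     */
    memset(p, 0, 4);            p += 4;                   /* sender IP: 0.0.0.0 */
    memcpy(p, mac, ETH_ALEN);   p += ETH_ALEN;            /* target hw addr     */
    memset(p, 0, 4);            p += 4;                   /* target IP: 0.0.0.0 */

    return (size_t)(p - buf);                             /* 42 bytes           */
}

int main(void)
{
    uint8_t frame[64];
    const uint8_t mac[ETH_ALEN] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };

    printf("RARP frame length: %zu\n", make_rarp(frame, mac));
    return 0;
}

How the frame is carried in the QEMU-to-backend message is not visible in this excerpt, so no message framing is shown.
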