Hi Maciek,

I want to lay out the issues we've seen so far, starting with a short 
description of our setup. The stack consists of OpenStack, ODL (with the GBP 
and VBD components, which handle the configuration of HC/VPP), VPP with HC, 
and qemu with kvm. From the OpenStack perspective, what doesn't work is 
assigning security groups to VMs. Doing so modifies the VM's vhost-user port: 
the GBP implementation deletes the port from VPP and then recreates exactly 
the same port (same socket file and everything). Qemu owns the socket and VPP 
just connects/reconnects to it.
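
To make that concrete, here is a rough sketch of the two sides. GBP actually 
drives this through HC/NETCONF, but in vppctl terms it boils down to roughly 
the following; the socket path, interface name and MAC are made up, the VPP 
CLI syntax is the 16.x-era one, and the qemu options are the old short forms 
that libvirt generates for a server-mode vhostuser port:

    # qemu owns the vhost-user socket (server side); vhost-user needs
    # shared, hugepage-backed guest memory (rest of the VM definition omitted)
    qemu-system-x86_64 -enable-kvm -m 1024 \
      -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=charnet0,path=/tmp/sock0.sock,server,nowait \
      -netdev type=vhost-user,id=hostnet0,chardev=charnet0 \
      -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:00:00:01

    # VPP is the client and connects to that socket (no 'server' keyword)
    vppctl create vhost-user socket /tmp/sock0.sock
    vppctl set interface state VirtualEthernet0/0/0 up

    # what the GBP renderer effectively does on a security group change:
    vppctl delete vhost-user VirtualEthernet0/0/0
    vppctl create vhost-user socket /tmp/sock0.sock    # identical socket path
    vppctl set interface state VirtualEthernet0/0/0 up
    # note: the recreated interface may come back with a different instance
    # number unless 'renumber' is used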

And for the issues themselves:

1.      Basic vhost reconnect was not working. This was resolved once we found 
that the feature is only implemented in more recent versions of both VPP and 
Qemu than the ones we had been using. We confirmed that reconnect works on 
vpp-16.12-rc0~293_g63f70d2~b1318.x86_64

2.      We then hit an issue with multiple port recreations. A single security 
group assignment worked fine, but multiple assignments left the VMs unable to 
communicate - https://jira.opnfv.org/browse/FDS-144. Sean also independently 
found what seems to be the same issue, although on 16.09 - 
https://jira.opnfv.org/browse/FDS-145 (how we check the recreated port's state 
after each delete/create cycle is sketched after this list)

3.      Now we're hitting an issue with just vhost-user deletion - 
https://jira.fd.io/browse/VPP-528. Because of this I can't reproduce 2), so I 
haven't been able to file a VPP ticket for it yet (I'm not sure whether it 
still exists in the latest VPP)
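
For reference, this is roughly how we check the recreated port after each 
delete/create cycle (vppctl, 16.x-era CLI; the interface name is illustrative):

    vppctl show vhost-user VirtualEthernet0/0/0   # socket path, negotiated features, ring state
    vppctl show interface VirtualEthernet0/0/0    # admin/link state and counters
    vppctl show errors                            # any vhost-related error counters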

I hope this gives you better insight into what we need: it's not user-mode 
vSwitch restarts (crash, upgrade, reload), but rather recreating the exact 
same port the VM uses, while nothing happens to the VM itself.

Regards,
Juraj

From: fds-dev-boun...@lists.opnfv.org [mailto:fds-dev-boun...@lists.opnfv.org] 
On Behalf Of Maciek Konstantynowicz (mkonstan)
Sent: Monday, 14 November, 2016 20:42
To: Frank Brockners (fbrockne) <fbroc...@cisco.com>; fds-...@lists.opnfv.org
Cc: csit-...@lists.fd.io; Amnon Ilan <ai...@redhat.com>; vpp-dev@lists.fd.io
Subject: [Fds-dev] OPNFV/FDS - vhost negative scenarios [Was: REMINDER - 
actions from - csit weekly 161109]

Dear Frank, fds-dev,

re the OPNFV/FDS vhost reconnect requirement that you and the team have been
pursuing for OPNFV Colorado 2.0 - on the last CSIT call I got an action to
confirm which negative scenarios you’re actually covering.

Here is my understanding of the three use cases identified to handle
vhost-user-to-virtio (vhost-virtio for short) connectivity disruption at
the data plane level, without involving the orchestration stack (libvirt and
above incl. OpenStack) - to keep me honest cc’ing Amnon who is coordinating
with vhost/qemu folks:-

1. vhost reconnect - compatible with virtio1.0 - all shipping VM-based VNFs
    a. user-mode vSwitch reboots e.g. crash, upgrade, reload
        - vhost backend goes away.
        - qemu remembers negotiated virtio features and vring memory region.
        - vhost backend comes back.
        - qemu reconnects to vhost backend.
    b. reconnect is transparent to VM, VM is not aware.
    c. reconnect is transparent to the orchestration stack.

2. vhost hot-plug - compatible with virtio1.0 - all shipping VM-based VNFs
    a. user-mode vSwitch reboots e.g. crash, upgrade, reload
        - vhost backend goes away.
        - qemu instructs VM to unload/destroy associated virtio device instance.
        - qemu puts the connection into inactive state (new state).
        - vhost backend comes back.
        - qemu instructs VM to load/create associated virtio device instance -
          hot-plug.
        - normal negotiation of virtio features takes place.
    b. reconnect is not transparent to VM, VM is aware.
    c. reconnect is transparent to the orchestration stack.

3. virtio renegotiation - part of new virtio 1.1 spec - will take years to
   get assimilated into VM-based VNFs
    a. user-mode vSwitch reboots e.g. crash, upgrade, reload
        - vhost backend goes away.
        - qemu relays vhost backend state to VM virtio driver.
        - vhost backend comes back.
        - qemu facilitates a control channel from vhost to VM-virtio.
        - negotiation of virtio features takes place vhost to VM-virtio.
    b. reconnect is not transparent to VM, VM is aware.
    c. reconnect is transparent to the orchestration stack.
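
To make scenarios 1. and 2. above a bit more concrete (illustration only: the
socket path, ids and device names are made up, and scenario 2. is approximated
here with today's manual hot-plug monitor commands rather than the proposed
automatic behaviour):

    # Scenario 1 with qemu as the socket client, as enabled by the qemu 2.7
    # vhost-user reconnect work: the 'reconnect' chardev option makes qemu
    # retry the connection when the vhost backend (the vSwitch) goes away.
    -chardev socket,id=charnet0,path=/tmp/sock0.sock,reconnect=1
    -netdev type=vhost-user,id=hostnet0,chardev=charnet0
    -device virtio-net-pci,netdev=hostnet0

    # Scenario 2 approximated manually from the qemu monitor (HMP):
    (qemu) device_del nic0          # VM sees the virtio device removed
    (qemu) netdev_del hostnet0
    # ... vhost backend comes back ...
    (qemu) netdev_add vhost-user,id=hostnet0,chardev=charnet0
    (qemu) device_add virtio-net-pci,netdev=hostnet0,id=nic0
                                    # VM sees a hot-plug, features are renegotiated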


I believe what you are implementing in OPNFV/FDS, based on qemu 2.7, is point 1.
We are discussing point 2. with Redhat and Intel QEMU folks - Damjan and
Pierre are driving this discussion, I'm assisting. Point 3. is about the
proper long-term solution to address 100% of situations in the future.
Points 2. and 3. are tracked in a Monthly DPDK-Virtio meeting coordinated
by Amnon.

-Maciek


Begin forwarded message:

From: Maciek Konstantynowicz <mkons...@cisco.com<mailto:mkons...@cisco.com>>
Subject: REMINDER - actions from - csit weekly 161109
Date: 14 November 2016 at 17:50:39 GMT
To: csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>

http://ircbot.wl.linuxfoundation.org/meetings/fdio-csit/2016/fdio-csit.2016-11-09-15.00.html

Hi, I didn’t see any activity on any of the below, so most likely I missed the progress.
Could those involved send a quick update by reply-all, with links as applicable.

Action items:
• Maciek to clarify with OPNFV/FDS situation re vhost reconnect
• tfherbert, dwallacelf - Create CentOS VIRL image (CSIT)
• tfherbert, edwarnicke - Create new set of verify jobs with CentOS executor 
(CI-MAN)
• tfherbert, pmikus - Create CentOS bootstrap script (CSIT)

-Maciek

