Hi Akilesh,
Please see inline.
On Wed, Feb 4, 2015 at 11:32 AM, Akilesh K wrote:
> Hi,
> Issue 1:
> I do not understand what you mean. I did specify the physical_network.
> What I am trying to say is that some physical networks exist only on the
> compute node and not on the network node. We are unab
The only option to provide a PCI passthrough vNIC is as described in the
previously referenced wiki page: create a neutron port with vnic_type=direct
and then 'nova boot' with the pre-created port.
Do you still think this is correct?
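For reference, the end-to-end flow would look like the following (the network
name and the image/flavor values are placeholders, not from this thread):
neutron port-create <tenant-net> --binding:vnic_type direct
nova boot --image <image> --flavor <flavor> --nic port-id=<port-uuid> vm1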
>
>
>
> On Wed, Feb 4, 2015 at 8:08 P
Setting the profile with pci_slot details by hand can be very dangerous,
since you skip the phase in which this PCI slot is reserved by nova. The
system may become inconsistent.
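(To illustrate what I mean, a command like the following bypasses nova's PCI
accounting entirely; the slot value here is made up for the example:
neutron port-create <net> --binding:profile type=dict pci_slot=0000:03:10.1)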
>
>
> Thank you,
> Ageeleshwar K
>
> On Thu, Feb 5, 2015 at 12:19 PM, Irena Berezovsky
> wrote:
>
>> Hi Akilesh
Hi Chris, Jiang,
We are also looking into enhancement of basic PCI pass-through to provide
SR-IOV based networking.
In order to support automatic provisioning, this requires awareness of which
virtual network the requested SR-IOV device should be connected to.
This should be considered by the scheduler in
Hi Stefan,
You have to use the following form of the nova boot command:
nova boot --image cirros-0.3.1-x86_64-uec --flavor 1 --nic
port-id=a2183706-63c0-4468-8194-8fe4ce064558 vm1
port-id is the ID of the port you previously created.
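If you have not created the port yet, something like the following should
work (the network name is a placeholder):
neutron port-create <your-network>
The 'id' field in the output is the value to pass to --nic port-id=.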
Hope it helps,
Irena
From: Stefan Apostoaie [mailto:ioss...@g
Hi Nishant,
Mellanox plugin supports two types of VF provisioning: as PCI passthrough
(hostdev) and as macvtap (mlnx_direct) vNIC.
According to the log, you want to use the first type (hostdev).
Please follow the instructions at:
http://Community.mellanox.com/community/develop/
Hi Nishant,
Following Salvatore's suggestion, I think it is best to consider using the
ML2 plugin, which makes several backend technologies available in your setup.
If you are looking to deploy the Mellanox solution alongside another
technology, there is a Mellanox ML2 mechanism driver that is currently under
review.
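As a rough sketch, running two mechanism drivers side by side in ml2_conf.ini
would look like the lines below ('mlnx' as the Mellanox driver name is my
assumption; check the driver's documentation for the exact name):
[ml2]
mechanism_drivers = openvswitch,mlnx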
Hi Simon,
Please check your neutron server configuration.
To support VXLAN networks, you should have the following configuration in
ml2_conf.ini:
[ml2]
type_drivers = vxlan,local
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_vxlan]
vni_ranges = 65537:6
For the res
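Once ml2_conf.ini is updated and neutron-server restarted, a quick sanity
check is (the network name is a placeholder):
neutron net-create demo-net
neutron net-show demo-net
As admin, you should see provider:network_type vxlan in the output.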
Hi Don,
It seems there is a problem on the neutron side: ML2 refuses to bind the
port.
Can you please share the error you get on the neutron server?
I am not sure, but it seems the neutron ML2 configuration is not accurate.
Given the commands you shared, I think you should change it as follows:
[ovs]
b
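(The snippet is cut off above; for reference, a typical [ovs] section that
maps a physical network to a bridge looks like this, with physnet1 and
br-eth1 as placeholder names:
[ovs]
bridge_mappings = physnet1:br-eth1)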
Hi Gideon,
Support for nested containers is not merged into the kuryr repository yet.
You can try to experiment with this patch:
https://review.openstack.org/#/c/402462/
As for the proper devstack settings for such an environment, the 'undercloud'
and 'overcloud' devstack settings will be added to this pa
Probably https://github.com/openstack/kuryr-kubernetes
On Sun, Sep 10, 2017 at 4:29 PM, Gary Kotton wrote:
> Hi,
>
> I suggest that you take a look at https://wiki.openstack.org/wiki/Kuryr.
> This most probably already has the relevant watchers implemented.
>
> Thanks
>
> Gary
>
>
>
> *From: *M