FYI, I took some time out this afternoon and wrote a detailed
certificate configuration guide. Hopefully this will help.
https://review.openstack.org/613454
Reviews would be welcome!
Michael
On Thu, Oct 25, 2018 at 7:00 AM Tobias Urdin wrote:
>
> Might as well throw it out here.
>
> After a lot
I have dug deep into the Glance code, adding debug output to see what I
can find in our Queens environment.
Here is my debug code (I have a lot more, but this is the salient part):

    # log the action being checked and the policy values derived
    # from the request context
    LOG.debug("in enforce(), action='%s', policyvalues='%s'", action,
              context.to_policy_values())
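As an aside, if you want to evaluate a policy rule outside the running
service, oslo.policy ships an oslopolicy-checker utility. A minimal
sketch, assuming a Keystone-token-style access file (file names and
paths here are illustrative; verify the options against your oslo.policy
version):

    # evaluate glance's policy file against a sample request context
    oslopolicy-checker --policy /etc/glance/policy.json \
        --access access.json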
melanie witt wrote on 10/25/2018 02:14:40 AM:
> On Thu, 25 Oct 2018 14:12:51 +0900, ボーアディネシュ[bhor Dinesh] wrote:
> > We had a similar use case to *Preemptible Instances*, called
> > *Rich-VMs*, which are high in resources and are deployed one per
> > hypervisor. We have a cus
Can you use a provider network to expose Galera to the VM?
Alternatively, you could put a DB on the VM side. You don't strictly need to
use the same DB for every component. If crossing the streams is hard, maybe
avoiding crossing at all is easier?
Thanks,
Kevin
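For reference, a minimal sketch of what Kevin's first suggestion could look
like (the physnet name, VLAN segment and subnet range are illustrative
assumptions, not values from this thread):

    # carry the existing Galera VLAN into Neutron as a provider network
    openstack network create galera-net \
        --provider-network-type vlan \
        --provider-physical-network physnet1 \
        --provider-segment 120

    # matching subnet; DHCP off so Neutron never hands out addresses
    # already used by the DB nodes
    openstack subnet create galera-subnet \
        --network galera-net \
        --subnet-range 192.0.2.0/24 \
        --no-dhcp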
You mean deploy Octavia into an OpenStack project? But then I would need
to connect the Octavia services to my Galera DBs... so, same problem.
On 10/25/18 at 5:31 PM, Fox, Kevin M wrote:
Would it make sense to move the control plane for this piece into the cluster?
(a VM in a management tenant?)
I managed to configure o-hm0 on the compute nodes and I am able to
communicate with the amphorae:

    # create Octavia management net
    openstack network create lb-mgmt-net -f value -c id

    # and the subnet
    openstack subnet create --subnet-range 172.31.0.0/16 --allocation-pool \
        start=172.31.17.10,end=1
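Not from Florian's mail, but the step that usually follows this in an
Octavia setup is a management security group; a sketch based on the
conventional amphora setup (group name and ports are the usual ones:
ICMP, SSH, and the amphora agent on 9443; adjust to taste):

    openstack security group create lb-mgmt-sec-grp
    openstack security group rule create --protocol icmp lb-mgmt-sec-grp
    openstack security group rule create --protocol tcp --dst-port 22 \
        lb-mgmt-sec-grp
    openstack security group rule create --protocol tcp --dst-port 9443 \
        lb-mgmt-sec-grp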
Would it make sense to move the control plane for this piece into the cluster?
(a VM in a management tenant?)
Thanks,
Kevin
From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 7:39 AM
To: openstack-operators@lists.opensta
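A rough sketch of the "VM in a management tenant" idea, purely for
illustration (project, image and flavor names are made up):

    # dedicated project for control-plane service VMs
    openstack project create octavia-mgmt
    openstack role add --project octavia-mgmt --user admin member

    # boot a VM there to host the service, attached to the mgmt net
    openstack server create octavia-ctrl-1 \
        --image ubuntu-18.04 \
        --flavor m1.medium \
        --network lb-mgmt-net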
It looks like devstack implements an o-hm0 interface to connect the
physical control host to a VxLAN.
In our case there is no VxLAN at the control nodes, nor is OVS.
Is it an option to deploy the Octavia services that need this connection
to the compute or network nodes and use o-hm0?
On 10/
Might as well throw it out here.
After a lot of troubleshooting we were able to narrow our issue down to
our test environment running qemu virtualization; we moved our compute
node to hardware and used KVM full virtualization instead.
We could properly reproduce the issue where generating a C
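Side note for anyone reproducing this: a quick way to confirm whether a
compute node is doing full KVM or plain qemu emulation (standard
nova/libvirt locations; adjust paths to your deployment):

    # does the host CPU support hardware virtualization?
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # is the KVM device present?
    ls -l /dev/kvm

    # what nova-compute is configured to use (kvm vs. qemu)
    grep virt_type /etc/nova/nova.conf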
Hi everyone,
Time for a new meeting for PCWG - today 1400 UTC in
#openstack-publiccloud! Agenda found at
https://etherpad.openstack.org/p/publiccloud-wg
Cheers,
Tobias
--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg
www.citynetwork.eu | www.citycloud.com
INNOVATION THROUGH O
Or could I create lb-mgmt-net as a VxLAN and connect the control nodes to
this VxLAN? How would I do something like that?
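For what it's worth, a hedged sketch of the devstack-style approach
mentioned earlier: create a Neutron port on lb-mgmt-net, then plug a
local o-hm0 interface into br-int bound to that port. This presumes OVS
and the VxLAN overlay actually reach the node (exactly what's missing on
these control nodes), and all names below are placeholders:

    # reserve a Neutron port on the management net
    PORT_ID=$(openstack port create octavia-health-manager-port \
        --network lb-mgmt-net -f value -c id)
    MAC=$(openstack port show $PORT_ID -f value -c mac_address)

    # plug an internal OVS port bound to that Neutron port
    sudo ovs-vsctl -- --may-exist add-port br-int o-hm0 \
        -- set Interface o-hm0 type=internal \
        -- set Interface o-hm0 external-ids:iface-status=active \
        -- set Interface o-hm0 external-ids:attached-mac=$MAC \
        -- set Interface o-hm0 external-ids:iface-id=$PORT_ID

    sudo ip link set dev o-hm0 address $MAC
    sudo dhclient o-hm0   # or configure the port's fixed IP statically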
On 10/25/18 at 10:03 AM, Florian Engelmann wrote:
Hmm - so right now I can't see any routed option because:
The gateway connected to the VLAN provider networks (bond1 on the
net
Hmm - so right now I can't see any routed option because:
The gateway connected to the VLAN provider networks (bond1 on the
network nodes) is not able to route any traffic to my control nodes in
the spine-leaf layer-3 backend network.
And right now there is no br-ex at all, nor any "stretched" L