hake which caused our issue.
After going through the whole stack, we now have Octavia working
flawlessly.
Best regards
Tobias
On 10/23/2018 04:31 PM, Tobias Urdin wrote:
Hello Erik,
Could you specify the DNs you used for all certificates just so that I
can rule it out on my
Hello,
It seems quite a few of us are having difficulties getting it to work.
I missed adding operators ML to my previous reply, sent it again.
I'm at the point where SSL pretty much becomes a hassle for operations;
if there was an option to just
go with a shared secret I would've done so a while ago, w
+operators
My bad.
On 10/22/2018 10:22 AM, Tobias Urdin wrote:
Hello,
I've been having a lot of issues with SSL certificates myself; I'm on my
second attempt now trying to get it working.
Before, I spent a lot of time walking through every line in the DevStack
plugin and fixing my config op
Hello,
Due to bugs and fixes that have been needed, we are probably going to
merge some changes to the
Puppet modules which will cause a refresh of their services, meaning they
will be restarted.
If you are following the stable branches (stable/rocky in this case) and
not using tagged releases when
?
Best regards
Tobias
On 08/20/2018 11:58 AM, Tobias Urdin wrote:
Continuing forward, these patches should've fixed that
https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged)
I'm on Queens.
The two inside interfaces on the backup router:
[root@controll
to an L3 HA router?
Best regards
On 08/20/2018 11:50 AM, Tobias Urdin wrote:
Ok, so the issue here seems to be that I have an L3 HA router with
SLAAC; both the active and standby routers will
configure the SLAAC-obtained address, causing a conflict since both
sides share the same MAC address.
Is
the standby router?
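A quick way to confirm this (a sketch; the router UUID is a placeholder)
is to list the addresses in the router namespace on both network nodes
and compare:

# Run on the nodes hosting the active and the standby router:
ip netns exec qrouter-<router-uuid> ip -6 addr show
# If the same global (non link-local) address appears in both
# namespaces, the standby has configured the SLAAC address as well.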
Best regards
Tobias
On 08/20/2018 11:37 AM, Tobias Urdin wrote:
Forgot [neutron] tag.
On 08/20/2018 11:36 AM, Tobias Urdin wrote:
Hello,
Note: before reading, this router was a regular router but was then
disabled, ha=true was set so it's now an L3 HA router, then it was enabled
again.
CC openstack-dev for help or feedback if it's a possible bug.
I've been testing around with IPv6 and overall the experience has been
posi
Hello Kendall,
I think you can just leave them in the group then, at your convenience.
If they are there, we can start using them.
Best regards
Tobias
On 08/17/2018 11:08 PM, Kendall Nelson wrote:
On Fri, Aug 17, 2018 at 12:15 AM Tobias Urdin <tobias.ur...@binero.se>
stack.org/#%21/project_group/60>
On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin <tobias.ur...@binero.se> wrote:
Hello Kendall,
Thanks for your reply, that sounds awesome!
We can then dig around and see how everything looks when all
project bugs are imported to stories.
t need an infra core to kick off
the script once the change merges.
-Kendall (diablo_rojo)
On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin <tobias.ur...@binero.se> wrote:
Hello all incredible Puppeters,
I've tested setting up a Storyboard instance and test-migrated
puppet-ceph, and it went smoothly there using the documentation [1] [2],
with just one minor issue during the SB setup [3].
My goal is that we will be able to swap to Storyboard during the
/98d3cf24d6d7b5eca55ca19eb19bdd2e7b1975ec/neutron_dynamic_routing/services/bgp/agent/driver/ryu/driver.py#L131
From: Tobias Urdin
Sent: Monday, July 30, 2018 7:08 PM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] [neutron] [neutron-dynamic
Hello,
I'm trying to get the neutron-bgp-dragent that is delivered by the
neutron-dynamic-routing project to work.
I've gotten it to open a BGP peering session without any issues, but no
BGP UPDATE messages seem to be sent from the
neutron-bgp-dragent daemon.
I have a BGP peering with a machine
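For reference, I wired up the speaker and peer along these lines (a
sketch; the ASNs, names and peer IP below are example values, not our
real ones):

neutron bgp-speaker-create --local-as 64512 --ip-version 4 example-speaker
neutron bgp-peer-create --peer-ip 192.0.2.1 --remote-as 64513 example-peer
neutron bgp-speaker-peer-add example-speaker example-peer
neutron bgp-speaker-network-add example-speaker <external-network>
neutron bgp-dragent-speaker-add <dragent-uuid> example-speaker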
Hello everybody,
I'm cross-posting this with operators list.
The openvswitch flow-based stateful firewall driver, which uses the
conntrack support in Linux kernel >= 4.3 (iirc), has been
marked as experimental for several releases now. Is there any
information about flaws in it and why it should
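For context, enabling the driver is just a config switch on the agent
side; an illustrative snippet (the file path may differ between distros):

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[securitygroup]
firewall_driver = openvswitch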
her way to fully isolate it.
Would've been cool if the RFE you mentioned [1] could be researched and,
if it qualifies, implemented.
Best regards
[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771523
On 06/04/2018 03:32 PM, Tobias Urdin wrote:
> Hello,
> Thanks for the rep
ts/
On 06/04/2018 02:50 PM, Matt Riedemann wrote:
> On 6/4/2018 6:43 AM, Tobias Urdin wrote:
>> I have received a question about a more specialized use case where we
>> need to isolate several hypervisors
>>
>> to a specific project. My first thinking was using nova
Hello,
I have received a question about a more specialized use case where we
need to isolate several hypervisors
to a specific project. My first thought was to use nova flavors only for
that project and add extra specs properties to use a specific host
aggregate, but this
means I need to assign v
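For what it's worth, the combination I was thinking of looks roughly
like this (a sketch; the names and values are made up, and it assumes the
AggregateInstanceExtraSpecsFilter is enabled in the scheduler):

openstack aggregate create project-x-hosts
openstack aggregate add host project-x-hosts compute01
openstack aggregate set --property project=x project-x-hosts
openstack flavor create --ram 4096 --disk 40 --vcpus 2 --private project-x.medium
openstack flavor set --property aggregate_instance_extra_specs:project=x project-x.medium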
We are using the billing engine that is part of the commercial software
provided by Atomia [1].
We're using Ceilometer as of now, but they just recently added support for
Gnocchi, which we are going to use for our newer setups.
[1] https://www.atomia.com
On 03/14/2018 05:13 PM, Lars Kellogg-Stedman wrote:
> On
Hello,
This seems to assume tunnelled migrations; the live_migration_flag option is
removed in later versions but is there in Mitaka.
Do you have the VIR_MIGRATE_TUNNELLED flag set for [libvirt]live_migration_flag
in nova.conf?
Might be a long shot, but I've removed VIR_MIGRATE_TUNNELLED in our clouds
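For reference, this is roughly what the Mitaka-era option looks like with
the tunnelled flag dropped (illustrative values; double-check your own
defaults):

[libvirt]
#live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE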
Hello Ulrich,
My personal opinion would be that you should not use Puppet for
orchestration like creating resources, but it is possible with Puppet's node
implementations!
I think what you are looking for is something like this:
https://github.com/puppetlabs/puppetlabs-node_openstack
It m
Hello Lars,
This is great; we have been using our own checks previously, but having a
great and cleaned-up option upstream is worth gold.
This way we can all collaborate on having this as a standard toolbox for
health checks.
Looking forward to seeing this move forward and would love to contribute, an
the network node,
I only see the br-* interfaces and the physical ones.
I assume I'd need to take down one of the interfaces that keepalived is
responsible for, but I'm not sure how to find them and match each
interface to its router in order to choose the right interface to
Hello,
Some work was done in that area; however, it was never completed.
https://bugs.launchpad.net/neutron/+bug/1370033
You can issue an ugly failover by taking down the "ha" interface in the router
namespace of the master with ifconfig down. But it's not pretty.
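Roughly like this (a sketch; the router UUID and interface name are
placeholders):

# Find the HA interface in the master's router namespace:
ip netns exec qrouter-<router-uuid> ip -o link show | grep ha-
# Take it down; keepalived on the standby should then take over:
ip netns exec qrouter-<router-uuid> ifconfig <ha-interface> down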
Best regards
On 09/28/2016 11:4
hat might help (specifically if
> using the Ceph backend)[2].
>
> [1] http://gnocchi.xyz/
> [2] http://www.slideshare.net/GordonChung/gnocchi-profiling-v2
>
> On 14/09/2016 9:57 AM, Tobias Urdin wrote:
>> Hello,
>>
>> We are running Ceilometer with MongoDB as storage b
Hello,
On your compute nodes in nova.conf
[DEFAULT]
resume_guests_state_on_host_boot = True
All instances that had a running state when the reboot occurred will be
started again.
Best regards
On 09/19/2016 03:20 PM, Davíð Örn Jóhannsson wrote:
Ubuntu 14.04
OpenStack Liberty
I'm looking for
Hello,
We are running Ceilometer with MongoDB as storage backend in production
and it's building up quite fast.
I just have a simple question: how large are the MongoDB setups people
are running with Ceilometer?
More details about backup, replicas and sharding would also be appreciated.
I think
t; errors in logs with DiskFilter removed related to scheduling? thx will
>
>
>
> On Thu, Aug 18, 2016 at 2:40 PM, Tobias Urdin
> wrote:
>> Hello William,
>>
>> We are booting our instances from Cinder volumes over iSCSI with
>> multipath and can perform live m
lliam Josefsson wrote:
> yes thx tobias, did you try live-migration after this change? I will
> try on my end too and share whether it worked with the DiskFilter
> disabled or not. thx will
>
> On Mon, Aug 15, 2016 at 4:22 PM, Tobias Urdin
> wrote:
>> Hello,
>>
>> We
Hello,
We are having this issue in production and I have since pushed for a fix
for it.
We solved it by removing the DiskFilter filter, ignoring any resource
tracker data from nova that is shown in horizon, and keeping close
monitoring on our block storage usage.
nova.conf:
#scheduler_default_filters = RetryFilter, A
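Illustratively, the resulting list is just the defaults minus DiskFilter,
something like this (double-check against your release's defaults):

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter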
Hello,
I'm going to give it a try here and see if anybody with a similar setup
could answer some questions about scaling.
We are running Liberty with Nova on KVM and Neutron with L3 HA and
ML2+Open vSwitch.
* How many nova instances do you have?
* How many nova compute nodes do you have?
* How many
Hello Dale,
I would recommend the release notes for each project [1].
I have also had great help from the Grenade [2] project, where you can see the
upgrade process between different versions.
Some projects also have their own, but the release notes are your best
combined source.
The only one I know of