Hi,
That is fantastic, just what I am looking for!
Thanks for the help.
Kind Regards,
On 15/12/17 10:43, Volodymyr Litovka wrote:
Hi Grant,
in the case of Octavia, you set the monitoring parameters when you
create the health monitor:
$ openstack loadbalancer healthmonitor create
usage: openstack
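Run bare like that, it just prints the usage text; a full invocation with the timing knobs looks something like this (flag names from the python-octaviaclient help; the pool name and values are illustrative):

  $ openstack loadbalancer healthmonitor create \
      --name web-hm \
      --type HTTP \
      --delay 5 \
      --timeout 3 \
      --max-retries 3 \
      web-pool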
timeout settings for the "Health Monitor"
Any help will be much appreciated.
Regards,
--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/>
gr...@absolutedevops.io
Hi Amy,
Many thanks for this, pleased to know that it is doable :) - We will
test this on our dev environment first to see if there are any issues or
not.
Will be sure to join the #openstack-ansible channel if we get stuck.
Thanks again,
Grant
On 04/10/17 15:56, Amy Marrich wrote:
Hi
things.
We are going to test this on our dev environment and see what breaks,
but just wondered if anyone here has come across this already and
suffered the pain :)
Any suggestions would be much appreciated.
Many thanks,
--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
the
endpoints to the HAProxy VIP address and the SSL certs are not valid for
an IP address.
Do you need to add the IP address to the cert, or disable SSL until you
have deployed (which is what we have done to get around it)? Or is there
a better method?
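One route is to reissue the cert with the VIP in its subjectAltName; a minimal self-signed sketch (the filenames, hostname and VIP here are assumptions, not from this thread):

  # san.cnf
  [ req ]
  prompt = no
  distinguished_name = dn
  x509_extensions = v3_req
  [ dn ]
  CN = cloud.example.com
  [ v3_req ]
  subjectAltName = DNS:cloud.example.com, IP:10.6.0.3

  $ openssl req -new -x509 -nodes -days 365 \
      -keyout haproxy.key -out haproxy.crt -config san.cnf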
Many thanks,
--
Grant Morley
Senior Cloud Engineer
Ignore that now all,
Managed to fix it by restarting the l3-agent. Looks like it must have
been cached in memory.
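For anyone hitting the same thing, the restart itself was nothing exotic (assuming systemd and the stock unit name):

  $ systemctl restart neutron-l3-agent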
Thanks,
On 08/06/17 18:07, Grant Morley wrote:
Hi All,
We have noticed in our neutron-l3-agent logs that there are a number
of routers that neutron seems to think exist
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent     raise RuntimeError(msg)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent RuntimeError:
Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "qg-c4080b5c-c9"
Has anyone come across this before?
We don't seem to have an entry for them anywhere from what
> When you set those flags, do all your OSDs stay up and in?
>
> On Jun 2, 2017, at 8:00 AM, Grant Morley
> wrote:
>
> HEALTH_ERR 210 pgs are stuck inactive for more than 300 seconds; 296 pgs
> backfill_wait; 3 pgs backfilling; 1 pgs degraded; 202 pgs peering; 1 pgs
> recovery_wait
>> Also, do you have big enough limits?
>>
>> Check on any host the content of: /proc/`pid_of_the_osd`/limits
>>
>>
>> Saverio
>>
>> 2017-06-02 14:00 GMT+02:00 Grant Morley:
>> > HEALTH_ERR 210 pgs are stuck inactive for more than 300 seconds; 296 pgs
>
> In the [osd] section, what values do you have for
> the following?
>
> [osd]
> osd max backfills
> osd recovery max active
> osd recovery op priority
>
> these three settings can influence the recovery speed.
>
> Also, do you have big enough limits?
>
> Check on any host the content of: /proc/`pid_of_the_osd`/limits
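For reference, throttled values for those three settings would look like this in ceph.conf (numbers illustrative, not taken from this thread):

  [osd]
  osd max backfills = 1
  osd recovery max active = 1
  osd recovery op priority = 1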
not sure how to get around that.
Thanks,
On Fri, Jun 2, 2017 at 12:55 PM, Saverio Proto wrote:
> Usually 'ceph health detail' gives better info on what is making
> everything stuck.
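The stuck PGs can also be listed directly with the standard ceph CLI, e.g.:

  $ ceph health detail
  $ ceph pg dump_stuck inactive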
>
> Saverio
>
> 2017-06-02 13:51 GMT+02:00 Grant Morley:
> > Hi All,
1524 active+clean
 298 active+remapped+wait_backfill
 153 peering
  47 remapped+peering
  10 inactive
   3 active+remapped+backfilling
   1 active+recovery_wait+degraded+remapped
Thanks for that guys,
I will double check everything to make sure all clients are upgraded as
well before setting the flag.
Grant
On 17/05/17 16:53, Fox, Kevin M wrote:
It's a flag like noout, set with the ceph CLI command.
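For example, the usual set/unset pattern (noout shown only as the pattern, not as the flag in question):

  $ ceph osd set noout
  $ ceph osd unset noout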
Make sure all clients are on jewel (all VMs restarted after the
to
be running fine. Just more of an annoyance that we are getting a lot of
alerts from our monitoring systems.
Regards,
Grant
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/>
Hi Saverio,
I have managed to fix it; it turns out it was an HAProxy issue, and it
wasn't terminating the backend connections correctly. The Glance error
logs sent me in the wrong direction.
Thank you for all of your suggestions to try and debug the issue.
Regards,
On 02/03/17 16:19,
On 02/03/17 16:08, Saverio Proto wrote:
select * from images where name='Ubuntu-16.04';
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/>
gr...@absolutedevops.io
| public |
+--------+
On 02/03/17 15:56, Saverio Proto wrote:
Can you share with us the output of:
openstack image show
for that image ?
Saverio
2017-03-02 13:54 GMT+01:00 Grant Morley
Unfortunately not, I still get the same error.
Grant
On 02/03/17 12:54, Saverio Proto wrote:
If you pass the uuid of the image does it work ?
Saverio
2017-03-02 13:49 GMT+01:00 Grant Morley <gr...@absolutedevops.io>:
Hi Saverio,
We are running Mitaka - sorry forgot to mention that.
Grant
On 02/03/17 12:45, Saverio Proto wrote:
What version of OpenStack are we talking about?
Saverio
2017-03-02 12:11 GMT+01:00 Grant Morley <gr...@absolutedevops.io>:
Hi All,
Not sure if anyone
"links": [{"href": "http://10.6.0.3:9191/v1/", "rel": "self"}]}, {"status": "SUPPORTED",
"id": "v1.0", "links": [{"href": "http://10.6.0.3:9191/v1/", "rel": "self"}]}
Hi Andy,
Thank you for that, I will get straight onto that and make sure all of
the public endpoints are HTTPS. Those are the ones that I care about for
obvious reasons.
If I get stuck, I will be sure to chat in #openstack-ansible
Once again thanks for the speedy reply and help.
Grant
On
| internal | http://10.6.0.3:5000/v3 |
Is there something else I have missed, or do I need to put our SSL certs
in a different directory for OSA to set up the endpoints with HTTPS on
HAProxy?
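For context, the overrides in /etc/openstack_deploy/user_variables.yml look like this (variable names per the openstack-ansible SSL docs; the paths here are placeholders):

  haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/cloud.example.com.crt
  haproxy_user_ssl_key: /etc/openstack_deploy/ssl/cloud.example.com.key
  openstack_service_publicuri_proto: https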
Grateful for any help.
Regards,
Grant
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Hi Nick,
Thanks for the reply, looks like we could be hitting something similar.
We are running Ceph Jewel packages on the Glance node.
Thanks for the links to the bug reports.
Regards,
On 31/01/17 11:35, Nick Jones wrote:
Hi Grant.
Could be unrelated but I'm throwing it out there a
are working absolutely fine; it is just Glance and Ceph that have all of
a sudden stopped working.
Just wondered if anyone had any ideas?
Regards,
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/>
space from the same router when it was removed from the L3 agent.
For each L3 agent, can you shut down the L3 agent, run the netns
cleanup script, ensure all keepalived processes are dead, and then
start the agent again?
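In practice that sequence is something like the following (unit name and config paths assumed):

  $ systemctl stop neutron-l3-agent
  $ neutron-netns-cleanup --force \
      --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/l3_agent.ini
  $ pkill -f keepalived    # make sure none survived the cleanup
  $ systemctl start neutron-l3-agent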
On Tue, Dec 6, 2016 at 4:59 AM, Grant Morley <gr...@absolutedevops.io> wrote:
wrote:
Can you do a 'neutron port-show' for both of those HA ports to check
their status field?
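Something like this, for each of the two ports (the UUID is a placeholder):

  $ neutron port-show <ha-port-uuid> | grep -E 'status|admin_state_up'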
On Tue, Dec 6, 2016 at 2:29 AM, Grant Morley <gr...@absolutedevops.io> wrote:
Hi Kevin & Neil,
Many thanks for the reply. I have attached a screen shot showing
terface driver.
Has anyone else come across this at all, or have any pointers? This was
working fine in Mitaka; it just seems that since the upgrade to Newton we
have these issues.
I am able to provide more logs if they are needed.
Regards,
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
dgren
Senior Linux Systems Engineer
GoDaddy
From: Grant Morley
Date: Friday, October 21, 2016 at 6:14 AM
To: OpenStack Operators
Cc: "ian.ba...@serverchoice.com"
Subject: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup)
Hi
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance:
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
line 117, in __init__
Chris
On Fri, 21 Oct 2016 at 13:19 Gra
ii  librbd1     10.2.3-1trusty  amd64  RADOS block device client library
ii  python-rbd  10.2.3-1trusty  amd64  Python libraries for the Ceph librbd library
Has anyone come across this before? Ceph is working fine for Glance, it
just seems to be with th
ha_confs_path = /var/lib/neutron/ha_confs
ha_vrrp_advert_int = 2
ha_vrrp_auth_password = bee916a2589b14dd7f
ha_vrrp_auth_type = PASS
handle_internal_only_routers = False
send_arp_for_ha = 3
# Metadata
enable_metadata_proxy = True
Regards,
On 08/09/16 13:51, Vahric Muhtaryan wrote:
Hello Grant ,
Possible to share ml
on any other interface on the compute
host. No DHCP packet is observed on the network agent container running
the DHCP namespace.
output of the instance booting:
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
Usage: /sbin/cirros-dhcpc
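For anyone else chasing this, watching for the discover on each hop can be done with something like (interface name varies per hop):

  $ tcpdump -evvnn -i <interface> port 67 or port 68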
Funny you should say that. We are using an external URL (HTTPS via a
load balancer). I switched over to the internal endpoint and it has
made a very big improvement.
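For reference, the knob that controls which Glance endpoint nova uses in that era is the api_servers option (address illustrative):

  # nova.conf
  [glance]
  api_servers = http://10.6.0.3:9292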
Grant
On 03/06/16 13:52, David Medberry wrote:
On Fri, Jun 3, 2016 at 6:22 AM, Grant Morley <mailto
timing out
when the snapshot is taking place.
I will get onto looking into that now.
Thanks again for the advice and help.
Grant
On 03/06/16 11:57, Saverio Proto wrote:
Hello,
what is the state of the instance before asking for the snapshot? Is it
running or paused?
Check on the hypervisor
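e.g. with libvirt directly on the compute node (instance name is a placeholder):

  $ virsh list --all
  $ virsh domstate <instance-name>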
c2 a22e503869c34a92bceb66b0c1da7231 - - -]
Exception during message handling: Not authorized for image
f9844dd5-5a92-4cd4-956d-8ad04cfc5e84.
Any help will be appreciated.
Regards,
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP