Mario,
If I remember right I had a similar issue with getting image_props when I
was doing this to pull in custom properties. Through some trial and error
and poking around with pdb I ended up with this:
image_props = spec_obj.get('request_spec', {}).\
get('image', {}).get('pr
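A fuller, hedged version of that defensive lookup might look like the sketch below. Note the final 'properties' key is my assumption based on the "custom properties" goal above, not something recovered from the truncated snippet:

```python
# Hedged sketch: safely pull image properties out of a RequestSpec-style
# nested dict. Each .get() falls back to {} so a missing level never raises.
def get_image_props(spec_obj):
    """Return the image properties dict, or {} if any level is missing."""
    return (spec_obj.get('request_spec', {})
                    .get('image', {})
                    .get('properties', {}))

spec = {'request_spec': {'image': {'properties': {'hw_vif_model': 'virtio'}}}}
print(get_image_props(spec))  # -> {'hw_vif_model': 'virtio'}
print(get_image_props({}))    # missing levels fall through to {}
```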
There is a known issue where some providers fail when you have an openrc
sourced. I remember it being glance that failed. Bug #1524599
On Nov 11, 2016 4:15 AM, "Justin Cattle" wrote:
> There were two problems here!
>
> The puppet libs in use were coming from the wrong environment - so a
> pretty
bility=),
> '_obj_force_hosts': None, 'VERSION': u'1.5', '_obj_force_nodes': None,
> '_obj_pci_requests': InstancePCIRequests(instance_
> uuid=22313c7f-0338-4bed-9131-900b458347d9,requests=[]), '_obj_retry':
> SchedulerRetrie
As a part of our upgrades to Newton we are transitioning our services to
use pymysql rather than the deprecated MySQL-Python [1]. I believe pymysql
has been the default in devstack and the gate for some time now and that
MySQL-Python is essentially untested and not updated, hence our desire to
switch.
On Wed, Dec 28, 2016 at 6:11 AM, Koniszewski, Pawel <
pawel.koniszew...@intel.com> wrote:
> Hello everyone,
>
> We made a research to see how live migration performance varies between
> different configurations, especially we aimed to test tunneled vs
> non-tunneled live migrations. To test live m
Mike,
I did a bunch of research and experiments on this last fall. We are running
Rabbit 3.5.6 on our main cluster and 3.6.5 on our Trove cluster which has
significantly less load (and criticality). We were going to upgrade to
3.6.5 everywhere but in the end decided not to, mainly because there wa
On Tue, Jan 10, 2017 at 4:08 PM, Sam Morrison wrote:
>
> > On 10 Jan 2017, at 11:04 pm, Tomáš Vondra wrote:
> >
> > The version is 3.6.2, but the issue that I believe is relevant is still
> not fixed:
> > https://github.com/rabbitmq/rabbitmq-management/issues/41
> > Tomas
> >
>
> Yeah we found t
Another +1 for multi-attach please.
On Mon, Jan 16, 2017 at 6:09 AM, Amrith Kumar
wrote:
> I echo this sentiment; attaching a single Cinder volume or a group of
> volumes in a consistency group to multiple instances would be something I’d
> like to see in Pike.
>
>
>
> -amrith
>
>
>
> *From:* Yag
Will there be enough of us at the PTG for an impromptu session there as
well?
On Mon, Jan 23, 2017 at 9:18 AM, Mike Dorman wrote:
> +1! Thanks for driving this.
>
>
>
>
>
> *From: *Edgar Magana
> *Date: *Friday, January 20, 2017 at 1:23 PM
> *To: *"m...@mattjarvis.org.uk" , Melvin Hillsman <
>
You would know all this at install time as that's when this would be
determined. If that information is not available to you currently, you can
look at some other service's config files I suppose. You'll need enough
rabbit creds to create a Congress rabbit user unless you are just going to
re-use a
Do you mean sharing tokens or keys?
On Feb 7, 2017 11:34 AM, "Ignazio Cassano" wrote:
> Hi everybody,
> Can anyone talk me about Sebring fernet tokens in an openstack with more
> than one controller?
> Regards
> Ignazio
>
>
>
> ___
> OpenStack-operator
which
simplifies the problem for you.
On Tue, Feb 7, 2017 at 9:25 PM, Matt Fischer wrote:
> Do you mean sharing tokens or keys?
>
> On Feb 7, 2017 11:34 AM, "Ignazio Cassano"
> wrote:
>
>> Hi everybody,
>> Can anyone talk me about Sebring fernet tokens
http://www.mattfischer.com/blog/?p=648
https://www.youtube.com/watch?v=702SRZHdNW8
On Wed, Feb 8, 2017 at 8:14 AM, Matt Fischer wrote:
> I think that you just replied to me directly. But you are asking about
> sharing keys.
>
> Since keys do not need to be in-sync on all nodes at the
Are you proposing an Operators committee or do you mean the OpenStack BoD?
On Thu, Jul 2, 2015 at 12:15 PM, Jesse Keating wrote:
> Honestly I'm fine with the elected board helping to make this decision.
> Folks that want to underwrite the event can submit a proposal to host,
> board picks from t
Jumping in with another "us too" here. We have some custom Horizon
extensions that allow project owners to manage some of this stuff.
On Wed, Aug 5, 2015 at 4:14 PM, Marc Heckmann
wrote:
> Echoing what others have said, we too have an abstraction layer in the
> form of a custom UI to allow proje
On Sun, Aug 9, 2015 at 11:59 PM, Tony Breeds
wrote:
> Hi All,
> Nova has bug: https://bugs.launchpad.net/nova/+bug/1447679 (service
> No-VNC
> (port 6080) doesn't require authentication).
>
> Which explains that if you know the 'token'[1] associated with an instances
> console you can get acc
On Tue, Aug 11, 2015 at 8:16 PM, Tony Breeds
wrote:
> On Mon, Aug 10, 2015 at 07:16:43PM -0600, Matt Fischer wrote:
>
> > I'm not excited about making this the default until token revocations
> don't
> > impact performance the way that they do now. I don't kno
Oh.. oops. Yeah if that's the case then sorry, you can just ignore me!
On Tue, Aug 11, 2015 at 8:39 PM, Tony Breeds
wrote:
> On Tue, Aug 11, 2015 at 08:24:10PM -0600, Matt Fischer wrote:
> > It was covered some here:
> > http://lists.openstack.org/pipermail/openstack-dev/2
While I think there is probably some value in rate limiting API calls, I
think your "user wants to launch x000 instances" scenario is extremely
limited.
There's maybe 1 or 2 (or 0) operators that have that amount of spare
capacity just sitting around that they can allow a user to have a quota of
2000 instan
Tom,
Can you make the columns a bit wider? I don't seem to have permissions to
do so and I can't read everything. I've resorted to copying and pasting
stuff into another window so I can read it.
On Mon, Sep 21, 2015 at 11:04 PM, Tom Fifield wrote:
> Hi all,
>
> I've started wrangling things tow
On Fri, Sep 25, 2015 at 11:01 AM, Emilien Macchi wrote:
>
>
> So after 5 days, here is a bit of feedback (13 people did the poll [1]):
>
> 1/ Providers
> Except for 1, most of people are managing a few number of Keystone
> users/tenants.
> I would like to know if it's because the current implement
Yes. We have a separate DB cluster for global stuff like Keystone &
Designate, and a regional cluster for things like nova/neutron etc.
On Mon, Sep 28, 2015 at 10:43 AM, Curtis wrote:
> Hi,
>
> For organizations with the keystone database shared across regions via
> galera, do you just have keys
On Mon, Sep 28, 2015 at 1:46 PM, Jonathan Proulx wrote:
> On Mon, Sep 28, 2015 at 03:31:54PM -0400, Adam Young wrote:
> :On 09/26/2015 11:19 PM, RunnerCheng wrote:
> :>Hi All,
> :>I'm a newbie of keystone, and I'm doing some research about it
> :>recently. I have a question about how to deploy it
Yes, people are probably still using it. The last time I tried to use V2 it
didn't work because the clients were broken, and then it went back to the
bottom of my to-do list. Is this mess fixed?
http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
On Mon, Sep 28, 2015 at
>
>
>
> I agree with John Griffith. I don't have any empirical evidence to back
> my "feelings" on that one but it's true that we weren't able to enable
> Cinder v2 until now.
>
> Which makes me wonder: When can we actually deprecate an API version? I
> *feel* we are fast to jump on the deprecat
I'd recommend a few things. The first is that you need to disable
notifications in your services, including nova; just set the notifications
driver to noop. Second, you should have some monitoring in place that looks
for queues that go over a certain threshold. There aren't a lot of queues that
shoul
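The threshold check can be sketched generically. This is a hedged sketch that parses the plain-text output of `rabbitmqctl list_queues name messages` (output shape assumed from the standard CLI: queue name and depth separated by whitespace, one per line):

```python
# Hedged sketch of the "alert on queues over a threshold" idea.
def queues_over_threshold(listing, threshold=1000):
    """Return [(name, depth)] for queues whose depth exceeds threshold."""
    offenders = []
    for line in listing.splitlines():
        parts = line.split()
        if len(parts) != 2 or not parts[1].isdigit():
            continue  # skip banner/header lines like "Listing queues ..."
        name, depth = parts[0], int(parts[1])
        if depth > threshold:
            offenders.append((name, depth))
    return offenders

sample = "Listing queues ...\nnotifications.info 52344\nconductor 3\n"
print(queues_over_threshold(sample))  # [('notifications.info', 52344)]
```

In practice you would feed this the captured stdout of rabbitmqctl from a cron job or monitoring check.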
M, Mark Voelker wrote:
>
> Mark T. Voelker
>
>
>
> > On Sep 29, 2015, at 12:36 PM, Matt Fischer wrote:
> >
> >
> >
> > I agree with John Griffith. I don't have any empirical evidence to back
> > my "feelings" on that one but it
One simple workaround for this is to ssh directly to your Keystone node
and run the admin commands from there. Once you bootstrap your project with
the proper tenants and users it's not an operation that most people do all
that often. We expose an admin endpoint on an internal load balancer URL
bu
n endpoint to a public url?
>
> > On Oct 20, 2015, at 5:28 PM, Matt Fischer wrote:
> >
> > One simple workaround for this is to ssh directly to your Keystone node
> and run the admin commands from there. Once you bootstrap your project with
> the proper tenants and user
What's your output from keystone endpoint-list or keystone catalog (or the
DB table)? Is it possible the admin URL is simply listed as http?
On Tue, Oct 27, 2015 at 9:32 PM, Alvise Dorigo
wrote:
> I have an IceHouse OpenStack installation, where the endpoints are using
> https as protocol (i.e.
I think that sticking with a singular official one is the plan. It's
difficult enough for the foundation to line up sponsors/hosts etc for a
single meet-up. I also think that there are some US/Asia folks that will
attend a midcycle in Europe and by also hosting a competing one locally you
may reduc
On Mon, Nov 16, 2015 at 1:00 PM, Donald Talton
wrote:
> I’ll +1 option 1 too, if we can get remote participation that would
> suffice.
>
>
>
Having been to several of these I think that we can call remote
participation a stretch goal at best, and if I'm being honest, I just don't
think it's going
>
>
> We're deciding not to innovate a solution to allow people to
> participate in a group that is attempting to provide innovative ideas.
> How ironic. I actually don't think it would require much innovation.
> The Ceph guys run their entire design summit remotely, and I'm certain
> that it way b
Is there a reason why we can't license the entire repo with Apache2 and if
you want to contribute you agree to that? Otherwise it might become a bit
of a nightmare. Or maybe at least do "Apache2 unless otherwise stated"?
On Thu, Nov 19, 2015 at 9:17 PM, Joe Topjian wrote:
> Thanks, JJ!
>
> It l
I'd second the vote for r10k. You need to do this, however, otherwise
you'll get the master branch:
mod 'nova',
:git => 'https://github.com/openstack/puppet-nova.git',
:ref => 'stable/kilo'
mod 'glance',
:git => 'https://github.com/openstack/puppet-glance.git',
:ref => 'stable/kilo'
mod 'c
I'm just going to be crystal clear. Use r10k with a Puppetfile that points
at specific branches (or tags) and all your problems will go away. World
peace, etc. I am never quite sure what librarian is up to, and I've found
its caching annoying. r10k just works.
gem install --no-rdoc r10k
r10k pupp
For reference, neutron has similar issues when restarting some neutron
services, for example the ovs-agent plugin. The delay in coming back up
scales with the number of routers you are hosting. For this reason we don't
let puppet restart the sensitive services and our "rabbit connections are
broken
On Mon, Dec 7, 2015 at 3:54 AM, Ajaya Agrawal wrote:
> Hi everyone,
>
> We are deploying Openstack and planning to run multi-master Galera setup
> in production. My team is responsible for running a highly available
> Keystone. I have two questions when it comes to Galera with Keystone.
>
> 1. Ho
On Fri, Dec 11, 2015 at 12:25 AM, Ajaya Agrawal wrote:
> Thanks Matt. That surely is helpful. If you could share some numbers or
> problems you faced when you were storing UUID tokens in database, it would
> be awesome. In my test setup with Keystone Kilo, Fernet token creation and
> validation w
We've done the opposite, newer Keystone with older code. No issues that
we've seen.
On Wed, Jan 6, 2016 at 8:15 AM, Kevin Bringard (kevinbri) <
kevin...@cisco.com> wrote:
> We've even done later versions of keystone with older versions of other
> stuff (Specifically Kilo Keystone with Juno Glance
Personally, I'd just try to load the instance images like you said. If you
try to load Icehouse records onto Liberty code it's not going to work.
Typically you'd do the upgrade one step at a time with database migrations
done at every step.
On Sun, Jan 10, 2016 at 9:58 PM, Liam Haworth
wrote:
> H
Are you seeing the cinder Volume limit error?
If that's the issue, the workaround is here in the bug description.
https://bugs.launchpad.net/tripleo/+bug/1521639
On Feb 4, 2016 10:31 PM, "Abel Lopez" wrote:
> Hey everyone,
> In my liberty testing, I've got keystone v3 setup, and everything seem
We also use 2 VIPs: public and internal, with admin being a CNAME for
internal.
On Fri, Feb 12, 2016 at 7:28 AM, Fox, Kevin M wrote:
> We usually use two vips.
>
> Thanks,
> Kevin
>
> --
> *From:* Steven Dake (stdake)
> *Sent:* Friday, February 12, 2016 6:04:45 AM
> *
I believe you should either have your customers design their apps to handle
failures or have tools that react to failures.
Unfortunately like many other private cloud operators we deal a lot with
legacy applications that aren't scaled horizontally or fault tolerant and
so we've built tooling to h
Cross-post to openstack-operators...
As an operator, there's value in me attending some of the design summit
sessions to provide feedback and guidance. But I don't really need to be in
the room for a week discussing minutiae of implementations. So I probably
can't justify 2 extra trips just to giv
On Wed, Feb 24, 2016 at 8:30 AM, Emilien Macchi wrote:
> Puppet OpenStack folks,
>
> As usual, Thierry Carrez sent an e-mail to PTLs about space needs for
> the next OpenStack Summit in Austin.
>
>
> We can have 3 kinds of slots:
>
> * Fishbowl slots (Wed-Thu) - we had 2 in Tokyo.
> Our tradition
The backport is pretty easy. You click on Cherry pick and if there's no
conflict it just works. Like so:
https://review.openstack.org/#/c/287928/
It still needs to go through the review process so you will need to ping
some horizon developers in IRC.
Getting that packaged may take longer.
On Th
I think you can ignore that "no handlers" message; it's not the issue. You
should check /var/log/keystone/keystone-manage.log to find the original
issue. You can also run the dbsync with the verbose flag IIRC.
On Mar 5, 2016 3:38 PM, "Christopher Hull" wrote:
>
> Hi all;
>
> I'm attempting an insta
Fernet key rotation is easy.
1) You don't need a maintenance window
2) You can do one node at a time even with a long delay between
3) You don't need to restart anything
We rotate approximately weekly.
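For reference, one rotation on one node might look like this. This is a hedged sketch: the keystone-manage flags are the stock ones, but the rsync push to a peer node is purely illustrative of however you distribute the key repository:

```shell
# Rotate the fernet key repository on this node (no service restart needed),
# then push the keys to the other keystone nodes. Host name is illustrative.
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
rsync -a --delete /etc/keystone/fernet-keys/ keystone02:/etc/keystone/fernet-keys/
```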
On Wed, Mar 16, 2016 at 3:44 PM, Ajay Kalambur (akalambu) <
akala...@cisco.com> wrote:
> Hi
>
On Mar 21, 2016 3:28 PM, "Tim Bell" wrote:
>
> On 21/03/16 17:24, "Markus Zoeller" wrote:
>
> >Hello dear ops,
> >
> >I'd like to make you aware of discussion [1] on the openstack-dev ML.
> >I'm in the role of maintaining the bug list in Nova and was looking
> >for a way to gain an overview agai
Another remove vote. The only people this may affect are people standing up
test clouds or those new to OpenStack.
For those folks that use puppet, the puppet community will be adding a
provider to set up flavors since it's a feature that's been missing.
I'll add a vote for removal, given how varied priv
On May 11, 2016 10:03 PM, "Flavio Percoco" wrote:
>
> Greetings,
>
> The Glance team is evaluating the needs and usefulness of the Glance
Registry
> service and this email is a request for feedback from the overall
community
> before the team moves forward with anything.
>
> Historically, there ha
It's a google group. The only clue I had was this in the headers:
X-Auto-Response-Suppress: All
X-MS-Exchange-Inbox-Rules-Loop: tgree...@outlook.com
X-MS-TNEF-Correlator:
I reached out to that person and no response.
On Tue, May 17, 2016 at 10:42 AM, Jeremy Stanley wrote:
> On 2016-05-17 17:3
t;openstack-private@some.random.domain" and
> > nothing we're in control of, but I guess I'll find out when it
> > bounces back to my reply.
>
> Aah, as Matt Fischer pointed out in IRC just now, it seems to be
> forwarded through an outlook.com subscriber acco
We do this a few different ways, some of which may meet your needs.
For API calls we measure a simple, quick, and impactless call for each
service (like heat stack-list) and we monitor East from West and vice
versa. The goal here is nothing added to the DBs, so nothing like neutron
net-create. The
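The measurement side of that can be sketched generically. Here the wrapped call is a stand-in for whatever read-only, impactless call you monitor (e.g. a heat stack-list against the remote region):

```python
import time

# Hedged sketch: time a read-only API call and flag it as failed if it
# raises or takes too long. The call itself is a placeholder.
def timed_check(call, slow_after=5.0):
    """Run call(); return (ok, elapsed_seconds)."""
    start = time.time()
    try:
        call()
        ok = True
    except Exception:
        ok = False
    elapsed = time.time() - start
    return (ok and elapsed < slow_after, elapsed)

ok, elapsed = timed_check(lambda: None)
print(ok)  # True for a call that returns promptly
```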
I will posit that anyone who is interested in rate limiting is probably
already load balancing their API servers. We've been looking into rate
limiting at the load balancers, but have not needed to implement it yet.
That will likely be our solution when it's finally implemented.
Question: If there
On Tue, Jun 14, 2016 at 9:37 AM, Sean Dague wrote:
> On 06/14/2016 11:02 AM, Matt Riedemann wrote:
> > A question came up in the nova IRC channel this morning about the
> > api_rate_limit config option in nova which was only for the v2 API.
> >
> > Sean Dague explained that it never really worked
I don't have a solution for you, but I will concur that adding revocations
kills performance especially as that tree grows. I'm curious what you guys
are doing revocations on, anything other than logging out of Horizon?
On Tue, Jun 21, 2016 at 5:45 AM, Jose Castro Leon
wrote:
> Hi all,
>
> While
On Tue, Jun 21, 2016 at 4:21 PM, Sam Morrison wrote:
>
> On 22 Jun 2016, at 1:45 AM, Matt Fischer wrote:
>
> I don't have a solution for you, but I will concur that adding revocations
> kills performance especially as that tree grows. I'm curious what you guys
&
Have you set up token caching at the service level? Meaning a memcache
cluster that glance, nova, etc. would talk to directly? That will really
cut down the traffic.
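For reference, a minimal sketch of what that looks like in each service's config, assuming keystonemiddleware's auth_token caching (server addresses are illustrative):

```ini
# In each service's config (nova.conf, glance-api.conf, ...): point the
# auth_token middleware at a shared memcached pool so validated tokens
# are cached instead of hitting keystone every time.
[keystone_authtoken]
memcached_servers = memcache01:11211,memcache02:11211,memcache03:11211
```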
On Jun 21, 2016 5:55 PM, "Sam Morrison" wrote:
>
> On 22 Jun 2016, at 9:42 AM, Matt Fischer wrote:
>
> On Tue
On Tue, Jun 21, 2016 at 7:04 PM, Sam Morrison wrote:
>
> On 22 Jun 2016, at 10:58 AM, Matt Fischer wrote:
>
> Have you setup token caching at the service level? Meaning a Memcache
> cluster that glance, Nova etc would talk to directly? That will really cut
> down the traffic
IIRC there are some debug/verbose flags you can pass in. Get anything from
them?
On Jun 23, 2016 5:37 AM, "Alvise Dorigo" wrote:
> Hi,
> I've a Kilo installation which I want to migrate to Liberty.
> I've installed the Liberty Keystone's RPMs and configured the minimun to
> upgrade the DB schema
cross-posting per Amrith Kumar to operators:
(note I'd recommend a reply to the openstack-dev thread or directly to
amr...@tesora.com)
After we discussed and announced this mid-cycle, there has been some
feedback that (a) it would be better to hold the mid-cycle earlier, and (b)
NYC was not the
We've been using this for some time now (since at least Kilo). We set them
per flavor not per instance.
https://wiki.openstack.org/wiki/InstanceResourceQuota
Bandwidth limits
Nova Extra Specs keys:
- vif_inbound_average
- vif_outbound_average
- vif_inbound_peak
- vif_outbound_peak
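Applying those keys to a flavor might look like this (hedged sketch; the flavor name and values are illustrative, the key names and kB/s units are per the wiki page above):

```shell
# Bandwidth caps live under the quota: namespace of flavor extra specs.
nova flavor-key m1.small set quota:vif_inbound_average=10240
nova flavor-key m1.small set quota:vif_outbound_average=10240
```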
in Openstack,
> however I'd like them to be applied automatically. Using predefined flavors
> as described by Matt Fischer above seems like a good approach, are there
> any solutions for non-predefined flavors?
>
>
> - Original message -
> From: Assaf Muller
>
Yes! This happens often but I'd not call it a crash; the mgmt db just gets
behind and then eats all the memory. We've started monitoring it and have
runbooks on how to bounce just the mgmt db. Here are my notes on that:
restart rabbitmq mgmt server - this seems to clear the memory usage.
rabbitmqctl
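The bounce itself can be done without restarting rabbit. A hedged sketch of the technique (these eval expressions stop and start just the management plugin, which is where the stats db lives):

```shell
# Restart only the management application; the broker keeps serving AMQP.
rabbitmqctl eval 'application:stop(rabbitmq_management).'
rabbitmqctl eval 'application:start(rabbitmq_management).'
```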
For the record we're on 3.5.6-1.
On Jul 5, 2016 11:27 AM, "Mike Lowe" wrote:
> I was having just this problem last week. We updated to 3.6.2 from 3.5.4
> on ubuntu and started seeing crashes due to excessive memory usage. I did
> this on each node of my rabbit cluster and haven’t had any problems
We're using Designate but still on Juno. We're running puppet from around
then, summer of 2015. We'll likely try to upgrade to Mitaka at some point
but Juno Designate "just works" so it's been low priority. Look forward to
your efforts here.
On Tue, Jul 5, 2016 at 7:47 PM, David Moreau Simard wro
When you make the API calls you're going to get back a list of python
objects which you need to iterate. I believe some APIs will let you ask for
specific fields only, but this is simple enough:
keystone = client.Client(username=Username, password=Password,
                         tenant_name=Tenant,
                         auth_url='h
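A hedged sketch of the iteration itself, with a stand-in class for the objects the client returns (real usage would pass something like keystone.tenants.list() from the snippet above):

```python
# The returned objects expose attributes like .id and .name, so iterating
# and formatting them is straightforward.
def summarize(resources):
    """Return 'id name' rows for a list of keystone-style objects."""
    return ["%s %s" % (r.id, r.name) for r in resources]

class _Demo(object):
    """Stand-in for the objects the real client returns."""
    def __init__(self, id, name):
        self.id, self.name = id, name

rows = summarize([_Demo('abc123', 'admin'), _Demo('def456', 'service')])
print(rows)  # ['abc123 admin', 'def456 service']
```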
That's my comment. I spoke to Mark V about it this morning and he's working
on it already, so you may want to coordinate with him.
On Thu, Jul 7, 2016 at 11:20 AM, Amrith Kumar wrote:
> I see a comment in https://etherpad.openstack.org/p/NYC-ops-meetup about
> “OpenStack East Discount?”.
>
>
>
>
Thanks Erin. I did this just now and it charged me $22.09. Not a big deal,
but what's the extra? Taxes?
On Jul 14, 2016 3:43 PM, "Erin Disney" wrote:
> All-
>
> Thank you for your patience as we finalized details for the Ops MidCycle
> in New York this August. If you plan to attend, please RSVP
>
>
>
> -amrith
>
>
>
> *From:* Matt Fischer [mailto:m...@mattfischer.com]
> *Sent:* Thursday, July 14, 2016 6:26 PM
> *To:* Erin Disney
> *Cc:* openstack-operators@lists.openstack.org
> *Subject:* Re: [Openstack-operators] Ops MidCycle Registration
>
>
>
&
I'd say that operators running Glance, which is probably almost everyone,
just put a public glance endpoint in the catalog. Maybe there's some
special cases beyond that but that's the base design.
On Jul 30, 2016 6:22 PM, "Serguei Bezverkhi (sbezverk)"
wrote:
> Hi Joseph,
>
>
>
> I am working on
I didn't see any plus ones on my idea for the db cleanup session, so if we
need to drop it to fit something else in, that works for me.
On Aug 9, 2016 12:29 PM, "Chris Morgan" wrote:
> WG6, day one? That's 40 minutes. Would run alongside Large Deployment.
> Currently that has the main room. Would nova be
morning (pre-lunch) on Friday, or Thursday please.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> -amrith
>>
>>
>>
>> *From:* Chris Morgan [mailto:mihali...@gmail.com]
>> *Sent:* Tuesday, August 09, 2016 4:38 PM
>> *To:* Matt
Has anyone had any luck improving the statsdb issue by upgrading rabbit to
3.6.3 or newer? We're at 3.5.6 now and 3.6.2 has parallelized stats
processing, then 3.6.3 has additional memory leak fixes for it. What we've
been seeing is that we occasionally get slow & steady climbs of rabbit
memory usa
Jonathan,
Are you using caching for tokens (not the middleware cache but keystone
cache)? There's a bug in the caching so that when it tries to read the
cache and unpack the token, it's missing some fields. It's been fixed and
backported but may not be in your packages:
https://bugs.launchpad.net/ke
Hi Ed,
Good to meet you in NYC last week. And fortunate timing for the question, I
just published a summary of my experiences here:
http://www.mattfischer.com/blog/?p=744
I know that the Nova DB cleanup stuff was broken in the past, and IIRC you
are on Kilo, so it may not work for you until you g
On Fri, Sep 2, 2016 at 8:57 AM, Abel Lopez wrote:
> For cinder, since kilo, we've had 'cinder-manage db purge-deleted'
>
>
This is the issue we see with this tool in Liberty, I think this might be
fixed in M.
# cinder-manage db purge 365
(some stuff works here)
...
2016-09-02 15:07:02.196
+1 This was our concern also with Trove. If a tenant DoSes Trove we
probably don't all get fired. The rest of rabbit is just too important to
risk sharing.
On Sun, Sep 18, 2016 at 6:53 PM, Sam Morrison wrote:
> We run completely separate clusters. I’m sure vhosts give you acceptable
> security b
On Mon, Sep 19, 2016 at 7:29 AM, Tobias Urdin
wrote:
> Hello,
>
> On your compute nodes in nova.conf
>
> [DEFAULT]
>
> resume_guests_state_on_host_boot = True
>
>
> All instances that had a running state when the reboot occured will be
> started again.
>
> Best regards
>
And this works regardles
Other than #1, that's exactly the same design we used for Trove. Glad to
see someone else using it too for validation. Thanks.
On Sep 22, 2016 11:39 PM, "Serg Melikyan" wrote:
> Hi Joe,
>
> I can share some details on how murano is configured as part of the
> default Mirantis OpenStack configurat
The last time I tried this, which was probably 18 months ago to be fair,
there was no way for the VM to get its own tenant name. You could pass it
in with cloud-init if you want, but it's not in the metadata that I recall.
For Designate however I don't know why you'd want this. You want the format
a
This does not cover all your issues, but after seeing mysql bugs between I
and J and also J to K, we now export and restore production control plane
data into a dev environment to test the upgrades. If we have issues we
destroy this environment and run it again.
For longer running instances that's t
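The clone step itself is just a dump and restore. A hedged sketch (host and database names are illustrative; --single-transaction keeps an InnoDB dump consistent without locking):

```shell
# Dump the production control-plane databases and load them into the
# dev cluster for an upgrade dry run.
mysqldump --single-transaction --databases keystone nova neutron \
    | mysql -h dev-db-cluster
```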
On Wed, Oct 19, 2016 at 10:22 AM, Sean M. Collins
wrote:
> Zhang, Peng wrote:
> > [logger_root]
> > level = DEBUG
>
>
> So, you're setting the logging to level to DEBUG - if I understand
> correctly. In a production environment that is going to fill up your
> disks very quickly. Which is why even
Unless this has drastically changed, I thought the multiple entries were
sort of like a "pick one" scenario rather than a "connect to all of them". You
specify all the nodes in case one or more is down. I don't think it can be
used to talk to multiple rabbit clusters.
On Thu, Nov 3, 2016 at 5:28 PM,
How to add yourself to Planet OpenStack:
https://wiki.openstack.org/wiki/AddingYourBlog
As for SuperUser you could reach out to them if you think it's interesting
for users/operators. Generally they'll want to publish it there first then
you follow-up with your blog post a few days later.
On Mon,
I think everyone is highly interested in running this change (or a newer
oslo.messaging in general plus this change) in Juno rather than waiting for
Kilo. Hopefully everyone can provide updates as they experiment.
On Thu, Mar 19, 2015 at 1:22 PM, Kevin Bringard (kevinbri) <
kevin...@cisco.com> wr
We've been having some issues with heat delete-stack in Juno. The issues
generally fall into three categories:
1) it takes multiple calls to heat to delete a stack. Presumably due to
heat being unable to figure out the ordering on deletion and resources
being in use.
2) undeleteable stacks. Stack
Nobody on the operators list had any ideas on this, so re-posting here.
We've been having some issues with heat delete-stack in Juno. The issues
generally fall into three categories:
1) it takes multiple calls to heat to delete a stack. Presumably due
to heat being
unable to figure out the orderi
Sorry operators. I fail at email today. This was for -dev.
On Thu, Mar 26, 2015 at 12:05 PM, Matt Fischer wrote:
> Nobody on the operators list had any ideas on this, so re-posting here.
>
> We've been having some issues with heat delete-stack in Juno. The issues
> generall
Mathieu,
We use LDAP (AD) with a fallback to MySQL. This allows us to store service
accounts (like nova) and "team accounts" for use in Jenkins/scripts etc in
MySQL. We only do Identity via LDAP and we have a forked copy of this
driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do this
I'd like to have some better logging when certain CRUD operations happen in
Keystone, for example when a project is deleted. I specifically mean "any"
when I say "better", since right now I'm not seeing anything even when
verbose logging is enabled.
This is pretty frustrating for me because these are rather
> I’m not involved in the keystone project, but I’d recommend you to
>> > start by filling a blueprint
>> > asking for it, and explaining what you just said here:
>> >
>> > https://blueprints.launchpad.net/keystone
>> >
>> > Adding a blueprint
We do it with some of our databases (horizon, designate, and keystone) and
we run an arbitrator process (garbd) in a 3rd DC. We have lots of low
latency bandwidth which you have to be careful with. My recommendation
would be that you need to know your network well and have good monitoring
in place.
Tom,
This doesn't solve your problem, but I will gladly swap Database for
Deployment/CI/CD. I have more experience on that topic and am even
presenting on it.
On Sun, May 10, 2015 at 9:31 PM, Tom Fifield wrote:
> Hi all,
>
> We're in need of moderators for these ops sessions in Vancouver:
>
> 1
Greetings operators,
I am moderating the Deployments/CI/CD design session at the summit next
week on Tuesday at 3:40 PM in Room 220 [1]. This is a large, wide-ranging,
and important topic for operators, so I'd like to get some help filling out
the Etherpad [2] with things you'd like to discuss. I'
Thanks to everyone who attended the CI/CD, Deployments sessions. We had a
great discussion, but unfortunately etherpad was broken the whole time. If
anyone would like to add any notes on some of the new tools
discussed during that talk, please add them. I don't recall any specific
act
Congrats and welcome!
On May 26, 2015 5:35 PM, "JJ Asghar" wrote:
> Hey everyone!
>
> I’d like to just drop a note to the list saying thank you and
> congratulations to our general community.
>
> As of 2015-05-26 we’ve been merged into the “big tent”[1] sanctioning us
> as an official OpenStack p
Cynthia,
There are a few things we're waiting to land, keystone v3 support being one
major one. The kilo branches should get cut soon though, until then please
use master. There will be an announcement on OpenStack dev when they're
ready.