On Tue, Aug 21, 2018 at 02:27:44PM -0500, Sean McGinnis wrote:
:On Tue, Aug 21, 2018 at 03:14:48PM -0400, Jonathan Proulx wrote:
:Hey Jon,
:
:Pretty much everything is in place now. There is one outstanding patch to
:officially add things under the SIG governance here:
:
:https
Hi All...
I'm still a little confused by the state of this :)
I know I made some promises then got distracted, then it looks like Sean
stepped up and got things a bit further, but where is it now? Do we
have an active repo?
It would be nice to have the repo in place before OPs meetup.
-Jon
On Tue,
Hi All,
In my continuing quest to install an OSA cluster with mitaka-eol in
hopes of digging out to a non-eol release eventually I've hit another
snag...
setup-hosts plays out fine
setup-infrastructure chokes somewhere in galera-install
the 1st galera container gets properly bootstrapped into a
On May 25, 2018 5:30:40 AM PDT, Doug Hellmann wrote:
>Excerpts from Jonathan D. Proulx's message of 2018-05-24 07:19:29
>-0700:
>>
>> My intention based on current understanding would be to create a git
>> repo called "osops-docs" as this fits current naming and the initial
>> document we inten
r off state for
an hour while I was at lunch.
Still have no theory on why it broke or how that could be a fix...if
anyone else does please do tell :)
Thanks,
-Jon
On Mon, Apr 30, 2018 at 12:58:16PM -0400, Jonathan Proulx wrote:
:Hi All,
:
:I have a VM with ephemeral root on RBD spewing I/O errors o
Hi All,
I have a VM with ephemeral root on RBD spewing I/O errors on boot after
hypervisor crash. I've (unfortunately) seen a lot of hypervisors go
down badly with lots of VMs on them and this is a new one on me.
I can 'rbd export' the volume and I get a clean filesystem.
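In case the details help anyone hitting the same thing, the export step is roughly the following (pool and image names here are made up, not from any particular deployment; find the real ones with `rbd -p <pool> ls`):

```
# Pull the instance's ephemeral disk out of Ceph to a local file:
rbd export ephemeral/<instance-uuid>_disk recovered.img

# If the exported image checks out, it can go back in under a new name:
rbd import recovered.img ephemeral/<instance-uuid>_disk.recovered
```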
version details
OpenSt
On Tue, Apr 10, 2018 at 11:31:55AM -0400, Erik McCormick wrote:
:On Tue, Apr 10, 2018 at 11:19 AM, Jonathan Proulx wrote:
:>
:> Thanks for getting this kicked off Erik. The two things you have up
:> to start (fast forward upgrades, and extended maintenance) are the
:> exact two thing
Thanks for getting this kicked off Erik. The two things you have up
to start (fast forward upgrades, and extended maintenance) are the
exact two things I want out of my trip to YVR. At least understanding
current 'state of the art' and helping advance that in the right
directions best I can.
Th
On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote:
:I support the ideas to try colocating the next Ops Midcycle and PTG.
:Although scheduling could be a potential challenge, it is worth giving it a
:try.
:
:Also having an joint social event in the evening can also help Dev/Ops to
:meet
On Wed, Mar 21, 2018 at 08:32:38PM -0400, Paul Belanger wrote:
:6. Spandau loses to Solar by 195–88, loses to Springer by 125–118
Given this is at #6 and formal vetting is yet to come it's probably
not much of an issue, but "Spandau's" first association for many will
be Nazi war criminals via Sp
On Tue, Jan 16, 2018 at 03:49:25PM -0500, Jonathan Proulx wrote:
:On Tue, Jan 16, 2018 at 08:42:00PM +, Tim Bell wrote:
::If you want to hide the VM signature, you can use the img_hide_hypervisor_id
property
(https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html
On Tue, Jan 16, 2018 at 08:42:00PM +, Tim Bell wrote:
:If you want to hide the VM signature, you can use the img_hide_hypervisor_id
property
(https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html)
Thanks Tim, I believe that's the magic I was looking for.
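For the archives, setting it looks like this (the image name is made up; the property name is from the property-keys reference linked above):

```
openstack image set --property img_hide_hypervisor_id=true <image-name-or-id>
```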
-Jon
:Tim
:
Hi All,
Looking for a way to inject:
into the libvirt.xml for instances of a particular flavor.
My needs could also be met by attaching it to the glance image or if
needs be per hypervisor.
My Googling is not turning up anything. Is there any way to set
arbitrary (or this
7 at 10:06 AM, Jonathan Proulx wrote:
:
:> :On Tue, Nov 21, 2017 at 9:15 AM, Chris Morgan
:> wrote:
:>
:> :> The big topic of debate, however, was whether subsequent meetups should
:> be
:> :> co-located with OpenStack PTG. This is a question for the wider
:> OpenStack
:On Tue, Nov 21, 2017 at 9:15 AM, Chris Morgan wrote:
:> The big topic of debate, however, was whether subsequent meetups should be
:> co-located with OpenStack PTG. This is a question for the wider OpenStack
:> operators community.
For people who attend both I think this would be a big win, if
On Thu, Nov 09, 2017 at 04:34:24PM +, Jeremy Stanley wrote:
:On 2017-11-08 23:15:15 -0800 (-0800), Clint Byrum wrote:
:[...]
:> The biggest challenge will be ensuring that the skip-level upgrades
:> work. The current grenade based upgrade jobs are already quite a bear to
:> keep working IIRC. I
Hi All,
This is low priority poking but I have a sort of puzzling situation.
Wanted to play with Clear Linux (which is UEFI only) to see what it
was about so grabbed some images and obviously tried to throw them on
my OpenStack cloud.
After learning how to enable UEFI booting (hw_firmware_type=
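(For anyone searching later, the sketch I mean is below; the image name is made up, hw_firmware_type=uefi is the documented image property for this, and it needs OVMF available on the hypervisor:)

```
openstack image set --property hw_firmware_type=uefi clear-linux
```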
You've done amazing things for OpenStack; can't wait to see what amazing
thing you do next.
If I'd known sooner I would have agreed to 60hrs of cattle class travel
to see you off...unfortunately as it is I won't be in Sydney. I do
sincerely hope our paths cross again.
All the best,
-Jon
On Wed, Oc
On Tue, Oct 03, 2017 at 08:29:45PM +, Jeremy Stanley wrote:
:On 2017-10-03 16:19:27 -0400 (-0400), Jonathan Proulx wrote:
:[...]
:> This works in our OpenStack where it's our IP space so PTR record also
:> matches, not so well in public cloud where we can reserve an IP and
:> s
On Tue, Oct 03, 2017 at 01:00:13PM -0700, Clint Byrum wrote:
:It's worth noting that AD and Kerberos were definitely not designed
:for clouds that have short lived VMs, so it does not surprise me that
:treating VMs as cattle and then putting them in AD would confuse it.
For instances we have that
on same committee.
Don't worry you can't get rid of me that easily :)
-Jon
:Thanks again for all that you have done!
:
:Sincerely,
:Shamail
:
:> On Aug 3, 2017, at 4:11 PM, Jonathan Proulx wrote:
:>
:> Hello All,
:>
:> It has been an honor and a privilege to serve o
cials, after verification of the electorate status of the
candidate.
-Jon
Jonathan Proulx
Sr. Technical Architect
MIT CSAIL
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/lis
Hi Conrad,
We boot to ephemeral disk by default but our ephemeral disk is Ceph
RBD just like our cinder volumes.
Using Ceph for Cinder Volumes and Glance Images storage it is possible
to very quickly create new Persistent Volumes from Glance Images
because on the backend it's just a CoW snapshot
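Roughly what happens on the Ceph side when a volume is created from an image (Cinder does all of this for you; pool names here are the common defaults, yours may differ):

```
rbd snap create images/<image-id>@snap
rbd snap protect images/<image-id>@snap
rbd clone images/<image-id>@snap volumes/volume-<new-volume-id>
```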
;http://researcher.ibm.com/researcher/files/zurich-DCR/Got%20Loss%20Get%20zOVN.pdf>
:
:It’s a bit dated (2013) but may still apply.
:
:If you figure out a way of preventing this with modern OVS, I’d be very
interested to know.
:
:Best wishes,
:Stig
:
:
:> On 21 Jun 2017, at 16:24, Jonathan Pr
he
:vswitch to debug.
:
:
:
:On Tue, Jun 20, 2017 at 12:36 PM, Jonathan Proulx wrote:
:
:> Hi All,
:>
:> I have a very busy VM (well one of my users does; I don't have access
:> but do have a cooperative and competent admin to interact with on the
:> other end).
:>
:> At peak t
Hi All,
I have a very busy VM (well one of my users does; I don't have access
but do have a cooperative and competent admin to interact with on the
other end).
At peak times it *sometimes* misses packets. I've been digging in for
a bit and it looks like they get dropped in OVS land.
The VM's main
Not sure if this is what you're looking for but...
For my private cloud in research environment we have a public provider
network available to all projects.
This is externally routed and has basically been in the same config
since Folsom (currently we're up to Mitaka). It provides public ipv4
ad
mmon (though I imagine for others where positive affinity is
important the race may get lost more frequently)
-Jon
On Mon, May 22, 2017 at 03:00:09PM -0400, Jonathan Proulx wrote:
:On Mon, May 22, 2017 at 11:45:33AM -0700, James Penick wrote:
::On Mon, May 22, 2017 at 10:54 AM, Jay Pipes wrote:
::
:
On Mon, May 22, 2017 at 11:45:33AM -0700, James Penick wrote:
:On Mon, May 22, 2017 at 10:54 AM, Jay Pipes wrote:
:
:> Hi Ops,
:>
:> Hi!
:
:
:>
:> For class b) causes, we should be able to solve this issue when the
:> placement service understands affinity/anti-affinity (maybe Queens/Rocky).
:> Un
On Mon, Mar 13, 2017 at 03:45:14PM +, randy.perry...@dell.com wrote:
:Dell - Internal Use - Confidential
:Can someone post the schedule for these meetings?
Every two weeks (on odd weeks) on Monday at 1900 UTC in #openstack-meeting
http://eavesdrop.openstack.org/#User_Committee_Meeting
schedu
Oops...Sorry Christopher
Seems to me that the transposed letters would not create any voter
confusion and since Christopher seems to agree and is the person I'd
see with standing to challenge I think we're OK to proceed.
-Jon
On Tue, Feb 14, 2017 at 04:40:58PM +, Matt Jarvis wrote:
:I have
On Fri, Feb 03, 2017 at 04:34:20PM +0100, lebre.adr...@free.fr wrote:
:Hi,
:
:I don't know whether there is already a concrete/effective way to identify
overlapping between WGs.
:But if not, one way can be to arrange one general session in each summit where
all WG chairs could come and discuss
What Tim said :)
my ordering:
1) Preemptable Instances -- this would be huge and life changing I'd
give up any other improvements to get this.
2) Deeper utilization of nested projects -- mostly we find ways to
manage without this but it would be great to have.
A) to allow research gro
We've been using Ceph as ephemeral backend (and glance store and
cinder backend) for > 2 years (maybe 3 ) and have been very happy.
cinder has been rock solid on RBD side. Early on when we had 6 osd
servers we lost one in production to a memory error. 1/6 is a large
fraction to lose but Ceph ha
On Tue, Dec 06, 2016 at 06:50:18PM +, Barrett, Carol L wrote:
:Congrats Shamail – Good work by the team and it’s also good to see that WGs
come together, complete a task and go away. ☺
I'll second that. The AUC Recognition WG is a great model for all
future Working Groups. Clear goal, well
t though
:> manual hackery I've only added 1 since the cloud was initially created
:> 4 years ago, so not a common action. Am I right in setting 9004 above
:> or should I still lie a little and provide the untagged MTU of 9000?
:>
:> Thanks,
:> -Jon
:>
:> :
:>
On Thu, Nov 17, 2016 at 01:27:39PM +, Brian Rosmaita wrote:
:On 11/17/16, 1:39 AM, "Sam Morrison"
mailto:sorri...@gmail.com>> wrote:
:
:On 17 Nov. 2016, at 3:49 pm, Brian Rosmaita
mailto:brian.rosma...@rackspace.com>> wrote:
:
:Ocata workflow: (1) create an image with default visibility, (2)
I have an odd issue that seems to just be affecting one private
network for one tenant, though I saw a similar thing on a different
project network recently which I 'fixed' by rebooting the hypervisor.
Since this has now (maybe) happened twice I figure I should try to
understand what it is.
Given
because of the behavior change.
Neat I didn't know support for changing MTU was even planned, but I
guess it's here (well not quite *here* but...)
-Jon
:
:
:On Fri, Nov 4, 2016 at 10:34 AM, Jonathan Proulx wrote:
:
:> Hi All,
:>
:>
:> So long story short how do I get my ml2
Hi All,
So long story short how do I get my ml2/ovs GRE tenant network to default to
MTU 9000 in Mitaka - or - get dhcp agents on the network node to give
out different MTUs to different networks?
Seems between Kilo (my last release) and Mitaka (my current production
world) Neutron got a lot cle
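For anyone finding this thread later, the knobs I believe are involved in Mitaka look roughly like the below (option names from that release's neutron config reference, values just examples; note GRE encapsulation overhead comes off path_mtu, so the underlay has to carry a bit more than what you want the tenant networks to see):

```ini
# neutron.conf
[DEFAULT]
global_physnet_mtu = 9000

# ml2_conf.ini
[ml2]
path_mtu = 9000
```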
Hi All,
Just on the other side of a Kilo->Mitaka upgrade (with a very brief
transit through Liberty in the middle).
As usual I've caught a few problems in production that I have no idea
how I could possibly have tested for because they relate to older
running instances and some remnants of older
So my sense from responses so far:
No one is doing unified SDN solutions across clouds and no one really
wants to.
Consensus is just treat each network island like another remote DC and
use normal VPN type stuff to glue them together.
( nod to http://romana.io an interesting looking network and
On Sat, Oct 01, 2016 at 11:47:56AM -0600, Curtis wrote:
:On Fri, Sep 30, 2016 at 8:15 AM, Jonathan Proulx wrote:
:>
:> Starting to think refactoring my SDN world (currently just neutron
:> ml2/ovs inside OpenStack) in preparation for maybe finally lighting up
:> that second Regi
On Sat, Oct 01, 2016 at 02:39:38PM -0700, Clint Byrum wrote:
:I know it's hard to believe, but this world was foretold long ago and
:what you want requires no special equipment or changes to OpenStack,
:just will-power. You can achieve it now if you can use operating system
:versions published in
Starting to think refactoring my SDN world (currently just neutron
ml2/ovs inside OpenStack) in preparation for maybe finally lighting up
that second Region I've been threatening for the past year...
Networking is always the hardest design challenge. Has anyone seen my
unicorn? I dream of someth
On Thu, Aug 25, 2016 at 10:55:51AM -0400, Jonathan Proulx wrote:
:Hi All,
:
:working on testing our Kilo-> Mitaka keystone upgrade, and I've
:clearly missed something I need to do or undo.
D'Oh why is it that public postings always lead me to discover my own
idiocy soon aft
have this issue if you use caching in Mitaka
:that will lead to intermittent API call failures -
:https://bugs.launchpad.net/keystone/+bug/1600394
:
:And finally, this Cinder bug will show up once you're on Keystone Mitaka:
:https://bugs.launchpad.net/cinder/+bug/1597045
:
:
:
:On Thu, Aug 25
Hi All,
working on testing our Kilo-> Mitaka keystone upgrade, and I've
clearly missed something I need to do or undo.
After DB migration and the edits I believe are required to paste and
conf files I can get tokens (using password auth) but it won't seem to
accept them (for example with an admin
On Mon, Aug 15, 2016 at 05:33:13PM +0200, Saverio Proto wrote:
:I found that Ubuntu has packages here:
:http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/n/networking-l2gw/
:
:but I can't really get from the version number if these packages are
:supposed to be for Liberty or Mitaka.
:Has
On Thu, Aug 04, 2016 at 09:59:15AM +0200, Álvaro López García wrote:
:For the record, we proposed a spec [1] on this long ago (actually, 1
:year ago).
:
:[1] https://review.openstack.org/#/c/104883/
:
:Your input is much welcome!
Thanks for the pointer, actually looks like you put a patch set on
cloudarchive mitaka but
installed on a kilo hypervisor) I am working.
Thanks,
-Jon
:Cheers,
:
:On 7 July 2016 at 08:13, Jonathan Proulx wrote:
:> On Wed, Jul 06, 2016 at 12:32:26PM -0400, Jonathan D. Proulx wrote:
:> :
:> :I do have an odd remaining issue where I can run cuda jobs in
On Wed, Jul 06, 2016 at 12:32:26PM -0400, Jonathan D. Proulx wrote:
:
:I do have an odd remaining issue where I can run cuda jobs in the vm
:but snapshots fail and after pause (for snapshotting) the pci device
:can't be reattached (which is where i think it deletes the snapshot
:it took). Got same
Hi All,
I about to start testing for our Kilo->Mitaka migration.
I seem to recall many (well a few at least) people who were looking to
do a direct Kilo to Mitaka upgrade (skipping Liberty).
Blue Box apparently just did and I read Stefano's blog[1] about it,
and while it gives me hope my plan is
This means <24hr to add your choice to the non binding doodle poll:
http://doodle.com/poll/e4heruzps4g94syf
Venue details recorded here:
https://etherpad.openstack.org/p/ops-meetup-venue-discuss
And of course if you are available to join us tomorrow at 1400UTC
please do
-Jon
On Mon, Jun 20,
On Wed, Jun 15, 2016 at 10:56:55AM +0200, Saverio Proto wrote:
:Hello all,
:
:I will need a visa to come to the US for the Mid-Cycle Ops Meetup.
:
:The process to obtain a Visa can take up to 8 weeks, and I cannot
:apply until dates and venue are decided.
:
:please set a date at least 8 weeks ahead
Hi All,
We carry one small local change in Horizon and I'm trying to determine
if there's enough community interest in similar things that we should
go through the process of upstreaming it.
In order to provide access to our preexisting self-service IPAM and
DNS registration services we allow and i
Hi All,
Having trouble finding any current info on best practices for
providing GPU instances. Most of what Google is feeding me is Grizzly
or older.
I'm currently on Kilo (Mitaka upgrade planned in 60-90 days) with
Ubuntu14.04 and kvm hypervisor. Looking to add some NVidia GPUs but
haven't inv
t their diagram
-Jon
:
:On Fri, May 6, 2016 at 11:50 AM, Jonathan Proulx wrote:
:
:> On Fri, May 06, 2016 at 11:39:03AM -0400, Silence Dogood wrote:
:> :this strikes me as a really bad idea from a security standpoint... in fact
:> :it would violently violate like every audit / policy r
On Fri, May 06, 2016 at 11:39:03AM -0400, Silence Dogood wrote:
:this strikes me as a really bad idea from a security standpoint... in fact
:it would violently violate like every audit / policy requirement I am aware
:of.
I come from a research environment with admittedly low thresholds for
this s
On Fri, Mar 04, 2016 at 12:20:44PM +, Jeremy Stanley wrote:
:On 2016-03-04 10:02:36 +0100 (+0100), Thierry Carrez wrote:
:[...]
:> Upstream contributors are represented by the Technical Committee
:> and vote for it. Downstream contributors are represented by the
:> User Committee and (imho) sho
On Fri, Mar 04, 2016 at 11:52:33AM +0800, gustavo panizzo (gfa) wrote:
:On Thu, Mar 03, 2016 at 03:52:49PM -0500, Jonathan Proulx wrote:
:>
:> I have a user who wants to specify their libvirt CPU type to restrict
:> performance because they're modeling embedded systems.
:>
:&g
I have a user who wants to specify their libvirt CPU type to restrict
performance because they're modeling embedded systems.
I seem to vaguely recall there is/was a way to specify this either in
the instance type or maybe even in the image metadata, but I can't
seem to find it.
Am I delusional or
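(For context, the only knob I'm sure of is per-hypervisor, not per-flavor: it lives in nova.conf on the compute node. A sketch, with the model name just an example:)

```ini
[libvirt]
cpu_mode = custom
cpu_model = qemu64
```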
On Wed, Mar 02, 2016 at 09:35:07PM -0500, Mathieu Gagné wrote:
:What would prevent the next user from having workloadB collocated with
:an other user's workloadA if that's the only capacity available?
:
:Unless aggregates are used, it will be hard to guaranty that workloadA
:and workloadB (from any
On Thu, Mar 03, 2016 at 03:57:22PM +0100, Pierre Freund wrote:
:>
:> *This needs a catchy name.*
:> Yes, yes it does. Suggestions?
:>
:
:Some suggestions, but I'm not a native english speaker, it might sounds not
:natural.
As a native (american) english speaker all these suggestions sound
natura
e
:to distance, visas, travel. Not many can justify to their management such
:expense.
:
:Behzad.
:
:On Tue, Nov 17, 2015 at 7:57 AM, Jonathan Proulx wrote:
:> There also exists a fairly strong community desire to address ways of
:> increasing remote participation (spokes) through coordinat
, 2015 at 10:50:52AM -0500, Jonathan Proulx wrote:
:Hi All,
:
:1st User Committee IRC meeting will be today at 19:00UTC on
:#openstack-meeting, we haven't exactly settled on an agenda yet but I
:hope to raise this issue the...
:
:It has been suggested that we make the February 15-16 Eur
:may reduce the attendance at the main one which defeats the purpose. Those
:midcycles work best when we have lots of different voices providing input.
:
:On Mon, Nov 16, 2015 at 10:06 AM, Jonathan Proulx wrote:
:
:> On Mon, Nov 16, 2015 at 04:55:33PM +, Kruithof, Piet wrote:
:> :Sorry, late to
o:matt.jar...@datacentred.co.uk>>
:Date: Monday, November 16, 2015 at 9:23 AM
:To: Jonathan Proulx mailto:j...@csail.mit.edu>>
:Cc:
"openstack-operators@lists.openstack.org<mailto:openstack-operators@lists.openstack.org>"
mailto:openstack-operators@lists.openstack.org>>
:Sub
gh seems the tide is running toward option 2, multiple
meet-ups. Though we're still at a very small sample size.
-Jon
On Mon, Nov 16, 2015 at 10:50:52AM -0500, Jonathan Proulx wrote:
:Hi All,
:
:1st User Committee IRC meeting will be today at 19:00UTC on
:#openstack-meeting, we haven'
Hi All,
1st User Committee IRC meeting will be today at 19:00UTC on
#openstack-meeting, we haven't exactly settled on an agenda yet but I
hope to raise this issue the...
It has been suggested that we make the February 15-16 European Ops
Meetup in Manchester UK [1] the 'official' OPs Midcycle. Pr
On Mon, Nov 09, 2015 at 04:05:51PM -0500, Sean Dague wrote:
:Can you be more specific about "upgrade process is hell!"? We continue
:to work on improvements in upgrade testing to block patches that will
:make life hell for upgrading. Getting a bunch of specifics on bugs that
:triggered during upgr
On Mon, Nov 09, 2015 at 07:18:16PM +, Jeremy Stanley wrote:
:On 2015-11-09 19:01:36 + (+), Tom Cameron wrote:
:[...]
:> What do the user/operator surveys say about the usage of older
:> releases? What portion of the user base is actually on releases
:> prior to Havana?
:
:The most recen
On Fri, Nov 06, 2015 at 05:28:13PM +, Mark Baker wrote:
:Worth mentioning that OpenStack releases that come out at the same time as
:Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
:supported for 5 years by Canonical so are already kind of an LTS. Support
:in this cont
On Mon, Sep 28, 2015 at 02:06:00PM -0600, Matt Fischer wrote:
:On Mon, Sep 28, 2015 at 1:46 PM, Jonathan Proulx wrote:
:> I'm hearing conflicting advice about the suitability of Fernet tokens
:> for production use.
:>
:> I like the idea. I did get them to work in kilo trivi
On Mon, Sep 28, 2015 at 03:31:54PM -0400, Adam Young wrote:
:On 09/26/2015 11:19 PM, RunnerCheng wrote:
:>Hi All,
:>I'm a newbie of keystone, and I'm doing some research about it
:>recently. I have a question about how to deploy it. The scenario is
:>on below:
:>
:>One comany has one headquarter dc
ck end
and rsync'ing but RBD gets us lots of fun things we want to keep
(quick start, copy on write thin cloned ephemeral storage etc...) so
decided to live with making our users copy images around.
-Jon
On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes wrote:
> On 09/08/2015 04:44 PM, Jon
Hi All,
I'm pretty close to opening a second region in my cloud at a second
physical location.
The plan so far had been to only share keystone between the regions
(nova, glance, cinder etc would be distinct) and implement this by
using MariaDB with galera replication between sites with each site
Juno clients with Kilo endpoints are fine.
We recently upgraded from Juno to Kilo and many of our clients
(including Horizon) are still Juno with no problems.
We also had a Kilo Horizon (for testing) running against Juno
endpoints prior to the upgrade and that seemed to work as well but it
wasn't
On Wed, Jul 1, 2015 at 3:29 AM, Tom Fifield wrote:
> Team,
>
> It's great to see so much passion! :)
>
> Here's an attempt at a summary email. I'll wait until a later email to
> wade into the discussion myself ;) Feel free to jump in on any point.
>
> =Things we tend to agree on=
I agree on all
On Thu, Jul 2, 2015 at 2:26 PM, Jesse Keating wrote:
> BoD, unless they feel the need to delegate, at which point then maybe an
> Operators committee. But I'd hate to see more committees created.
I feel like this may be a User Committee thing, which is an existing
committee and sort-of-kind-of ho
On Mon, Jun 29, 2015 at 1:52 PM, Brendan Johnson wrote:
> What are people using to perform image based backups of Windows and Linux VMs
> in OpenStack? I am using KVM as the hypervisor, Ceph for block storage and
> Swift for object storage. I know Cinder can backup volumes that are not in
> u
On Mon, Jun 15, 2015 at 7:46 PM, Richard Raseley wrote:
> As part of wrapping up the few remaining 'loose ends' in moving the Puppet
> modules under the big tent, we are pressing forward with deprecating the
> previously used 'puppet-openst...@puppetlabs.com' mailing list in favor of
> both the op
On Fri, Jun 5, 2015 at 4:34 AM, Thierry Carrez wrote:
> One option is to abandon the idea and converge to using the same
> concept. Another option is to rename that rich data ("project
> operational metadata" ?) to avoid the confusion of calling with same
> name what is essentially two different
Been sort of chewing on the various opinions expressed ...
clearly this decision needs to be one of the first things to come
out of the ops-tags team
I think we could do most of what we need with binary tags if those
were hierarchical, i.e. this tag means those other five are also applied,
but multi
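To make "hierarchical" concrete, here's a toy sketch (tag names invented, not from the actual ops-tags work): applying one tag transitively applies the tags it implies.

```python
# Toy implication graph: each tag maps to the set of tags it implies.
IMPLIES = {
    "ops:production-ready": {"ops:packaged", "ops:upgrade-tested"},
    "ops:packaged": {"ops:has-release"},
}

def expand(tags):
    """Return the full tag set, following implication edges transitively."""
    result = set(tags)
    stack = list(tags)
    while stack:
        for implied in IMPLIES.get(stack.pop(), ()):
            if implied not in result:
                result.add(implied)
                stack.append(implied)
    return result

print(sorted(expand({"ops:production-ready"})))
# ['ops:has-release', 'ops:packaged', 'ops:production-ready', 'ops:upgrade-tested']
```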
Definitely think IRC is the way to go for same reasons others have stated
On Wed, Jun 3, 2015 at 6:52 AM, Tom Fifield wrote:
> Hi all,
>
>
> As agreed at the summit, let's have a monthly meeting for the Ops Tags Team.
>
> On the agenda for this round:
> 0. Announcements (new repo, wiki page)
> 1.
On Thu, May 28, 2015 at 3:21 PM, Fox, Kevin M wrote:
> I've experienced the opposite problem though. Downloading raw images and
> uploading them to the cloud is very slow. Doing it through qcow2 allows them
> to be compressed over the slow links. Ideally, the Ceph driver would take a
> qcow2 an
On Thu, May 28, 2015 at 3:34 PM, Warren Wang wrote:
> Even though we're using Ceph as a backend, we still use qcow2 images as our
> golden images, since we still have a significant (maybe majority) number of
> users using true ephemeral disks. It would be nice if glance was clever
> enough to conv
Hi All,
If you look at the Sched for Wednesday there's a substantial pile of
sessions just called "Ops: Work session".
They are all actually about pretty specific things and typically run 3
in parallel so you have some choices to make, but it's not real easy
to see what those are.
I've lovingly
On Mon, May 4, 2015 at 9:42 PM, Tom Fifield wrote:
> Do you need users to be able to see it as one cloud, with a single API
> endpoint?
Define need :)
As many of you know my cloud is a University system and researchers
are nothing if not lazy, in the best possible sense of course :) So
having
Hi All,
We're about to expand our OpenStack Cloud to a second datacenter.
Anyone one have opinions they'd like to share as to what I would and
should be worrying about or how to structure this? Should I be
thinking cells or regions (or maybe both)? Any obvious or not so
obvious pitfalls I should
On Mon, Mar 30, 2015 at 3:19 PM, Daniel Comnea wrote:
> No thoughts?
You don't need to do sequential upgrades, you *should* be able to
cherry pick which components to upgrade because there *shouldn't* be
any RPC format changes in back ports. That said I once had issues
because of an RPC change wh
I use Puppet because that's what the rest of my infrastructure was
built on and it worked. When I was at the decision point (2-3yrs ago)
for a replacement to CFEngine2 for site wide configuration management
Puppet and Chef were the only serious contenders and seemed equivalent
enough, deciding fac
On Mon, Mar 16, 2015 at 12:33 PM, Caius Howcroft
wrote:
> For what its worth all bloomberg's configs are open source (apart from
> things like ips, tokens and such) and in chef templates:
> https://github.com/bloomberg/chef-bcpc/tree/master/cookbooks/bcpc/templates/default
>
> thats what we run in
Hi All,
One of the requests that's come up a few times around here has been
for 'real' config examples.
During the PHL Ops Midcycle we finally made a place to put them:
https://github.com/osops/example-configs
And over the past couple days I pushed up the configs from MIT CSAIL's
deploy. There's
On Fri, Mar 13, 2015 at 4:56 PM, Subbu Allamaraju wrote:
> Regarding the discussion on tags, here is my take from the discussion on
> Monday.
>
> 1. There is vehement agreement that having an integrated release consisting
> of a small “inner ring” set of service is desirable. There are 6-7 project
Hi All,
I'll be moderating the "Burning Issues" break out session in
Philadelphia https://etherpad.openstack.org/p/PHL-ops-burning-issues
The idea is to highlight the things that most urgently need attention
from an Operator standpoint and identify what the next steps to
fixing them (or convinc
Recently (4 weeks?) moved from Icehouse to Juno. It was pretty smooth
(neutron has been much more well behaved though I know that's not
relevant to you).
One negative difference I noticed, but haven't really dug into yet
since it's not a common pattern here:
If I schedule >20 instances in one API
I had the misfortune of doing an in-production rollback,
grizzly->havana->grizzly I think, though thankfully before anything new
had occurred after upgrade...
Restore from DB dump is what I did and what I'd do again. For all
reasons previously stated programmatic reverse migration just doesn't
make
Sorry to have missed the lobby discussion on this this morning.
An "Ops Project" feels weird to me.
Most things I can think of going into this space seem to be better
served as part of existing projects. For one example logging filters
& parsers should probably be distributed with Oslo which def
I see Ceph as the most unified storage solution for OpenStack. We're
starting to use it for Cinder, and are currently using it for Glance
and Nova. We haven't used Swift for the 2.5 years we've been running,
but since we have recently deployed Ceph for these other uses we do
plan on rolling out