kolla installs it via containers.
Thanks,
Kevin
From: Ignazio Cassano [ignaziocass...@gmail.com]
Sent: Wednesday, November 14, 2018 10:48 AM
To: Eduardo Gonzalez
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Openstack zun on centos???
Hi Edoardo,
does
Kevin
From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 8:37 AM
To: Fox, Kevin M; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without
DVR
you mean deploy octavia into an openstack pr
Would it make sense to move the control plane for this piece into the cluster?
(VM in a management tenant?)
Thanks,
Kevin
From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 7:39 AM
To: openstack-operators@lists.opensta
+1 :)
From: Tim Bell [tim.b...@cern.ch]
Sent: Wednesday, September 26, 2018 11:55 AM
To: OpenStack Development Mailing List (not for usage questions);
openstack-operators; openstack-sigs
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] sta
How about stating it this way:
It's the TC's responsibility to get it done, either by delegating the activity or
by doing it themselves. But either way, it needs to get done. It's a ball
that has been dropped too much in OpenStack's history. If no one is ultimately
responsible, balls will keep getting
I don't think OpenStack itself can meet full zero-downtime requirements. But even
if it can, I don't think any of the deployment tools try to support that use
case either.
Thanks,
Kevin
From: Amit Kumar [ebiib...@gmail.com]
Sent: Friday, May 18, 2018 3:46 AM
To:
Stack Development Mailing List (not for usage questions)
Cc: openstack-oper.
Subject: Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases
On Wed, Nov 15, 2017 at 10:44 AM, John Dickinson wrote:
>
>
> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>
>> On Tue, Nov 14, 2017 a
The pressure for #2 comes from the inability to skip upgrades and the fact that
upgrades are still hugely time-consuming.
If you want to reduce the push for #2 and help developers get their wish
of getting features into users' hands sooner, the path to upgrade really needs
to be much less
Kolla has various containerization tools: one based on Ansible, another based
on Kubernetes.
From: Imtiaz Chowdhury [imtiaz.chowdh...@workday.com]
Sent: Monday, August 28, 2017 5:24 PM
To: Curtis
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Ope
Oh, yes please! We've had to jump through a lot of hoops to migrate ceph-mons
around while keeping their IPs consistent to avoid VM breakage. All the rest
of the ceph ecosystem (at least the parts we've dealt with) works fine without the
level of effort the current nova/cinder implementation imposes.
So one gut reaction is this is going to make more Heat stacks fail. If
pushing the orchestration stuff out of Nova is the goal, you probably should
involve Heat so that it knows the difference between a VM that failed because
it was scheduled poorly and can just be resubmitted, and a VM that f
It's a flag like noout, set with the ceph CLI command.
Make sure all clients are upgraded to jewel (all VMs restarted after the client
is upgraded) before you set it though. We ran into some issues with that.
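For reference, flags of that sort are toggled with the ceph CLI; a minimal sketch
using noout as the example flag:
    ceph osd set noout           # set the flag cluster-wide
    ceph osd unset noout         # clear it again when you're done
    ceph osd dump | grep flags   # check which flags are currently set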
Thanks,
Kevin
From: Grant Morley [gr...@absolutedevops.io]
Sent: W
I think the really short answer is something like: It greatly simplifies
scheduling and billing.
From: Vladimir Prokofev [v...@prokofev.me]
Sent: Wednesday, March 15, 2017 2:41 PM
To: OpenStack Operators
Subject: [Openstack-operators] Flavors
A question of curios
We've run it in a test cloud meant to identify production issues before
supporting them on our production cloud.
We ran into a few issues that may or may not apply in your situation:
There's a security issue with the Trove Rabbitmq. The easiest way around it is
to use the feature that lets the
The issue is, as I understand it, that there are no tests currently to check if
changes to the Kolla code base will break the Debian based containers, and no
one has stepped up to write the tests in a long time.
So, no one can rely on the containers being in a usable state.
If someone is willin
We're running dvr on one of our clouds. ~70 hypervisors currently and more
getting ready to join. We have not played with dvr+l3ha as it was unstable at
the time but once there is a migration path, we would like to go there.
So far, DVR has seemed pretty stable and is performing well.
Thanks,
K
d in jewel too.
which has bitten us on multiple occasions.
Thanks,
Kevin
From: Xav Paice [xavpa...@gmail.com]
Sent: Wednesday, October 05, 2016 12:39 PM
To: Fox, Kevin M
Cc: George Mihaiescu; OpenStack Operators
Subject: Re: [Openstack-operators] Rados Gatewa
Did you try it with jewel? If not, what version?
Thanks,
Kevin
From: Xav Paice [xavpa...@gmail.com]
Sent: Wednesday, October 05, 2016 12:12 PM
To: George Mihaiescu
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Rados Gateway to Swift migration
OpenStack really needs a way to have a control API for selecting a swift
"flavor", and letting you have multiple swift endpoints within, so swift the
software, radosgw, and vendor endpoints can all coexist.
Kevin
From: Xav Paice [xavpa...@gmail.com]
Sent
I'd also add it depends on the feature set of the cloud. If you have extra
services, or your users keep asking for more and more openstack features to be
added to the cloud (dnsaas, dbaas, hadoopaas, coeaas), then the ratio is much
higher than, say, with just a basic cloud with vmaas & naas.
Thanks,
+1
From: Tim Bell [tim.b...@cern.ch]
Sent: Thursday, August 18, 2016 10:50 AM
To: Jonathan D. Proulx
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Shelving
I was interested to establish a consensus that
- Shelved instances
We run 3, one per controller. With 40 it seems like you might run into some issues,
and you really shouldn't need to hit your controllers that hard at only 40 nodes.
Thanks,
Kevin
From: David Wahlstrom [david.wahlst...@gmail.com]
Sent: Wednesday, August 17, 2016 2:27 PM
To: OpenStack
+1
From: Melvin Hillsman
Sent: Friday, July 29, 2016 9:22:02 PM
To: Steven Dake (stdake); openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [kolla] question of default users for
operators
Personally, if OpenStack default is to have a memb
Kevin
From: Davanum Srinivas [dava...@gmail.com]
Sent: Thursday, July 28, 2016 5:31 AM
To: Dmitry Mescheryakov
Cc: Fox, Kevin M; OpenStack Operators
Subject: Re: [Openstack-operators] [oslo] RabbitMQ queue TTL issues moving to
Liberty
Dima, Kevin,
There are PreStop
It does send a sigterm and wait.
I'm saying, I'm concerned the services aren't all cleaning up after themselves
today.
Thanks,
Kevin
From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Thursday, July 28, 2016 5:22 AM
To: Fox, Kevin M
C
vices shut
things down properly? Seems like it's a cross-project issue? Should a spec be
submitted?
Thanks,
Kevin
From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Tuesday, July 26, 2016 11:01 AM
To: Fox, Kevin M
Cc: Sam Morrison; OpenStack Operators
Subjec
Yeah, we've experienced it but hadn't had time yet to really dig in like this,
or gotten a good workaround. If you file a bug, please let me know what number.
Thanks,
Kevin
From: Sam Morrison [sorri...@gmail.com]
Sent: Sunday, July 24, 2016 11:27 PM
To: Op
Ah. Interesting.
The graceful shutdown would really help the Kubernetes situation too.
Kubernetes can do easy rolling upgrades and having the processes being able to
clean up after themselves as they are upgraded is important. Is this something
that needs to go into oslo.messaging or does it ha
Cool. Maybe this could be contributed to the Kolla project?
Thanks,
Kevin
From: Gerard Braad
Sent: Monday, June 27, 2016 8:58:18 PM
To: openst...@lists.openstack.org; openstack-operators
Subject: [Openstack-operators] [openstack] [tools] OpenStack client in a Dock
Is there a copy-from-url method that's not deprecated yet?
The app catalog is still pointing users at the command line in v1 mode
Thanks,
Kevin
From: Matt Fischer [m...@mattfischer.com]
Sent: Thursday, May 12, 2016 4:43 PM
To: Flavio Percoco
Cc: openstack-...@
From: mikalst...@gmail.com [mikalst...@gmail.com] on behalf of Michael Still
[mi...@stillhq.com]
Sent: Thursday, May 12, 2016 4:06 PM
To: Fox, Kevin M
Cc: David Medberry; Ned Rhudy; openstack-operators@lists.openstack.org; Sean
Dague
Subject: Re: [Openstack-operators] Anyone else use vendo
There are a couple of reasons I think this may cause you problems even if it
was technically feasible:
* VMs deployed on vmware are built as pets and really need/benefit from the
features of vmware. If migrating to anything but a vmware openstack cloud, then
those underlying expectations built
From: Mathieu Gagné [mga...@calavera.ca]
Sent: Tuesday, May 03, 2016 3:25 PM
To: Fox, Kevin M
Cc: Michael Still; openstack-operators@lists.openstack.org; Sean Dague
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in
nova.conf?
On Tue, May 3, 2016 at 5:51 PM, Fox, Kevin M wrote:
@gmail.com [mikalst...@gmail.com] on behalf of Michael Still
[mi...@stillhq.com]
Sent: Tuesday, May 03, 2016 2:37 PM
To: Fox, Kevin M
Cc: David Medberry; Ned Rhudy; openstack-operators@lists.openstack.org; Sean
Dague
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in
nova.conf?
He
We've used it too to work around the lack of instance users in nova. Please
keep it until a viable solution can be reached.
Thanks,
Kevin
From: David Medberry [openst...@medberry.net]
Sent: Monday, April 18, 2016 7:16 AM
To: Ned Rhudy
Cc: openstack-operators@lists
Kolla folks, the word opinionated in the deployment config tool is usually seen
by ops as heavily tying your hands, to the point of being very painful or a
show stopper. I get that you're trying to say that now it's "opinionated" for easy
install but supports being unopinionated, but some ops won't
We saw a problem recently with layer3+4. We're still working on it, but a
possible datapoint:
We had them connected through a cisco switch and saw iperf only use 50% of
capacity on about 50% of the runs, and full capacity most of the rest of the time,
seemingly at random.
We connected the two nodes directly
You can just rotate without restarting services.
We're currently rotating only once a day.
We rotate on one machine, then rsync the data to the others in a cron job. It has
been working well for a couple of months now.
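A rough sketch of that cron pattern (the rotate command, host name and paths are
placeholders for whatever you are rotating):
    # /etc/cron.d/rotate-and-sync -- illustrative only
    0 3 * * * root <rotate-command> && rsync -az /path/to/rotated/data/ other-node:/path/to/rotated/data/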
Thanks,
Kevin
From: Ajay Kalambur (aka
We use them all the time, and openstack in one version actually broke them on
us. (I wrote and contributed a unit test so it shouldn't happen again.)
Use case:
You have two external networks.
1. Internet - One that's directly connected to the internet.
2. One that is a private network space and
I guess my current focus is on network HA, since these popped into my head
right away and all seem to be network related:
* l3+dvr in neutron's a huge, huge thing.
* neutron external rbac (https://review.openstack.org/#/c/282295/, really
useful for more than one reason)
* Octavia anti-affinity (
You usually do that at the instance level with server groups. Do you have an
example where you might want to do it at the flavor level?
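For reference, a minimal sketch of the server-group approach with the nova CLI
(group name, flavor, image and instance names are placeholders):
    nova server-group-create db-group anti-affinity
    nova boot --flavor m1.small --image <image> --hint group=<server-group-uuid> db01
    nova boot --flavor m1.small --image <image> --hint group=<server-group-uuid> db02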
Thanks,
Kevin
From: Adam Lawson [alaw...@aqorn.com]
Sent: Wednesday, March 02, 2016 2:48 PM
To: openstack-operators@lists.opensta
We usually use two vips.
Thanks,
Kevin
From: Steven Dake (stdake)
Sent: Friday, February 12, 2016 6:04:45 AM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] [kolla] Question about how Operators deploy
Hi folks,
Unfortunately I won't be
They were used indirectly, I think, when you had some services configured for v2-only
authentication because the service didn't work with v3. For
example, nova->neutron was v2 only for a while. I think all services are
supporting v3 these days, so that would no longer be necessary?
Thanks,
We've used ceph to address the storage requirement in small clouds pretty well.
It works with only two storage nodes with replication set to 2, and
because of the radosgw, you can share your small amount of storage between the
object store and the block store, avoiding the need to ove
ds to stay.
I'd also really like to see them for tenant networks, for similar reasons. Having
them only work on external networks is limiting.
Thanks,
Kevin
From: Robert Starmer [rob...@kumul.us]
Sent: Friday, January 29, 2016 1:21 AM
To: Fox, Kevin M
Cc: Ca
networks to integrate with the rest of the datacenter. No BGP L3VPN tunnels,
which cannot be done in open-source.
Tomas
>
> On Wed, Jan 27, 2016 at 3:33 PM, Fox, Kevin M
wrote:
>
>
>
>
>
> But there already is a second external address, the fip address that's
nating. I
e existing code to function the way it
always has, greatly simplifying implementation?
Thanks,
Kevin
From: Robert Starmer [rob...@kumul.us]
Sent: Wednesday, January 27, 2016 8:34 PM
To: Fox, Kevin M
Cc: Carl Baldwin; OpenStack Operators; Tomas Vondra
Subject: Re:
But there already is a second external address, the fip address that's nating.
Is there a double nat? I'm a little confused.
Thanks,
Kevin
From: Robert Starmer [rob...@kumul.us]
Sent: Wednesday, January 27, 2016 3:20 PM
To: Carl Baldwin
Cc: OpenStack Operators; To
Ceph would work pretty well for that use case too. We've run a ceph cluster with two
OSD nodes, with replication set to 2, to back both cinder and glance for HA.
Nothing complicated needed to get it working. Less complicated than drbd, I
think. You can then also easily scale it out as needed.
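For reference, a rough sketch of the ceph.conf defaults that match a two-node setup
like that (per-pool sizes can still be overridden later):
    [global]
    osd pool default size = 2       # two copies, one per OSD node
    osd pool default min size = 1   # keep serving I/O if one node is down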
Thanks,
Ke
great. thanks for letting us know. :)
Kevin
From: raju [raju.r...@gmail.com]
Sent: Tuesday, January 26, 2016 11:25 AM
To: Fox, Kevin M
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Cinder-backup to swift
Kevin/Swami,
Now it is
crank up debugging and look in the logs? anything interesting?
Thanks,
Kevin
From: raju [raju.r...@gmail.com]
Sent: Tuesday, January 26, 2016 9:55 AM
To: Fox, Kevin M
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Cinder-backup to
There is a "feature" where cinder-backup and cinder-volume must be running on
the same node. If they are not, you get an error like that. I tripped over it
myself.
Thanks,
Kevin
From: raju [raju.r...@gmail.com]
Sent: Tuesday, January 26, 2016 8:16 AM
To: openstac
Awesome news. Should there be a tag added for "Galera multimaster safe" to let us
ops know about these things?
Thanks,
Kevin
From: Kevin Benton
Sent: Monday, December 07, 2015 6:08:12 PM
To: Matteo Panella
Cc: OpenStack Operators
Subject: Re: [Openstack-operators]
[alopg...@gmail.com]
Sent: Monday, December 07, 2015 1:57 PM
To: Fox, Kevin M
Cc: Clint Byrum; openstack-operators
Subject: Re: [Openstack-operators] DIB in container vs VM
Thanks for the suggestion.
I am indeed running it privileged.
One of the examples I wanted to share was: building Fedor
Yeah, switching them that way makes a lot of sense.
Thanks,
Kevin
From: Dan Sneddon
Sent: Thursday, December 03, 2015 12:39:25 PM
To: Fox, Kevin M; Jesse Keating; Sean Dague
Cc: openstack-operators
Subject: Re: [Openstack-operators] Service Catalog TNG urls
On
We use internal to be a private network between the controllers and the compute
nodes that no one else has access to. Without that, we'd be stuck.
An OpenStack network that's where all the public services go, that isn't
external to the cloud for billing purposes does make sense too though. Maybe
Did you try launching the docker container with the privileged flag? dib does a
lot of privileged things last I looked.
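A rough sketch of what we mean (the image and build directory are placeholders):
    docker run --privileged -it -v /tmp/dib:/build centos:7 /bin/bash
    # inside the container, run disk-image-create as usual; the loop devices,
    # chroots and mounts dib uses are what tend to fail without --privileged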
Thanks,
Kevin
From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, December 02, 2015 3:49 PM
To: openstack-operators
Subject: Re: [Opens
One more thing to consider: live upgrades still aren't a thing yet, but they're getting
closer. Being able to do it with single-version upgrades is a pretty hard
thing. Doing it across LTS-style releases wouldn't work without a huge amount
of effort all on its own.
We may be better off waiting until we
Kind of related: as an op, we see a lot of 3rd-party repositories that until recently
only supported rhel5 finally move to supporting rhel6 because rhel7 came out
and rhel5 went to long-term support contract only. This caused us to have to
support rhel5 way longer than we would have liked. Now, we're
+1
From: Christopher Aedo
Sent: Monday, October 12, 2015 3:51:23 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Tokyo Ops Design Summit Tracks - Agenda on
Sched
On Sun, Oct 11, 2015 at 8:55 PM, Tom Fifield wrote:
> Hi all,
>
>
Awesome. Thanks for the link. :)
Kevin
From: jonathan.pro...@gmail.com [jonathan.pro...@gmail.com] on behalf of
Jonathan Proulx [j...@jonproulx.com]
Sent: Thursday, May 28, 2015 12:59 PM
To: Fox, Kevin M
Cc: Dmitry Borodaenko; David Medberry; openstack
As an op, I've run into this problem and keep running into it. I would very
much like a solution.
It's also quite related to the nova instance user issue I've been working on,
which is needed by the App Catalog project.
So, yes, please keep fighting the good fight.
Thanks,
Kevin
You can find ubuntu images in the app catalog. For example:
http://apps.openstack.org/#tab=glance-images&asset=Ubuntu%20Trusty%2014
Thanks,
Kevin
From: Abhishek Talwar [abhishek.tal...@tcs.com]
Sent: Friday, July 31, 2015 4:05 AM
To: openstack-operators
Subject: [O
Vendor data can do that. See the JSON metadata plugin to the nova metadata
server and the vendor data section of cloud-init.
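For reference, a rough sketch of the nova.conf side of that (the JSON file path is a
placeholder):
    [DEFAULT]
    vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData
    vendordata_jsonfile_path = /etc/nova/vendor_data.json
Whatever that file contains is exposed to instances through the metadata service,
and cloud-init picks it up via its vendor data handling.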
Thanks,
Kevin
From: Kris G. Lindgren
Sent: Thursday, July 23, 2015 8:38:33 AM
To: Openstack Guru; openstack-operators@lists.openstack.org
Sounds like a good plan to me.
Thanks,
Kevin
From: David Medberry
Sent: Tuesday, July 21, 2015 7:51:50 AM
To: Michael Still
Cc: openstack-operators@lists.openstack.org; Andrew Laski
Subject: Re: [Openstack-operators] Nova cells v2 and operational impacts
Also, if
The nova instance user workflow could be used for that?
https://review.openstack.org/#/c/186617/
A template could start the vm, register the instance user id with the IdP, and
then the instance can call the IdP to register.
Thanks,
Kevin
From: Adam Young [ayo...@
The biggest issue we have run into with multiple public networks is restricting
which users can use which networks. We have the same issue, where we may have
an internal public network for the datacenter, but also, say, a DMZ network we
want to put some vm's on, but can't currently extend that n
Awesome. Are they ready enough that they should go into the app catalog?
(http://apps.openstack.org)
Thanks,
Kevin
From: Matthew Thode [prometheanf...@gentoo.org]
Sent: Monday, June 08, 2015 8:26 AM
To: openstack-operators@lists.openstack.org
Subject: [Ope
Kevin
From: Tom Fifield
Sent: Sunday, June 07, 2015 6:54:42 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [Tags] Tags Team Repo & our first tags!
On 06/06/15 03:12, Fox, Kevin M wrote:
> With my op hat on, I'd very much p
With my op hat on, I'd very much prefer packaged-in-ubuntu/packaged-in-centos
or packaged=ubuntu,centos. If it's just packaged=True, I'd still have to go look
up whether it's in my distro of choice.
Thanks,
Kevin
From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Fr
Does DVR work with other drivers such as the Mellanox or Cisco drivers? What
about with anything SR-IOV?
Thanks,
Kevin
Since you are passing the tagged physical network device eth1.803 into the
bridge, I think you need to use a flat network in the config / external network
create command. Otherwise it may do nested vlan tags.
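For example, a rough sketch with the neutron CLI of that era (network/subnet names,
CIDR and physnet are placeholders; the physnet has to match your bridge_mappings
entry):
    neutron net-create ext-net --router:external \
        --provider:network_type flat --provider:physical_network pub
    neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \
        --disable-dhcp --gateway 203.0.113.1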
Thanks,
Kevin
From: Geo Varghese
Sent: Friday, June 05
1:02 PM
To: Fox, Kevin M
Cc: openstack-operators@lists.openstack.org; openst...@lists.openstack.org
Subject: Re: Help with multiple external network in openstack
Kevin,
Thanks. Can you please explain these values
pub:br-pub,scz:br-scz,osg:br-osg,mgmt:br-mgmt
These 4 networks are external networks.
In /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini currently we have:
bridge_mappings = pub:br-pub,scz:br-scz,osg:br-osg,mgmt:br-mgmt
Thanks,
Kevin
From: Geo Varghese [gvargh...@aqorn.com]
Sent: Thursday, June 04, 2015 12:29 PM
To: Fox, Kevin M
Cc
Great. Thanks for sharing. I'll have to try it myself. :)
Kevin
From: Cynthia Lopes [clsacrame...@gmail.com]
Sent: Thursday, June 04, 2015 9:08 AM
To: Fox, Kevin M
Cc: Steve Gordon; OpenStack Operations Mailing List
Subject: Re: [Openstack-operators]
e versions.
Thanks,
Kevin
From: Cynthia Lopes [clsacrame...@gmail.com]
Sent: Thursday, June 04, 2015 8:05 AM
To: Fox, Kevin M
Cc: Steve Gordon; OpenStack Operations Mailing List
Subject: Re: [Openstack-operators] Venom vulnerability
Hi,
I did not update my ceph c
Bridge mappings set in plugin.ini?
Thanks,
Kevin
From: Geo Varghese
Sent: Thursday, June 04, 2015 6:25:46 AM
To: openstack-operators@lists.openstack.org; openst...@lists.openstack.org
Subject: [Openstack-operators] Help with multiple external network in openstack
For the record, what version of ceph are you using before and after?
Thanks,
Kevin
From: Cynthia Lopes
Sent: Thursday, June 04, 2015 1:27:53 AM
To: Steve Gordon
Cc: OpenStack Operations Mailing List
Subject: Re: [Openstack-operators] Venom vulnerability
Hi guys,
If you disable discovery/unknown hosts on both of the pxe servers, you should
be able to define the machine in only one of the pxe servers and boot it
reliably. I've mixed Ironic and Cobbler managed hosts on the same network this
way.
Thanks,
Kevin
From: Adam La
I've experienced the opposite problem though. Downloading raw images and
uploading them to the cloud is very slow. Doing it through qcow2 allows them to
be compressed over the slow links. Ideally, the Ceph driver would take a qcow2
and convert it to raw on glance ingest rather than at boot.
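In the meantime, a rough sketch of doing the conversion by hand before upload
(file and image names are placeholders):
    qemu-img convert -f qcow2 -O raw trusty.qcow2 trusty.raw
    glance image-create --name trusty --disk-format raw \
        --container-format bare --file trusty.raw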
Tha
I would agree from the standpoint that most users don't want to care about the
network, but the majority of the users are folks that want a higher level of
service than just raw compute resources too.
They want an app catalog where they can get fully featured, reliable, scalable,
and secure clou
Have you looked at the kolla project?
Thanks,
Kevin
From: CoreOS
Sent: Wednesday, April 29, 2015 5:08:12 AM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] OpenStack Dockerizing on CoreOS
Hello,
I’m trying to develop fault tolerance su
> -Original Message-
> From: Richard Raseley [mailto:rich...@raseley.com]
> Sent: Thursday, April 23, 2015 2:34 PM
> To: Fox, Kevin M
> Cc: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] Sharing resources across OpenStack
> instances
...@raseley.com]
Sent: Thursday, April 23, 2015 12:01 PM
To: Fox, Kevin M
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Sharing resources across OpenStack instances
Fox, Kevin M wrote:
> Some folks have been discussing an app store model. perhaps in
> Murano,
Some folks have been discussing an app store model, perhaps in Murano but more
global, that would allow images/templates to be registered somewhere like
openstack.org and show up on all clouds that have the global repo enabled.
Murano would be extended to fetch the images/templates to the loc
This is a case for a cross-project cloud (institutional?). It costs more to run
two little clouds than one bigger one, both in terms of manpower and, in cases
like these, underutilized resources.
#3 is interesting though. If there is to be an openstack app catalog, it would
be important to be
It's part of the keystone v3 API, I think. I've only seen it show up when I
configure horizon to specifically talk v3 to keystone.
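For reference, a minimal sketch of the local_settings.py bits that switch horizon to
v3 (the keystone URL is a placeholder):
    OPENSTACK_API_VERSIONS = {"identity": 3}
    OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"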
Thanks,
Kevin
From: Mike Smith [mism...@overstock.com]
Sent: Wednesday, April 15, 2015 4:33 PM
To:
Subject: [Openstack-operato
OpenStack with Ceph
On 03/19/2015 10:33 AM, Fox, Kevin M wrote:
> We've running it both ways. We have clouds with dedicated storage nodes, and
> clouds sharing storage/compute.
>
> The storage/compute solution with ceph is working ok for us. But, that
> particular cloud is 1gi
.com]
Sent: Thursday, March 19, 2015 9:36 AM
To: Fox, Kevin M
Cc: maishsk+openst...@maishsk.com; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Hypervisor decision
I was under the impression Hyper-V didn't require a per-seat license on non-Windows
instances?
On
We're running it both ways. We have clouds with dedicated storage nodes, and
clouds sharing storage/compute.
The storage/compute solution with ceph is working ok for us. But that
particular cloud is 1 gigabit only and seems very slow compared to our other
clouds. But because of the gigabit inte
So, in the pets vs. cattle cloud philosophy, you want to be able to have as many
cattle as you need, rather than limit the sets to a smaller number of more
pet-like things.
kvm allows unlimited numbers of VMs, which is very cloudy. But due to Windows
licensing, it tends to only work well with linu
What about the other glance logfiles? It looks like it may be calling out to a
different server and that's failing...
Thanks,
Kevin
From: Nathan Stratton [nat...@robotics.net]
Sent: Friday, March 06, 2015 11:42 AM
To: openstack-oper.
Subject: [Openstack-operators] H
See the id_mapping table.
Thanks,
Kevin
From: Antonio Messina [antonio.s.mess...@gmail.com]
Sent: Tuesday, March 03, 2015 11:28 AM
To: Fox, Kevin M
Cc: Caius Howcroft; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Migrating
You can leave the roles/projects outside of LDAP by just using the LDAP
identity plugin, leaving the rest in SQL. It sounds like they will be
deprecating putting roles/projects in LDAP in the future anyway.
That leaves identity mapping. There is a table mapping LDAP users to unique IDs in
the datab
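For reference, a rough sketch of the keystone.conf split being described (short
driver names as used in newer releases; older releases want the full class paths):
    [identity]
    driver = ldap
    [assignment]
    driver = sql
The id_mapping table in the keystone database is what maps the LDAP entries to the
stable public IDs keystone hands out.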
We have 2 public networks, one for the internet and one public non-routable one.
Then we use per-tenant private networks and two routers per tenant, one on each
net. We default to the internet one, and the internet router config provides an
extra route to the non-routable net's router. Has worked well
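A rough sketch of pushing that extra route with the neutron CLI (subnet, CIDR and
next hop are placeholders; the next hop is the non-routable net router's port on the
tenant subnet):
    neutron subnet-update <tenant-subnet> --host-routes type=dict list=true \
        destination=10.10.0.0/16,nexthop=192.168.0.254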