Still confused by:
[base] -> [service] -> [+ puppet]
not:
[base] -> [puppet]
and
[base] -> [service]
?
Thanks,
Kevin
From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Friday, November 30, 2018 5:31 AM
To: Dan Prince; openstack-dev@lists.openstack.org;
open
(systemd)? That avoids using --nodeps.
Thanks,
Kevin
From: Fox, Kevin M [kevin@pnnl.gov]
Sent: Thursday, November 29, 2018 11:20 AM
To: Former OpenStack Development Mailing List, use openstack-discuss now
Subject: Re: [openstack-dev] [TripleO][Edge
If the base layers are shared, you won't pay extra for the separate puppet
container unless you have another container also installing ruby in an upper
layer. With OpenStack, that's unlikely.
The apparent size of a container is not equal to its actual size.
Thanks,
Kevin
-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers
for security and size of images (maintenance) sakes
On Wed, 2018-11-28 at 00:31 +0000, Fox, Kevin M wrote:
The pod concept allows you to have one tool per container do one thing and do
it well.
You can have a container for generating config, and another container for
consuming it.
In a Kubernetes pod, if you still wanted to do puppet,
you could have a pod that:
1. had an init container that ran puppet
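For illustration, a minimal sketch (using the kubernetes Python client; image
names and the puppet command are assumptions, not TripleO's actual manifests)
of a pod whose init container generates config into a shared emptyDir that the
service container then consumes:

from kubernetes import client

# Shared scratch volume: puppet writes config here, the service reads it.
config_mount = client.V1VolumeMount(name="config", mount_path="/etc/myservice")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="myservice"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="config",
            empty_dir=client.V1EmptyDirVolumeSource(),
        )],
        # 1. The init container runs puppet once and exits.
        init_containers=[client.V1Container(
            name="puppet-config",
            image="example/puppet:latest",  # hypothetical image
            command=["puppet", "apply", "/manifests/myservice.pp"],
            volume_mounts=[config_mount],
        )],
        # 2. The service container starts only after the init container
        #    succeeds, with the generated config in place.
        containers=[client.V1Container(
            name="myservice",
            image="example/myservice:latest",  # hypothetical image
            volume_mounts=[config_mount],
        )],
    ),
)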
's just
> something consul brings with it...
>
> consul is very strong in doing health checks
>
> On 10/9/18 at 6:09 PM, Fox, Kevin M wrote:
>> etcd is an already approved openstack dependency. Could that be used instead
>> of consul so as to not add yet an
ul? That's just
something consul brings with it...
consul is very strong in doing health checks
On 10/9/18 at 6:09 PM, Fox, Kevin M wrote:
> etcd is an already approved openstack dependency. Could that be used instead
> of consul so as to not add yet another storage system? coredns with
pic
>
> On 10/10/2018 11:49 AM, Fox, Kevin M wrote:
>> Sorry. Couldn't quite think of the name. I meant openstack
>> project tags.
>
> I think having a tag that indicates the project is no longer using
> SELECT FOR UPDATE (and thus is safe to use multi-writer Galera
] add service discovery, proxysql, vault,
fabio and FQDN endpoints
On 10/09/2018 03:10 PM, Fox, Kevin M wrote:
Oh, this does raise an interesting question... Should such information be
reported by the projects up to users through labels? Something like
"percona_multimaster=safe". It's really difficult for folks to know which
projects can and cannot be used that way currently.
Is this a TC question?
Thanks,
etcd is an already approved openstack dependency. Could that be used instead of
consul so as to not add yet another storage system? coredns with the
https://coredns.io/plugins/etcd/ plugin would maybe do what you need?
Thanks,
Kevin
From: Florian Engelman
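For illustration, a minimal sketch assuming the coredns etcd plugin's
SkyDNS-compatible key layout (default /skydns prefix) and the python etcd3
client; the service name and address are made up:

import json
import etcd3

etcd = etcd3.client(host="127.0.0.1", port=2379)

# DNS name keystone.openstack.local, stored as a reversed path under /skydns.
etcd.put(
    "/skydns/local/openstack/keystone",
    json.dumps({"host": "10.0.0.5", "port": 5000}),
)

# coredns, with its etcd plugin pointed at this prefix, would then answer
# A/SRV queries for keystone.openstack.local without needing consul.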
There are specific cases where it expects the client to retry and not all code
tests for that case. It's safe to funnel all traffic to one server; it can be
unsafe to do so otherwise.
Thanks,
Kevin
From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, October
Thursday, September 27, 2018 12:35 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting
goal selection for T series
On 9/27/2018 2:33 PM, Fox, Kevin M wrote:
If the project plugins were maintained by the OSC project still, maybe there
would be incentive for the various other projects to join the OSC project,
scaling things up?
Thanks,
Kevin
From: Matt Riedemann [mriede...@gmail.com]
Sent: Thursday, September 2
+1 :)
From: Tim Bell [tim.b...@cern.ch]
Sent: Wednesday, September 26, 2018 11:55 AM
To: OpenStack Development Mailing List (not for usage questions);
openstack-operators; openstack-sigs
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] sta
How about stating it this way:
It's the TC's responsibility to get it done. Either by delegating the activity,
or by doing it themselves. But either way, it needs to get done. It's a ball
that has been dropped too much in OpenStack's history. If no one is ultimately
responsible, balls will keep getting
Might be a good option to plug in to the kubernetes cluster api
https://github.com/kubernetes-sigs/cluster-api too.
Thanks,
Kevin
From: Mark Goddard [m...@stackhpc.com]
Sent: Tuesday, August 28, 2018 10:55 AM
To: OpenStack Development Mailing List (not for usage q
I think in this context, kubelet without all of kubernetes still has the value
that it provides an abstraction layer that podman/paunch is being suggested to
handle.
It does not need the things you mention: network, sidecar, scale up/down, etc.
You can use as little as you want.
For example, ma
Thanks,
Kevin
From: Fox, Kevin M [kevin@pnnl.gov]
Sent: Thursday, August 23, 2018 9:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API
calls
Question. Rather than writing a middle layer to abstract both container
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there
is support already for Docker as well.
Thanks,
Kevin
From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday,
:42:45 +0000 (+), Fox, Kevin M wrote:
[...]
> Yes, I realize shared storage was Cinder's priority and Nova's now
> way behind in implementing it. So it is kind of a priority to get
> it done. Just using it as an example though in the bigger context.
>
> Having operators approach i
The stuff you are pushing back against is the very same thing that other
folks are trying to do at a higher level.
You want control so you can prioritize the things you feel your users are most
interested in. Folks in other projects have decided the same. Really, where
should the priorities
2018 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside
compute after extraction?
On 2018-08-21 16:38:41 +0000 (+0000), Fox, Kevin M wrote:
[...]
> You need someone like the TC to be able to ste
So, nova's worried about having to be in the boat many of us have been in where
they depend on another project not recognizing their important use cases? Heh...
OK, so, yeah, that is a legitimate concern. You need someone like the TC to be
able to step in, in those cases, to help sort that kind o
Since the upgrade checking has not been written yet, now would be a good time
to unify them, so you upgrade-check your openstack upgrade as a whole, not
status-check nova, status-check neutron, status-check glance, status-check
cinder, ad nauseam.
Thanks,
Kevin
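For illustration, a hypothetical sketch of such a unified check, fanning out
to per-project commands in the style of "$SERVICE-status upgrade check"; the
exact command names are assumptions:

import subprocess
import sys

CHECKS = [
    ["nova-status", "upgrade", "check"],
    ["cinder-status", "upgrade", "check"],
    ["glance-status", "upgrade", "check"],
    ["neutron-status", "upgrade", "check"],
]

failed = False
for cmd in CHECKS:
    # Each project's check reports readiness via its exit code.
    if subprocess.run(cmd).returncode != 0:
        print("FAILED: " + " ".join(cmd), file=sys.stderr)
        failed = True

sys.exit(1 if failed else 0)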
The primary issue I think is that the Nova folks think there is too much in
Nova already.
So there are probably more features that can be done to make it more in line
with vCenter, and more features to make it more functionally like AWS. And at
this point, neither are probably easy to get in.
Inlining with KF>
From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, July 17, 2018 7:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
Finally found the time to properly read this...
Zane Bitter w
uggoth.org]
Sent: Thursday, July 05, 2018 10:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
On 2018-07-05 17:30:23 +0000 (+0000), Fox, Kevin M wrote:
[...]
> Deploying k8s doesn't need a general solution to
Tantsur [dtant...@redhat.com]
Sent: Thursday, July 05, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
On Thu, Jul 5, 2018, 19:31 Fox, Kevin M <kevin@pnnl.gov> wrote:
We're pretty far int
nstack-dev] [tc] [all] TC Report 18-26
Tried hard to avoid this thread, but this message is so much wrong..
On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self-hosting k8s on bare metal is not
> incredibly hard. In fact, it is easier than you might think
hat it's easier to understand what part of your
message I'm responding to.
On 07/03/2018 02:37 PM, Fox, Kevin M wrote:
> Yes/no on the vendor distro thing. They do provide a lot of options, but they
> also provide a fully k8s-tested/provided route too: kubeadm. I can take linux
>
8 03:31 PM, Zane Bitter wrote:
> On 28/06/18 15:09, Fox, Kevin M wrote:
>> * made the barrier to testing/development as low as 'curl
>> http://..minikube; minikube start' (this spurs adoption and
>> contribution)
>
> That's not so different from
26
On 07/02/2018 03:12 PM, Fox, Kevin M wrote:
> I think a lot of the pushback around not adding more common/required services
> is the extra load it puts on ops though, hence these:
>> * Consider abolishing the project walls.
>> * simplify the architecture for ops
>
> I
Yes/no on the vendor distro thing. They do provide a lot of options, but they
also provide a fully k8s-tested/provided route too: kubeadm. I can take a linux
distro of choice, curl down kubeadm, and get a working kubernetes in literally a
couple of minutes. No compiling anything or building containers
om]
Sent: Monday, July 02, 2018 11:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
On 06/28/2018 02:09 PM, Fox, Kevin M wrote:
> I'll weigh in a bit with my operator hat on, as recent experience pertains
> to the current conversa
I'll weigh in a bit with my operator hat on, as recent experience pertains to
the current conversation
Kubernetes has largely succeeded in common distribution tools where OpenStack
has not been able to.
kubeadm was created as a way to centralize deployment best practices, config,
and upgrades
"What is OpenStack"
From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, June 26, 2018 6:12 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
On 06/26/2018 08:41 AM, Chris Dent wrote:
> Meanwhile, to contin
That might not be a good idea. That may just push the problem underground as
people are afraid to speak up publicly.
Perhaps an anonymous poll kind of thing, so that it can be counted publicly but
doesn't cause people to fear retaliation?
Thanks,
Kevin
F
To play devil's advocate, and as someone who has had to git bisect an ugly
regression once, I still think it's important not to break trunk. It can be much
harder to deal with difficult issues like that if trunk frequently breaks.
Thanks,
Kevin
From: Sean Mc
Who are your users, what do they need, are you meeting those needs, and what
can you do to better things?
If that can't be answered, how do you know if you are making progress or
staying relevant?
Lines of code committed is not a metric of real progress.
Number of reviews isn't.
Feature additio
k8s does that, I think, by separating desired state from actual state and working
to bring the two in line. The same could (maybe even should) be done to
openstack. But you're right, that is not a small amount of work.
Even without using GraphQL, making the API more declarative has
advantage
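For illustration, a tiny sketch of that desired-state vs. actual-state
reconcile pattern; the resource model is made up, not any real OpenStack API:

def reconcile(desired, actual):
    """Return the (name, have, want) changes needed to converge."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have != want:
            actions.append((name, have, want))
    for name in actual.keys() - desired.keys():
        actions.append((name, actual[name], 0))  # undeclared: remove
    return actions

desired = {"web": 3, "db": 1}   # what the user declared
actual = {"web": 1, "old": 2}   # what the system observed

# A controller applies these and re-runs the loop until the two converge.
print(reconcile(desired, actual))
# [('web', 1, 3), ('db', 0, 1), ('old', 2, 0)]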
Tuesday, April 24, 2018 9:13 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] campaign question: How can we make
contributing to OpenStack easier?
On 04/24/2018 12:04 PM, Fox, Kevin M wrote:
> Could the major components, nova-api, neutron-server, glance-apiserver, etc
>
Thanks,
Kevin
From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, April 24, 2018 3:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] campaign question: How can we make
contributing to OpenStack easier?
Fox, Kevin M wrote:
> OpenStack has creat
One more I'll add which is touched on a little below. Contributors spawn from a
healthy user base / operator base. If their needs are not met, then they go
elsewhere and the contributor base shrinks. OpenStack has created artificial
walls between the various projects. It shows up, for example, as ho
What about the other way around? An Octavia plugin that simply manages k8s
Ingress objects on a k8s cluster? Depending on how operators are deploying
openstack, this might be a much easier way to deploy Octavia.
Thanks,
Kevin
From: Lingxian Kong [anlin.k...@gmail
OpenStack Development Mailing List (not for usage questions)
Cc: openstack-oper.
Subject: Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases
On Wed, Nov 15, 2017 at 10:44 AM, John Dickinson wrote:
>
>
> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>
>> On Tue, Nov 14, 2017 a
The pressure for #2 comes from the inability to skip upgrades and the fact that
upgrades are still hugely time-consuming.
If you want to reduce the push for #2 and help developers get their wish
of getting features into users' hands sooner, the path to upgrade really needs
to be much less
+1
From: Juan Antonio Osorio [jaosor...@gmail.com]
Sent: Friday, November 03, 2017 3:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Logging format: let's discuss a bit about default
format, format configuration a
ore/refstack, qa teams, all services (I think
we missed one, it has since been fixed), clients, SDK(s), etc., to
ensure that as much support as possible is in place to make utilizing
V3 easy.
On Fri, Oct 20, 2017 at 3:50 PM, Fox, Kevin M wrote:
> No, I'm not saying it's the TC team's job to
uggoth.org]
Sent: Friday, October 20, 2017 10:53 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all]
v2.0 API removal
On 2017-10-20 17:15:59 +0000 (+0000), Fox, Kevin M wrote:
[...]
> I know the TC's been shying away f
That is a very interesting question.
It comes from the angle of OpenStack the product more than from the standpoint
of any one OpenStack project.
I know the TC's been shying away from these sorts of questions, but this one
has a pretty big impact. TC?
Thanks,
Kevin
For kolla, we were thinking about a couple of optimizations that should greatly
reduce the space.
1. Only upload to the hub based on stable versions. The updates are much less
frequent.
2. Fingerprint the containers. Base it on rpm/deb list, pip list, git
checksums. If the fingerprint is the same
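For illustration, a minimal sketch of such a fingerprint; the docker/rpm/pip
commands are illustrative assumptions, not kolla's actual implementation:

import hashlib
import subprocess

def fingerprint(container):
    """Digest a container's installed software, ignoring timestamps."""
    digest = hashlib.sha256()
    for cmd in (["rpm", "-qa"], ["pip", "freeze"]):  # or dpkg -l on Debian
        out = subprocess.run(
            ["docker", "exec", container] + cmd,
            capture_output=True, text=True, check=True,
        ).stdout
        # Sort so ordering differences between runs don't change the hash.
        digest.update("\n".join(sorted(out.splitlines())).encode())
    return digest.hexdigest()

# If fingerprint(new_build) == fingerprint(last_pushed), skip the upload.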
So, my $0.02.
A supported/recent version of a tool to install an unsupported version of
software is not a bad thing.
OpenStack has a bad reputation (somewhat deservedly) for being hard to upgrade.
This has mostly gotten better over time but there are still a large number of
older, unsupported
I slightly disagree. I think there are 3 sets of users, not 2...
Operators, Tenant Users, and Tenant Application Developers.
Tenant Application Developers develop software that the Tenant Users deploy in
their tenant.
Most OpenStack developers consider the latter two to always be the same person.
Big +1 for reevaluating the bigger picture. We have a pile of APIs that
together don't always form the most useful of APIs due to lack of big-picture
analysis.
+1 to thinking through the dev/devops use case.
Another one to really think over is the single user that != application developer.
I.e.,
I don't think it's unfair to compare against k8s in this case. You have to
follow the same kinds of steps as an admin provisioning a k8s compute node as
you do an openstack compute node. The main difference I think is they make use
of the infrastructure that was put in place by the operator, maki
Yeah, there is a way to do it today. It really sucks for most users, though. Due
to the complexity of the task, most users have just gotten into the terrible
habit of ignoring the "this host's ssh key changed" warning and blindly
accepting the change. I kind of hate to say it this way,
Yeah. Very interesting. Thanks for sharing.
Kevin
From: Adam Heczko [ahec...@mirantis.com]
Sent: Wednesday, October 04, 2017 2:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [policy] AWS IAM session
Hi Devdatta,
FYI, a container with net=host runs exactly as if it were running outside of a
container with respect to iptables/networking. So that should not be an issue.
If it can be done on the host, it should be able to happen in a container.
Thanks,
Kevin
From: Dan Prince [
https://review.openstack.org/#/c/93/
From: Giuseppe de Candia [giuseppe.decan...@gmail.com]
Sent: Friday, September 29, 2017 1:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Supporting SSH host certificates
I
It's easier to convince a developer's employer to keep paying the developer
when their users (operators) want to use their stuff. It's a longer-term
strategic investment. But a critical one. I think this has been one of the
things holding OpenStack back of late. The developers continuously push o
+1
From: Surya Prakash Singh [surya.si...@nectechnologies.in]
Sent: Monday, August 14, 2017 2:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla-kubernetes] Proposing Rich Wellum to
coreteam
Down that path lies tears. :/
From: joehuang [joehu...@huawei.com]
Sent: Tuesday, August 08, 2017 10:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: kubernetes-sig-openst...@googlegroups.com
Subject: Re: [openstack-dev] [keystone][
Yeah, but you still run into stuff like db connection and driver information
being mixed up with the secret used for contacting that service. Those should be
separate fields, I think, so they can be split/merged with that mechanism.
Thanks,
Kevin
From: Doug Hellma
+1. Please keep me in the loop for when the PTG session is.
Thanks,
Kevin
From: Doug Hellmann [d...@doughellmann.com]
Sent: Friday, August 04, 2017 12:46 PM
To: openstack-dev
Subject: Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect
p
I would really like to see secrets separated from config. Always have... They
are two separate things.
If nothing else, a separate config file so it can be permissioned differently.
This could be combined with k8s secrets/configmaps better too.
Or make it much easier to version config in git and
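For illustration, a minimal sketch using oslo.config with two config files so
the secrets file can be permissioned (or mounted from a k8s Secret)
separately; paths and option names are made up:

from oslo_config import cfg

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [
        cfg.StrOpt("host", default="localhost"),
        # secret=True masks the value if the config is ever logged.
        cfg.StrOpt("password", secret=True),
    ],
    group="database",
)

# Later --config-file arguments override earlier ones; secrets.conf can be
# mode 0600 (or a k8s Secret) while the main file comes from a ConfigMap.
CONF(
    ["--config-file", "/etc/myservice/myservice.conf",
     "--config-file", "/etc/myservice/secrets.conf"],
    project="myservice",
)

print(CONF.database.host)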
FYI, in kolla-kubernetes, I've been playing with fluent-bit as a log shipper.
It works very similarly to fluentd but is much lighter weight. I used this:
https://github.com/kubernetes/charts/tree/master/stable/fluent-bit
I fought with getting log rolling working properly with log files and it's kind
of
From: Fox, Kevin M
Sent: Monday, July 17, 2017 4:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack
services on Kubernetes
I think if you try to go down the Kuber
een the two
projects.
Thanks,
Kevin
From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Monday, July 17, 2017 1:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack
services on Kubernetes
I think if you try to go down the Kubernetes & !Kubernetes path, you'll end up
re-implementing pretty much all of Kubernetes, or you will use Kubernetes just
like !Kubernetes and gain very little benefit from it.
Thanks,
Kevin
From: Flavio Percoco [fla...
We do support some upstream charts but we started mariadb/rabbit before some of
the upstream charts were written, so we duplicate a little bit of functionality
at the moment. You can mix and match though. If an upstream chart doesn't work
with kolla-kubernetes, I consider that a bug we should fix
y OpenStack
services on Kubernetes
On Fri, Jul 14, 2017 at 12:16 PM, Fox, Kevin M wrote:
> https://xkcd.com/927/
That's cute, but we aren't really trying to have competing standards.
It's not really about competition between tools.
> I don't think adopting helm as a
https://xkcd.com/927/
I don't think adopting helm as a dependency adds more complexity than writing
new k8s object deployment tooling.
There are efforts to make it easy to deploy kolla-kubernetes microservice
charts using ansible for orchestration in kolla-kubernetes. See:
https://review.o
y
ones without needlessly excluding other (lower priority) ones.
Thanks,
-amrith
--
Amrith Kumar
P.S. Verizon is hiring OpenStack engineers nationwide. If you are interested,
please contact me or visit https://t.co/gGoUzYvqbE
On Wed, Jul 12, 2017 at 5:46 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
There is a use case where some sites have folks buy whole bricks of compute
nodes that get added to the overarching cloud, but using AZs or
HostAggregates/Flavors to dedicate the hardware to the users.
You might want to land the db vm on the hardware for that project and one would
expect the n
I think the migration path to something like kolla-kubernetes would be fine,
as you have total control over the orchestration piece (ansible) and the config
generation, and since it is all containerized and TripleO production isn't, you
should be able to 'upgrade' from non-containerized to containerized
Part of the confusion is around what is allowed to use the term openstack and
the various ways it's used.
We have software such as github.com/openstack/openstack-helm,
which is in the openstack namespace and has openstack in its title, but is not
under TC governance.
http://git.openstack.org/cgit/ope
I think everyone would benefit from a read-only role for keystone out of the
box. Can we get this into keystone rather than in the various distros?
Thanks,
Kevin
From: Ben Nemec [openst...@nemebean.com]
Sent: Wednesday, June 28, 2017 12:06 PM
To: OpenStac
far away from discussing Trove at this point.
Thanks,
Kevin
From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, June 22, 2017 10:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove
On 06/22/201
From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, June 22, 2017 12:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove
Fox, Kevin M wrote:
> [...]
> If you build a Tessmaster clone just to do mariadb, then you
Anyone else seen a problem with kernel rbd when ceph isn't fully up when a
kernel rbd mount is attempted?
The mount blocks as it should, but if ceph takes too long to start, it
eventually enters a D state forever even though ceph comes up happy. It's like
it times out and stops trying. Only a
There already are user-side tools for deploying plumbing onto your own cloud,
stuff like Tessmaster itself.
I think the win is being able to extend that k8s with the ability to
declaratively request database clusters and manage them.
It's all about the commons.
If you build a Tessmaster clone
Thanks for starting this difficult discussion.
I think I agree with all the lessons learned except the nova one. While you
can treat containers and VMs the same, after years of using both, I really
don't think it's a good idea to treat them equally. Containers can't work
properly if used as a
"Otherwise, -onetime will need to launch new containers each config change."
You say that like it's a bad thing.
That sounds like a good feature to me: atomic containers. You always know the
state of the system. As an operator, I want to know which containers have the
new config, which have t
+1 for putting confd in a sidecar with shared namespaces. Much more k8s-native.
Still generally -1 on the approach of using confd instead of configmaps. You
lose all the atomicity that k8s provides with deployments. It breaks
upgrade/downgrade behavior.
Would it be possible to have confd run
Flavio: I think you're right. k8s configmaps and confd are doing very similar
things. The one thing confd seems to add is dynamic templates on the host side.
This is still accomplished in k8s with a sidecar watching for config changes
with the templating engine in it and an emptyDir. Or statically
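For illustration, a minimal sketch (kubernetes Python client; images are
hypothetical) of that sidecar-plus-emptyDir layout: the templater renders
config into the shared volume, and the service reads it:

from kubernetes import client

shared = client.V1VolumeMount(name="rendered", mount_path="/etc/myservice")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="myservice"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="rendered",
            empty_dir=client.V1EmptyDirVolumeSource(),
        )],
        containers=[
            # Sidecar: watches source config and re-renders templates.
            client.V1Container(
                name="templater",
                image="example/confd:latest",      # hypothetical image
                volume_mounts=[shared],
            ),
            # Main service consumes the rendered config.
            client.V1Container(
                name="myservice",
                image="example/myservice:latest",  # hypothetical image
                volume_mounts=[shared],
            ),
        ],
    ),
)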
PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]
[helm] Configuration management with etcd / confd
> On Jun 8, 2017, at 4:29 PM, Fox, Kevin M wrote:
>
> That is possible. But, a yaml/json driver
See the footer at the bottom of this email.
From: jimi olugboyega [jimiolugboy...@gmail.com]
Sent: Thursday, June 08, 2017 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] etcd3 as base service - update
Hel
There are two issues conflated here, maybe?
The first is a mechanism to use oslo.config to dump out example settings that
could be loaded into a reference ConfigMap or etcd or something. I think there
is a PS up for that.
The other is a way to get the data back into oslo.config.
etcd is one way
That is possible. But, a yaml/json driver might still be good, regardless of
the mechanism used to transfer the file.
So the driver abstraction still might be useful.
Thanks,
Kevin
From: Doug Hellmann [d...@doughellmann.com]
Sent: Thursday, June 08, 2017
hmm... a very interesting question
I would think control plane only.
Thanks,
Kevin
From: Drew Fisher [drew.fis...@oracle.com]
Sent: Thursday, June 08, 2017 1:07 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] etcd3 as base s
Yeah, I think k8s configmaps might be a good config mechanism for a k8s-based
openstack deployment.
One feature that might help, which is related to the etcd plugin, would be a
yaml/json plugin. It would allow more native-looking configmaps.
Thanks,
Kevin
Fr
Oh, yes please! We've had to go through a lot of hoops to migrate ceph-mons
around while keeping their IPs consistent to avoid vm breakage. All the rest
of the ceph ecosystem (at least what we've dealt with) works fine without the
level of effort the current nova/cinder implementation imposes
So, one thing to remember: I don't think etcd has an authz mechanism yet.
You usually want your fernet keys to be accessible by just the keystone nodes
and no others.
This might require an etcd cluster just for keystone fernet tokens, which might
work great, but is an operator overhead to instal
I've only used btrfs and devicemapper on el7. btrfs has worked well.
devicemapper ate my data on multiple occasions. Is redhat supporting overlay
in the el7 kernels now?
Thanks,
Kevin
From: Dan Prince [dpri...@redhat.com]
Sent: Wednesday, May 17, 2017 5:
You can do that, but it doesn't play well with orchestration systems such as
k8s, as it removes their ability to know when upgraded containers appear.
Thanks,
Kevin
* As always, sorry for top posting, but my organization does not allow me the
choice of mail software.
What kolla's been discussing is having something like:
4.0.0-1, 4.0.0-2, 4.0.0-3, etc.,
only keeping the most recent two, and then an alias:
4.0.0, pointing to the newest.
This allows helm upgrade to atomically roll forward/back properly. If you drop
releases, k8s can't properly do atomic upgrad
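For illustration, a small sketch of that retention scheme; tag parsing is
illustrative:

tags = ["4.0.0-1", "4.0.0-2", "4.0.0-3", "4.0.0"]

def split_tags(tags):
    """Keep the two newest build tags plus the floating alias."""
    builds = sorted(
        (t for t in tags if "-" in t),
        key=lambda t: int(t.rsplit("-", 1)[1]),
    )
    keep = set(builds[-2:])                       # two newest builds
    keep.update(t for t in tags if "-" not in t)  # the "4.0.0" alias
    return keep, [t for t in builds if t not in keep]

keep, drop = split_tags(tags)
print(sorted(keep))  # ['4.0.0', '4.0.0-2', '4.0.0-3']
print(drop)          # ['4.0.0-1']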
Security is a spectrum, not a boolean. I know some sites that have instituted
super long/complex password requirements. The end result is usually humans just
writing passwords down on stickies, since they're too hard to remember, making
security worse, not better. Humans are always the weakest link i
We can put warnings all over it, and if folks choose to ignore them, then it's
they who took the risk and get to keep the pieces when it breaks. Some folks
are crazy enough to run devstack in production. But does that mean we should
just abandon devstack? No, of course not. I don't think we should
And bandwidth can be conserved by only uploading images that actually changed
in non-trivial ways (packages were updated, not just a logfile with a new
timestamp).
Thanks,
Kevin
From: Michał Jastrzębski [inc...@gmail.com]
Sent: Tuesday, May 16, 2017 11:46 AM