Heh. You're not going to like it. :)

The fastest path I can think of, though it's super disruptive, is the following
(there are also less disruptive paths):

First, define what OpenStack will be. If you don't know, you easily end up with
people working at cross purposes. Maybe there are other things that will be
sister projects. That's fine. But it needs to be a whole product/project, not
split along interest lines. Think k8s SIGs, not OpenStack projects. The final
result is a singular thing, though: k8s x.y.z, openstack iaas 2.y.z, or
something like that.

Have a look at what KubeVirt is doing. I think they have the right approach.

Then, define K8s to be part of the commons. It provides a large amount of
functionality OpenStack needs in the commons. If it is there, you can reuse it
rather than reinvent it.

Implement a new version of each OpenStack service's API on top of the K8s API
using CRDs. At the same time, since we've now defined what OpenStack will be,
ensure the API has all the base use cases covered.

Provide a REST-service-to-CRD adapter to enable backwards compatibility with
older OpenStack API versions.
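To make the adapter idea concrete, here's a minimal sketch of the translation
step, assuming an invented CRD group/kind (`compute.openstack.example/v1alpha1`,
`Server`) and a Nova-style create-server request body; none of these names are
real, they just illustrate the shape of the mapping:

```python
# Hypothetical sketch: translate a legacy Nova-style "create server" request
# body into a CRD manifest that a Kubernetes-native OpenStack could apply.
# The CRD group/kind names here are invented for illustration.

def server_request_to_crd(body: dict, project: str) -> dict:
    """Map a legacy POST /servers body onto a hypothetical Server CRD."""
    server = body["server"]
    return {
        "apiVersion": "compute.openstack.example/v1alpha1",  # invented group
        "kind": "Server",
        "metadata": {
            "name": server["name"],
            "namespace": project,  # one namespace per project/tenant
        },
        "spec": {
            "flavor": server["flavorRef"],
            "image": server["imageRef"],
            "networks": server.get("networks", []),
        },
    }

# The adapter would then submit this manifest to the K8s API server, giving
# old clients the legacy REST surface with CRDs doing the work underneath.
manifest = server_request_to_crd(
    {"server": {"name": "vm1", "flavorRef": "m1.small", "imageRef": "cirros"}},
    project="demo",
)
```

The adapter itself stays stateless: it only rewrites requests, and all state
lives in the CRD objects.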

This completely removes statefulness from the OpenStack services.

Rather than having a dozen databases, you have just an etcd system under the
hood. It provides locking and events as well, so no oslo.locking backing
service, no message queue, no SQL databases. This GREATLY simplifies what
operators need to do. It removes a lot of code too. Backups are simpler, as
there is only one thing to back up. The operator's life is drastically simpler.
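The coordination model that replaces those explicit locks can be sketched
without a real etcd: writers submit the version they read, and a stale write is
rejected rather than blocked. This is a toy in-memory stand-in for etcd's
compare-and-swap (or a K8s resourceVersion conflict), not a client for either:

```python
# A minimal, in-memory sketch of the optimistic-concurrency model etcd (and
# the K8s API) gives you in place of explicit locks: each write carries the
# version the writer last saw, and stale writes fail so the caller retries.

class TinyStore:
    def __init__(self):
        self._data = {}  # key -> (version, value)

    def get(self, key):
        """Return (version, value); version 0 means the key is absent."""
        return self._data.get(key, (0, None))

    def put(self, key, value, expected_version):
        """Compare-and-swap: succeed only if nobody wrote since our read."""
        version, _ = self._data.get(key, (0, None))
        if version != expected_version:
            return False  # conflict: someone else wrote first; caller retries
        self._data[key] = (version + 1, value)
        return True

store = TinyStore()
v, _ = store.get("quota/demo")
assert store.put("quota/demo", 10, v)      # first writer wins
assert not store.put("quota/demo", 99, v)  # stale writer is rejected, no lock held
```

No service ever holds a lock, so there is nothing to lease, fence, or clean up
after a crash.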

Upgrade tools should be unified: you upgrade your OpenStack deployment, not
upgrade nova, upgrade glance, upgrade neutron, etc.

Config can be easier, as you can ship config with the same mechanism. Currently
the operator has to define the cluster config, and it gets twisted and split up
per project, per node, and per sub-component.
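One way to picture "one cluster config" is a single operator-written document
from which each service's view is derived, instead of N hand-maintained files.
The section names and keys below are invented for illustration:

```python
# Sketch of the "one cluster config" idea: the operator writes a single
# document, and per-service views are derived from it rather than being
# hand-split into per-project/per-node files. Section names are invented.

CLUSTER_CONFIG = {
    "common": {"region": "RegionOne", "debug": False},
    "compute": {"cpu_allocation_ratio": 4.0},
    "image": {"store": "ceph"},
}

def render_service_config(service: str) -> dict:
    """Merge the shared section with a service-specific overlay."""
    merged = dict(CLUSTER_CONFIG["common"])
    merged.update(CLUSTER_CONFIG.get(service, {}))
    return merged

compute_view = render_service_config("compute")
```

Each service's container would then receive just its merged view (e.g. mounted
from a ConfigMap), so there is one source of truth to edit and back up.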

Service account handling is done by Kubernetes service accounts, so no
RPC-over-AMQP security layer, no shipping credentials around manually in config
files, no figuring out how to roll credentials, etc. Agent code is much
simpler. Less code.

Provide prebuilt containers for all of the components and some basic tooling to
deploy them on a k8s. K8s provides a lot of tooling here. We've been rebuilding
it over and over in deployment tools; we can get rid of most of that.

Use HTTP for everything. We've all acknowledged we've been torturing RabbitMQ
for a while, but it's still a critical piece of infrastructure at the core
today. We need to stop.

Provide a way to have a k8s secret poked into a VM.
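The mechanics of the consuming side are straightforward, since Kubernetes
Secrets carry base64-encoded values in their `data` field. This sketch only
shows the decode step an agent would do before injecting the values into the
guest; the injection transport itself (metadata service, virtio channel, etc.)
is deliberately out of scope:

```python
# Hedged sketch: decode a Kubernetes Secret manifest's "data" field into
# plaintext key/value pairs, as an agent would before pushing them into a VM.

import base64

def decode_secret(secret: dict) -> dict:
    """Turn a Secret's base64-encoded data field into plaintext pairs."""
    return {
        key: base64.b64decode(val).decode()
        for key, val in secret.get("data", {}).items()
    }

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "db-creds"},
    "data": {"password": base64.b64encode(b"s3cret").decode()},
}
creds = decode_secret(secret)  # {'password': 's3cret'}
```

That would let guests consume the same secret machinery the control plane
already uses, instead of a separate credential distribution path.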

I could go on, but I think there are enough discussion points here already. And
I wonder if anyone made it this far without their head exploding. :)

Thanks,
Kevin




________________________________________
From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, July 02, 2018 2:45 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 07/02/2018 03:12 PM, Fox, Kevin M wrote:
> I think a lot of the pushback around not adding more common/required services 
> is the extra load it puts on ops though. hence these:
>>   * Consider abolishing the project walls.
>>   * simplify the architecture for ops
>
> IMO, those need to change to break free from the pushback and make progress 
> on the commons again.

What *specifically* would you do, Kevin?

-jay

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
