Angus,
On 10/13/2014 08:51 PM, Angus Lees wrote:
I've been reading a bunch of the existing Dockerfiles, and I have two humble
requests:
1. It would be good if the "interesting" code came from python sdist/bdists
rather than rpms.
This will make it possible to rebuild the containers using code from a private
branch or even unsubmitted code, without having to go through a redhat/rpm
release process first.
I care much less about where the Python dependencies come from. Pulling them
from RPMs rather than pip/PyPI seems like a very good idea, given the relative
difficulty of caching PyPI content, and it also pulls in the required C
libraries, etc. for free.
With this in place, I think I could drop my own containers and switch to
reusing kolla's for building virtual testing environments. This would make me
happy.
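For illustration, a rough sketch of the split being asked for, with the Python
dependencies still coming from RPMs but the service code itself from an sdist.
The base image, package names and paths here are assumptions, not the actual
contents of any Kolla Dockerfile:

# Illustrative only: base image, package names and paths are assumptions.
FROM fedora:20

# Python dependencies (and the C libraries they need) still come from RPMs,
# which are straightforward to mirror/cache locally.
RUN yum install -y python-pip python-sqlalchemy python-eventlet \
        python-keystoneclient mariadb-libs && yum clean all

# The "interesting" code comes from an sdist, so it can be built from a
# private branch or unsubmitted code without going through an RPM release.
COPY glance-*.tar.gz /tmp/
RUN pip install --no-deps /tmp/glance-*.tar.gz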
I've captured this requirement here:
https://blueprints.launchpad.net/kolla/+spec/run-from-master
I also believe it would be interesting to run from master or a stable
branch for CD. Unfortunately I'm still working on the nova-compute
Docker code, but if someone comes along and picks up that blueprint, I
expect it will get implemented :) Maybe that could be you.
2. I think we should separate out "run the server" from "do once-off setup".
Currently the containers run a start.sh that typically sets up the database,
runs the servers, creates keystone users and sets up the keystone catalog. In
something like k8s, the container will almost certainly be run multiple times
in parallel and restarted numerous times, so all those other steps go against
the service-oriented k8s ideal and are, at best, wasted work.
I suggest making the container image contain the deployed code and offer a few
thin scripts/commands as entrypoints. The main replicationController/pod _just_
starts the server, and then we have separate pods (or perhaps even non-k8s
container invocations) that do the initial database setup/migration and the
post-install keystone setup.
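A minimal sketch of that shape, written in current (v1) Kubernetes manifest
syntax rather than the 2014-era API; the image name and command are illustrative
assumptions. The replication controller's pod template runs only the long-lived
server process and nothing else:

# Illustrative only: the long-running service with a thin entrypoint;
# no db sync or keystone bootstrap happens here.
apiVersion: v1
kind: ReplicationController
metadata:
  name: glance-api
spec:
  replicas: 2
  selector:
    name: glance-api
  template:
    metadata:
      labels:
        name: glance-api
    spec:
      containers:
      - name: glance-api
        image: kollaglue/fedora-rdo-glance-api   # hypothetical image name
        command: ["glance-api"]                  # just run the server
        ports:
        - containerPort: 9292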
The server can't start until its configuration is complete. I guess I don't
quite understand what you mean here when you say we'd have separate pods that
do the initial database setup/migration. Do you mean expressing dependencies
in some way, or e.g.:
glance-registry-setup-pod.yaml - a pod descriptor which sets up the db and
keystone
glance-registry-pod.yaml - the glance registry pod descriptor which
starts the application and waits for db/keystone setup
and start these two pods as part of the same selector (glance-registry)?
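For what it's worth, a hedged sketch of one way that pairing could look, again
in current (v1) Kubernetes syntax; the images, commands and helper script are
illustrative assumptions, and the two pods simply share the glance-registry
label:

# glance-registry-setup-pod.yaml (illustrative): one-off db/keystone setup,
# retried until it succeeds and then left completed.
apiVersion: v1
kind: Pod
metadata:
  name: glance-registry-setup
  labels:
    name: glance-registry
spec:
  restartPolicy: OnFailure
  containers:
  - name: setup
    image: kollaglue/fedora-rdo-glance-registry   # hypothetical image name
    # glance-manage db_sync is real; setup-keystone.sh is a hypothetical helper
    command: ["/bin/sh", "-c", "glance-manage db_sync && /opt/setup-keystone.sh"]
---
# glance-registry-pod.yaml (illustrative): the service itself, which per the
# description above waits for the db/keystone setup to complete.
apiVersion: v1
kind: Pod
metadata:
  name: glance-registry
  labels:
    name: glance-registry
spec:
  containers:
  - name: glance-registry
    image: kollaglue/fedora-rdo-glance-registry   # hypothetical image name
    command: ["glance-registry"]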
That idea sounds pretty appealing, although it probably won't be ready to go
for milestone #1.
Regards,
-steve
I'm open to whether we want to make these as lightweight/independent as
possible (every daemon in an individual container), or limit it to one per
project (e.g. run nova-api, nova-conductor, nova-scheduler, etc. all in one
container). I think the differences are run-time scalability and resource
attribution vs. upfront coding effort, and they are not hugely significant
either way.
Post-install catalog setup we can combine into one cross-service setup, like
TripleO does[1]. Although k8s doesn't currently have explicit support for batch
tasks, I'm doing the pre-install setup in restartPolicy: onFailure pods and it
seems to work quite well[2].
(I'm saying "post-install catalog setup", but really the keystone catalog setup
can happen at any point pre- or post-install, AIUI.)
[1] https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-endpoints
[2] https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml
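Along the same lines as [2], a stripped-down sketch (not the actual contents of
either linked file) of a one-shot cross-service catalog setup pod in current
(v1) syntax; the image and script names are assumptions:

# Illustrative one-shot pod for cross-service keystone catalog setup,
# in the spirit of [1]; image and script are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: keystone-endpoint-setup
spec:
  restartPolicy: OnFailure    # rerun on failure until the batch task succeeds
  containers:
  - name: setup-endpoints
    image: kollaglue/fedora-rdo-keystone     # hypothetical image name
    command: ["/opt/setup-endpoints.sh"]     # hypothetical catalog-setup script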