[OpenStack-Infra] helping openstack-infra
Hi!

In response to https://governance.openstack.org/tc/reference/upstream-investment-opportunities/2019/community-infrastructure-sysadmins.html I would like to state that I am more than willing to help the openstack-infra team with whatever sysadmin tasks are needed. I am fully aware this will require investing more time on infra-specific tasks, but I am glad to have the support of my organization for doing this.

As I was really happy with my experience over the last year working with others from infra, and I valued their help on various tasks a lot, I want to return the favor and help with the other chores or features we may need to make our CI even better.

So mainly, just let me know what needs to be done?

Thanks
Sorin Sbarnea
Red Hat TripleO CI
Re: [OpenStack-Infra] [zuul-jobs] configure-mirrors: deprecate mirroring configuration for easy_install
On Mon, Nov 25, 2019 at 04:02:13PM +1100, Ian Wienand wrote:
> Hello,
>
> Today I force-merged [5] to avoid widespread gate breakage. Because
> the change is in zuul-jobs, we have a policy of announcing
> deprecations. I've written the following but not sent it to
> zuul-announce (per policy) yet, as I'm not 100% confident in the
> explanation.
>
> I'd appreciate it if, once proof-read, someone could send it out
> (modified or otherwise).
>
> Thanks,

Greetings!

Rather than force-merging, and potentially breaking other Zuul installs, what about a new feature flag that stays enabled by default but is disabled in the OpenStack base jobs? That would still allow older versions of setuptools to work, I would guess?

That said, Ansible Zuul should not be affected, as we currently fork configure-mirrors for our own purposes; I'll check now to confirm we are also not affected.

> -i
>
> --
>
> Hello,
>
> The recent release of setuptools 42.0.0 has broken the method used by
> the configure-mirrors role to ensure easy_install (the older method of
> installing packages, before pip came into widespread use [1]) would
> only access the PyPI mirror.
>
> The prior mirror setup code would set the "allow_hosts" whitelist to
> the mirror host exclusively in pydistutils.cfg. This would prevent
> easy_install from "leaking" access outside the specified mirror.
>
> Change [2] in setuptools means that pip is now used to fetch packages.
> Since pip does not implement the constraints of the "allow_hosts"
> setting, specifying this option has become an error condition. This
> is reported as:
>
>   the `allow-hosts` option is not supported when using pip to install
>   requirements
>
> It has been pointed out [3] that the prior code would break any
> dependency_links [4] that might be specified for a package (as the
> external URLs will not match the whitelist). Overall, there is no
> desire to work around this behaviour, as easy_install is considered
> deprecated for any current use.
>
> In short, this means the only solution is to remove the now
> conflicting configuration from pydistutils.cfg. Due to the urgency of
> this update, it has been merged with [5] before our usual 2-week
> deprecation notice.
>
> The result of this is that jobs still using easy_install with an older
> setuptools (perhaps in a virtualenv) may not correctly access the
> specified mirror. Assuming such jobs have access to PyPI they will
> still work, although without the benefits of a local mirror. If such
> jobs are firewalled from upstream they may now fail. We consider the
> chance of jobs using this legacy install method in this situation to
> be very low.
>
> Please contact zuul-discuss [6] with any concerns.
>
> We now return you to your regularly scheduled programming :)
>
> [1] https://packaging.python.org/discussions/pip-vs-easy-install/
> [2] https://github.com/pypa/setuptools/commit/d6948c636f5e657ac56911b71b7a459d326d8389
> [3] https://github.com/pypa/setuptools/issues/1916
> [4] https://python-packaging.readthedocs.io/en/latest/dependencies.html
> [5] https://review.opendev.org/695821
> [6] http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
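For context, the mirror setup described above amounts to a pydistutils.cfg along the following lines; this is a rough sketch based on the description in this thread, not the exact output of the configure-mirrors role, and the mirror hostname is a placeholder. The fix in [5] removes the allow_hosts line, which setuptools 42.0.0 now rejects once pip does the fetching:

    [easy_install]
    index_url = https://mirror.example.opendev.org/pypi/simple/
    allow_hosts = mirror.example.opendev.org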
Re: [OpenStack-Infra] [zuul-jobs] configure-mirrors: deprecate mirroring configuration for easy_install
On Mon, Nov 25, 2019, at 5:38 AM, Paul Belanger wrote:
> On Mon, Nov 25, 2019 at 04:02:13PM +1100, Ian Wienand wrote:
> > Hello,
> >
> > Today I force-merged [5] to avoid widespread gate breakage. Because
> > the change is in zuul-jobs, we have a policy of announcing
> > deprecations. I've written the following but not sent it to
> > zuul-announce (per policy) yet, as I'm not 100% confident in the
> > explanation.
> >
> > I'd appreciate it if, once proof-read, someone could send it out
> > (modified or otherwise).
> >
> > Thanks,
>
> Greetings!
>
> Rather than force-merging, and potentially breaking other Zuul
> installs, what about a new feature flag that stays enabled by default
> but is disabled in the OpenStack base jobs? That would still allow
> older versions of setuptools to work, I would guess?

I think the ship has sailed and the change has already been force-merged. That said, setuptools isn't something whose version you can easily control. For example, when you create a virtualenv, the setuptools inside that virtualenv is automatically updated to the latest version. For this reason I think the force merge was the best course of action.

> That said, Ansible Zuul should not be affected, as we currently fork
> configure-mirrors for our own purposes; I'll check now to confirm we
> are also not affected.
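To illustrate Clark's point about a fresh virtualenv getting its own setuptools regardless of what is installed system-wide, a quick check looks something like the following; the reported version is simply whatever is current when the environment is created, and the final pin is shown only as a hypothetical workaround for jobs that still rely on the old easy_install behaviour, not a recommendation:

    $ virtualenv /tmp/demo
    $ /tmp/demo/bin/python -c "import setuptools; print(setuptools.__version__)"
    42.0.0
    $ /tmp/demo/bin/pip install "setuptools<42"   # hypothetical pin for legacy jobs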
Re: [OpenStack-Infra] helping openstack-infra
On Mon, Nov 25, 2019, at 4:15 AM, Sorin Sbarnea wrote:
> Hi!
>
> In response to
> https://governance.openstack.org/tc/reference/upstream-investment-opportunities/2019/community-infrastructure-sysadmins.html
> I would like to state that I am more than willing to help the
> openstack-infra team with whatever sysadmin tasks are needed.
>
> I am fully aware this will require investing more time on
> infra-specific tasks, but I am glad to have the support of my
> organization for doing this.
>
> As I was really happy with my experience over the last year working
> with others from infra, and I valued their help on various tasks a
> lot, I want to return the favor and help with the other chores or
> features we may need to make our CI even better.
>
> So mainly, just let me know what needs to be done?

Current priority efforts include OpenDev-ification of services and updating config management for Puppet-managed services to Ansible and containers.

To OpenDev-ify services we are updating them to be hosted at opendev.org domains, as well as updating theming where necessary to incorporate the OpenDev logo and color scheme. In many cases we want to install redirects from the old openstack.org names to the new opendev.org names. For services with SSL/TLS this is likely the trickiest bit of the conversion, as our plan is to use LetsEncrypt for opendev.org names. Thankfully, we've also sorted out a plan [0] for managing openstack.org names with LetsEncrypt certs.

For the config management updates we've been converting deployments of services from Puppet to Ansible, and in many cases having Ansible drive docker-compose to do the actual deployments. This process typically starts by determining whether we can use an upstream image or not. If not, we add a Dockerfile and corresponding image build jobs to the opendev/system-config repo. Then we can add jobs to test deployment of those containers and finally deploy to production via this method. The great thing about this process is we can fully test the deployment easily, and there are many examples of that in system-config now (see Gitea).

Whether it makes more sense to convert a service to OpenDev or update its config management first likely depends on how difficult it is to set up SSL/TLS for multiple domains. My hunch is that for most services doing the config management update with the SSL/TLS needs in mind is likely easiest.

If you'd like a concrete place to start, I would suggest taking a service like ethercalc or etherpad, updating its config management to Ansible + docker-compose, and implementing test jobs as others have; then, when all that is happy, you can work with an infra root to replace our deployment of these services in production. Then, if we haven't already done it in conjunction with the config management updates, we can update the SSL/TLS setup and convert it to an OpenDev service.

Another, slightly related OpenDev topic is to start implementing tooling so that top-level orgs can manage their repositories directly. At the PTG we discussed doing this via a meta project per repo that determines who is allowed to make those changes, then implementing those updates via Ansible of some sort once the appropriate approvers have ACKed the change. This likely needs a bit of design work just to be sure we are all in agreement on the plan. A lightweight spec would be helpful.

[0] https://opendev.org/opendev/infra-specs/src/branch/master/specs/retire-static.rst#opendev-infrastructure-migration

Hope this helps.
Feel free to dig in or ask questions and I'll do my best to expand on this topic.

Clark
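As a rough illustration of the Ansible + docker-compose pattern Clark describes, a deployment might pair a compose file with a task that applies it. The service name, image, and paths below are purely illustrative and do not reflect the actual opendev/system-config layout:

    # docker-compose.yaml that Ansible writes out for the service
    version: '2'
    services:
      etherpad:
        image: docker.io/opendevorg/etherpad:latest
        network_mode: host
        restart: always
        volumes:
          - /var/etherpad/db:/var/etherpad/db

    # Ansible task that brings the containers up
    - name: Run docker-compose up
      shell:
        cmd: docker-compose up -d
        chdir: /etc/etherpad-docker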
Re: [OpenStack-Infra] helping openstack-infra
On 2019-11-25 09:48:20 -0800 (-0800), Clark Boylan wrote:
[...]
> For the config management updates we've been converting
> deployments of services from puppet to ansible
[...]

It also merits pointing out that this is in part driven by our inability to easily apply our existing Puppet manifests with any of the versions of Puppet available for newer operating systems like Ubuntu 18.04 LTS, so if there is a desire to do something which needs a newer operating system than the (rapidly aging) Ubuntu 16.04 LTS, replacing the Puppet automation with Ansible is a necessary prerequisite.

--
Jeremy Stanley
[OpenStack-Infra] Meeting Agenda for November 26, 2019
We will meet in #openstack-meeting at 19:00 UTC on November 26 with this agenda:

== Agenda for next meeting ==

* Announcements
** Holiday week for those in the USA
* Actions from last meeting
* Specs approval
* Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.)
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management]
*** topic:update-cfg-mgmt
*** Zuul as CD engine
** OpenDev
*** Possible gitea/go-git bug in current version of gitea we are running https://storyboard.openstack.org/#!/story/2006849
* General topics
** Trusty Upgrade Progress (clarkb 20191126)
*** Wiki updates
** static.openstack.org (ianw, corvus, mnaser, fungi 20191126)
*** Infra-root needs to create AFS volumes.
** Installing tox with py3 in base images runs some envs with py3 now - update (ianw 20191125)
*** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010957.html
** Submariner on opendev.org (dgroisma, mkolesni)
** dib/nodepool container (ianw 20191125)
*** ianw to do writeup before
** Discussion on retiring services that are largely unmaintained or unused (clarkb 20191126)
*** Want to start this discussion as services like Ask are barely on life support and we should probably clean them up.
*** https://etherpad.openstack.org/infra-service-list
* Open discussion
[OpenStack-Infra] Creating OpenDev control-plane docker images and naming
Hello,

I'm trying to get us to a point where we can use nodepool container images in production, particularly because I want to use updated tools available in later distributions than our current Xenial builders [1].

We have hit the hardest problem: naming :)

To build a speculative nodepool-builder container image that is suitable for a CI job (the prerequisite for production), we need to somehow layer openstacksdk, diskimage-builder and finally nodepool itself into one image for testing. [2]

These all live in different namespaces, and the links between them are not always clear. Maybe a builder doesn't need diskimage-builder if images come from elsewhere. Maybe a launcher doesn't need openstacksdk if it's talking to some other cloud. This becomes weird when the zuul/nodepool-builder image depends on opendev/python-base but also openstack/diskimage-builder and openstack/openstacksdk. You've got three different namespaces crossing with no clear indication of what is supposed to work together.

I feel like we've been (or at least I have been) thinking that each project will have *a* Dockerfile that produces some canonical image. I think I've come to the conclusion this is infeasible. There can't be a single container that suits everyone, and indeed this isn't the Zen of containers anyway.

What I would propose is that projects do *not* have a single, top-level Dockerfile, but only (potentially many) specifically name-spaced versions. So for example, everything in the opendev/ namespace will be expected to build from opendev/python-base. Even though dib, openstacksdk and zuul come from different source-repo namespaces, it will make sense to have:

  opendev/python-base
   +-> opendev/openstacksdk
   +-> opendev/diskimage-builder
   +-> opendev/nodepool-builder

because these containers are expected to work together as the opendev control plane containers. Since opendev/nodepool-builder is defined as an image that is expected to make RAX-compatible, OpenStack-uploadable images, it makes logical sense for it to bundle the kitchen sink.

I would expect that nodepool would also have a Dockerfile.zuul to create images in the zuul/ namespace as the "reference" implementation. Maybe that looks a lot like Dockerfile.opendev -- but then again maybe it makes different choices and does stuff like Windows support etc. that the opendev ecosystem will not be interested in. You can still build and test these images just the same; we'll simply know they're targeted at doing something different.

As an example:

  https://review.opendev.org/696015 - create opendev/openstacksdk image
  https://review.opendev.org/693971 - create opendev/diskimage-builder

(a nodepool change will follow, but it's a bit harder as it's cross-tenant, so projects need to be imported).

Perhaps codifying that there's no such thing as *a* Dockerfile, and possibly rules about what happens in the opendev/ namespace, is spec worthy; I'm not sure.

I hope this makes some sense! Otherwise, I'd be interested in any and all ideas of how we basically convert the nodepool-functional-openstack-base job to containers (that means: bring up a devstack, and test nodepool, dib & openstacksdk with full Depends-On: support to make sure it can build, upload and boot). I consider that a prerequisite before we start rolling anything out in production.

-i

[1] I know we have ideas to work around the limitations of using host tools to build images, but one thing at a time! :)

[2] I started looking at installing these together from a Dockerfile in system-config.
The problem is that you have a "build context", basically the directory the Dockerfile is in and everything under it; you can't reference anything outside this. That does not play well with Zuul, which has checked out the code for dib, openstacksdk & nodepool into three different sibling directories. So to speculatively build them together, you have to start copying Zuul checkouts of code underneath your system-config Dockerfile, which is crazy. It doesn't use any of the speculative build registry stuff and just feels wrong, because you're not building small parts on top of each other, as Docker is designed to do. I still don't really know how it would work across all the projects for testing either. https://review.opendev.org/696000
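To make the proposed opendev/ namespace layering concrete, the Dockerfiles could look roughly like the following. This is a sketch only; the install steps, copy paths and the idea of chaining nodepool-builder on top of the dib image are assumptions for illustration, not the contents of the reviews linked above:

    # Dockerfile.opendev in openstack/diskimage-builder,
    # producing the opendev/diskimage-builder image
    FROM opendev/python-base
    COPY . /tmp/src/diskimage-builder
    RUN pip install /tmp/src/diskimage-builder

    # Dockerfile.opendev in zuul/nodepool, producing opendev/nodepool-builder;
    # it layers on the dib image and bundles the rest of the kitchen sink
    FROM opendev/diskimage-builder
    COPY . /tmp/src/nodepool
    RUN pip install openstacksdk /tmp/src/nodepool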