[rdo-dev] Maintenance on dlrn-db.rdoproject.org
Hi,

We are planning to run a patch maintenance on dlrn-db.rdoproject.org on
Tuesday, November 21 at 9:00 UTC. The maintenance is expected to last for
approximately 1 hour.

During this maintenance, no new packages will be built and any call to the
DLRN API will fail.

Please do not hesitate to contact us if you have any questions or concerns.

Regards,
Javier

___
dev mailing list
dev@lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/dev
To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [rdo-users] Maintenance on dlrn-db.rdoproject.org
> Hi,
>
> We are planning to run a patch maintenance on dlrn-db.rdoproject.org on
> Tuesday, November 21 at 9:00 UTC. The maintenance is expected to last for
> approximately 1 hour.
>
> During this maintenance, no new packages will be built and any call to the
> DLRN API will fail.

Hi,

The maintenance has been completed, and all DLRN-related systems should work
normally. Please do not hesitate to contact us if you find any issue.

Regards,
Javier
[rdo-dev] [Meeting] RDO meeting (2017-11-22) minutes
== #rdo: RDO meeting - 2017-11-22 ==

Meeting started by jpena at 15:00:21 UTC. The full logs are available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2017_11_22/2017/rdo_meeting___2017_11_22.2017-11-22-15.00.log.html

Meeting summary
---
* LINK: https://etherpad.openstack.org/p/RDO-Meeting (jpena, 15:00:43)
* roll call (jpena, 15:00:47)
* Queens Test day feedback (jpena, 15:04:19)
  * ACTION: number80 take lead over next test days (dec 14, 15) (number80, 15:19:13)
  * ACTION: Rich will talk with the users list about upcoming test day and start building a list of test scenarios. (rbowen, 15:19:41)
  * ACTION: Rich to update test day pages accordingly (rbowen, 15:19:49)
* Upstream LTS releases discussion <== follow-up (jpena, 15:21:39)
  * AGREED: keep DLRN repo running + stable -testing repo to provide deps (and allow us to update them if needed) (number80, 15:28:22)
  * AGREED: keep DLRN repo running + stable -testing repo to provide deps (and allow us to update them if needed) (number80, 15:28:53)
* Keeping infra systems up to date: scheduled maintenance windows? (jpena, 15:37:59)
  * ACTION: jpena to propose planned maintenance windows (jpena, 15:44:30)
* Hangout/Interviews - https://lists.rdoproject.org/pipermail/dev/2017-November/008393.html - please let Rich know if you want to participate. (jpena, 15:46:01)
* Quarterly virtual RDO meetup? See https://lists.rdoproject.org/pipermail/dev/2017-November/008394.html (jpena, 15:48:59)
  * LINK: https://github.com/keithresar/ansible-minneapolis-meetup-topics/issues (rbowen, 15:53:37)
* IaaS FOSDEM Devroom, CFP closes next Friday: http://lists.ovirt.org/pipermail/users/2017-November/085073.html (jpena, 15:55:48)
  * LINK: https://seven.centos.org/2017/10/feb-2-2018-centos-dojo-at-fosdem/ (rbowen, 15:56:59)
* chair for the next meeting (jpena, 15:59:20)
  * ACTION: amoralej to chair the next meeting (jpena, 15:59:59)

Meeting ended at 16:00:22 UTC.
Action items, by person
---
* amoralej
  * amoralej to chair the next meeting
* jpena
  * jpena to propose planned maintenance windows
* number80
  * number80 take lead over next test days (dec 14, 15)
* **UNASSIGNED**
  * Rich will talk with the users list about upcoming test day and start building a list of test scenarios.
  * Rich to update test day pages accordingly

People present (lines said)
---
* dmsimard (62)
* number80 (62)
* rbowen (48)
* jpena (42)
* amoralej (34)
* openstack (8)
* jruzicka (5)
* rdogerrit (1)
* weshay (1)
* adarazs|ruck (1)
* strigazi (1)

Generated by `MeetBot`_ 0.1.4
[rdo-dev] [infra] Maintenance on images.rdoproject.org
Hi all,

We are planning to run a patch maintenance on images.rdoproject.org on
Tuesday, November 28 at 9:00 UTC. The maintenance is expected to last for
approximately 1 hour.

During this maintenance, any CI job fetching or uploading images may fail.

Please do not hesitate to contact us if you have any questions or concerns.

Regards,
Javier
Re: [rdo-dev] [infra] Maintenance on images.rdoproject.org
Hi all,

The maintenance has finished. If you find any issue, please do not hesitate
to contact us.

Regards,
Javier

- Original Message -
> Hi all,
>
> We are planning to run a patch maintenance on images.rdoproject.org on
> Tuesday, November 28 at 9:00 UTC. The maintenance is expected to last for
> approximately 1 hour.
>
> During this maintenance, any CI job fetching or uploading images may fail.
>
> Please do not hesitate to contact us if you have any questions or concerns.
>
> Regards,
> Javier
Re: [rdo-dev] [rdo-users] [all] Revisiting RDO Technical Definition of Done
- Original Message -
> Hi all,
>
> We as a community last discussed the RDO definition of done more than a
> year ago, and it was documented[1].
>
> In the meantime we have had multiple changes in the RDO promotion
> process. Most significant is that we no longer run all the CI promotion
> jobs in a single Jenkins pipeline; instead, there is now an increasing
> number of periodic Zuul jobs in review.rdoproject.org reporting to the
> DLRN API database.
> Promotion is performed asynchronously when all the required jobs report
> success.
>
> At the same time, TripleO, as the deployment project with the most
> coverage in the promotion CI, has moved to be completely containerized
> in Queens.
> While RDO does provide a container registry which is used with RDO
> Trunk, there aren't currently plans to provide containers built from
> the stable RPM builds, as discussed on this list [2] around Pike GA.
> Even if we do all the work listed in [2], the problem remains that
> containers are currently installer-specific and we cannot realistically
> provide a separate set of containers for each of TripleO, Kolla, OSA...
>
> The proposal would be to redefine the DoD as follows:
> - RDO GA release delivers RPM packages via CentOS Cloud SIG repos,
>   built from pristine upstream source tarballs
> - CI promotion GA criteria is changed from the Jenkins pipeline to a
>   list of jobs running with RPM packages directly; the initial set would
>   be all weirdo jobs running in [3]
> - TripleO jobs would not be part of the RDO GA criteria, since TripleO
>   now requires containers which RDO will not ship. TripleO promotion CI
>   will continue running with containers built with RDO Trunk packages.

Would this mean that GA criteria would be to pass Packstack +
puppet-openstack-integration jobs?

Regards,
Javier

> I'm adding this topic on the agenda for the RDO meeting today. I won't
> be able to join, but we need to get that discussion going so we have an
> updated DoD ready for Queens GA.
>
> Cheers,
> Alan
>
> [1] https://www.rdoproject.org/blog/2016/05/technical-definition-of-done/
> [2] https://www.redhat.com/archives/rdo-list/2017-August/msg00069.html
> [3] https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/
[rdo-dev] [infra] Maintenance on master.monitoring.rdoproject.org
Hi all,

We are planning to run a patch maintenance on master.monitoring.rdoproject.org
on Thursday, November 30 at 9:00 UTC. The maintenance is expected to last for
approximately 1 hour.

No direct impact is expected, other than the lack of monitoring (or a spammy
rdobot in the worst case).

Please do not hesitate to contact us if you have any questions or concerns.

Regards,
Javier
Re: [rdo-dev] [rdo-users] [infra] Maintenance on master.monitoring.rdoproject.org
> Hi all,
>
> We are planning to run a patch maintenance on
> master.monitoring.rdoproject.org on Thursday, November 30 at 9:00 UTC. The
> maintenance is expected to last for approximately 1 hour.
>
> No direct impact is expected, other than the lack of monitoring (or a spammy
> rdobot in a worst case).

Hi all,

The maintenance has been completed. Please contact us if you find any issue.

Regards,
Javier
[rdo-dev] [infra][outage] Nodepool outage on review.rdoproject.org, December 2
Hi all,

We had another nodepool outage this morning. Around 9:00 UTC, amoralej
noticed that no new jobs were being processed. He restarted nodepool, and I
helped him later with some stale node cleanup. Nodepool started creating VMs
successfully around 10:00 UTC.

On a first look at the logs, we see no new messages after 7:30 (not even
DEBUG logs), but I was unable to run more troubleshooting steps because the
service was already restarted.

We will go through the logs on Monday to investigate what happened during
the outage.

Regards,
Javier
Re: [rdo-dev] [rhos-dev] [infra][outage] Nodepool outage on review.rdoproject.org, December 2
- Original Message -
> On Sat, Dec 02, 2017 at 01:57:08PM +0100, Alfredo Moralejo Alonso wrote:
> > On Sat, Dec 2, 2017 at 11:56 AM, Javier Pena wrote:
> > > Hi all,
> > >
> > > We had another nodepool outage this morning. Around 9:00 UTC, amoralej
> > > noticed that no new jobs were being processed. He restarted nodepool,
> > > and I helped him later with some stale node cleanup. Nodepool started
> > > creating VMs successfully around 10:00 UTC.
> > >
> > > On a first look at the logs, we see no new messages after 7:30 (not
> > > even DEBUG logs), but I was unable to run more troubleshooting steps
> > > because the service was already restarted.
> >
> > In case it helps, I could run successfully both "nodepool list" and
> > "nodepool delete --now" (for a couple of instances in delete status)
> > before restarting nodepool. However, nothing appeared in the logs and no
> > instances were created for jobs in the queue, so I restarted
> > nodepool-launcher (my understanding was that it fixed similar situations
> > in the past) before Javier started working on it.
> >
> > > We will go through the logs on Monday to investigate what happened
> > > during the outage.
> > >
> > > Regards,
> > > Javier
>
> Please reach out to me the next time you restart it; something is seriously
> wrong if we have to keep restarting nodepool every few days. At this rate,
> I would even leave nodepool-launcher in the bad state until we inspect it.

Hi Paul,

This happened on a Saturday morning, so I did not expect you to be around.
Had it been on a working day, of course I would have pinged you.

Leaving nodepool-launcher in a bad state for the whole weekend would mean
that no jobs would be running at all, including promotion jobs. This is
usually not acceptable, but I'll do it if everyone agrees it is ok to wait
until Monday.

Regards,
Javier

> Thanks,
> PB
Re: [rdo-dev] [rhos-dev] [infra][outage] Nodepool outage on review.rdoproject.org, December 2
- Original Message -
> On December 3, 2017 9:27 pm, Paul Belanger wrote:
> [snip]
> > Please reach out to me the next time you restart it; something is
> > seriously wrong if we have to keep restarting nodepool every few days.
> > At this rate, I would even leave nodepool-launcher in the bad state
> > until we inspect it.
> >
> > Thanks,
> > PB
>
> Hello,
>
> nodepoold was stuck again. Before restarting it, I dumped the threads'
> stack traces, and it seems like 8 threads were trying to acquire a single
> lock (futex=0xe41de0):
> https://review.rdoproject.org/paste/show/9VnzowfzBogKG4Gw0Kes/
>
> This makes the main loop stuck at
> http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/nodepool.py#n1281
>
> I'm not entirely sure what caused this deadlock; the other threads
> involved are quite complex:
> * kazoo zk_loop
> * zmq received
> * apscheduler mainloop
> * periodicCheck paramiko client connect
> * paramiko transport run
> * nodepool webapp handle request
>
> Next time, before restarting the process, it would be good to know what
> thread is actually holding the lock, using (gdb) py-print, as explained
> here:
> https://stackoverflow.com/questions/42169768/debug-pythread-acquire-lock-deadlock/42256864#42256864
>
> Paul: any other debug instructions would be appreciated.

Hello,

As a follow-up: the Zuul queue for rdoinfo, DLRN-rpmbuild and other jobs
using the rdo-centos-7/rdo-centos-7-ssd nodes was moving very slowly. After
checking, there were multiple nodes seen by nodepool as "ready", but those
nodes were not in Jenkins. For example:

+-------+-----------+------+--------------+-----+-------+-------------+---------+
| ID    | Provider  | AZ   | Label        | ... | State | Age         | Comment |
+-------+-----------+------+--------------+-----+-------+-------------+---------+
| 62045 | rdo-cloud | None | rdo-centos-7 | ... | ready | 01:10:24:24 | None    |
| 62047 | rdo-cloud | None | rdo-centos-7 | ... | ready | 01:10:24:19 | None    |

The queue was only moving when there were more pending requests than nodes
in this state, since that is when nodepool tries to build new nodes.

I have manually removed them to allow the reviews to move on. This is
already documented in the etherpad at
https://review.rdoproject.org/etherpad/p/nodepool-infra-debugging.

Regards,
Javier

> Regards,
> -Tristan
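As an illustration of the kind of check involved, stale "ready" nodes like the ones above could be spotted by filtering "nodepool list" output. The helper below is a sketch, not part of the actual cleanup procedure: it assumes the column layout shown in the table above (ID in the second pipe-delimited field, State in the seventh, Age in the eighth), and the one-day threshold is arbitrary.

```shell
# Hypothetical helper: read "nodepool list" output on stdin and print the
# IDs of nodes that have been "ready" for a day or more, so they can be
# compared against Jenkins and removed with "nodepool delete --now <id>".
stale_ready_nodes() {
  awk -F'|' '{
    gsub(/ /, "", $2); gsub(/ /, "", $7); gsub(/ /, "", $8)
    split($8, age, ":")               # Age column format: days:hours:mins:secs
    if ($7 == "ready" && age[1] >= 1) print $2
  }'
}
```

Header and separator rows fall through harmlessly, since their seventh field is never exactly "ready".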
Re: [rdo-dev] python-watcherclient is not packaged?
> Hi RDO developers,
>
> After installing the watcher Ocata version via the install document[1],
> I tried to install python-watcherclient.
>
> But the package was not found. I found other OpenStack project clients,
> such as neutron.
>
> Is there any special reason not to package python-watcherclient?

Hi Hidekazu,

There was an open review to package python-watcherclient [2]; however, it
has stalled over the last months and we have had no new activity from the
proposed maintainer. We will be happy to assist anyone who wants to take
over the review.

[2] - https://bugzilla.redhat.com/show_bug.cgi?id=1350974

Regards,
Javier

> [1] https://docs.openstack.org/watcher/latest/install/install-rdo.html
>
> Regards,
> Hidekazu Nakamura
[rdo-dev] [outage][infra] RDO Cloud upgrade
Hi all,

The RDO Cloud is being upgraded. As a result, we can expect outages in
multiple RDO Infra services until the upgrade is finished. For example,
nodepool is currently failing to create new VMs, so CI jobs are being
delayed.

We are in contact with the RDO Cloud team, and will work with them to make
sure all services are up and running after the upgrade.

Regards,
Javier
Re: [rdo-dev] [outage][infra] RDO Cloud upgrade
> Hi all,
>
> The RDO Cloud is being upgraded. As a result, we can expect outages in
> multiple RDO Infra services to happen until the upgrade is finished. For
> example, currently nodepool is failing to create new VMs, so CI jobs are
> being delayed.

Hi all,

The RDO Cloud upgrade is still ongoing. It has finished its first milestone
(the minor upgrade), and the RDO Cloud team is currently rebooting compute
nodes, so some VMs being used for CI operations may fail. Later today, the
major upgrade will be performed, so we can expect disruptions.

We will continue to work with the team to make sure everything runs as
smoothly as possible.

Regards,
Javier

> We are in contact with the RDO Cloud team, and will work with them to make
> sure all services are up and running after the upgrade.
>
> Regards,
> Javier
[rdo-dev] [outage][infra] RDO Cloud upgrade
Hi,

The RDO Cloud is now going through its major upgrade. This may also impact
our CI jobs and most of our infrastructure services; please stay tuned for
updates.

As a preventive measure, we have stopped builds on the RDO Trunk server
(DLRN). We will work with the RDO Cloud team to ensure services are
available as soon as possible.

Regards,
Javier
Re: [rdo-dev] [outage][infra] RDO Cloud upgrade
> Hi,
>
> The RDO Cloud is now going through its major upgrade. This may also impact
> our CI jobs and most of our infrastructure services, please stay tuned for
> updates.
>
> As a preventive measure, we have stopped builds on the RDO Trunk server
> (DLRN). We will work with the RDO Cloud to ensure services are available
> as soon as possible.

Hi all,

The RDO Cloud has had issues during its upgrade, and they are still
impacting us:

- DLRN builds were re-enabled during the weekend, but they are now stopped
  again, since a new upgrade attempt is going on.
- Nodepool got stuck during the weekend. It was investigated and the
  service restarted. However, since the current RDO Cloud status does not
  allow it to build new VMs, our CI is impacted and no new jobs are being
  executed at the moment in review.rdoproject.org.

We are monitoring the situation, and will work with the RDO Cloud team to
restore services as soon as possible.

Regards,
Javier
Re: [rdo-dev] [Octavia] Providing service VM images in RDO
- Original Message -
> On 10 January 2018 at 14:41, Assaf Muller wrote:
> > On Wed, Jan 10, 2018 at 8:10 AM, Alan Pevec wrote:
> >> Hi Bernard,
> >>
> >> I've added this as a topic for the
> >> https://etherpad.openstack.org/p/RDO-Meeting today,
> >> with some initial questions to explore.
> Thanks, I will be there in #rdo
> >>
> >> On Wed, Jan 10, 2018 at 1:50 PM, Bernard Cafarelli wrote:
> >>> * easier install/maintenance for the user, tripleo can consume the
> >>> image directly (from a package)
> >>
> >> How do you plan to distribute the image, wrapped inside RPM?
> If possible, that is my initial idea, yes (easier to fetch, and allows
> tracking available updates with yum)

If we want to deliver via RPM and build on each Octavia change, we could
try to add it to the octavia spec and build it using DLRN. Does the script
require many external resources besides diskimage-builder?

I'm not sure if that would work on CBS though, if we need to have network
connectivity during the build process.

Regards,
Javier

> >>
> >>> * ensuring up-to-date amphora images (to match the controller version,
> >>> for security updates, …)
> >>
> >> That's the most critical part of the process to figure out, how to
> >> automate image updates: just run it daily, trigger when included
> >> packages change...
> >
> > On Octavia patches (because the agent in the image might change), but
> > it's worth pointing out that other elements in the image may need
> > updating - say, on a kernel CVE fix, haproxy update, etc. Maybe on
> > Octavia changes + on a nightly basis?
> A side-effect here would be that the user will see a new image available
> while there are maybe no real changes. But determining if an image has
> real changes may not be easy (thinking out loud, comparing the installed
> package versions?)
> >>
> >>> * allow to test and confirm the amphora works properly with latest
> >>> changes, including enforcing SELinux (CentOS upstream gate runs in
> >>> permissive)
> >>
> >> And that's the second part: which CI job would test it properly?
> A bit outside of the RDO pipeline, tripleo CI would use it to test
> Octavia deployments (I will check here with relevant folks).
> In the pipeline, yes, it would be nice to test to confirm that Octavia
> parts work fine with this repository version
> >>
> >>> * use this image in tripleo CI (instead of having to build it there)
> >>
> >> Related to the up-to-date issue: how to ensure it's latest for CI?
> >>
> >>> * (future) extend this system for other system VM images
> >>
> >> Which ones?
> Currently, off the top of my head, Sahara and Trove? Though I did not
> study further whether it is possible/worth it for these projects
>
> Projects like tacker also have service VMs, but these are probably too
> specific/customizable
> >>
> >> Cheers,
> >> Alan
Re: [rdo-dev] [Octavia] Providing service VM images in RDO
- Original Message -
> On Wed, Jan 10, 2018 at 7:50 PM, Javier Pena wrote:
> > If we want to deliver via RPM and build on each Octavia change, we could
> > try to add it to the octavia spec and build it using DLRN. Does the
> > script require many external resources besides diskimage-builder?
> > I'm not sure if that would work on CBS though, if we need to have
> > network connectivity during the build process.
>
> I would be concerned with the storage required; also, we would need to
> trigger not only on Octavia distgit or upstream changes, all included
> RPMs would need to be checked for updates.
> This could be simulated with dummy commits in distgit to force e.g. a
> nightly refresh, but due to storage requirements I'd keep image builds
> outside trunk repos.

I have been doing some tests, and it looks like running diskimage-builder
from a chroot is not the best idea (it tries to mount some tmpfs and
fails), so even if we solved the storage issue it wouldn't work.

I think our best chance is to create a periodic job to rebuild the images
(daily), then upload them to images.rdoproject.org. This would be a similar
approach to what we are currently doing with containers. The only drawback
of this alternative is that we would be distributing the qcow2 images
instead of an RPM package, but we could still apply retention policies, and
add some CI jobs to test them if needed.

Cheers,
Javier

> Cheers,
> Alan
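The retention policies mentioned above could be as simple as keeping the newest N uploads. The helper below is a sketch only: the image naming pattern, directory layout and keep count are assumptions, not an agreed design.

```shell
# Hypothetical retention helper: keep only the newest N amphora images in a
# directory and delete the rest. Relies on "ls -1t" ordering files newest
# first by modification time; tail skips the first N entries.
prune_old_images() {
  local dir=$1 keep=$2
  ls -1t "$dir"/amphora-*.qcow2 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -f
}
```

For example, running prune_old_images against the upload directory with a keep count of 7 would retain a week of daily builds.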
Re: [rdo-dev] proposing rlandy as core
> Greetings,
>
> As discussed in the RDO meeting today, I am proposing rlandy as core for
> [1]. I'm not 100% sure how the access is divided up, so I'm specifying the
> repo itself. Ronelle has proven herself to be a quality reviewer and her
> submissions are also up to the standards of core.
>
> Thank you all for your time in considering Ronelle as core.

+1

> [1] https://github.com/rdo-infra/review.rdoproject.org-config
[rdo-dev] [infra][outage] Update on RDO Trunk builder on January 30th
Hello,

We are going to update the RDO Trunk builders to the latest DLRN version on
January 30th, starting at 9:00 AM UTC. The update is expected to last for
one hour, and no packages will be built during that time. RDO Trunk repos
and the DLRN API will remain accessible.

If you have any questions, please do not hesitate to contact me.

Regards,
Javier
Re: [rdo-dev] [Octavia] Providing service VM images in RDO
- Original Message -
> Bumping the thread, upstream patches are merged now [0]
>
> With current upstream code, I can generate an image from master packages
> with:
> $ wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1801-01.qcow2
> $ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 \
>     --selinux-relabel --run-command 'yum-config-manager --add-repo http://trunk.rdoproject.org/centos7/delorean-deps.repo'
> $ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 \
>     --selinux-relabel --run-command 'yum-config-manager --add-repo https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo'
> $ DIB_LOCAL_IMAGE=/home/stack/CentOS-7-x86_64-GenericCloud-1801-01.qcow2 \
>     /opt/stack/octavia/diskimage-create/diskimage-create.sh -p -i centos \
>     -o amphora-x64-haproxy-centos.qcow2
>
> This is with devstack, but it will be mostly the same when RDO packages
> are updated (just the script location, which then comes from the
> openstack-octavia-diskimage-create package)

I have run a quick test with the latest openstack-octavia-diskimage-create
package from RDO Trunk, and it works like a charm.

> So what are the next steps here? missing information, place to track
> this, item for next meeting, action items, … ?

Let's add it as an item for the next meeting, so we define a plan. My
proposal would be a daily build using a periodic job, storing images in a
new path under images.rdoproject.org.

Regards,
Javier

> [0] https://review.openstack.org/#/c/522626/
>
> On 12 January 2018 at 13:05, Bernard Cafarelli wrote:
> > On 11 January 2018 at 11:53, Javier Pena wrote:
> >> [snip]
> > I looked a bit initially into building the image directly in the spec;
> > one problem was how to pass the needed RDO packages properly to
> > diskimage-builder (as a repo, so that yum pulls them in).
> > Apart from some configuration tweaks, most of the steps sum up to yum
> > calls (system update - install haproxy, keepalived, … - install
> > openstack-octavia-amphora-agent); these need network access, or at
> > least local mirrors.
> >> [snip]
> >> I have been doing some tests, and it looks like running
> >> diskimage-builder from a chroot is not the best idea (it tries to mount
> >> some tmpfs and fails), so even if we solved the storage issue it
> >> wouldn't work.
> >> I think our best chance is to create a periodic job to rebuild the
> >> images (daily), then upload them to images.rdoproject.org. This would
> >> be a similar approach to what we are currently doing with containers.
> > That would work for "keeping other packages up to date" too
> >
> >> The only drawback of this alternative is that we would be distributing
> >> the qcow2 images instead of an RPM package, but we could still apply
> >> retention policies, and add some CI jobs to test them if needed.
> > On disk usage and retention policies, the images I build locally (with
> > CentOS) are 500-500 MB qcow2 files
>
> --
> Bernard Cafarelli
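The daily periodic job proposed in this thread could look roughly like the sketch below. This is only an illustration, not the real job definition: the dated image name, the way the script path is located, and the rsync destination are all assumptions; only the repo URLs and diskimage-create.sh flags are taken from the commands quoted above.

```shell
# Hypothetical body of the proposed daily periodic job. Defining it as a
# function; nothing runs until it is called.
build_and_publish_amphora() {
  local base=CentOS-7-x86_64-GenericCloud-1801-01.qcow2

  wget -q "https://cloud.centos.org/centos/7/images/$base"

  # Point the base image at the RDO Trunk repos, as in the commands above
  virt-customize -a "$base" --selinux-relabel --run-command \
    'yum-config-manager --add-repo http://trunk.rdoproject.org/centos7/delorean-deps.repo'
  virt-customize -a "$base" --selinux-relabel --run-command \
    'yum-config-manager --add-repo https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo'

  # Locate diskimage-create.sh from the RDO package (lookup is an assumption)
  local script
  script=$(rpm -ql openstack-octavia-diskimage-create | grep '/diskimage-create.sh$')

  DIB_LOCAL_IMAGE="$PWD/$base" "$script" -p -i centos \
    -o "amphora-x64-haproxy-centos-$(date +%Y%m%d).qcow2"

  # Publish under images.rdoproject.org (destination path is a guess)
  rsync -av amphora-x64-haproxy-centos-*.qcow2 \
    images.rdoproject.org:/var/www/html/images/octavia/
}
```

A retention step pruning older uploads and a smoke-test job on the resulting image could then be chained after this function in the job definition.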
[rdo-dev] [Meeting] RDO meeting (2018-01-31) minutes
== #rdo: RDO meeting - 2018-01-31 ==

Meeting started by jpena at 15:01:19 UTC. The full logs are available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_01_31/2018/rdo_meeting___2018_01_31.2018-01-31-15.01.log.html

Meeting summary
---
* roll call (jpena, 15:01:49)
* Plan to provide VM images for Octavia (jpena, 15:05:56)
  * AGREED: Build images for Octavia using a periodic job on review.rdoproject.org, store on images.rdoproject.org (jpena, 15:22:58)
  * ACTION: jpena to send email to dev@lists with results of Octavia image discussion (jpena, 15:23:31)
* Discuss Power CI for RDO (jpena, 15:24:16)
  * LINK: http://trunk.rdoproject.org/centos7-master/current-tripleo (amoralej, 15:40:12)
* Preparation for queens release (jpena, 15:42:22)
  * ACTION: maintainers to send final updates for specs with requirements updates (amoralej, 15:45:55)
  * we will pin non-OpenStack puppet modules for queens with the builds in next promotion (amoralej, 15:49:28)
* Chair for the next meeting (jpena, 15:53:25)
  * ACTION: amoralej to chair the next meeting (jpena, 15:53:49)
* open floor (jpena, 15:53:54)

Meeting ended at 16:00:48 UTC.

Action items, by person
---
* amoralej
  * amoralej to chair the next meeting
* jpena
  * jpena to send email to dev@lists with results of Octavia image discussion

People present (lines said)
---
* amoralej (60)
* jpena (37)
* mjturek (26)
* bcafarel (13)
* number80 (8)
* rdogerrit (6)
* openstack (6)
* EmilienM (5)
* pabelanger (4)
* ykarel (2)
* mary_grace (2)
* mnaser (1)
* tosky (1)
* chandankumar (1)

Generated by `MeetBot`_ 0.1.4
Re: [rdo-dev] [Octavia] Providing service VM images in RDO
- Original Message - > > > - Original Message - > > Bumping the thead, upstream patches are merged now [0] > > > > With current upstream code, I can generate an image from master packages > > with: > > $ wget > > https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1801-01.qcow2 > > $ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 > > --selinux-relabel --run-command 'yum-config-manager --add-repo > > http://trunk.rdoproject.org/centos7/delorean-deps.repo' > > $ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 > > --selinux-relabel --run-command 'yum-config-manager --add-repo > > https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo' > > $ DIB_LOCAL_IMAGE=/home/stack/CentOS-7-x86_64-GenericCloud-1801-01.qcow2 > > /opt/stack/octavia/diskimage-create/diskimage-create.sh -p -i centos > > -o amphora-x64-haproxy-centos.qcow2 > > > > This is with devstack, but will be mostly the same when RDO packages > > are updated (just the script location that then comes from > > openstack-octavia-diskimage-create package) > > > > I have run a quick test with the latest openstack-octavia-diskimage-create > package from RDO Trunk, and it works like a charm. > > > So what are the next steps here? missing information, place to track > > this, item for next meeting, action items, … ? > > > > Let's add it as an item for the next meeting, so we define a plan. My > proposal would be a daily build using a periodic job, storing images in a > new path under images.rdoproject.org. > Hi all, We discussed the topic at today's RDO meeting (see [1] for minutes). Consensus was reached on using a periodic job to build the images, then store them in images.rdoproject.org. This should be a first iteration of the concept, so we can provide a ready-made image for testing. If/when this image is used in gate jobs, we will have to revisit the concept and make sure we add some tests before publishing an image. Now it is time to implement it. 
All help is welcome :).

Regards,
Javier

[1] - https://lists.rdoproject.org/pipermail/dev/2018-January/008521.html

> Regards,
> Javier
>
> > [0] https://review.openstack.org/#/c/522626/
> >
> > On 12 January 2018 at 13:05, Bernard Cafarelli wrote:
> > > On 11 January 2018 at 11:53, Javier Pena wrote:
> > >> - Original Message -
> > >>> On Wed, Jan 10, 2018 at 7:50 PM, Javier Pena wrote:
> > >>> > If we want to deliver via RPM and build on each Octavia change, we
> > >>> > could try to add it to the octavia spec and build it using DLRN.
> > >>> > Does the script require many external resources besides
> > >>> > diskimage-builder?
> > >>> > I'm not sure if that would work on CBS though, if we need to have
> > >>> > network connectivity during the build process.
> > > I looked a bit initially into building the image directly in the spec; one
> > > problem was how to pass the needed RDO packages properly to
> > > diskimage-builder (as a repo, so that yum pulls them in).
> > > Apart from some configuration tweaks, most of the steps sum up to yum
> > > calls (system update - install haproxy, keepalived, … - install
> > > openstack-octavia-amphora-agent); these need network access, or at
> > > least local mirrors.
> > >>>
> > >>> I would be concerned with the storage required; also, we need to
> > >>> trigger not only on Octavia distgit or upstream changes, all included
> > >>> RPMs need to be checked for updates.
> > >>> This could be simulated with dummy commits in distgit to force e.g. a
> > >>> nightly refresh, but due to storage requirements I'd keep image builds
> > >>> outside trunk repos.
> > >>>
> > >>
> > >> I have been doing some tests, and it looks like running diskimage-builder
> > >> from a chroot is not the best idea (it tries to mount some tmpfs and
> > >> fails), so even if we solved the storage issue it wouldn't work.
> > >> I think our best chance is to create a periodic job to rebuild the images
> > >> (daily) then upload them to images.rdoproject.org. This would be a
> > >> similar approach to what we are currently doing with containers.
> > > That would work for "keeping other packages up to date" too
> >
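The image build steps quoted in this thread can be collected into one small script. This is only a sketch: the image URL, repo URLs and the diskimage-create.sh path are taken verbatim from the thread (and may be outdated), while the `run()` wrapper and the `DRY_RUN` switch are additions for illustration. By default it only prints the commands; set `DRY_RUN=0` to actually execute them (which requires wget, virt-customize from libguestfs, and the Octavia diskimage-create script).

```shell
#!/bin/sh
# Sketch of the amphora image build discussed in the thread.
# DRY_RUN defaults to 1, which only prints each command.
set -e

IMAGE=CentOS-7-x86_64-GenericCloud-1801-01.qcow2
IMAGE_URL=https://cloud.centos.org/centos/7/images/$IMAGE
DEPS_REPO=http://trunk.rdoproject.org/centos7/delorean-deps.repo
TRUNK_REPO=https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
DIB_SCRIPT=/opt/stack/octavia/diskimage-create/diskimage-create.sh

run() {
    # Print the command in dry-run mode, execute it otherwise.
    # Note: the printed form loses shell quoting; it is informational only.
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi
}

run wget "$IMAGE_URL"
# Add the RDO Trunk dependency repo and the current-passed-ci repo
# inside the image, relabelling for SELinux after each change.
run virt-customize -a "$IMAGE" --selinux-relabel \
    --run-command "yum-config-manager --add-repo $DEPS_REPO"
run virt-customize -a "$IMAGE" --selinux-relabel \
    --run-command "yum-config-manager --add-repo $TRUNK_REPO"
# Build the amphora image on top of the prepared base image.
export DIB_LOCAL_IMAGE="$PWD/$IMAGE"
run "$DIB_SCRIPT" -p -i centos -o amphora-x64-haproxy-centos.qcow2
```

As noted later in the thread, once the RDO packaging lands, only `DIB_SCRIPT` should need to change (the script then ships in the openstack-octavia-diskimage-create package).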
Re: [rdo-dev] Delaying this week's RDO test days ?
- Original Message - > Hi, > > We're currently a bit behind in terms of trunk repository promotions > for the m3 queens milestone. > We are not very confident that we can get all the issues sorted out in > time for the test day so I would like to suggest we push the test days > next week. > > Instead of February 8th/9th, we would instead run the test days on the > 15th/16th. > +1 for me, let's delay if that helps us get a promoted repository on time. Regards, Javier > Ideally we would reach a consensus before wednesday's RDO meeting at > which point it would be too late. > > Thanks, > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] Long queue in RDO SF
Hi,

I see no issues in nodepool. Looking at the current Zuul queue, we have a single job stuck for ~90 hours, queued on "gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-queens". When that happens, it's usually a configuration issue, and this is the case here: we have no definition for the queens gate job for featureset035 in https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/jobs/tripleo-upstream.yml#L979-L990.

An easy way to troubleshoot this is:
- If we find one or more jobs queued, first check https://review.rdoproject.org/jenkins/ and see if there are nodes available to Jenkins.
- If there are, check whether the stuck job appears in the list of jobs registered in Jenkins. If it is not there, we need to double-check the JJB configuration and find what is missing.

My only doubt is why this does not show up as "NOT_REGISTERED" in Zuul, as it did before. I have proposed https://review.rdoproject.org/r/12038 as a fix for this.

Regards,
Javier

- Original Message -
> FWIW no alerts during the weekend and I have been able to spawn 10+
> instances without issue.
>
> Cheers
> David Manchado
> Senior Software Engineer - SysOps Team
> Red Hat
> dmanc...@redhat.com
>
> On 11 February 2018 at 16:39, Paul Belanger wrote:
> > On Sun, Feb 11, 2018 at 12:44:52PM +0100, Haïkel Guémar wrote:
> >> On 02/11/2018 12:17 AM, Sagi Shnaidman wrote:
> >> > Hi,
> >> >
> >> > I see openstack-check has a 53-hour queue when only 1 job is queued:
> >> > https://review.rdoproject.org/zuul/
> >> >
> >> > Seems like a problem with nodepool?
> >> > > Thanks
> >> >
> >> > --
> >> > Best regards
> >> > Sagi Shnaidman
> >> >
> >> > ___
> >> > dev mailing list
> >> > dev@lists.rdoproject.org
> >> > http://lists.rdoproject.org/mailman/listinfo/dev
> >> >
> >> > To unsubscribe: dev-unsubscr...@lists.rdoproject.org
> >>
> >> Ok, it looks bad enough that a simple "nodepool list" fails with this error:
> >> os_client_config.exceptions.OpenStackConfigException: Cloud rdo-cloud was
> >> not found.
> >>
> >> Although RDO Cloud looks up, there might be an outage or incident, hence
> >> copying David Manchado.
> >>
> >> Regards,
> >> H.
> >>
> > Okay, I have to run, but this looks like a configuration issue. It is hard
> > to tell without debug logs for nodepool or zuul, but please double-check
> > your node is set up properly.
> >
> > I have to run now.
>
___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [ptg] Dublin PTG lunch?
- Original Message - > On 02/15/2018 04:40 PM, Michael Turek wrote: > > Hey all, > > > > I'll be attending the Dublin PTG and was wondering if anyone would be > > interested in a birds of a feather lunch around RDO. Specifically I'd > > like to talk about RDO CI for Power but I'm open to other topics. I'd > > like to do it either Monday or Tuesday. > > > > Anyone attending that would be interested? > > > > Thanks, > > Mike Turek > > > > ___ > > dev mailing list > > dev@lists.rdoproject.org > > http://lists.rdoproject.org/mailman/listinfo/dev > > > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > > Looks good to me. > I'd be happy to go, too. ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][outage] Zuul server restart on review.rdoproject.org
Hi all, Today at about 16:20 UTC we had to restart zuul-server on review.rdoproject.org. The RDO Cloud had a network outage, and that was affecting Zuul, which was not queueing new changes to be reviewed. All queued changes have been restored. However, if you sent a change for review between 13:30 UTC and 16:30 UTC and do not see any response from review.rdo's Zuul, please recheck. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2018-02-21) minutes
== #rdo: RDO meeting - 2018-02-21 == Meeting started by jpena at 15:00:15 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_02_21/2018/rdo_meeting___2018_02_21.2018-02-21-15.00.log.html . Meeting summary --- * roll call (jpena, 15:00:20) * Status of Cloud SIG Meeting? (jpena, 15:04:02) * ACTION: number80 post on centos-devel that current schedule for cloud SIG meeting will be confirmed until another slot is chosen (number80, 15:05:57) * PTG next week in Dublin. Will the RDO people present be doing an interview? (jpena, 15:06:58) * 28 days since promotion? What's up? (jpena, 15:14:29) * LINK: https://review.rdoproject.org/r/#/c/12558/ and https://review.rdoproject.org/r/#/c/12540/ (amoralej, 15:17:14) * open floor (jpena, 15:23:29) * LINK: https://www.openstack.org/ptg/#tab_schedule (dmsimard, 15:30:07) * LINK: https://www.openstack.org/ptg#tab_schedule (rbowen, 15:30:08) * chair for next meeting (jpena, 15:33:10) * ACTION: move next meeting to March 7 (jpena, 15:34:40) * ACTION: rbowen to chair the next meeting, on March 7 (jpena, 15:36:03) Meeting ended at 15:37:25 UTC. Action items, by person --- * number80 * number80 post on centos-devel that current schedule for cloud SIG meeting will be confirmed until another slot is chosen * rbowen * rbowen to chair the next meeting, on March 7 People present (lines said) --- * rbowen (32) * jpena (25) * dmsimard (20) * number80 (15) * amoralej (8) * openstack (6) * rlandy|rover (5) * chandankumar (3) * leanderthal (3) * mary_grace (3) * dmellado (2) * sshnaidm (2) * mjturek (2) * rasca (1) * rdobot (1) * myoung (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][outage] Gerrit restarts today on review.rdoproject.org
Hi all,

We've had to restart Gerrit on review.rdoproject.org a couple of times, due to a repository synchronization issue with GitHub. GitHub has disabled some weak cryptographic standards in SSH [1], and that broke the jsch [2] library version embedded in review.rdo's Gerrit.

As a workaround, we have updated jsch to a newer version in Gerrit's war file, and we will make sure we get a new release in Software Factory. If you find any issues, please let us know.

Regards,
Javier

[1] - https://github.com/blog/2507-weak-cryptographic-standards-removed
[2] - http://www.jcraft.com/jsch/
___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2018-04-11) minutes
== #rdo: RDO meeting - 2018-04-11 == Meeting started by jpena at 15:00:40 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_04_11/2018/rdo_meeting___2018_04_11.2018-04-11-15.00.log.html . Meeting summary --- * roll call (jpena, 15:00:52) * Don't forget: OpenStack Summit booth duty signup (jpena, 15:03:51) * LINK: https://etherpad.openstack.org/p/rdo-vancouver-summit-booth (leanderthal, 15:04:08) * LINK: https://etherpad.openstack.org/p/rdo-vancouver-summit-booth (leanderthal, 15:04:11) * Tentative test day schedule posted to https://www.rdoproject.org/testday/ - please mention serious conflicts so we can fix (jpena, 15:10:12) * open floor (jpena, 15:16:48) * LINK: https://review.rdoproject.org/r/#/c/10152/ (amoralej, 15:17:33) * ACTION: number80 to chair the next meeting (jpena, 15:26:26) * LINK: https://lists.rdoproject.org/pipermail/dev/2018-March/008632.html didn't get much traction so I wanted to bring up building Power Triple O containers again. First step still picking hardware? (mjturek, 15:29:05) Meeting ended at 15:48:35 UTC. Action items, by person --- * number80 * number80 to chair the next meeting People present (lines said) --- * dmsimard (41) * leanderthal (30) * jpena (23) * number80 (20) * mjturek (17) * rbowen (12) * openstack (11) * apevec (6) * amoralej (6) * Duck (5) * dmellado (4) * mwhahaha (3) * mhu (2) * ykarel (1) * baha (1) * jruzicka (1) * rbown (0) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] Nomination of new RDO infrastructure cores
- Original Message - > On Sat, Apr 14, 2018 at 3:07 AM, Alan Pevec wrote: > >> - Tristan Cacqueray (tristanC) > >> - Nicolas Hicher (nhicher) > >> - Fabien Boucher (fbo) > >> - Matthieu Huin (mhu) > > +1 to all :-) > I already voted in the review, but of course, +1 to all. Cheers, Javier > Thanks, > > Chandan Kumar > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][outage] Planned update on RDO Trunk builders, Tue Apr 24
Hi all,

We are planning to do some routine maintenance of the DLRN version installed in our RDO Trunk builders on April 24th. These activities will not affect the RDO Trunk repos; at most, they could cause a short delay in a package being built after a commit is merged upstream.

If you have any doubt or concern, please do not hesitate to contact me.

Regards,
Javier Peña
___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2018-05-16) minutes
== #rdo: RDO meeting - 2018-05-16 == Meeting started by jpena at 15:06:39 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_05_16/2018/rdo_meeting___2018_05_16.2018-05-16-15.06.log.html . Meeting summary --- * roll call (jpena, 15:07:03) * Demos are greatly appreciated for OpenStack Summit (jpena, 15:10:29) * python3 status update (jpena, 15:20:05) * Created fedora-stabilized repo (can be configured with https://trunk.rdoproject.org/fedora/delorean-deps.repo) (amoralej, 15:21:50) * Repository updades can be proposed as reviews to https://review.rdoproject.org/r/#/q/project:rdo-infra/fedora-stable-config (amoralej, 15:22:40) * Created a new DLRN builder for py3-enabled only packages https://trunk.rdoproject.org/fedora/ (amoralej, 15:22:50) * Creating a new image and node type in nodepool to run jobs https://review.rdoproject.org/r/#/c/13758/ (amoralej, 15:23:45) * next week's chair (jpena, 15:29:22) * ACTION: leanderthal to chair the next meeting (jpena, 15:30:52) * open floor (jpena, 15:30:57) * LINK: https://www.rdoproject.org/newsletter/ (leanderthal, 15:40:03) * ACTION: PagliaccisCloud to write something packstack related for rdoproject.org/newsletter/2018/june (leanderthal, 15:44:11) Meeting ended at 15:45:01 UTC. Action items, by person --- * leanderthal * leanderthal to chair the next meeting * PagliaccisCloud * PagliaccisCloud to write something packstack related for rdoproject.org/newsletter/2018/june People present (lines said) --- * amoralej (30) * leanderthal (28) * jpena (21) * ykarel (7) * PagliaccisCloud (4) * openstack (4) * jruzicka (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][maintenance] Routine patch update on RDO Trunk infrastructure, Tue Jun 19 @ 9:00 UTC
Dear all, We will run a routine patch update procedure on the RDO Trunk infrastructure (trunk.rdoproject.org and associated servers) tomorrow, June 19th, at 9:00 UTC. During this maintenance, expected to last for 1h, the RDO Trunk repositories will be available as usual, with a small interruption due to the required reboot. If you have any doubt, please do not hesitate to contact me. Regards, Javier Peña ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][maintenance] Routine patch update on RDO Trunk infrastructure, Tue Jun 19 @ 9:00 UTC
- Original Message - > Dear all, > > We will run a routine patch update procedure on the RDO Trunk infrastructure > (trunk.rdoproject.org and associated servers) tomorrow, June 19th, at 9:00 > UTC. > > During this maintenance, expected to last for 1h, the RDO Trunk repositories > will be available as usual, with a small interruption due to the required > reboot. The update process has finished successfully, and packages are being built again. If you find any unusual behavior, please do not hesitate to contact me. Regards, Javier > > If you have any doubt, please do not hesitate to contact me. > > Regards, > Javier Peña > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][maintenance] Routine patch update to logs.rdoproject.org and afs-mirror.rdoproject.org On Jun 26, 9:00 UTC
Dear all, We will run a routine patch update procedure on logs.rdoproject.org and afs-mirror.rdoproject.org tomorrow, June 26th, at 9:00 UTC. During this maintenance, expected to last for 1h, there might be short interruptions to jobs using those resources (any job running on review.rdoproject.org) due to the reboot. If you have any doubt, please do not hesitate to contact me. Regards, Javier Peña ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][maintenance] Routine patch update to logs.rdoproject.org and afs-mirror.rdoproject.org On Jun 26, 9:00 UTC
- Original Message - > On Mon, Jun 25, 2018 at 04:30:27AM -0400, Javier Pena wrote: > > Dear all, > > > > We will run a routine patch update procedure on logs.rdoproject.org and > > afs-mirror.rdoproject.org tomorrow, June 26th, at 9:00 UTC. > > > > During this maintenance, expected to last for 1h, there might be short > > interruptions to jobs using those resources (any job running on > > review.rdoproject.org) due to the reboot. > > > > If you have any doubt, please do not hesitate to contact me. > > > If we are expecting job failures, we should stop zuul-executors which avoids > jobs from launching. Zuul will keep enqueing job safely and start running > them > once we finish updates and start zuul-executor again. > For now, I have set the Jenkins instance in review.rdoproject.org to maintenance mode, so no new jobs are launched (we have little activity in Zuulv3 at this time). Once the currently running jobs are finished, I will proceed with the update. Javier > - Paul > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [rdo-users] [infra][maintenance] Routine patch update to logs.rdoproject.org and afs-mirror.rdoproject.org On Jun 26, 9:00 UTC
- Original Message - > > > - Original Message - > > On Mon, Jun 25, 2018 at 04:30:27AM -0400, Javier Pena wrote: > > > Dear all, > > > > > > We will run a routine patch update procedure on logs.rdoproject.org and > > > afs-mirror.rdoproject.org tomorrow, June 26th, at 9:00 UTC. > > > > > > During this maintenance, expected to last for 1h, there might be short > > > interruptions to jobs using those resources (any job running on > > > review.rdoproject.org) due to the reboot. > > > > > > If you have any doubt, please do not hesitate to contact me. > > > > > If we are expecting job failures, we should stop zuul-executors which > > avoids > > jobs from launching. Zuul will keep enqueing job safely and start running > > them > > once we finish updates and start zuul-executor again. > > > > For now, I have set the Jenkins instance in review.rdoproject.org to > maintenance mode, so no new jobs are launched (we have little activity in > Zuulv3 at this time). Once the currently running jobs are finished, I will > proceed with the update. > The maintenance is now finished. Please let me know if you find any issue. Regards, Javier > Javier > > > - Paul > > > ___ > users mailing list > us...@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/users > > To unsubscribe: users-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [meeting] RDO meeting (2018-06-27) minutes
#rdo: RDO meeting 2018-06-27 Meeting started by jpena at 15:00:59 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting_2018_06_27/2018/rdo_meeting_2018_06_27.2018-06-27-15.00.log.html . Meeting summary --- * roll call (jpena, 15:01:06) * p-o-i failures in RDO master (jpena, 15:03:06) * LINK: https://centos.logs.rdoproject.org/weirdo-generic-puppet-openstack-scenario001/5662/weirdo-project/logs/libvirt/qemu/instance-0001.txt.gz (amoralej, 15:06:09) * anything for the july newsletter? (jpena, 15:13:42) * next week's chair (jpena, 15:20:01) * ACTION: amoralej to chair the next meeting (jpena, 15:21:28) * open floor (jpena, 15:21:36) Meeting ended at 15:33:05 UTC. Action items, by person --- * amoralej * amoralej to chair the next meeting People present (lines said) --- * jpena (25) * leanderthal (17) * tosky (10) * pabelanger (9) * amoralej (9) * mwhahaha (7) * openstack (4) * dmsimard (2) * sshnaidm (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] RDO Trunk build statistics available in review.rdo's Grafana
Hi, Sometimes we want to know how many packages are processed by our RDO Trunk infrastructure, for statistical purposes or just because we want to brag about it in some presentation :). After an initial idea from dmsimard, we have extended the DLRN API and connected that to Grafana, so now we can check those stats: https://review.rdoproject.org/grafana/dashboard/db/rdo-trunk?orgId=1. Currently we only have data for the last week, but over time we will be able to have a good overview of our builds. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2018-08-08) minutes
== #rdo: RDO meeting - 2018-08-08 == Meeting started by jpena at 15:03:47 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_08_08/2018/rdo_meeting___2018_08_08.2018-08-08-15.03.log.html . Meeting summary --- * Request for publishing of centos-release-openstack-queens to altarch/cloud/ppc64le/ (jpena, 15:08:05) * ACTION: amoralej to check rdo queens release rpm status for altarch (amoralej, 15:11:01) * Status of Rocky release (jpena, 15:14:10) * https://releases.openstack.org/rocky/schedule.html (amoralej, 15:14:53) * ACTION: maintainers to send required updates for rocky to distgits asap (amoralej, 15:17:28) * status of tasks for rocky in https://trello.com/c/eYGgs0d2/668-rocky-release-preparation (amoralej, 15:18:48) * next week's chair (jpena, 15:20:25) * ACTION: leanderthal to chair the next meeting (jpena, 15:22:46) * open floor (jpena, 15:22:54) * LINK: https://review.rdoproject.org/r/#/q/project:rdo-infra/centos-release-openstack (apevec, 15:24:21) * ykarel is now RDO provenpackager. Well done!! (amoralej, 15:32:50) Meeting ended at 15:39:51 UTC. Action items, by person --- * amoralej * amoralej to check rdo queens release rpm status for altarch * **UNASSIGNED** * maintainers to send required updates for rocky to distgits asap * leanderthal to chair the next meeting People present (lines said) --- * amoralej (56) * jpena (23) * apevec (16) * baha (6) * openstack (6) * mjturek (4) * number80 (2) * rdogerrit (2) * PagliaccisCloud (1) * chandankumar (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [rdo-users] Enabling RDO Trunk Rocky repositories
> Hi,
>
> As part of the preparation tasks for Rocky GA, we have created a new RDO Trunk
> repository, centos7-rocky, which follows stable/rocky branches.
>
> This repo is now consistent and is available at
> http://trunk.rdoproject.org/centos7-rocky-bootstrap but today, we will be
> moving it to http://trunk.rdoproject.org/centos7-rocky . This means that
> any user or job using this URL will move from using master branches to
> stable/rocky. Anyone interested in continuing to use master branches should
> move to the http://trunk.rdoproject.org/centos7-master repos instead.
>
> Please let us know if you have any doubt or if this is causing any issue.
>
> Best regards,
>
> RDO team

Hi,

The repo is now available at http://trunk.rdoproject.org/centos7-rocky, and new packages are being built just like with the other builders.

Regards,
Javier Peña
___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [ppc64le] Delorean Octavia ppc64le Packages
- Original Message -
> Hey all,
>
> I've been working alongside Mike Turek to try to get Triple O containers
> building on ppc64le. We're currently hung up on the
> python-octavia-tests-tempest-golang package from the delorean repo. If
> you go take a look at the repo itself [1], you'll see that the only
> packages being published exclusively for x86 are the octavia ones.
>
> The openstack-queens repo [2] offers all of the octavia packages built
> for ppc64le, so there's definitely not an x86 requirement, but their
> version does not match that required for the Triple O containers. We
> need someone to build the octavia packages (especially
> python-octavia-tests-tempest-golang) for ppc64le in the delorean
> repository to stop versioning conflicts.

Hi Adam,

It looks like you found the only non-noarch package :). As you mentioned, there is nothing x86-specific in the package, just the fact that it is compiled using Go. Unfortunately, our RDO Trunk infra consists of x86 VMs only, so we do not have the means to build it for ppc64le.

Maybe you could set up a DLRN instance to build just the Octavia package, and then use that repo in your tests with a higher priority?

Regards,
Javier

> Thanks!
> Adam Kimball
>
> [1] - https://trunk.rdoproject.org/centos7/e4/fc/e4fcd2c5732e7d366639d07245c493329f463381_ffda0a4a
> [2] - http://mirror.centos.org/altarch/7/cloud/ppc64le/openstack-queens/
>
> ___
> dev mailing list
> dev@lists.rdoproject.org
> http://lists.rdoproject.org/mailman/listinfo/dev
>
> To unsubscribe: dev-unsubscr...@lists.rdoproject.org
>
___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
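The "higher priority" workaround suggested above can be sketched as a yum repo file using the yum-plugin-priorities mechanism, where a lower priority value wins over other repos. This is a hypothetical example: the repo id, name and baseurl are made up for illustration, and the baseurl should point at wherever the local DLRN instance actually publishes its repo (DLRN writes repos under its data/repos directory).

```ini
# Hypothetical /etc/yum.repos.d/delorean-octavia-local.repo
# Requires the yum-plugin-priorities package to be installed.
[delorean-octavia-local]
name=Local DLRN rebuild of Octavia for ppc64le
baseurl=file:///var/lib/dlrn/data/repos/current
enabled=1
gpgcheck=0
# Lower value = higher priority, so this repo takes precedence
# over the default RDO Trunk repos.
priority=1
```

With this in place, yum would prefer the locally built ppc64le Octavia packages while still pulling everything else from RDO Trunk.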
[rdo-dev] [Meeting] RDO meeting (2018-08-29) minutes
== #rdo: RDO meeting - 2018-08-29 == Meeting started by jpena at 15:01:17 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_08_29/2018/rdo_meeting___2018_08_29.2018-08-29-15.01.log.html . Meeting summary --- * roll call (jpena, 15:01:31) * Policy for abandoning packages proposal (jpena, 15:03:03) * ACTION: number80 create a thread about abandoning packages policy on the rdo-dev list (number80, 15:07:51) * Release preparations (jpena, 15:21:02) * LINK: https://www.rdoproject.org/rdo/release-checklist/ (leanderthal, 15:23:18) * LINK: https://etherpad.openstack.org/p/rdo-rocky-release (leanderthal, 15:24:26) * ACTION: all help with the release announcement (number80, 15:25:33) * We have repositories ready (all 3 platforms) and same for centos-release-openstack-rocky (leanderthal, 15:27:02) * LINK: https://releases.openstack.org/rocky/schedule.html#r-release (apevec, 15:29:28) * Relationship with rpm-packaging project (jpena, 15:40:00) * ACTION: jpena will start discussion on rdo-dev list about rpm-packaging involvement, with a deadline to decide (jpena, 15:47:52) * Patch for introducing job for ppc64le container builds (jpena, 15:49:42) * LINK: https://review.rdoproject.org/r/#/c/15978/ (baha, 15:50:24) * ACTION: CI reviewers to check https://review.rdoproject.org/r/#/c/15978/ (jpena, 15:52:18) * Hardware for building octavia-tempest-test-golang package (jpena, 15:52:49) * LINK: https://lists.rdoproject.org/pipermail/dev/2018-August/008884.html thread here for those curious (mjturek, 15:55:52) * LINK: https://github.com/rdo-packages/octavia-distgit/blob/rpm-master/openstack-octavia.spec#L144 (jpena, 15:59:25) * ACTION: jpena to extract octavia-tempest-test-golang to a separate dep (jpena, 16:04:02) * propose to shift test days to M1 / M3 instead of M2 / GA (jpena, 16:06:51) * AGREED: test on M1 / M3 instead and cancel rocky test days GA next week (jpena, 16:09:12) * open floor (jpena, 16:09:21) Meeting ended at 16:11:05 
UTC. Action items, by person --- * jpena * jpena will start discussion on rdo-dev list about rpm-packaging involvement, with a deadline to decide * jpena to extract octavia-tempest-test-golang to a separate dep * number80 * number80 create a thread about abandoning packages policy on the rdo-dev list * **UNASSIGNED** * all help with the release announcement * CI reviewers to check https://review.rdoproject.org/r/#/c/15978/ People present (lines said) --- * apevec (61) * jpena (57) * number80 (38) * leanderthal (38) * ykarel (20) * mjturek (19) * jruzicka (9) * openstack (7) * baha (5) * rdogerrit (4) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] RDO community involvement in OpenStack's rpm-packaging project
Hi all, As you're probably aware, there is an OpenStack rpm-packaging project [1], where the RDO community has been collaborating with people from other rpm-based distributions. We have been successful in reusing some tools created by that project [2][3], but we've never been able to reuse the spec file templates generated by the project, besides openstack-macros and some python 3 tests done during the Rocky cycle. As a community, we need to decide what we want our involvement in the project to be: - Only get involved in the tooling side, if we have no plans to reuse the spec templates in the future. - Try a deeper integration with the specs. - Other alternatives? Each option will carry its own consequences, e.g. if we stop contributing to the spec templates we should stop the 3rd party CI jobs and VMs that support them. Please contribute to the discussion on this thread. We will vote for a final decision during the next RDO Community meeting on September 5th. Thanks, Javier [1] - https://github.com/openstack/rpm-packaging [2] - https://github.com/openstack/pymod2pkg [3] - https://github.com/openstack/renderspec ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [meeting] RDO meeting (2018-09-12) minutes
== #rdo: RDO meeting - 2018-09-12 == Meeting started by jpena at 15:01:18 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_09_12/2018/rdo_meeting___2018_09_12.2018-09-12-15.01.log.html . Meeting summary --- * roll call (jpena, 15:01:36) * open floor (jpena, 15:05:31) * LINK: https://review.rdoproject.org/r/16040 needs some love. It is related to the ppc64le issue we were discussing lately (jpena, 15:07:42) * LINK: https://review.rdoproject.org/r/16211 is a first step towards that, I need to finish it tomorrow (jpena, 15:17:39) * LINK: https://review.rdoproject.org/r/#/settings/projects (jpena, 15:20:16) Meeting ended at 15:28:56 UTC. People present (lines said) --- * jpena (25) * Duck (13) * number80 (8) * ykarel (7) * openstack (6) * rdogerrit (4) * PagliaccisCloud (1) * chkumar|ruck (1) * weshay (1) * sshnaidm|afk (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [meeting] RDO meeting (2018-09-19) minutes
== #rdo: RDO meeting - 2018-09-19 == Meeting started by jpena at 15:02:05 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_09_19/2018/rdo_meeting___2018_09_19.2018-09-19-15.02.log.html . Meeting summary --- * roll call (jpena, 15:02:23) * Status of python3 conversion (jpena, 15:06:34) * clients and libraries need to be python3 only in fedora30 so the must be converted to single python3 subpackage too (amoralej, 15:29:46) * next week's chair (jpena, 15:31:31) * ACTION: amoralej to chair the next meeting (jpena, 15:32:05) * open floor (jpena, 15:32:08) * LINK: https://blogs.rdoproject.org/2018/03/rdo-queens-released/ (leanderthal, 15:42:34) * LINK: https://etherpad.openstack.org/p/rdo-rocky-release (leanderthal, 15:47:45) * ACTION: jpena to try and fix repoxplorer new contributor stats, else poke fbo (jpena, 15:48:11) Meeting ended at 15:54:58 UTC. Action items, by person --- * amoralej * amoralej to chair the next meeting * jpena * jpena to try and fix repoxplorer new contributor stats, else poke fbo People present (lines said) --- * amoralej (72) * jpena (29) * leanderthal (24) * ykarel (21) * openstack (8) * PagliaccisCloud (3) * rdogerrit (1) * chkumar|ruck (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] RDO Registry SSL certificate renewal on Mon, Oct 1
Hi all, We need to renew the RDO Registry SSL certificate. The process to do so is automated, but slightly more complex than usual for an Apache web server. We will renew the certificate next Monday, October 1st. The estimated downtime is very short (< 10 minutes), but please take it into account if any CI jobs fail. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2018-10-17) minutes
== #rdo: RDO meeting - 2018-10-17 == Meeting started by jpena at 15:00:26 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_10_17/2018/rdo_meeting___2018_10_17.2018-10-17-15.00.log.html . Meeting summary --- * roll call (jpena, 15:00:41) * reserve rdo-* namespaces on gitlab? (jpena, 15:04:39) * drop newton tags from rdoinfo (jpena, 15:14:48) * ACTION: jpena to coordinate newton removal (jpena, 15:17:54) * next week's chair (jpena, 15:18:17) * ACTION: number80 to chair the next meeting (jpena, 15:18:32) * open floor (jpena, 15:18:53) Meeting ended at 15:44:21 UTC. Action items, by person --- * jpena * jpena to coordinate newton removal * number80 * number80 to chair the next meeting People present (lines said) --- * jpena (38) * ykarel (15) * openstack (8) * number80 (7) * mjturek (7) * PagliaccisCloud (6) * ktenzer (4) * amoralej (2) * rdogerrit (1) * openstackgerrit (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2018-11-21) minutes
== #rdo: RDO meeting - 2018-11-21 == Meeting started by jpena at 15:01:05 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_11_21/2018/rdo_meeting___2018_11_21.2018-11-21-15.01.log.html . Meeting summary --- * roll call (jpena, 15:01:24) * how can I move this forward? https://review.rdoproject.org/r/#/c/17208/ (jpena, 15:03:49) * LINK: https://github.com/rdo-packages/keystone-distgit/blob/fa748c68fcf17348ea24a2d5ae44dfa5b2eefc83/openstack-keystone.spec#L17 (moguimar, 15:07:35) * LINK: https://github.com/rdo-packages/keystone-distgit/blob/rocky-rdo/openstack-keystone.spec#L17 (amoralej, 15:07:52) * open floor (jpena, 15:15:11) * ACTION: ykarel to chair the next meeting (jpena, 15:18:22) Meeting ended at 15:29:22 UTC. Action items, by person --- * ykarel * ykarel to chair the next meeting People present (lines said) --- * moguimar (26) * jpena (22) * amoralej (7) * openstack (7) * PagliaccisCloud (6) * chandankumar (6) * EmilienM (3) * ykarel (2) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra] Planned DLRN update for RDO Trunk builders on December 11
Hi, We are going to update the DLRN code used by RDO Trunk builders on December 11, to fix some recurring issues. The update process should be transparent; at most, you may notice a delay in new packages being built. If you have any questions or concerns, please do not hesitate to contact me. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra] Planned DLRN update for RDO Trunk builders on December 11
- Original Message - > Hi, > > We are going to update the DLRN code used by RDO Trunk builders on December > 11, to fix some recurring issues. The update process should be transparent, > and you could only notice a delay in new packages being built. > > If you have any questions or concerns, please do not hesitate to contact me. > Hi, The update is now completed, and all builders are working as expected. If you find any issue, please contact me. Cheers, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2019-01-23) minutes
== #rdo: RDO meeting - 2019-01-23 == Meeting started by jpena at 15:03:00 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_01_23/2019/rdo_meeting___2019_01_23.2019-01-23-15.03.log.html . Meeting summary --- * roll call (jpena, 15:03:33) * Updates on removal of python2-* from Fedora 30 by EOM https://etherpad.openstack.org/p/fedora-openstack-packages (jpena, 15:07:16) * LINK: https://etherpad.openstack.org/p/fedora_py2_removal_tickets is the list of bugzilla's to triage (ykarel, 15:11:08) * LINK: https://etherpad.openstack.org/p/fedora-openstack-packages list of packages to manage (jpena, 15:12:42) * LINK: https://etherpad.openstack.org/p/fedora_py2_removal_tickets is list of bz to triage (jpena, 15:12:51) * ACTION: everyone who got an email from apevec, assign their packages to openstack-sig in Fedora (jpena, 15:13:28) * sync up on haproxy 1.8 in stein (jpena, 15:13:39) * ACTION: dciabrin to coordinate with number80/amoralej for having a build of haproxy (jpena, 15:21:57) * Please sign up for booth duty at DevConf.cz and / or FOSDEM (jpena, 15:22:41) * chair for next week (jpena, 15:24:36) * ACTION: amoralej to chair next week (jpena, 15:26:27) * open floor (jpena, 15:26:46) Meeting ended at 15:30:52 UTC. Action items, by person --- * amoralej * dciabrin to coordinate with number80/amoralej for having a build of haproxy * amoralej to chair next week * dciabrin * dciabrin to coordinate with number80/amoralej for having a build of haproxy * openstack * everyone who got an email from apevec, assign their packages to openstack-sig in Fedora People present (lines said) --- * jpena (38) * dciabrin (14) * ykarel (10) * amoralej (10) * openstack (7) * rdogerrit (5) * PagliaccisCloud (1) * baha (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2019-03-06) minutes
== #rdo: RDO meeting - 2019-03-06 == Meeting started by jpena at 15:00:56 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_03_06/2019/rdo_meeting___2019_03_06.2019-03-06-15.00.log.html . Meeting summary --- * roll call (jpena, 15:01:10) * Status of Stein release (jpena, 15:04:43) * DLRN builders for CentOS7 and Fedora28 chasing stable/stein branches are bootstrapped (amoralej, 15:07:31) * URLs http://trunk.rdoproject.org/centos7-stein and http://trunk.rdoproject.org/fedora28-stein will be switched to new builders tomorrow (amoralej, 15:08:14) * requirements adjustments for stein ongoing in https://review.rdoproject.org/r/#/q/topic:stein-branching (amoralej, 15:09:36) * we will start branching distgits and building libraries early next week (amoralej, 15:10:35) * ACTION: maintainers send required adjustments for distgits for stein (amoralej, 15:10:53) * LINK: status for release at https://review.rdoproject.org/etherpad/p/stein-release-preparation (ykarel, 15:13:08) * open floor (jpena, 15:13:44) * ACTION: fultonj to try second loop devices for https://review.openstack.org/#/c/637196/ and pike and report if it fixes pike job xor we remove ooo+ceph tests for pike only (fultonj, 15:31:40) * AGREED: keep RDO Trunk pike but remove ceph from pike promotion criteria (jpena, 15:32:26) * ansible 2.7 in rdo for train https://review.rdoproject.org/r/#/c/18721/ (jpena, 15:34:05) * chair for next meeting (jpena, 15:41:48) * ACTION: mjturek to chair the next meeting (jpena, 15:43:31) Meeting ended at 15:53:13 UTC. 
Action items, by person --- * fultonj * fultonj to try second loop devices for https://review.openstack.org/#/c/637196/ and pike and report if it fixes pike job xor we remove ooo+ceph tests for pike only * mjturek * mjturek to chair the next meeting * openstack * fultonj to try second loop devices for https://review.openstack.org/#/c/637196/ and pike and report if it fixes pike job xor we remove ooo+ceph tests for pike only People present (lines said) --- * amoralej (47) * fultonj (37) * jpena (30) * weshay|rover (17) * ykarel (16) * mjturek (10) * openstack (7) * rdogerrit (5) * gfidente (3) * PagliaccisCloud (1) * Duck (1) * baha (1) * jschlueter (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] RDO Trunk stable/stein content will be available from today
Hi, Once the initial bootstrap for the stable/stein workers has finished in RDO Trunk, today we will switch the links and start serving them from their normal URLs: - centos7: https://trunk.rdoproject.org/centos7-stein - fedora: https://trunk.rdoproject.org/fedora28-stein If you are using any of the above links, please be aware that those will serve stable/stein content from now on. If you need to keep using the RDO Trunk repos following the master branch, make sure you use: - centos7: https://trunk.rdoproject.org/centos7-master - fedora: https://trunk.rdoproject.org/fedora Regards, Javier Peña ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra] Patch maintenance on RDO Trunk machines, Tue Mar 19 10:00 UTC
Hi all, We are going to run some patch maintenance on the RDO Trunk machines (trunk.rdoproject.org and builders) on March 19, starting at 10:00 UTC. There should not be a big disruption, just a few minutes during the trunk.rdoproject.org reboot. If you have any doubt, please do not hesitate to contact me. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org
Hi all, Over the last few weeks, mjturek and baha have been busy working on a set of periodic jobs to build TripleO images for the ppc64le arch [1]. The current missing step is publishing those images, and they are proposing to push those to the RDO Registry instance at registry.rdoproject.org, just like we do with our TripleO images. I have tried to understand the requirements, and would like to get input on the following topics: - Which namespace would these images use? Based on some logs [2] it looks like they use tripleomaster-ppc64le, will they also push the images to that namespace? - Could this create any conflicts with our current promotion pipeline? - Is registry.rdo the right place for those images? I'm not familiar with the next steps for ppc64le images after that (will it then go through a promotion pipeline?), so that might affect the decision. If we decide to upload the images to images.rdo, we'll need to do the following: - Create the tripleomaster-ppc64le namespace in registry.rdo, following a similar pattern to [3]. - Schedule a short registry downtime to increase its disk space, since it is currently near its limit. - Update the job at ci.centos to include the REGISTRY_PASSWORD environment variable with the right token (see [4]). This is missing today, and causing the job failure. Once we get input from all interested parties, we will decide on the next steps. Thanks, Javier [1] - https://ci.centos.org/job/tripleo-upstream-containers-build-master-ppc64le/ [2] - https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/422/logs/logs/000_FAILED_tripleoclient.log [3] - https://review.rdoproject.org/r/19063 [4] - https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/playbooks/tripleo-ci-periodic-base/containers-build.yaml#L12-L20 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra] Patch maintenance on RDO Trunk machines, Tue Mar 19 10:00 UTC
> Hi all, > > We are going to run some patch maintenance on the RDO Trunk machines > (trunk.rdoproject.org and builders) on March 19, starting at 10:00 UTC. > > There should not be a big disruption, just a few minutes during the > trunk.rdoproject.org reboot. > Hi, The update is now completed, and packages are being built as usual. Please contact me if you find any issue or disruption. Regards, Javier > If you have any doubt, please do not hesitate to contact me. > > Regards, > Javier > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org
- Original Message - > Hi all, > > Over the last few weeks, mjturek and baha have been busy working on a set of > periodic jobs to build TripleO images for the ppc64le arch [1]. > > The current missing step is publishing those images, and they are proposing > to push those to the RDO Registry instance at registry.rdoproject.org, just > like we do with our TripleO images. I have tried to understand the > requirements, and would like to get input on the following topics: > > - Which namespace would these images use? Based on some logs [2] it looks > like they use tripleomaster-ppc64le, will they also push the images to that > namespace? > - Could this create any conflicts with our current promotion pipeline? > - Is registry.rdo the right place for those images? I'm not familiar with the > next steps for ppc64le images after that (will it then go through a > promotion pipeline?), so that might affect. > > If we decide to upload the images to images.rdo, we'll need to do the Correction: this should read "registry.rdo" > following: > > - Create the tripleomaster-ppc64le namespace in registry.rdo, following a > similar pattern to [3]. > - Schedule a short registry downtime to increase its disk space, since it is > currently near its limit. > - Update the job at ci.centos to include the REGISTRY_PASSWORD environment > variable with the right token (see [4]). This is missing today, and causing > the job failure. > > Once we get input from all interested parties, we will decide on the next > steps. 
> > Thanks, > Javier > > > [1] - > https://ci.centos.org/job/tripleo-upstream-containers-build-master-ppc64le/ > [2] - > https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/422/logs/logs/000_FAILED_tripleoclient.log > [3] - https://review.rdoproject.org/r/19063 > [4] - > https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/playbooks/tripleo-ci-periodic-base/containers-build.yaml#L12-L20 > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org
- Original Message - > I've been working with mjturek and baha on this a bit. I've responded inline > below, but also want to clarify on the desired workflow. > > TL;DR: The desired workflow is to have ppc64le and x86_64 seamlessly > integrated and uploaded. This can be done with docker manifest list images. > > The following link explains in greater detail: > https://docs.docker.com/registry/spec/manifest-v2-2/ > > The process boils down to the following steps: > > 1) Upload an image of the first architecture (ex: image1:x86_64_01012019) > 2) Upload an image of the second architecture (ex: image1:ppc64le_01012019) > 3) Upload manifest list image of the image (ex: image1:01012019) > This is one of the details where I had my doubts. Currently, the images uploaded to the registry use the following naming convention: tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40 Where: - tripleomaster is associated to the release (we have tripleomaster, tripleostein, tripleorocky...) - centos is associated to the OS (we have centos and fedora) - 42a882962919b867c91a182b83acca6d8004096e_ee467b40 refers to the repository in trunk.rdoproject.org used to build the image (commit hash and short distro hash) If we want to go multi-arch, we need to change that tag to include the architecture, is this correct? Otherwise, we could have conflicts between the x86_64 and ppc64le pipelines trying to upload the same image. Regards, Javier > Step 3 is essentially just pushing a JSON body that has descriptors and > references to the other two images, such that when someone does a pull > request of the manifest list image, it will gather the appropriate > architecture for that image based on the host's architecture. > > -Trevor > > PS. If I've missed something important with the overall concerns here I > apologize, but thought it necessary to spell out the goal as I understand > it. 
> > > On Mar 21, 2019, at 12:28 PM, Javier Pena wrote: > > > > > > - Original Message - > >> Hi all, > >> > >> Over the last few weeks, mjturek and baha have been busy working on a set > >> of > >> periodic jobs to build TripleO images for the ppc64le arch [1]. > >> > >> The current missing step is publishing those images, and they are > >> proposing > >> to push those to the RDO Registry instance at registry.rdoproject.org, > >> just > >> like we do with our TripleO images. I have tried to understand the > >> requirements, and would like to get input on the following topics: > >> > >> - Which namespace would these images use? Based on some logs [2] it looks > >> like they use tripleomaster-ppc64le, will they also push the images to > >> that > >> namespace? > > I have no experience in namespaces inside of a registry or how that > differentiates images from one another, but the images should be pushed (in > my opinion) to the same location in which the x86 images reside. > > >> - Could this create any conflicts with our current promotion pipeline? > > This should not cause conflicts in current promotion pipeline, as the process > should be an extension to existing functionality. > > >> - Is registry.rdo the right place for those images? I'm not familiar with > >> the > >> next steps for ppc64le images after that (will it then go through a > >> promotion pipeline?), so that might affect. > > If the x86 images exist in registry.rdo, then the ppc64le (and any other > architecture image) should exist there as well. I can't think of a reason > to differentiate between architectures when the desired result is parity and > support of more architectures. > > >> > >> If we decide to upload the images to images.rdo, we'll need to do the > > > > Correction: this should read "registry.rdo" > > > >> following: > >> > >> - Create the tripleomaster-ppc64le namespace in registry.rdo, following a > >> similar pattern to [3]. 
> >> - Schedule a short registry downtime to increase its disk space, since it > >> is > >> currently near its limit. > > This is definitely necessary, given the capacity requirement will double, > give or take, to support the additional architecture. > > >> - Update the job at ci.centos to include the REGISTRY_PASSWORD environment > >> variable with the right token (see [4]). This is missing today, and > >> causing > >> the job failure. > >> > >> Once we
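The three-step workflow described above (push one image per architecture, then push a manifest list that references them) can be sketched in Python. This is a minimal illustration of the JSON body a manifest list carries, following the Docker Image Manifest V2 Schema 2 spec linked in the thread; the digests and sizes below are made-up placeholders, and the helper name is hypothetical, not part of any RDO tooling.

```python
import json

# Media types defined by the Docker Image Manifest V2, Schema 2 spec
# (https://docs.docker.com/registry/spec/manifest-v2-2/).
MANIFEST_LIST_TYPE = "application/vnd.docker.distribution.manifest.list.v2+json"
IMAGE_MANIFEST_TYPE = "application/vnd.docker.distribution.manifest.v2+json"


def build_manifest_list(arch_images):
    """Build the JSON body for step 3 (the manifest list "image").

    arch_images: list of dicts with 'digest', 'size' and 'architecture',
    one per already-pushed per-arch image (steps 1 and 2).
    """
    return {
        "schemaVersion": 2,
        "mediaType": MANIFEST_LIST_TYPE,
        "manifests": [
            {
                "mediaType": IMAGE_MANIFEST_TYPE,
                "size": img["size"],
                "digest": img["digest"],
                "platform": {"architecture": img["architecture"], "os": "linux"},
            }
            for img in arch_images
        ],
    }


# Placeholder digests/sizes, for illustration only.
manifest = build_manifest_list([
    {"digest": "sha256:aaa", "size": 7143, "architecture": "amd64"},
    {"digest": "sha256:bbb", "size": 7682, "architecture": "ppc64le"},
])
print(json.dumps(manifest, indent=2))
```

When a client pulls the manifest list, the registry serves the entry whose `platform` matches the client's architecture, which is what makes the single-URL multi-arch pull work.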
[rdo-dev] [infra] Planned maintenance in the RDO registry, 26 March at 9:00 UTC
Hi all, We need to run a planned maintenance in registry.rdoproject.org on March 26th at 9:00 UTC. This maintenance will extend the disk currently being used to host the registry images, to cope with current and future requirements. Unfortunately, it cannot be performed online, so a short downtime is required (expected < 1 hour). During the downtime, any operation accessing the registry (including periodic jobs) will fail. If you have any doubt or concern, please do not hesitate to contact me. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org
- Original Message - > On Fri, Mar 22, 2019 at 8:35 PM Javier Pena < jp...@redhat.com > wrote: > > - Original Message - > > > > I've been working with mjturek and baha on this a bit. I've responded > > > inline > > > > below, but also want to clarify on the desired workflow. > > > > > > > > TL;DR: The desired workflow is to have ppc64le and x86_64 seamlessly > > > > integrated and uploaded. This can be done with docker manifest list > > > images. > > > > > > > > The following link explains in greater detail: > > > > https://docs.docker.com/registry/spec/manifest-v2-2/ > > > > > > > > The process boils down to the following steps: > > > > > > > > 1) Upload an image of the first architecture (ex: image1:x86_64_01012019) > > > > 2) Upload an image of the second architecture (ex: > > > image1:ppc64le_01012019) > > > > 3) Upload manifest list image of the image (ex: image1:01012019) > > > > > > > This is one of the details where I had my doubts. Currently, the images > > uploaded to the registry use the following naming convention: > > > tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40 > > > Where: > > > - tripleomaster is associated to the release (we have tripleomaster, > > tripleostein, tripleorocky...) > > > - centos is associated to the OS (we have centos and fedora) > > > - 42a882962919b867c91a182b83acca6d8004096e_ee467b40 refers to the > > repository > > in trunk.rdoproject.org used to build the image (commit hash and short > > distro hash) > > > If we want to go multi-arch, we need to change that tag to include the > > architecture, is this correct? Otherwise, we could have conflicts between > > the x86_64 and ppc64le pipelines trying to upload the same image. > > Yup. The idea is that the endpoint URL > (tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40) > is a container manifest. 
Where we include the arch would be with an > additional tag: > tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40_$arch > but nothing else should change and *explicitly* do not want different orgs > per architecture. So the publish pipeline would look like: > * Each architecture builds and publishes all the containers per branch and OS > [1] all the containers and publishes a container image/layer to: > 'tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s_%(arch)s' > * Then checks to see if the manifest exists. manifest = > 'tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s' if > exists(manifest): > add_to_manifest(arch_layer=tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s_%(arch)s') > else: > create_manifest(arch_layer=tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s_%(arch)s') I have been running some tests to check how this could work, and I've found an issue. It looks like the OpenShift Registry (what we use for our RDO Registry) does not properly support manifest lists, see [2]. It actually failed for me when I tried it, while a plain Docker registry worked (using manifest-tool). Would the manifest upload be required in the RDO Registry (which is used as an intermediate step), or just in DockerHub (which is used for actual content delivery)? If it's the second case, we're still fine. Regards, Javier [2] - https://trello.com/c/4EcAIJrd/1303-5-quayregistry-add-support-for-manifest-list-rd > This shouldn't break existing consumers as docker and podman both do the > correct thing when encountering a manifest. and does mean that multi-arch > consumers can use the same URL scheme. This is how downstream currently > works > It's possible and possibly even desirable, due to resource constraints, for > the ppc64le build to be triggered only when updating current-passed-ci. > That's exactly what we discussed in Dublin. > Tony. 
> [1] for ppc64le we're starting with centos and master but over time this > would need to grow out from master to include stein, u etc etc We haven't > looked at Fedora due to using centos CI but if Fedora is going to stick > around we can work on that too. ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
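The naming scheme proposed in the thread — `tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s` for the manifest, with an extra `_%(arch)s` suffix on each per-architecture layer — can be captured in a small helper. This is only a sketch of that scheme as described above; the function name is hypothetical and not part of the actual publish pipeline.

```python
def image_ref(branch, os, build_type, container, repo, arch=None):
    """Build an image reference following the scheme discussed in the thread:

        tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s[_%(arch)s]

    Without an arch, the reference names the arch-neutral manifest (what
    consumers pull today); with an arch, it names the per-architecture
    layer pushed by each builder.
    """
    tag = repo if arch is None else "%s_%s" % (repo, arch)
    return "tripleo%s/%s-%s-%s:%s" % (branch, os, build_type, container, tag)


# Example repo tag from the thread (DLRN commit hash + short distro hash).
repo_hash = "42a882962919b867c91a182b83acca6d8004096e_ee467b40"

# Arch-neutral manifest reference, unchanged for existing consumers:
print(image_ref("master", "centos", "binary", "neutron-l3-agent", repo_hash))
# Per-arch layer the ppc64le builder would push:
print(image_ref("master", "centos", "binary", "neutron-l3-agent", repo_hash, "ppc64le"))
```

The key design point from the thread is preserved here: the arch-less reference stays exactly what x86_64 consumers already pull, so adding architectures only adds `_$arch` tags plus the manifest, rather than new orgs per architecture.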
Re: [rdo-dev] [infra] Planned maintenance in the RDO registry, 26 March at 9:00 UTC
- Original Message - > Hi all, > > We need to run a planned maintenance in registry.rdoproject.org on March 26th > at 9:00 UTC. > > This maintenance will extend the disk currently being used to host the > registry images, to cope with current and future requirements. > Unfortunately, it cannot be performed online, so a short downtime is > required (expected < 1 hour). During the downtime, any operation accessing > the registry (including periodic jobs) will fail. > Hi all, The maintenance is now completed, and the registry services seem to be back to normal. If you find any issue, please let me know. Regards, Javier > If you have any doubt or concern, please do not hesitate to contact me. > > Regards, > Javier > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2019-03-27) minutes
== #rdo: RDO meeting - 2019-03-27 == Meeting started by jpena at 15:00:24 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_03_27/2019/rdo_meeting___2019_03_27.2019-03-27-15.00.log.html . Meeting summary --- * roll call (jpena, 15:00:31) * ppc64le containers build update (jpena, 15:04:11) * Ceph Nautilus update (jpena, 15:05:10) * LINK: https://review.openstack.org/#/c/618320/1/docker/services/ceph-ansible/ceph-base.yaml (fultonj, 15:27:00) * ACTION: fmount gets new envs and continues debug of ceph issue. we pull in ovn team if necessary (fultonj, 15:32:45) * Decisions on ppc64le arch enablement for containers (jpena, 15:33:33) * TripleO meeting is 14:00 UTC Tuesday in #tripleo (mjturek, 15:59:33) * ACTION: mjturek, Vorrtex and the tripleo ci team to agree on implementation, jpena/amoralej to attend the tripleo-ci community sync next week (jpena, 15:59:40) * chair for the next meeting (jpena, 16:01:39) * ACTION: ykarel to chair next meeting (jpena, 16:02:21) * open floor (jpena, 16:02:24) Meeting ended at 16:09:17 UTC. Action items, by person --- * amoralej * mjturek, Vorrtex and the tripleo ci team to agree on implementation, jpena/amoralej to attend the tripleo-ci community sync next week * fmount * fmount gets new envs and continues debug of ceph issue. 
we pull in ovn team if necessary * jpena * mjturek, Vorrtex and the tripleo ci team to agree on implementation, jpena/amoralej to attend the tripleo-ci community sync next week * mjturek * mjturek, Vorrtex and the tripleo ci team to agree on implementation, jpena/amoralej to attend the tripleo-ci community sync next week * Vorrtex * mjturek, Vorrtex and the tripleo ci team to agree on implementation, jpena/amoralej to attend the tripleo-ci community sync next week * ykarel * ykarel to chair next meeting People present (lines said) --- * jpena (53) * amoralej (51) * fultonj (43) * fmount (30) * ykarel (22) * mjturek (21) * Vorrtex (19) * Duck (18) * openstack (10) * baha (6) * rdogerrit (2) * PagliaccisCloud (1) * egonzalez (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2019-05-08) minutes
== #rdo: RDO meeting - 2019-05-08 == Meeting started by jpena at 15:00:31 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_05_08/2019/rdo_meeting___2019_05_08.2019-05-08-15.00.log.html . Meeting summary --- * roll call (jpena, 15:00:36) * request to update to puppet 6 in RDO:- https://trello.com/c/isJkCzYZ (jpena, 15:05:29) * LINK: https://etherpad.openstack.org/p/RDO-Meeting (fultonj, 15:05:45) * puppet 6 will need to wait until centos 8 is available, can do some pre-work in Fedora (jpena, 15:06:43) * ACTION: ykarel to propose pr to update puppet in fedora (amoralej, 15:16:56) * ACTION: amoralej to update puppet in fedora stabilized repo once is built in fedora (amoralej, 15:17:19) * ceph/ansible post-update follow up (jpena, 15:18:33) * ppc64le container uploads (jpena, 15:30:30) * LINK: https://ci.centos.org/job/tripleo-upstream-containers-build-master-ppc64le/711/ ? (amoralej, 15:32:03) * LINK: https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/711/logs/logs/build.log is the most recent build log, for example (baha, 15:33:03) * LINK: https://trunk.rdoproject.org/centos7-master/deps/latest/ (ykarel, 15:37:52) * ACTION: jpena to check potential alternatives for the RDO registry (jpena, 16:15:44) * chair for the next meeting (jpena, 16:17:17) Meeting ended at 16:18:41 UTC. Action items, by person --- * amoralej * amoralej to update puppet in fedora stabilized repo once is built in fedora * jpena * jpena to check potential alternatives for the RDO registry * ykarel * ykarel to propose pr to update puppet in fedora People present (lines said) --- * amoralej (57) * jpena (52) * ykarel (32) * tonyb (19) * fultonj (18) * weshay (18) * mwhahaha (15) * baha (14) * fmount (7) * openstack (7) * Vorrtex (4) * rdogerrit (3) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] Updating and retiring OpenStack packages from Fedora repositories
- Original Message - > Hi, > We have started working on synchronizing OpenStack clients from Stein release > into Fedora rawhide official repos. The goal is not only updating it in > fedora but also removing non required openstack packages on Fedora and > automating the process as much as possible. > An initial analysis of the packages that we can remove from Fedora shows that > following ones can be safely retired: > python-ceilometermiddleware > python-keystonemiddleware > python-os-win > python-oslo-vmware > python-oslo-sphinx > python-pycadf > python-oslo-cache > python-cursive > python-castellan > python-oslo-rootwrap > python-oslo-middleware > python-oslo-policy > python-oslo-reports > python-oslo-privsep > python-taskflow > python-automaton > python-microversion-parse > python-reno > python-osprofile > python-oslo-messaging > python-oslo-service > python-oslo-concurrency > I also have doubts about if we should maintain in Fedora: > diskimage-builder I think diskimage-builder can still be quite useful, if we want to test image builds from Fedora. About the rest, I'm fine if they are not needed by any client. Regards, Javier > python-hardware > python-gnocchiclient > Any opinion about it? > I'm documenting the work related to this in > https://review.rdoproject.org/etherpad/p/fedora-clients-sync > If you have questions or think that retiring these packages from Fedora repos > may cause any problem, please let me know on #rdo freenode channel or using > RDO mailing lists. > Best regards, > Alfredo > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > To unsubscribe: dev-unsubscr...@lists.rdoproject.org ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO Meeting (2019-06-12) minutes
== #rdo: RDO meeting - 2019-06-12 == Meeting started by jpena at 15:00:29 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_06_12/2019/rdo_meeting___2019_06_12.2019-06-12-15.00.log.html . Meeting summary --- * roll call (jpena, 15:00:33) * Pike is now extended maintenance, time for EOL in RDO (jpena, 15:03:58) * ACTION: amoralej to send a mail about Pike CloudSIG repos being EOL (amoralej, 15:09:54) * tracking pike removal in https://trello.com/c/6e0bVGzK/712-pike-is-em-removal-from-rdo (amoralej, 15:13:09) * altarch container discussion (jpena, 15:14:02) * New release of python-sqlalchemy-collectd requires collectd subpackage which is included in opstools (jpena, 15:27:40) * LINK: https://review.opendev.org/#/c/659678/ (amoralej, 15:31:25) * LINK: https://review.opendev.org/#/c/659678/ is running with centos7 so we need to provide it (amoralej, 15:40:41) * ACTION: amoralej to follow-up with mrunge about the topic (amoralej, 15:45:03) * Next week's chair (jpena, 15:45:58) * ACTION: ykarel to chair the next meeting (jpena, 15:47:29) * open floor (jpena, 15:47:36) Meeting ended at 15:54:56 UTC. Action items, by person --- * amoralej * amoralej to send a mail about Pike CloudSIG repos being EOL * amoralej to follow-up with mrunge about the topic * mrunge * amoralej to follow-up with mrunge about the topic * ykarel * ykarel to chair the next meeting People present (lines said) --- * amoralej (76) * jpena (31) * mjturek (27) * ykarel (19) * mrunge (12) * zzzeek (12) * openstack (5) * baha (2) * gfidente (2) * mriosfer (1) * apevec (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][tripleo-ci] Disk space usage in logs.rdoproject.org
Hi all, For the last few days, I have been monitoring a spike in disk space utilization for logs.rdoproject.org. The current situation is: - 94% of space used, with less than 140GB out of 2TB available. - The log pruning script has been reclaiming less space than we are using for new logs during this week. - I expect the situation to improve over the weekend, but we're definitely running out of space. I have looked at a random job (https://review.opendev.org/639324, patch set 26), and found that each run is consuming 1.2 GB of disk space in logs. The worst offenders I have found are: - atop.bin.gz files (one per job, 8 jobs per recheck), ranging between 15 and 40 MB each - logs/undercloud/home/zuul/tempest/.stackviz directory on tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001 jobs, which is a virtualenv eating up 81 MB. As a temporary measure, I am reducing log retention from 21 days to 14, but we still need to reduce the rate at which we are uploading logs. Would it be possible to check the oooq-generated logs and see where we can reduce? These jobs are by far the ones consuming most space. Thanks, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
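The retention change described above boils down to a pruning pass along these lines — a sketch only, with an assumed log root and layout (one directory per job/change); the real pruning script may differ:

```shell
# Print top-level log directories older than the retention period.
# LOG_ROOT and the one-directory-per-job layout are assumptions; replace
# -print with -delete only after verifying the output looks right.
RETENTION_DAYS=${RETENTION_DAYS:-14}
LOG_ROOT=${LOG_ROOT:-/var/www/logs}
if [ -d "$LOG_ROOT" ]; then
    find "$LOG_ROOT" -mindepth 1 -maxdepth 1 -type d \
        -mtime +"$RETENTION_DAYS" -print
fi
```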
Re: [rdo-dev] [infra][tripleo-ci] Disk space usage in logs.rdoproject.org
- Original Message - > On Thu, Jun 13, 2019 at 8:22 AM Javier Pena < jp...@redhat.com > wrote: > > Hi all, > > > For the last few days, I have been monitoring a spike in disk space > > utilization for logs.rdoproject.org . The current situation is: > > > - 94% of space used, with less than 140GB out of 2TB available. > > > - The log pruning script has been reclaiming less space than we are using > > for > > new logs during this week. > > > - I expect the situation to improve over the weekend, but we're definitely > > running out of space. > > > I have looked at a random job ( https://review.opendev.org/639324 , patch > > set > > 26), and found that each run is consuming 1.2 GB of disk space in logs. The > > worst offenders I have found are: > > > - atop.bin.gz files (one per job, 8 jobs per recheck), ranging between 15 > > and > > 40 MB each > > > - logs/undercloud/home/zuul/tempest/.stackviz directory on > > tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001 jobs, which is a > > virtualenv eating up 81 MB. > > Can we sync up w/ how you are calculating these results as they do not match > our results. > I see each job consuming about 215M of space, we are close on stackviz being > 83M. Oddly I don't see atop.bin.gz in our calculations so I'll have to look > into that. I've checked it directly using du on the logserver. By 1.2 GB I meant the aggregate of the 8 jobs running for a single patchset. PS26 is currently using 2.5 GB and had one recheck. About the atop.bin.gz file: # find . 
-name atop.bin.gz -exec du -sh {} \; 16M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-queens-branch/042cb8f/logs/undercloud/var/log/atop.bin.gz 16M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-queens-branch/e4171d7/logs/undercloud/var/log/atop.bin.gz 28M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-rocky-branch/ffd4de9/logs/undercloud/var/log/atop.bin.gz 26M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-rocky-branch/34d44bf/logs/undercloud/var/log/atop.bin.gz 25M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-stein-branch/b89761d/logs/undercloud/var/log/atop.bin.gz 24M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-stein-branch/9ade834/logs/undercloud/var/log/atop.bin.gz 29M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset053/a10447d/logs/undercloud/var/log/atop.bin.gz 44M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset053/99a5f9f/logs/undercloud/var/log/atop.bin.gz 15M ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/c8a8c60/logs/subnode-2/var/log/atop.bin.gz 33M ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/c8a8c60/logs/undercloud/var/log/atop.bin.gz 16M ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/73ef532/logs/subnode-2/var/log/atop.bin.gz 33M ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/73ef532/logs/undercloud/var/log/atop.bin.gz 40M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035/109d5ae/logs/undercloud/var/log/atop.bin.gz 45M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035/c2ebeae/logs/undercloud/var/log/atop.bin.gz 39M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/7fe5bbb/logs/undercloud/var/log/atop.bin.gz 16M ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/5e6cb0f/logs/undercloud/var/log/atop.bin.gz 40M ./tripleo-ci-centos-7-ovb-3ctlr_1comp_1supp-featureset039/c6bf5ea/logs/undercloud/var/log/atop.bin.gz 40M ./tripleo-ci-centos-7-ovb-3ctlr_1comp_1supp-featureset039/6ec5ac6/logs/undercloud/var/log/atop.bin.gz Can I safely delete all .stackviz directories? 
I guess that would give us some breathing room. Regards, Javier > Each job reports the size of the logs e.g. [1] > http://logs.rdoproject.org/24/639324/26/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-stein-branch/9ade834/logs/quickstart_files/log-size.txt > > As a temporary measure, I am reducing log retention from 21 days to 14, but > > we still need to reduce the rate at which we are uploading logs. Would it > > be > > possible to check the oooq-generated logs and see where we can reduce? > > These > > jobs are by far the ones consuming most space. > > > Thanks, > > > Javier > > > ___ > > > dev mailing list > > > dev@lists.rdoproject.org > > > http://lists.rdoproject.org/mailman/listinfo/dev > > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
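The per-file `find`/`du` listing in the message above can be collapsed into a single grand total with du's `-c` option — for typical file counts the `+` form batches everything into one du invocation, so the last line is the aggregate:

```shell
# Sum the space used by all atop.bin.gz files under the current directory
# and keep only the grand-total line. Run from the log tree root.
find . -name atop.bin.gz -exec du -ch {} + | tail -n 1
```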
Re: [rdo-dev] [infra][tripleo-ci] Disk space usage in logs.rdoproject.org
- Original Message - > Hi all, > > For the last few days, I have been monitoring a spike in disk space > utilization for logs.rdoproject.org. The current situation is: > > - 94% of space used, with less than 140GB out of 2TB available. > - The log pruning script has been reclaiming less space than we are using for > new logs during this week. > - I expect the situation to improve over the weekend, but we're definitely > running out of space. > > I have looked at a random job (https://review.opendev.org/639324, patch set > 26), and found that each run is consuming 1.2 GB of disk space in logs. The > worst offenders I have found are: > > - atop.bin.gz files (one per job, 8 jobs per recheck), ranging between 15 and > 40 MB each > - logs/undercloud/home/zuul/tempest/.stackviz directory on > tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001 jobs, which is a > virtualenv eating up 81 MB. > > As a temporary measure, I am reducing log retention from 21 days to 14, but > we still need to reduce the rate at which we are uploading logs. Would it be > possible to check the oooq-generated logs and see where we can reduce? These > jobs are by far the ones consuming most space. > Two months after this e-mail, we are in the same situation. Disk I/O performance on RDO Cloud is not great, so we're close to 95% disk space usage, and old log deletion is slower than new log addition. On top of this, any attempt to clear logs more aggressively causes additional load on the server, which results in failed log uploads [1]. Could we please tackle the excessive log uploads ASAP? I see the .stackviz virtualenv directories are still being uploaded. If we don't fix this soon, we'll end up with unwanted downtime on the log server, which will affect all jobs.
Thanks, Javier [1] - https://review.rdoproject.org/zuul/builds?result=POST_FAILURE > Thanks, > Javier > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO Meeting (2019-08-21) minutes
== #rdo: RDO meeting - 2019-08-21 == Meeting started by jpena at 15:01:53 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_08_21/2019/rdo_meeting___2019_08_21.2019-08-21-15.01.log.html . Meeting summary --- * roll call (jpena, 15:02:12) * open floor (jpena, 15:05:32) * CentOS 8 is coming soon: https://wiki.centos.org/About/Building_8 (jpena, 15:08:38) * LINK: http://mirror.centos.org/centos/7/cr/x86_64/Packages/ (ykarel, 15:13:24) * CloudSIG repo for ocata and pike will be removed with CentOS 7.7 GA (amoralej, 15:15:17) * LINK: https://bugzilla.redhat.com/buglist.cgi?bug_status=__open__&bug_status=NEW&bug_status=ASSIGNED&f1=blocked&f2=assigned_to&list_id=10432013&o1=substring&o2=anywords&query_format=advanced&v1=1686977&v2=apevec%20jpena%20amoralej%20ykarel%20hguemar (amoralej, 15:28:43) * ACTION: ykarel to chair the next meeting (jpena, 15:42:37) Meeting ended at 15:42:48 UTC. Action items, by person --- * ykarel * ykarel to chair the next meeting People present (lines said) --- * amoralej (54) * ykarel (26) * jpena (24) * openstack (10) * jcapitao (2) * PagliaccisCloud (1) * zbr (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO Meeting (2019-08-28) minutes
Hi all, We have skipped this week's meeting due to lack of topics to discuss. See you on September 5th for the next meeting! Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [infra][outage] trunk.rdoproject.org down
Hi all, Due to an issue in the cloud provider, trunk.rdoproject.org has been down during the weekend. While we are still trying to recover it, we have switched to our secondary server inside RDO Cloud. Depending on the DNS propagation time, it may still take a few minutes to be visible, but normal operations will be resumed soon. We will send another email as soon as everything is back to normal. Please do not hesitate to contact me if you need additional information. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][outage] trunk.rdoproject.org down
> Hi all, > > Due to an issue in the cloud provider, trunk.rdoproject.org has been down > during the weekend. > > While we are still trying to recover it, we have switched to our secondary > server inside RDO Cloud. Depending on the DNS propagation time, it may still > take a few minutes to be visible, but normal operations will be resumed > soon. > > We will send another email as soon as everything is back to normal. Please do > not hesitate to contact me if you need additional information. > Hi, Apparently, the volumes where the RDO Trunk data was stored in trunk.rdoproject.org are not recoverable. Thus, we have set up a new trunk.rdo machine, and will follow this procedure to recover data: - We have stopped new builds on RDO Trunk to sync the data to the new machine. - Tomorrow morning, once all data has been synced, we will switch the trunk.rdo DNS entry to the new machine and resume normal operations. All repositories will remain accessible during the process, with only a brief outage expected tomorrow while the DNS is propagating. Regards, Javier > Regards, > Javier > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][outage] trunk.rdoproject.org down
> > Hi all, > > > > Due to an issue in the cloud provider, trunk.rdoproject.org has been down > > during the weekend. > > > > While we are still trying to recover it, we have switched to our secondary > > server inside RDO Cloud. Depending on the DNS propagation time, it may > > still > > take a few minutes to be visible, but normal operations will be resumed > > soon. > > > > We will send another email as soon as everything is back to normal. Please > > do > > not hesitate to contact me if you need additional information. > > > > Hi, > > Apparently, the volumes where the RDO Trunk data was stored in > trunk.rdoproject.org are not recoverable. Thus, we have set up a new > trunk.rdo machine, and will follow this procedure to recover data: > > - We have stopped new builds on RDO Trunk to sync the data to the new > machine. > - Tomorrow morning, once all data has been synced, we will switch the > trunk.rdo DNS entry to the new machine and resume normal operations. > > All repositories will remain accessible during the process, with only a brief > outage expected tomorrow while the DNS is propagating. > Hi, The DNS entries have been updated. I am currently monitoring DNS propagation, please be aware that there may be some flakiness until it is completed. For example, some Google DNS servers are currently showing the new IP, while some others are not. Once DNS propagation is stabilized, we will resume normal operations. Regards, Javier > Regards, > Javier > > > > Regards, > > Javier > > ___ > > dev mailing list > > dev@lists.rdoproject.org > > http://lists.rdoproject.org/mailman/listinfo/dev > > > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > > > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [infra][outage] trunk.rdoproject.org down
> > > > Hi all, > > > > > > Due to an issue in the cloud provider, trunk.rdoproject.org has been down > > > during the weekend. > > > > > > While we are still trying to recover it, we have switched to our > > > secondary > > > server inside RDO Cloud. Depending on the DNS propagation time, it may > > > still > > > take a few minutes to be visible, but normal operations will be resumed > > > soon. > > > > > > We will send another email as soon as everything is back to normal. > > > Please > > > do > > > not hesitate to contact me if you need additional information. > > > > > > > Hi, > > > > Apparently, the volumes where the RDO Trunk data was stored in > > trunk.rdoproject.org are not recoverable. Thus, we have set up a new > > trunk.rdo machine, and will follow this procedure to recover data: > > > > - We have stopped new builds on RDO Trunk to sync the data to the new > > machine. > > - Tomorrow morning, once all data has been synced, we will switch the > > trunk.rdo DNS entry to the new machine and resume normal operations. > > > > All repositories will remain accessible during the process, with only a > > brief > > outage expected tomorrow while the DNS is propagating. > > > > Hi, > > The DNS entries have been updated. I am currently monitoring DNS propagation, > please be aware that there may be some flakiness until it is completed. For > example, some Google DNS servers are currently showing the new IP, while > some others are not. > > Once DNS propagation is stabilized, we will resume normal operations. > Hi, Everything seems to be back to normal since this morning. If you find anything unusual, please let me know. 
Regards, Javier > Regards, > Javier > > > Regards, > > Javier > > > > > > > Regards, > > > Javier > > > ___ > > > dev mailing list > > > dev@lists.rdoproject.org > > > http://lists.rdoproject.org/mailman/listinfo/dev > > > > > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > > > > > > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] Proposal for administrators group update in review.rdoproject.org
Hi, I would like to propose the following changes to the Administrators group in review.rdoproject.org [1]: - Add Yatin Karel (ykarel) to the group. Actually, I'm surprised he was not already an administrator. - Remove Frederic Lepied and Haïkel Guemar from the group. Both have moved on to other tasks, and no longer work on the management side on a daily basis. Please reply on the list with your votes (+1/-1). I am planning to make the change next Friday. Regards, Javier [1] - https://review.rdoproject.org/r/#/admin/groups/1,members ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2019-09-11) minutes
== #rdo: RDO meeting - 2019-09-11 == Meeting started by jpena at 15:01:02 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_09_11/2019/rdo_meeting___2019_09_11.2019-09-11-15.01.log.html . Meeting summary --- * Roll call (jpena, 15:01:18) * Train release preparation update (jpena, 15:08:34) * centos7-train dlrn builder ready: https://trunk.rdoproject.org/centos7-train (ykarel, 15:10:06) * centos7-train is similar to master currently, will pick changes from stable/train and train-rdo once they are available (ykarel, 15:11:01) * non-client libraries are released upstream, Requirement sync for these completed in RDO, train-rdo branch will be created soon. (ykarel, 15:11:31) * client libraries are being released upstream this week, RDO prep started, status being tracked at https://review.rdoproject.org/etherpad/p/train-release-preparation (ykarel, 15:11:46) * patches are being proposed https://review.rdoproject.org/r/#/q/topic:train-branching, reviews/patches appreciated for train prep (ykarel, 15:12:18) * Changes to the Administrators group in review.rdo (jpena, 15:15:42) * chair for the next meeting (jpena, 15:22:47) * ACTION: ykarel to chair the next meeting (jpena, 15:23:38) * open floor (jpena, 15:23:42) Meeting ended at 15:37:47 UTC. Action items, by person --- * ykarel * ykarel to chair the next meeting People present (lines said) --- * jpena (31) * ykarel (25) * weshay (8) * jcapitao (7) * openstack (6) * openstackgerrit (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] Any ETA for getting RDO kibana back online?
> RDO Kibana, which was supposed to provide a log search experience similar > to upstream http://logstash.openstack.org, went down a long time ago. I > reported it again on the 18th on IRC but got no replies. > > https://review.rdoproject.org/analytics/app/kibana > > As ruck/rover, the ability to investigate when and how often specific errors > appeared is key. > Hi Sorin, The ELK instance has had some memory pressure errors, and Elasticsearch was getting OOM-killed. I have resized the instance to give it more RAM (16 GB instead of 8) and swap, so we hope it will be under control now. > Is there an issue tracker ticket that we can follow regarding this? -- using > IRC is not really what I would call traceable. > You can use our Taiga board to track issues: https://tree.taiga.io/project/morucci-software-factory/issues?q= Regards, Javier > Thanks > Sorin > > > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO Meeting (2019-10-09) minutes
== #rdo: RDO meeting - 2019-10-09 == Meeting started by jpena at 15:01:28 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_10_09/2019/rdo_meeting___2019_10_09.2019-10-09-15.01.log.html . Meeting summary --- * roll call (jpena, 15:01:44) * Train Release Preparation (jpena, 15:06:37) * Train release next week (ykarel, 15:07:51) * RC builds(except deployment projects) tested with weirdo jobs and live at https://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-train/ (ykarel, 15:08:07) * RC builds being promoted to train-release with https://review.rdoproject.org/r/#/c/23026/ (ykarel, 15:08:19) * py2/py3 migration done (jpena, 15:12:47) * chair for next week (jpena, 15:20:50) * ACTION: jcapitao to chair the next meeting (jpena, 15:21:52) * open floor (jpena, 15:21:56) Meeting ended at 15:26:18 UTC. Action items, by person --- * jcapitao * jcapitao to chair the next meeting People present (lines said) --- * jpena (23) * ykarel (19) * jcapitao (7) * openstack (5) * rdogerrit (3) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] Build Rdo Train in RHEL 8 on LinuxONE
- Original Message - > Hi, > > Last week I asked an RDO-related question on the CentOS-devel mailing list; you > can see the question in the link at the bottom [1]. Thanks to Alfredo for his kind > help. > > After doing some POC package builds from link [2], I still have some > questions: > > 1. We want to contribute to the RDO community and have RDO add an s390x architecture > build. As there is no CentOS s390x architecture build in the CentOS > repositories, we can only build and test RDO packages on RHEL. Is it > possible to add an RDO s390x build without a CentOS s390x architecture > build? Building CentOS might need much more effort and time. > > 2. I'm trying to build the RDO Train packages from [2] on RHEL 8 on LinuxONE, but > it seems some packages are missing, such as python-d2to1. Should I > switch to building RDO Train on RHEL 7 as a starting step, and wait until RDO > Train for CentOS 8 is ready? > We are bootstrapping the RHEL8 Train packages in https://trunk.rdoproject.org/rhel8-train . Not all packages are there yet, but it should be completed during the weekend. Regards, Javier > 3. For RDO Train on RHEL 8, can I expect Python 3 to be the default Python > interpreter for all OpenStack packages and dependency packages? > > > Thank you. > > > > [1] > https://lists.centos.org/pipermail/centos-devel/2019-October/017948.html > [2] https://trunk.rdoproject.org/rhel8-master/ > > > > -- > Shi Lin, Huang > IBM China Systems Lab, Beijing, China > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] Proposal to change RDO meeting time
Hi, Yesterday, during the weekly RDO meeting, we discussed the current schedule. During the winter time, there are collisions with the schedules of some community members, and it is also quite late for some of us. So, I'd like to propose a change in the meeting time, to start 1 hour earlier than it does now: Thursdays, 14:00 UTC. That is 15:00 CET, 19:30 IST, 9:00 EST. If the new time works for you, please let us know. Otherwise, we will keep the current meeting time. Regards, Javier ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
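The listed conversions can be double-checked with GNU date and tz database zone names (Europe/Paris for CET, Asia/Kolkata for IST, America/New_York for EST); a winter date is used since the proposal targets the winter schedule:

```shell
# Render 14:00 UTC on a winter date in each zone from the proposal.
for tz in Europe/Paris Asia/Kolkata America/New_York; do
    printf '%-17s ' "$tz"
    TZ="$tz" date -d '2019-11-27 14:00 UTC' '+%H:%M'
done
# Europe/Paris      15:00
# Asia/Kolkata      19:30
# America/New_York  09:00
```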
Re: [rdo-dev] Proposal to change RDO meeting time
> Hi, > > Yesterday, during the weekly RDO meeting, we discussed the current schedule. > During the winter time, there are collisions with the schedule of some > community members, and it is also quite late for some of us. > > So, I'd like to propose a change in the meeting time, to start 1 hour earlier > than it does now: Thursdays, 14:00 UTC time. That is 15:00 CET, 19:30 IST, > 9:00 EST. Correction: that should be Wednesdays, not Thursdays. Sorry for the confusion. Javier > > If the new time works for you, please let us know. Otherwise, we will keep > the current meeting time. > > Regards, > Javier > > ___ > dev mailing list > dev@lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscr...@lists.rdoproject.org > > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
Re: [rdo-dev] [rdo-users] Proposal to change RDO meeting time
- Original Message - > > > Hi, > > > > Yesterday, during the weekly RDO meeting, we discussed the current > > schedule. > > During the winter time, there are collisions with the schedule of some > > community members, and it is also quite late for some of us. > > > > So, I'd like to propose a change in the meeting time, to start 1 hour earlier > > than it does now: Thursdays, 14:00 UTC time. That is 15:00 CET, 19:30 IST, > > 9:00 EST. > > Correction: that should be Wednesdays, not Thursdays. Sorry for the > confusion. > Friendly reminder: we have only received three replies to the proposal. If you have an opinion on this (for or against the proposal), please reply before next Wednesday, Nov 27th. We will review the results during the RDO meeting. Regards, Javier > Javier > > > > If the new time works for you, please let us know. Otherwise, we will keep > > the current meeting time. > > > > Regards, > > Javier > > ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2019-11-27) minutes
== #rdo: RDO meeting - 2019-11-27 == Meeting started by jpena at 15:00:24 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html . Meeting summary --- * roll call (jpena, 15:00:45) * CentOS8 Updates, still no ETA for centos8 buildroot in CBS. (jpena, 15:04:09) * projects started removing support for python2 (amoralej, 15:05:02) * Started deps bootstrap in copr https://copr.fedorainfracloud.org/coprs/g/openstack-sig/centos8-deps (amoralej, 15:06:26) * deps bootstrap is using latest fedora builds in f32 and enabling automatic dependencies in python to match fedora behavior (amoralej, 15:07:17) * Meeting time update (jpena, 15:10:13) * AGREED: Meeting time moves to 14:00 UTC on Wednesdays (1 hour earlier than before) (jpena, 15:17:02) * Questions about RDO CI/Software Factory (jpena, 15:17:23) * Chair for next meeting (jpena, 15:30:28) * ACTION: leanderthal to chair the next meeting (jpena, 15:31:17) * open floor (jpena, 15:31:36) Meeting ended at 15:43:34 UTC. Action items, by person --- * leanderthal * leanderthal to chair the next meeting People present (lines said) --- * jpena (41) * leanderthal (28) * tosky (20) * amoralej (20) * openstack (6) * rdogerrit (3) * jcapitao (2) * tristanC (1) Generated by `MeetBot`_ 0.1.4 ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [tripleo] Component-based RDO Trunk for CentOS 8
Hi, During the last few months, we have been working on a component-based concept for RDO Trunk. This means that the set of packages in RDO Trunk will be split into separate logical components that can be promoted independently by the TripleO CI. Thus, an issue in one component will not necessarily stop all updates, just those related to that component. We have successfully prototyped the concept (see [1]), and we are ready to move on to the implementation stage. The upcoming CentOS8-based builder is a good opportunity to start from scratch, so we are planning to create it with components. What do we need from you? We have an initial proposal for the component split in [2], and we need your reviews to ensure we have placed the right packages in each component (of course this can change over time). Once the component list is agreed, we will proceed to merge all relevant patches ([3]) and create the new RDO Trunk builder. From a user's perspective, you should not notice any change: promoted repositories will still be delivered from the same locations (e.g. trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo). The under-the-hood changes will mostly affect the TripleO CI team. Regards, Javier [1] - https://trunk-staging.rdoproject.org/centos7/report.html [2] - https://review.rdoproject.org/r/22394 [3] - https://softwarefactory-project.io/r/#/q/status:open+topic:component-based-dlrn ___ dev mailing list dev@lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscr...@lists.rdoproject.org
[rdo-dev] [Meeting] RDO meeting (2020-01-15) minutes
== #rdo: RDO meeting - 2020-01-15 == Meeting started by jpena at 14:04:24 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_01_15/2020/rdo_meeting___2020_01_15.2020-01-15-14.04.log.html . Meeting summary --- * roll call (jpena, 14:04:39) * Removing fedora stabilized repos and jobs (jpena, 14:08:15) * Fedora Stabilize repos in https://trunk.rdoproject.org/fedora/stable-base/ will be removed (amoralej, 14:12:07) * anyone using them, please let us know (amoralej, 14:12:37) * rpm-packaging is still using f28 jobs (amoralej, 14:15:57) * DLRN builder https://trunk.rdoproject.org/fedora/report.html will be stopped and removed (amoralej, 14:18:48) * repos under https://trunk.rdoproject.org/fedora/ will be maintained as current status while rpm-packaging project moves to python3 only (amoralej, 14:19:24) * repo https://review.rdoproject.org/r/#/admin/projects/rdo-infra/fedora-stable-config will be closed (amoralej, 14:20:27) * copr project https://copr.fedorainfracloud.org/coprs/g/openstack-sig/fedora-overrides/ will be removed (amoralej, 14:21:08) * any update in deps for fedora will be done manually (amoralej, 14:21:21) * Update about CentOS 8 (jpena, 14:22:15) * Update about centos8 and cbs from centos team https://lists.centos.org/pipermail/centos-devel/2020-January/036477.html (amoralej, 14:23:14) * Puppet promotion pipeline running and promoting https://ci.centos.org/view/rdo/view/weirdo-pipelines/view/weirdo-promote-test-puppet-centos8/ (amoralej, 14:24:53) * high availability packages for 8.0 have been uploaded to https://trunk.rdoproject.org/centos8-master/deps/advvirt/8.0/ha/ (amoralej, 14:25:56) * LINK: https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-centos-8-master-containers-build-push (amoralej, 14:27:40) * chair for next meeting (jpena, 14:30:27) * ACTION: jcapitao to chair the next meeting (jpena, 14:31:43) * open floor (jpena, 14:31:52) Meeting ended at 14:41:20 UTC. 
Action items, by person --- * jcapitao * jcapitao to chair the next meeting People present (lines said) --- * amoralej (43) * jpena (28) * openstack (6) * jcapitao (3) * rdogerrit (1) * tosky (1) Generated by `MeetBot`_ 0.1.4
Re: [rdo-dev] [rdo-users] [Meeting] RDO meeting (2020-01-15) minutes
> > > For my understanding, what’s the latest RDO release which will be supported > on CentOS 7 ? > Hi Tim, RDO Train is the last release supported for CentOS 7. We're still building RDO Trunk for CentOS 7 during the Ussuri cycle, but we will stop before the Ussuri GA. Regards, Javier > Thanks > Tim
[rdo-dev] [infra][rdo-trunk] Migration of trunk-primary.rdoproject.org starting on February 4th
Hi all, We are going to migrate trunk-primary.rdoproject.org to our new cloud provider, starting on February 4th. During this process, all RDO Trunk repositories will remain available, but you may notice delays in getting new commits packaged while each worker's data is synchronized and enabled in the new environment. If you have any doubts or concerns, please do not hesitate to contact me. Regards, Javier
[rdo-dev] [Meeting] RDO meeting (2020-02-05) minutes
== #rdo: RDO meeting - 2020-02-05 == Meeting started by jpena at 14:00:48 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_02_05/2020/rdo_meeting___2020_02_05.2020-02-05-14.00.log.html . Meeting summary --- * roll call (jpena, 14:01:13) * Log uploads for ppc64le containers job is failing (jpena, 14:05:46) * Manually test uploading a container to registry.rdoproject.org (jpena, 14:19:25) * ACTION: jpena to send mjturek credentials to upload images to the RDO registry (jpena, 14:26:34) * CentOS 8 update (jpena, 14:27:54) * erlang stack blocked by https://bugs.centos.org/view.php?id=16968 (amoralej, 14:28:37) * Chair for next meeting (jpena, 14:41:03) * ACTION: jcapitao to chair the next meeting (jpena, 14:42:41) * open floor (jpena, 14:43:17) Meeting ended at 14:49:23 UTC. Action items, by person --- * jcapitao * jcapitao to chair the next meeting * jpena * jpena to send mjturek credentials to upload images to the RDO registry * mjturek * jpena to send mjturek credentials to upload images to the RDO registry People present (lines said) --- * jpena (37) * mjturek (19) * amoralej (17) * ykarel (10) * openstack (8) * baha (3) * dpawlik (3) * jcapitao (2) * tristanC (1) * ykarel|away (1) * rh-jelabarre (1) * zbr (1) * leanderthal (1) Generated by `MeetBot`_ 0.1.4
Re: [rdo-dev] Building and shipping RPMs from Ansible collections
> > On Tue, Feb 25, 2020 at 14:25 Alfredo Moralejo Alonso wrote: > > On Tue, Feb 25, 2020 at 12:23 PM Sagi Shnaidman > > wrote: > >> We need to test what we ship to customers, so we need to figure that > >>> out first for Ansible, together with the Ansible team. > >>> Has shipping on Red Hat CDN for Collections been defined by the > >>> Ansible organization? > >>> > >> > >> TBH, I don't think the Ansible team is the party to consult with. The OpenStack > >> modules have moved to the OpenStack repository, and the Ansible team has no > >> relation to them now. > >> The same goes for the Podman modules, which will be moved soon (hopefully when I find > >> the time) to the CRIO namespace (https://github.com/containers). > >> The whole point of Ansible 2.10 is to remove all community modules from > >> core and hand over responsibility for them to interested parties. > >> So I think nothing should block us from building/shipping the RPMs. > >> > >> > > My understanding is that 2.10 will also move some supported modules to > > collections, and they have defined the way to deliver them using both Galaxy > > for upstream and Automation Hub for downstream [1][2]. We could confirm > > with the Ansible team whether they plan to deliver them *only* via Automation Hub or > > also using packages, but I guess we will rely on what they provide for all > > supported content. I think it'd be good to have a clear picture of what > > they will support and how they will deliver it. > > > > For the collections maintained in OpenStack, it may make sense to build > > RPMs for them to keep our rpm-based workflow. > > > > Could we investigate and/or design an `ansible-galaxy-rpm` tool to > produce and maintain the spec files? > If that doesn't exist, it could be a very useful thing to have. As an > example, the Haskell packages in Fedora are produced using a `cabal-rpm` > tool, and it seems like a viable approach for maintaining the > Ansible collections as RPMs. > I was curious about this, so I had a look at the Ansible Galaxy API.
It seems to be quite straightforward, so I have created https://github.com/javierpena/galaxy-collection-to-rpm with a simple script to create the spec file for a collection. If we wanted to build some collections using DLRN, we could add some code to provide better support, similar to what was done in the past for Ruby gems [3]. Regards, Javier [3] - https://github.com/softwarefactory-project/DLRN/commit/cd64b6b172362dbadae25c3a93e7e00c9e5acaaf > -Tristan > > > > [1] https://www.ansible.com/products/automation-hub > > [2] https://www.ansible.com/blog/getting-started-with-automation-hub > >> Thanks > >> -- > >> Best regards > >> Sagi Shnaidman
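The idea behind such a tool can be sketched as a small template renderer: given a collection's namespace, name, and version (as the Galaxy API reports them), emit a minimal spec file. This is only an illustration, not the actual galaxy-collection-to-rpm script; the package naming, install path, and license default below are simplified assumptions.

```python
# Sketch: render a minimal RPM spec file from Ansible Galaxy collection
# metadata. Illustrative only -- the spec layout, naming scheme, and install
# path are assumptions, not what galaxy-collection-to-rpm actually emits.

SPEC_TEMPLATE = """\
Name:           ansible-collection-{namespace}-{name}
Version:        {version}
Release:        1%{{?dist}}
Summary:        Ansible collection {namespace}.{name}
License:        {license}
URL:            https://galaxy.ansible.com/{namespace}/{name}
Source0:        https://galaxy.ansible.com/download/{namespace}-{name}-{version}.tar.gz
BuildArch:      noarch

%description
The {namespace}.{name} Ansible collection, packaged as an RPM.

%prep
%autosetup -c

%install
mkdir -p %{{buildroot}}%{{_datadir}}/ansible/collections/ansible_collections/{namespace}/{name}
cp -a . %{{buildroot}}%{{_datadir}}/ansible/collections/ansible_collections/{namespace}/{name}

%files
%{{_datadir}}/ansible/collections/ansible_collections/{namespace}/{name}
"""


def render_spec(metadata: dict) -> str:
    """Render a spec file from Galaxy-style collection metadata."""
    return SPEC_TEMPLATE.format(
        namespace=metadata["namespace"],
        name=metadata["name"],
        version=metadata["version"],
        license=metadata.get("license", "GPLv3+"),
    )


if __name__ == "__main__":
    # Example collection; any namespace/name/version dict works.
    print(render_spec({"namespace": "openstack", "name": "cloud",
                       "version": "1.0.0"}))
```

A real tool would also fetch the metadata over the Galaxy REST API and fill in dependencies, but the template step above is the core of the cabal-rpm-style workflow discussed in the thread.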
[rdo-dev] [Meeting] RDO Meeting (2020-03-25) minutes
== #rdo: RDO meeting - 2020-03-25 == Meeting started by jpena at 14:00:22 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_03_25/2020/rdo_meeting___2020_03_25.2020-03-25-14.00.log.html . Meeting summary --- * roll call (jpena, 14:00:28) * Re-kick RDO Trunk rsync server idea (jpena, 14:03:54) * AGREED: We'll move on with the puppet-dlrn patch, and enable rsync clients on demand (jpena, 14:21:58) * rdopkg reqcheck recommendation (jpena, 14:22:09) * LINK: https://github.com/rdo-infra/rdo-jobs/blob/a5e35c443b368095bfdfa4fa072db2666d94a134/playbooks/rpmlint/run.yaml#L95 (amoralej, 14:31:50) * New sign and push to CentOS mirrors is up and running (jpena, 14:38:17) * LINK: https://lists.centos.org/pipermail/centos-devel/2020-March/036690.html (amoralej, 14:40:10) * altarch repos for CloudSIG are now on sync for rocky, stein and train (amoralej, 14:40:39) * chair for the next meeting (jpena, 14:46:44) * ACTION: jcapitao to chair the next meeting (jpena, 14:48:32) * open floor (jpena, 14:48:35) Meeting ended at 14:55:44 UTC. Action items, by person --- * jcapitao * jcapitao to chair the next meeting People present (lines said) --- * amoralej (43) * jpena (40) * jcapitao (15) * ykarel (10) * openstack (6) * rdogerrit (3) * cyberpear (2) * sshnaidm (1) * marios (1) Generated by `MeetBot`_ 0.1.4
[rdo-dev] [RDO Trunk] Renamed centos8-train builders in RDO Trunk
Dear all, A couple of weeks ago, we created a new component-enabled centos8-train builder for RDO Trunk, using https://trunk.rdoproject.org/centos8-train-components as its URL. Today, we have renamed it to be the official centos8-train builder, and the previous non-component-enabled one has been renamed to centos8-train-old. To summarize: - https://trunk.rdoproject.org/centos8-train-old is the old, non-componentized URL - https://trunk.rdoproject.org/centos8-train is the new, component-enabled URL The TripleO CI team will work on a new component-based promotion pipeline for the new centos8-train builder, so we will have promoted content soon. If you were using any repo from the previous URL and it now fails, you just need to add the "-old" suffix. In the future, we expect everyone to use the new component-enabled centos8-train builder, at which point we will delete the old one. Note that this only affects CentOS 8-based builders. If you run into any trouble, please do not hesitate to contact us. Regards, Javier
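For anyone who had the old URL baked into a yum .repo file, the switch amounts to rewriting the baseurl line. A minimal sketch (the repo section and file contents below are made-up examples, not an actual RDO-shipped file):

```python
# Sketch: rewrite a yum .repo file body so baseurl lines point at the
# renamed centos8-train-old builder. The [delorean] section below is an
# illustrative example, not actual RDO-shipped repo file contents.

def point_repo_at_old(repo_text: str) -> str:
    """Return repo_text with centos8-train baseurls switched to -old."""
    out = []
    for line in repo_text.splitlines():
        # Only touch baseurl lines, and skip ones already using -old
        # so the rewrite is safe to run more than once.
        if line.strip().startswith("baseurl") and "centos8-train-old" not in line:
            line = line.replace("centos8-train", "centos8-train-old")
        out.append(line)
    return "\n".join(out)


if __name__ == "__main__":
    example = """[delorean]
baseurl=https://trunk.rdoproject.org/centos8-train/current/
enabled=1
gpgcheck=0"""
    print(point_repo_at_old(example))
```

The same one-line substitution could of course be done by hand or with sed; the guard against rewriting lines that already carry the "-old" suffix is the only subtlety.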
Re: [rdo-dev] please tag new rdopkg release
> Hi folks, > > Would you please tag and push a new rdopkg release? > > https://github.com/softwarefactory-project/rdopkg/issues/186 > > It would be really nice to have > https://softwarefactory-project.io/r/18391 in a release. > Hi Ken, We have tagged a new rdopkg release (1.3.0), and built packages for Fedora 33 and EPEL8. It will still take a while until the updated packages are available. Thanks, Javier > - Ken
[rdo-dev] [infra] Planned maintenance on RDO Trunk machines, Sep 8th 9:00 UTC
Hi all, We are planning to run some maintenance activities on the RDO Trunk machines on Tuesday, September 8th at 9:00 UTC. These activities could result in short downtime periods while the httpd service is restarted. The maintenance window is expected to last for 2 hours maximum. If you have any questions or concerns, please do not hesitate to contact me. Regards, Javier
Re: [rdo-dev] [infra] Planned maintenance on RDO Trunk machines, Sep 8th 9:00 UTC
> Hi all, > > We are planning to run some maintenance activities on the RDO Trunk > machines on Tuesday, September 8th at 9:00 UTC. > > These activities could result in short downtime periods while the httpd > service is restarted. The maintenance window is expected to last for 2 hours > maximum. > > If you have any questions or concerns, please do not hesitate to contact me. > Hi all, The maintenance is now finished, and all RDO Trunk services should be back to normal. If you find any abnormal behavior, please do not hesitate to contact me. Regards, Javier
[rdo-dev] [infra] Maintenance on AFS mirror servers on Thursday, September 17th
Hi, We are planning to run some maintenance activities on the RDO AFS mirror servers (mirror.regionone.rdo-cloud.rdoproject.org and mirror.regionone.vexxhost.rdoproject.org) on September 17th at 8:00 UTC. During the maintenance, we may have some httpd service restarts, which means that some 3rd party CI jobs may be affected. We expect the maintenance to take ~1 hour. If you have any questions or concerns, please do not hesitate to contact me. Regards, Javier
[rdo-dev] [infra] Planned maintenance on DLRN DB on Friday, Sep 18th
Hi, We are planning to run some maintenance activities to fix the DLRN DB replication next Friday, September 18th, at 9:00 UTC. These activities require the main DLRN DB to be set to read-only mode for a while, so the DLRN API will not allow write operations (reporting results or promotions) during that period. We expect the maintenance operation to last less than one hour. If you have any questions or concerns, please do not hesitate to contact me. Regards, Javier
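Clients that report results or promotions through the DLRN API could ride out a read-only window like this by retrying failed writes with exponential backoff rather than aborting. A generic sketch (this is not DLRN code; the callable stands in for whatever HTTP request a client actually makes):

```python
# Generic retry-with-backoff sketch for API write calls that may fail while
# a backend is in read-only maintenance mode. Not part of DLRN itself; the
# `call` argument stands in for whatever HTTP request a client would make.
import time


def retry_write(call, attempts: int = 5, base_delay: float = 1.0):
    """Invoke call() until it succeeds, sleeping base_delay * 2**n between tries.

    Re-raises the last error if all attempts fail.
    """
    for attempt in range(attempts):
        try:
            return call()
        except OSError:  # e.g. connection refused / service unavailable
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

With five attempts and a one-second base delay, a client would keep trying for roughly half a minute, which covers short restarts but not a full maintenance window; longer windows are better handled by pausing the reporting job.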
Re: [rdo-dev] [infra] Maintenance on AFS mirror servers on Thursday, September 17th
> Hi, > > We are planning to run some maintenance activities on the RDO AFS mirror > servers (mirror.regionone.rdo-cloud.rdoproject.org and > mirror.regionone.vexxhost.rdoproject.org) on September 17th at 8:00 > UTC. > > During the maintenance, we may have some httpd service restarts, which means > that some 3rd party CI jobs may be affected. We expect the maintenance to > take ~1 hour. > > If you have any questions or concerns, please do not hesitate to contact me. > Hi all, The maintenance is now complete. With the updated Ansible roles, we now have SSL support (test https://mirror.regionone.vexxhost.rdoproject.org/) and some extra ports for cached URLs, just like the OpenDev Infra mirrors. If you find any issues, please do not hesitate to contact us. Regards, Javier
Re: [rdo-dev] [infra] Planned maintenance on DLRN DB on Friday, Sep 18th
> Hi, > > We are planning to run some maintenance activities to fix the DLRN DB > replication next Friday, September 18th, at 9:00 UTC. > > These activities require the main DLRN DB to be set to read-only mode for a > while, so the DLRN API will not allow write operations (reporting results or > promotions) during that period. We expect the maintenance operation to last > less than one hour. > > If you have any questions or concerns, please do not hesitate to contact me. > Hi all, The maintenance operations are now complete, and the DLRN DB replication is working as expected. If you find any issues, please do not hesitate to contact us. Regards, Javier