On 03/21/2017 11:40 AM, Wesley Hayutin wrote:


On Tue, Mar 21, 2017 at 12:03 PM, Emilien Macchi <emil...@redhat.com> wrote:

    On Mon, Mar 20, 2017 at 3:29 PM, Paul Belanger <pabelan...@redhat.com> wrote:
    > On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
    >> Hi, Paul
    >> I would say that a real worthwhile try starts from "normal" priority,
    >> because we want to run promotion jobs more *often*, not more *rarely*,
    >> which happens with low priority.
    >> In addition, the initial idea in the first mail was running them almost
    >> one after another, not once a day like it happens now or with "low" priority.
    >>
    > As I've said, my main reluctance is how the gate will react if we create
    > a new pipeline with the same priority as our check pipeline.  I would much
    > rather err on the side of caution, default to 'low', see how things react
    > for a day / week / month, then see what it would look like as normal.  I
    > want us to be cautious about adding a new pipeline, as it dynamically
    > changes how our existing pipelines function.
    >
    > Furthermore, this is actually a capacity issue for tripleo-test-cloud-rh1;
    > there are currently too many jobs running for the amount of hardware.  If
    > these jobs were running on our donated clouds, we could get away with a
    > low-priority periodic pipeline.

    Multinode jobs are running on donated clouds but, as you know, OVB jobs
    are not. We want to keep OVB jobs in our promotion pipeline because they
    bring high value to the tests (ironic, ipv6, ssl, probably more).

    Another alternative would be to reduce it to one OVB job (ironic with
    introspection + ipv6 + ssl at minimum) and move the 4 multinode jobs
    into the promotion pipeline instead of the 3 OVB jobs.


I'm +1 on using one OVB job + 4 multinode jobs.

Then we lose coverage of the ipv4 net-iso case and the no-net-iso case, both of which are very common, even if only among developers. There's a reason we've always run 3 OVB jobs.

I believe we also had timeout issues in the past when trying to test all the things in a single periodic job. I'm not sure if it's still an issue, but that's why logic like http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_gate_test-orig.sh#n135 exists.
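I don't remember exactly what that script does these days, but the general pattern it points at, splitting the full test matrix across alternating periodic runs so no single job hits the timeout, can be sketched like this (purely illustrative; the subset names and hour-based switch are made up and are not the actual toci logic):

```shell
#!/bin/bash
# Illustrative sketch only: not the actual toci_gate_test logic.
# Alternate which test subset a periodic run executes, based on the
# (UTC) hour it starts, so one run never tries to do everything and
# blow through its timeout.
RUN_HOUR=$(date -u +%H)

# 10# forces base-10 so hours like "08" aren't parsed as octal.
if [ $((10#$RUN_HOUR % 8)) -lt 4 ]; then
    TEST_SUBSET="ping-test"        # hypothetical subset A
else
    TEST_SUBSET="tempest-smoke"    # hypothetical subset B
fi
echo "periodic run at hour ${RUN_HOUR} UTC: running ${TEST_SUBSET}"
```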

Ultimately, the problem here is not adding another handful of periodic jobs to rh1. It's already running something like 750 jobs per day; another ~15 is not that big a deal. But adding them as low-priority jobs isn't going to work, because the other 750 jobs being run per day will crowd them out.




    current: 3 ovb jobs running every night
    proposal: 18 ovb jobs per day (3 jobs launched every 4 hours)

    The addition will add 15 jobs per day to the rh1 load. Would that be acceptable?
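Spelling out the arithmetic behind those numbers (a run of 3 OVB jobs triggered every 4 hours):

```shell
# Back-of-the-envelope load check for the proposal above.
jobs_per_run=3               # OVB jobs launched per periodic run
runs_per_day=$((24 / 4))     # one run every 4 hours -> 6 runs/day
proposed=$((jobs_per_run * runs_per_day))   # 18 jobs/day
current=3                    # today: a single nightly run of 3 jobs
extra=$((proposed - current))               # 15 additional jobs/day
echo "proposed=${proposed}/day, extra=${extra} over current"
```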

    > Now, allow me to propose another solution.
    >
    > The RDO project has its own version of zuul, which has the ability to do
    > periodic pipelines.  Since tripleo-test-cloud-rh2 is still around and has
    > OVB ability, I would suggest configuring this promotion pipeline within
    > RDO, so as not to affect the capacity of tripleo-test-cloud-rh1.  This
    > means you can continuously enqueue jobs every 4 hours, and priority
    > shouldn't matter as yours are the only jobs running on
    > tripleo-test-cloud-rh2, resulting in faster promotions.

    Using RDO would also be an option. I'm just not sure about our
    available resources; maybe others can reply on this one.
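For what it's worth, the kind of pipeline Paul describes is just a timer-triggered pipeline in a Zuul v2 layout.yaml. A rough sketch (the pipeline name and the 4-hour cron expression here are illustrative; the exact syntax should be checked against the existing pipelines in project-config before proposing anything):

```yaml
pipelines:
  - name: periodic-tripleo-promote
    description: Promotion jobs triggered on a timer every 4 hours.
    manager: IndependentPipelineManager
    precedence: normal
    trigger:
      timer:
        - time: '0 */4 * * *'
```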


The purpose of the periodic jobs is twofold:
1. ensure the latest built packages work
2. ensure the tripleo check gates continue to work without error

Running the promotion in review.rdoproject would not cover #2. The
rdoproject jobs would be configured in slightly different ways from
upstream tripleo. Running the promotion in ci.centos has the same issue.

I think using tripleo-test-cloud-rh2 is fine.

No, it's not. rh2 has been repurposed as a developer cloud and is oversubscribed as it is. There is no more capacity in either rh1 or rh2 at this point.

Well, strictly speaking rh1 has more capacity, but I believe we've reached the point of diminishing returns where adding more jobs slows down all the jobs, and since we keep adding slower and slower jobs that isn't going to work. As it is, OVB jobs are starting to time out again (although there may be other factors besides load at work there - things are kind of a mess right now).




    > This also makes sense, as packaging is done in RDO, and you are
    > triggering Centos CI things as a result.

    Yes, it would make sense. Right now we have zero TripleO testing when
    doing changes in RDO packages (we only run packstack and puppet jobs,
    which is not enough). Again, I think it's a problem of capacity here.


We made a pass at getting multinode jobs running in RDO with tripleo.
That was initially not very successful, and we chose to instead focus on
upstream. We *do* have it on our list to gate packages from RDO builds
with tripleo. In the short term that gate will use rdocloud; in the long
term we'd also like to gate with multinode nodepool jobs in RDO.




    Thoughts?

    >> Thanks
    >>
    >> On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger <pabelan...@redhat.com> wrote:
    >>
    >> > On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
    >> > >
    >> > >
    >> > > On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
    >> > > > Hi, all
    >> > > >
    >> > > > I submitted a change: https://review.openstack.org/#/c/443964/
    >> > > > but it seems like it reached a point which requires additional discussion.
    >> > > >
    >> > > > I had a few proposals: increasing the period to 12 hours instead
    >> > > > of 4 for a start, and leaving it at regular periodic *low*
    >> > > > precedence. I think we can start from a 12 hour period to see how
    >> > > > it goes, although I don't think that only 4 jobs will increase
    >> > > > load on the OVB cloud; it's completely negligible compared to
    >> > > > current OVB capacity and load.
    >> > > > But making its precedence "low" IMHO completely removes any reason
    >> > > > for this pipeline to exist, because we already run the
    >> > > > experimental-tripleo pipeline with this priority and it can reach
    >> > > > timeouts like 7-14 hours. So let's assume we ran a periodic job:
    >> > > > it's queued to run in 12 hours + the "low queue length" - about 20
    >> > > > or more hours. That's even worse than the usual periodic job and
    >> > > > definitely makes this change useless.
    >> > > > I'd like to note as well that those periodic jobs, unlike the
    >> > > > "usual" periodic jobs, are used for repository promotion, and
    >> > > > their value is equal to or higher than check jobs, so they need to
    >> > > > run with "normal" or even "high" precedence.
    >> > >
    >> > > Yeah, it makes no sense from an OVB perspective to add these as low
    >> > > priority jobs.  Once in a while we've managed to chew through the
    >> > > entire experimental queue during the day, but with the containers
    >> > > job added it's very unlikely that's going to happen anymore.  Right
    >> > > now we have a 4.5 hour wait time just for the check queue, then
    >> > > there's two hours of experimental jobs queued up behind that.  All
    >> > > of which means if we started a low priority periodic job right now
    >> > > it probably wouldn't run until about midnight my time, which I think
    >> > > is when the regular periodic jobs run now.
    >> > >
    >> > Let's just give it a try? A 12 hour periodic job with low priority.
    >> > There is nothing saying we cannot iterate on this after a few days /
    >> > weeks / months.
    >> >
    >> > > >
    >> > > > Thanks
    >> > > >
    >> > > >
    >> > > > On Thu, Mar 9, 2017 at 10:06 PM, Wesley Hayutin <whayu...@redhat.com> wrote:
    >> > > >
    >> > > >
    >> > > >
    >> > > >     On Wed, Mar 8, 2017 at 1:29 PM, Jeremy Stanley <fu...@yuggoth.org> wrote:
    >> > > >
    >> > > >         On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
    >> > > >         > The TripleO team would like to initiate a conversation
    >> > > >         > about the possibility of creating a new pipeline in
    >> > > >         > Openstack Infra to allow a set of jobs to run
    >> > > >         > periodically every four hours
    >> > > >         [...]
    >> > > >
    >> > > >         The request doesn't strike me as contentious/controversial.
    >> > > >         Why not just propose your addition to the zuul/layout.yaml
    >> > > >         file in the openstack-infra/project-config repo and hash
    >> > > >         out any resulting concerns via code review?
    >> > > >         --
    >> > > >         Jeremy Stanley
    >> > > >
    >> > > >
    >> > > >     Sounds good to me.
    >> > > >     We thought it would be nice to walk through it in an email first :)
    >> > > >
    >> > > >     Thanks
    >> > > >
    >> > > >
    >> > > >
    >> > > >         __________________________________________________________________________
    >> > > >         OpenStack Development Mailing List (not for usage questions)
    >> > > >         Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    >> > > >         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
    >> > > >
    >> > > >
    >> > > >
    >> > > >
    >> > >
    >> > > >
    >> > > >
    >> > > >
    >> > > >
    >> > > > --
    >> > > > Best regards
    >> > > > Sagi Shnaidman
    >> > > >
    >> > > >
    >> > > >
    >> > >
    >> >
    >> >
    >> >
    >>
    >>
    >>
    >> --
    >> Best regards
    >> Sagi Shnaidman
    >
    >>
    >
    >
    >



    --
    Emilien Macchi







