Quoting Sylvain Bauza <sba...@redhat.com>:
On 30/07/2014 22:13, Boris Pavlovic wrote:
Hi all,

This thread is very useful. We detected an issue with the mission
statement and the name of the proposed program at an early stage. It
seems that the mission statement and name are unclear and do not
present the goals of this program in the right perspective.

I have updated the name and the mission statement:

name:
    SLA Management

mission:
    Provide SLA Management for production OpenStack clouds. This
    includes measuring and tracking the performance of OpenStack
    services, key API methods and cloud applications, running
    performance and functional tests on demand, and everything that is
    required to detect and debug issues in live production clouds.

I have also updated the governance patch:
https://review.openstack.org/#/c/108502/3


I hope it is now clearer what the goal of this program is and why we
should add a new program.

Thoughts?


-1



-1 to it. An SLA means that you create a contract between a provider
and a user. Here, you don't create a contract (i.e. you don't define
the ratios and commitments), but you monitor these contracts.


So I agree with Sylvain: SLA Management would imply a tool set that
monitors the agreements of a service. I don't see how this fits into
the existing projects you are referring to. IMHO, SLA Management would
never involve debugging or tracing anything; it would always be
focused on a service, not on root-cause analysis.

Regards
Marc


As there are no SLAs in OpenStack upstream today (that's something for
operators), we can't say whether the KPIs are OK or not.
How can you ensure that the code will achieve nine nines if you don't
look at how OpenStack will be deployed?

If you say the mission statement is to provide measurement tools for
OpenStack, then it possibly belongs either in the QA or the Telemetry
program. If you say that the goal is to detect and debug issues in
clouds, then it clearly belongs in the QA program.


IMHO, both your name and your mission statement are confusing. Long
story short, I'm pro Rally as a separate project but in the QA program.

-Sylvain

Best regards,
Boris Pavlovic


On Tue, Jul 29, 2014 at 12:39 AM, Boris Pavlovic <bo...@pavlovic.me> wrote:

    Hi Sean,

    I appreciate you valuing Rally so highly as to suggest it should
    join the QA program. It is a great vote of confidence for me. While
    I believe that Rally and Tempest will always work closely together,
    the intended use of Rally and the direction in which we are
    planning to take it will not be compatible with the direction in
    which I think the QA program is going. Please let me explain in
    more detail below.

    Tempest is a collection of Functional and Performance Tests which
    is used by the developers to improve the quality of the OpenStack
    code.

    Rally, on the other hand, is envisioned as a tool that is going to
    be run by cloud operators in order to measure, tune and
    continuously improve the performance of an OpenStack cloud.
    Moreover, we have an SLA module that allows the operator to define
    what constitutes an acceptable level of performance, and a profiler
    that would provide both the user and the developer with a
    diagnostic set of performance data. Finally, Rally is designed to
    run on production clouds and to be integrated as a Horizon plugin.
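
    To give a rough idea of what the SLA module looks like in practice,
    here is a minimal sketch of a Rally task with SLA criteria, written
    as a Python dict. The scenario and criteria names below are
    illustrative and may not match the exact plugin names in the
    current code:

        # Illustrative sketch of a Rally task with SLA criteria.
        # Scenario and SLA plugin names are examples, not a reference.
        task = {
            "NovaServers.boot_and_delete_server": [{
                "args": {"flavor": {"name": "m1.tiny"},
                         "image": {"name": "cirros"}},
                "runner": {"type": "constant",
                           "times": 100, "concurrency": 10},
                "sla": {
                    # fail the run if any iteration takes longer than 10s
                    "max_seconds_per_iteration": 10,
                    # fail the run if more than 5% of iterations error out
                    "failure_rate": {"max": 5},
                },
            }]
        }

    The operator gets a pass/fail verdict against such criteria on
    every run, which is what I mean by defining an acceptable level of
    performance.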

    In the future, we envision integrating Rally with other services
    (e.g. Logging as a Service, Satori, Rubick, and other
    operator-targeted services). I believe that this direction is not
    compatible with the mission of the QA program.

    Before applying for a new Performance and Scalability program, we
    thought that the best existing program that Rally could be part of,
    now and in the future, is the Telemetry program. We discussed with
    Eoghan Glynn the idea of extending the scope of its mission to
    cover other operator-related projects and of adding Rally to it.
    Eoghan liked the idea in general but felt that Ceilometer currently
    has too much on its plate and was not in a position to merge in a
    new project. However, I can still see the two programs maturing and
    potentially becoming one down the road.

    Now, regarding the point you make about Rally and Tempest doing
    some duplicate work: I completely agree with you that we should
    avoid it as much as possible and that we should stay in close
    communication to make sure that duplicate requirements are only
    implemented once.

    Following our earlier discussion, Rally now uses Tempest for those
    benchmarks that do not require special, complex environments. We
    have also encapsulated and automated Tempest usage to make it more
    accessible to operators (here is the blog post documenting it --
    http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/).

    We would like to continue de-duplicating the work inside Tempest
    and Rally. We made some joint design decisions in Atlanta to
    transfer some of the integration code from Rally to Tempest,
    resulting in the work performed by Andrew Kurilin
    (https://review.openstack.org/#/c/94473/). I would encourage and
    welcome more such cooperation in the future.

    I trust that this addresses most of your concerns; please do not
    hesitate to bring up further questions and suggestions.

    Sincerely,

    Boris


    On Sun, Jul 27, 2014 at 6:57 PM, Sean Dague <s...@dague.net> wrote:

        On 07/26/2014 05:51 PM, Hayes, Graham wrote:
        > On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
        >> On 07/22/2014 11:58 AM, David Kranz wrote:
        >>> On 07/22/2014 10:44 AM, Sean Dague wrote:
        >>>> Honestly, I'm really not sure I see this as a different
        >>>> program, but is really something that should be folded
        >>>> into the QA program. I feel like a top level effort like
        >>>> this is going to lead to a lot of duplication in the data
        >>>> analysis that's currently going on, as well as
        >>>> functionality for better load driver UX.
        >>>>
        >>>>    -Sean
        >>> +1
        >>> It will also lead to pointless discussions/arguments about
        >>> which activities are part of "QA" and which are part of
        >>> "Performance and Scalability Testing".
        >
        > I think that those discussions will still take place, it will
        > just be on a per repository basis, instead of a per program one.
        >
        > [snip]
        >
        >>
        >> Right, 100% agreed. Rally would remain with its own repo +
        >> review team, just like grenade.
        >>
        >>      -Sean
        >>
        >
        > Is the concept of a separate review team not the point of a
        > program?
        >
        > In the thread from Designate's Incubation request Thierry
        > said [1]:
        >
        >> "Programs" just let us bless goals and teams and let them
        >> organize code however they want, with contribution to any
        >> code repo under that umbrella being considered "official"
        >> and ATC-status-granting.
        >
        > I do think that this is something that needs to be clarified
        > by the TC - Rally could not get a PTL if they were part of
        > the QA project, but every time we get a program request, the
        > same discussion happens.
        >
        > I think that mission statements can be edited to fit new
        > programs as they occur, and that it is more important to let
        > teams that have been working closely together stay as a
        > distinct group.

        My big concern here is that many of the things that these
        efforts have been doing are things we actually want much closer
        to the base. For instance, metrics on Tempest runs.

        When Rally was first created it had its own load generator. It
        took a ton of effort to keep the team from duplicating that and
        instead just use some subset of Tempest. Then when measuring
        showed up, we actually said that is something that would be
        great in Tempest, so whoever ran it, be it for testing,
        monitoring, or performance gathering, would have access to that
        data. But the Rally team went off in a corner and did it on
        their own instead. That has forced the QA team to go and redo
        this work from scratch with subunit2sql, in a way that can be
        consumed by multiple efforts.
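
        To make the "consumed by multiple efforts" point concrete: the
        data in question is just the per-test id / status / timing that
        falls out of a subunit stream. A rough sketch of pulling it out
        with python-subunit and testtools (this is not the subunit2sql
        code itself, just the kind of record it stores):

            # Rough sketch: print per-test timing from a subunit v2
            # stream. Not subunit2sql itself, just the kind of data
            # it captures.
            import sys

            import subunit
            import testtools

            def handle_test(test):
                # 'test' is a dict with 'id', 'status' and
                # 'timestamps' == [start, stop] datetimes
                start, stop = test['timestamps']
                secs = ((stop - start).total_seconds()
                        if start and stop else None)
                print("%s %s %s" % (test['id'], test['status'], secs))

            with open(sys.argv[1], 'rb') as stream:
                case = subunit.ByteStreamToStreamResult(
                    stream, non_subunit_name='stdout')
                result = testtools.StreamToDict(handle_test)
                result.startTestRun()
                case.run(result)
                result.stopTestRun()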

        So I'm generally -1 to this being a separate effort, on the
        basis that so far the team has decided to stay in their own
        sandbox instead of participating actively where many of us
        think the functions should be added. I also think this isn't
        like Designate, because this isn't intended to be part of the
        integrated release.

        Of course you could decide to slice up the universe in a
        completely different way, but we have toolchains today, and I
        think the focus should be on participating there.

                -Sean

        --
        Sean Dague
        http://dague.net

