On 10/12/2016 03:01 PM, Vasyl Saienko wrote:
Hello Dmitry,

Thanks for raising this question. I think the problem is deeper: there are a lot
of use cases that are not covered by our CI, such as cleaning, adoption, etc.

This is nice, but here I'm trying to solve a pretty specific problem: we can't reasonably add enough jobs to cover even all the supported partitioning scenarios.


The main problem is that we need to change the ironic configuration to exercise a
specific use case. Unfortunately, tempest doesn't allow changing the cloud
configuration during a test run.


Recently I've started working on a PoC that should solve this problem [0]. The
main idea is to be able to change the ironic configuration during a single gate
job run, and to launch the same tempest tests after each configuration change.

We can't change other components' configuration, as that would require
reinstalling the whole devstack, so running the flat network and multitenant
network scenarios in a single job is not possible.


For example:

1. Set up devstack with the agent_ssh wholedisk ipxe configuration

2. Run tempest tests

3. Update localrc to use the agent_ssh localboot image

For this particular example, my approach will be much, much faster, as all instances will be built in parallel.


4. Unstack the ironic component only, not the whole devstack.

5. Install/configure the ironic component only

6. Run tempest tests

7. Repeat steps 3-6 with another ironic-only configuration change.


Running steps 4-5 takes roughly 2-3 minutes.
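
To illustrate, one iteration of steps 3-6 could look roughly like this (the
per-service scripts and the localrc variable name are illustrative assumptions,
not the actual PoC code from [0]):

    # 3. Switch devstack to the localboot image configuration
    #    (variable name assumed; check the ironic devstack plugin)
    echo 'IRONIC_DEFAULT_BOOT_OPTION=local' >> localrc

    # 4./5. Reinstall and reconfigure only the ironic services
    #       (hypothetical helpers; stock devstack only ships the full
    #       unstack.sh / stack.sh)
    ./unstack_ironic.sh
    ./stack_ironic.sh

    # 6. Re-run the same tempest scenarios against the new configuration
    tempest run --regex ironic_tempest_plugin.tests.scenario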


Below is a non-exhaustive list of configuration choices we could try to
mix and match in a single tempest run to get maximal overall code coverage in a
single job:

  * cleaning enabled / disabled

This is the only valid example; for the other cases you don't need a devstack
update.


  * using pxe_* drivers / agent_* drivers

  * using netboot / localboot

  * using partitioned / wholedisk images
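
For reference, here is a sketch of the knobs behind each axis (option and
capability names as I recall them from the ironic docs; treat the exact
spellings as assumptions):

    # cleaning enabled / disabled: ironic.conf
    #   [conductor]
    #   automated_clean = True
    # pxe_* vs agent_* drivers: the per-node driver field
    ironic node-update $NODE replace driver=agent_ipmitool
    # netboot vs localboot: a per-node capability
    ironic node-update $NODE add properties/capabilities='boot_option:local'
    # partitioned vs wholedisk: chosen by the glance image being deployed
    # (a whole disk image carries its own partition table and bootloader)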



[0] https://review.openstack.org/#/c/369021/




On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur <dtant...@redhat.com> wrote:

    Hi folks!

    I'd like to propose a plan on how to simultaneously extend the coverage of
    our jobs and reduce their number.

    Currently, we're running one instance per job. This was reasonable when the
    coreos-based IPA image was the default, but now with tinyipa we can run up
    to 7 instances (and actually do it in the grenade job). I suggest we use 6
    fake bm nodes to make a single CI job cover many scenarios.

    The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool)
    to be more in sync with how 3rd party CI does it. A special configuration
    option will be used to enable multi-instance testing to avoid breaking 3rd
    party CI systems that are not ready for it.

    To ensure coverage, we'll leave only the required number of nodes
    "available", and deploy all instances in parallel.
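
    A rough sketch of the devstack settings this implies (IRONIC_VM_COUNT and
    IRONIC_DEPLOY_DRIVER are existing devstack variables as far as I remember;
    the multi-instance switch is the new option proposed here, so its name
    below is hypothetical):

        # localrc snippet (illustrative)
        IRONIC_DEPLOY_DRIVER=agent_ipmitool  # or pxe_ipmitool for the other job
        IRONIC_VM_COUNT=6                    # six fake bare metal nodes
        # hypothetical new option gating multi-instance testing, so that
        # 3rd party CI systems that are not ready for it keep working:
        IRONIC_MULTI_INSTANCE_TESTING=True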

    In the end, we'll have these jobs on ironic:
    gate-tempest-ironic-pxe_ipmitool-tinyipa
    gate-tempest-ironic-agent_ipmitool-tinyipa

    Each job will cover the following scenarios (a configuration sketch
    follows the list):
    * partition images:
    ** with local boot:
    ** 1. msdos partition table and BIOS boot
    ** 2. GPT partition table and BIOS boot
    ** 3. GPT partition table and UEFI boot  <*>
    ** with netboot:
    ** 4. msdos partition table and BIOS boot <**>
    * whole disk images:
    * 5. with msdos partition table embedded and BIOS boot
    * 6. with GPT partition table embedded and UEFI boot  <*>

     <*> - in the future, when we figure out UEFI testing
     <**> - we're moving away from defaulting to netboot, hence only one scenario
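
    For reference, each scenario would be steered by per-node capabilities
    plus the choice of glance image; a sketch (capability names from the
    ironic docs, the exact pairing logic is assumed):

        # e.g. scenario 2: partition image, GPT partition table, BIOS,
        # local boot
        ironic node-update $NODE add \
            properties/capabilities='boot_option:local,disk_label:gpt'
        # whole disk vs partition is selected by the image the test boots;
        # UEFI scenarios would add boot_mode:uefi once UEFI testing works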

    I suggest creating the jobs for Newton and Ocata, and starting with Xenial
    right away.

    Any comments, ideas and suggestions are welcome.




