On 10/12/2016 03:54 PM, Vasyl Saienko wrote:
On Wed, Oct 12, 2016 at 4:10 PM, Dmitry Tantsur <[email protected]> wrote:
On 10/12/2016 03:01 PM, Vasyl Saienko wrote:
Hello Dmitry,
Thanks for raising this question. I think the problem is deeper: there are
a lot of use-cases that are not covered by our CI at all, like cleaning,
adoption, etc.
This is nice, but here I'm trying to solve a pretty specific problem: we
can't reasonably add more jobs to even cover all supported partitioning
scenarios.
The main problem is that we need to change the ironic configuration to
exercise a specific use-case. Unfortunately, tempest doesn't allow changing
the cloud configuration during a test run.
Recently I've started working on a PoC that should solve this problem [0].
The main idea is to be able to change the ironic configuration during a
single gate job run, and to launch the same tempest tests after each
configuration change.
We can't change other components' configuration, as that would require
reinstalling the whole devstack, so running the flat network and
multitenant network scenarios in a single job is not possible.
For example:
1. Set up devstack with an agent_ssh wholedisk iPXE configuration.
2. Run tempest tests.
3. Update localrc to use an agent_ssh localboot image.
For this particular example, my approach will be much, much faster, as all
instances will be built in parallel.
On the gates we're using 7 VMs, and we never boot all 7 nodes in parallel;
I'm not sure how slow the environment will be in this case.
I think we do boot them in parallel (more or less) in the grenade job.
4. Unstack the ironic component only, not the whole devstack.
5. Install/configure the ironic component only.
6. Run tempest tests.
7. Repeat steps 3-6 with another ironic-only configuration change.
Running steps 4-5 takes about 2-3 minutes.
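To make this concrete, here is a minimal sketch of what one
reconfiguration cycle (steps 4-6) could look like, assuming the
stop_ironic/start_ironic helpers and the iniset utility from the ironic
devstack plugin behave as on current master; the actual PoC code in [0]
may differ:

    # illustration only, not the actual PoC code
    source $DEVSTACK_DIR/stackrc
    source $DEVSTACK_DIR/lib/ironic

    stop_ironic                     # step 4: stop only the ironic services
    # step 5: apply an ironic-only configuration change,
    # e.g. switch the default boot option
    iniset $IRONIC_CONF_FILE deploy default_boot_option local
    start_ironic                    # restart ironic with the new config
    tempest run --regex ironic      # step 6: re-run the same tempest tests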
Below is a non-exhaustive list of configuration choices we could try to
mix-and-match in a single tempest run to get maximal overall code coverage
in a single job:
* cleaning enabled / disabled
This is the only valid example; for the other cases you don't need a
devstack update.
There are other use-cases, like portgroups, security groups, and boot from
volume, which will require configuration changes.
* using pxe_* drivers / agent_* drivers
* using netboot / localboot
* using partitioned / wholedisk images
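For illustration, here are two such combinations expressed as localrc
fragments; the variable names are taken from the ironic devstack plugin as
far as I remember them, so treat the exact names as assumptions:

    # Combination A: agent driver, wholedisk image, cleaning enabled
    IRONIC_DEPLOY_DRIVER=agent_ipmitool
    IRONIC_TEMPEST_WHOLE_DISK_IMAGE=True
    IRONIC_AUTOMATED_CLEAN_ENABLED=True

    # Combination B: pxe driver, partition image, cleaning disabled
    IRONIC_DEPLOY_DRIVER=pxe_ipmitool
    IRONIC_TEMPEST_WHOLE_DISK_IMAGE=False
    IRONIC_AUTOMATED_CLEAN_ENABLED=False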
[0] https://review.openstack.org/#/c/369021/
On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur <[email protected]> wrote:
Hi folks!
I'd like to propose a plan on how to simultaneously extend the
coverage of
our jobs and reduce their number.
Currently, we're running one instance per job. This was reasonable
when the
coreos-based IPA image was the default, but now with tinyipa we can
run up
to 7 instances (and actually do it in the grenade job). I suggest we
use 6
fake bm nodes to make a single CI job cover many scenarios.
The jobs will be grouped based on driver (pxe_ipmitool and
agent_ipmitool)
to be more in sync with how 3rd party CI does it. A special
configuration
option will be used to enable multi-instance testing to avoid
breaking 3rd
party CI systems that are not ready for it.
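For illustration, such a guard could be a tempest.conf option along these
lines; the option name here is purely hypothetical, the real one would be
defined by the ironic tempest plugin:

    [baremetal]
    # hypothetical option; defaults to False so that 3rd party CI
    # systems keep their current single-instance behaviour
    multi_instance_testing = False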
To ensure coverage, we'll leave only the required number of nodes
"available", and deploy all instances in parallel.
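As a rough sketch of the parallelism (the real jobs would drive this
through tempest scenario tests, so this is only an illustration):

    # build six instances in one request against the six "available" nodes;
    # $IMAGE_NAME is a placeholder, network options omitted for brevity
    openstack server create --flavor baremetal --image $IMAGE_NAME \
        --min 6 --max 6 multi-scenario-test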
In the end, we'll have these jobs on ironic:
gate-tempest-ironic-pxe_ipmitool-tinyipa
gate-tempest-ironic-agent_ipmitool-tinyipa
Each job will cover the following scenarios:
* partition images:
** with local boot:
*** 1. msdos partition table and BIOS boot
*** 2. GPT partition table and BIOS boot
*** 3. GPT partition table and UEFI boot <*>
** with netboot:
*** 4. msdos partition table and BIOS boot <**>
* whole disk images:
** 5. with msdos partition table embedded and BIOS boot
** 6. with GPT partition table embedded and UEFI boot <*>
Am I right that we need to increase the number of tempest tests to match
the number of use-cases we are going to test per driver? We need to make
sure we use the right node for each test, because the partition scheme is
defined in the node properties and requires the right image to be used.
<*> - in the future, when we figure out UEFI testing
<**> - we're moving away from defaulting to netboot, hence only one scenario
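To illustrate how the scenarios above could be pinned to specific nodes,
the standard boot_option, boot_mode and disk_label capabilities can be set
per node; the exact wiring below is my assumption of how the job could do
it, not the final implementation:

    # hypothetical setup for scenarios 1-3 on three of the fake bm nodes
    ironic node-update node-1 add properties/capabilities='boot_option:local'
    ironic node-update node-2 add properties/capabilities='boot_option:local,disk_label:gpt'
    ironic node-update node-3 add properties/capabilities='boot_option:local,boot_mode:uefi,disk_label:gpt'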
I suggest creating the jobs for Newton and Ocata, and starting with
Xenial
right away.
Any comments, ideas and suggestions are welcome.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev