On 06/07/18 16:08, Ian Jackson wrote:
> Juergen Gross writes ("Re: [Xen-devel] [Notes for xen summit 2018 design 
> session] Process changes: is the 6 monthly release Cadence too short, 
> Security Process, ..."):
>> On 05/07/18 17:14, Ian Jackson wrote:
>>> Certainly it would be a bad idea to use anything *on the test host
>>> itself* as a basis for a subsequent test.  The previous test might
>>> have corrupted it.
>>
>> Right. Not sure whether possible, but in an ideal environment we'd have
>> an external storage system reachable by the control nodes and the test
>> systems. The test systems should be able to access their test data only,
>> while the control nodes would initialize the test data while the related
>> test machine is still offline.
> 
> That would mean that every test would run with the test host accessing
> its primary OS storage via something like iSCSI.  That would be an
> awful lot of extra complexity.

iSCSI and NAS are not the only storage technologies available.

In my previous employment we used FC disk arrays for that purpose.
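The offline-initialization idea above could be sketched roughly as follows on a control node. This is a hedged illustration, not osstest's actual mechanism: a plain file stands in for the per-host exported LUN so the pattern is demonstrable; on real hardware the target would be a block device path for the iSCSI or FC LUN, and the test host would be powered on via its BMC only after the write completes.

```shell
# Sketch (assumed names/paths): initialize a test host's private storage
# while the host is still offline, then verify before handing it over.
set -e
tmp=$(mktemp -d)
lun="$tmp/testhost1-lun"            # stand-in for the host's exported LUN
truncate -s 16M "$lun"

# 1. Build (or fetch) the pristine test image.
img="$tmp/test-image.raw"
dd if=/dev/urandom of="$img" bs=1M count=8 status=none

# 2. Write the image onto the host's storage while the host cannot touch it.
dd if="$img" of="$lun" bs=1M conv=notrunc,fsync status=none

# 3. Verify the deployment before the host is brought online.
cmp -n $((8*1024*1024)) "$img" "$lun" && echo "image deployed"
```

The point of the pattern is isolation: only the control node ever writes the pristine image, so a previous test corrupting the host's disk cannot contaminate the next run.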

>>> Unless you think we should do our testing of Xen mainly with released
>>> versions of Linux stable branches (in which case, given how Linux
>>> stable branches are often broken, we might be long out of date), or
>>> our testing of Linux only with point releases of Xen, etc.
>>
>> Yes, that's what I think.
>>
>> The Xen (hypervisor, tools) tests should be done with released kernels
>> (either stable or the last one from upstream).
>>
>> Tests of Linux Xen support should be done with released Xen versions.
> 
> The result of this is that a feature which requires support in
> multiple trees could not be tested until at least one (and probably
> several) of the relevant trees had been released.  Which would be
> rather late to discover that it doesn't work.

For that purpose special tests could be set up, e.g. by using a Linux
kernel tree based on a released kernel with just the needed patches on
top.
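Constructing such a tree could look like this (hypothetical tag, branch, and patch names; demonstrated on a throwaway repository so the pattern is self-contained — in practice one would clone a stable kernel tree and `git am` the patches under review):

```shell
# Sketch: branch from a released tag and carry only the patches under test.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q kernel && cd kernel
git config user.email test@example.com && git config user.name test
echo base > file.c && git add file.c && git commit -qm "base"
git tag v4.17                       # stands in for a released kernel tag

# The pattern itself: start from the release, apply only the needed patches.
git checkout -qb xen-feature-test v4.17
echo "xen support" >> file.c
git commit -qam "xen: feature under test"   # stands in for 'git am 0001-*.patch'

# Everything between the tag and the branch tip is exactly the delta under test.
git log --oneline v4.17..xen-feature-test
```

A failure on such a branch then implicates either the patches themselves or the released base, rather than an arbitrary combination of unstable trees.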

>>> The current approach is mostly to take the most recent
>>> tested-and-working git commit from each of the inputs.  This aspect of
>>> osstest generally works well, I think.
>>
>> We have a bandwidth problem already. If one unstable input is failing,
>> all related tests fail too. I'd rather know which of the input sources
>> is most likely to blame for a test failure.
> 
> I think you are conflating "released" with "tested".  In the current
> osstest setup each test of xen-unstable#staging is done with *tested*
> versions of all the other trees.

And what about tests of the other trees? Do those only use
xen-unstable#master? If so, I'm fine.

> So barring heisenbugs, hardware problems, or whatever, blocking
> failures will be due to changes in xen-unstable#staging.
> (Host-specific failures might also slip through, but this is not very
> likely.)
> 
> The problem is that we have too many "heisenbugs, hardware problems,
> or whatever".

Yes.


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
