On Sunday, 1 November 2020 18:44:44 CET Greg Kurz wrote:
> On Sun, 1 Nov 2020 15:37:12 +0100
> Christian Schoenebeck <qemu_...@crudebyte.com> wrote:
> > Use mkdtemp() to generate a unique directory for the 9p 'local' tests.
> >
> > This fixes occasional 9p test failures when running 'make check -jN' if
> > QEMU was compiled for multiple target architectures, because the
> > individual architecture's test suites would run in parallel and
> > interfere with each other's data, as the test directory was previously
> > hard-coded and hence the same directory was used by all of them
> > simultaneously.
> >
> > This also requires a change to how the test directory is created and
> > deleted: as the test path is now randomized and
> > virtio_9p_register_nodes() is called in a somewhat nondeterministic
> > way, that is no longer an appropriate place to create and remove the
> > test directory. Use a constructor and a destructor function for
> > creating and removing the test directory instead. Unfortunately libqos
> > currently does not support setup/teardown callbacks to handle this
> > more cleanly.
> >
> > The constructor function needs to be in virtio-9p-test.c, not in
> > virtio-9p.c, because in the latter location it would cause all apps
> > that link to libqos (i.e. entirely unrelated test suites) to create a
> > 9pfs test directory as well, which would even break other test suites.
> >
> > Signed-off-by: Christian Schoenebeck <qemu_...@crudebyte.com>
> > ---
>
> Reviewed-by: Greg Kurz <gr...@kaod.org>
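For anyone following along, the pattern boils down to something like the
sketch below (illustrative names only, not the actual virtio-9p-test.c
code):

/* Minimal sketch of the approach described above (illustrative names,
 * not the actual virtio-9p-test.c code). */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static char test_dir[] = "/tmp/qtest-9p-local.XXXXXX";

/* Constructor: runs before main(), so each test binary gets its own
 * unique directory and parallel runs no longer collide. */
static void __attribute__((constructor)) create_test_dir(void)
{
    if (mkdtemp(test_dir) == NULL) {
        perror("mkdtemp");
        exit(EXIT_FAILURE);
    }
}

/* Destructor: runs after main() returns; rmdir() only succeeds on an
 * empty directory, so the tests must clean up any files they created. */
static void __attribute__((destructor)) remove_test_dir(void)
{
    rmdir(test_dir);
}

int main(void)
{
    printf("tests would use: %s\n", test_dir);
    return 0;
}

Note that mkdtemp() rewrites the trailing XXXXXX in place, which is why
the template has to live in a writable buffer rather than a string
literal.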
Thanks for the overtime, on a Sunday!

Queued on 9p.next:
https://github.com/cschoenebeck/qemu/commits/9p.next

And this one with Peter Xu's patches on top, just for testing:
https://github.com/cschoenebeck/qemu/commits/9p.experimental.2

> I could run 'make check -j' with 4 archs (ppc64, x86_64, aarch64, s390x)
> on a POWER9 system with 128 cpus, for ~1 hour without seeing any failure.
>
> Tested-by: Greg Kurz <gr...@kaod.org>

OO Sounds like there are advantages to working for IBM. Respect. I'm
starting to get envious, as these beasts are heading towards PCIe 6 while
we regular x86 users would already be glad to have PCIe 4.

I'll give it a few more hours of spinning this time, just to be sure,
before sending the PR tomorrow morning. But I think it's all right now.

Thanks!

Best regards,
Christian Schoenebeck