On Wed, 2023-08-23 at 12:47 +0300, Mikko Rapeli wrote:
> On Wed, Aug 23, 2023 at 10:06:41AM +0100, Richard Purdie wrote:
> > On Wed, 2023-08-23 at 10:31 +0300, Mikko Rapeli wrote:
> > > Hi,
> > >
> > > On Tue, Aug 22, 2023 at 11:25:58PM -0700, Khem Raj wrote:
> > > > will this work when running multiple instances of qemu ?
> > > > e.g. try bitbake core-image-ptest-all
> > >
> > > I was not aware of core-image-ptest-all. I tried to build it, but it
> > > doesn't seem to be compatible with IMAGE_FEATURES += "ssh-server-dropbear",
> > > which is needed to test core-image-minimal:
> > >
> > > Error:
> > >  Problem: package packagegroup-core-ssh-dropbear-1.0-r1.noarch from
> > >    oe-repo requires dropbear, but none of the providers can be installed
> > >   - package dropbear-2022.83-r0.core2_64 from oe-repo conflicts with
> > >     openssh provided by openssh-9.3p2-r0.core2_64 from oe-repo
> > >   - package openssh-9.3p2-r0.core2_64 from oe-repo conflicts with
> > >     dropbear provided by dropbear-2022.83-r0.core2_64 from oe-repo
> > >   - conflicting requests
> > > (try to add '--allowerasing' to command line to replace conflicting
> > > packages or '--skip-broken' to skip uninstallable packages)
> > >
> > > oeqa runtime testing of core-image-minimal without an ssh server doesn't
> > > make sense, as all tests will just be skipped.
> >
> > The autobuilder actually does that: the minimal image is just tested
> > with the small number of non-network tests. The main thing was to test
> > that it does actually boot to a login prompt. We have other tests which
> > cover the other areas with other images.
>
> Yes, granted it's enough to test that boot to serial console login works.
>
> > The reason for the above is that there will be ptest openssh images
> > which conflict with the dropbear ones. You can likely avoid that by
> > using:
> >
> > IMAGE_FEATURES:append:pn-core-image-minimal = " ssh-server-dropbear"
> >
> > The ptest images are designed to only include the ptest in question, so
> > in theory they are otherwise as minimal as the dependencies allow.
>
> Alright, this I could try. But I fear there is a lot more missing from my
> plain poky and default machine target to get the selftests and tests running.

There is no secret magic config the autobuilder uses. You keep asking me
for this and there isn't anything. It is actually starting to annoy me a
bit as there isn't anything "hidden". The configurations used all come
from this file:

https://git.yoctoproject.org/yocto-autobuilder-helper/tree/config.json

Yes, there is a block of high level config around numbers of threads,
disk space monitoring, pressure regulation values and so on, but we
purposefully keep the config as close to standard poky as we can.

When we run selftest we do a couple of things. Firstly, we split the
machine and toolchain targets into separate areas. We also split
reproducibility into its own target and test mirroring elsewhere too.
This results in a slightly more complex selftest invocation:

OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail/ DISPLAY=:1 oe-selftest -a --skip-tests distrodata.Distrodata.test_checkpkg buildoptions.SourceMirroring.test_yocto_source_mirror reproducible -T machine -T toolchain-user -T toolchain-system -j 15

The only test which I don't think we run anywhere any more is the
test_checkpkg target. You can see all this from the logs buildbot shows
from its UI on the autobuilder too.
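(For anyone wanting to reproduce a subset of this locally, here is a
trimmed, untested sketch of that invocation. It assumes a plain poky
checkout with the build environment sourced via oe-init-build-env; the
autobuilder-specific output path, the DISPLAY setting and -j 15 are
dropped, and -j 4 below is an arbitrary example value:

    # Run the machine-tagged selftests roughly as the autobuilder does,
    # skipping the same tests it skips. Parallel runs (-j) need the
    # python3 testtools and subunit modules installed on the host.
    oe-selftest -a \
        --skip-tests distrodata.Distrodata.test_checkpkg \
                     buildoptions.SourceMirroring.test_yocto_source_mirror \
                     reproducible \
        -T machine -j 4
)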
> This magic is somewhere in the autobuilder related git repositories, but
> from a plain poky checkout with a specific commit from master branch I
> don't know which versions and repos to use so that the tests would be
> passing.
>
> With these modifications in local.conf:
>
> IMAGE_CLASSES += "testimage"
> TEST_RUNQEMUPARAMS += "slirp"

We do not use slirp on the autobuilder. We never have, we're unlikely
ever to do so, and it is not something we officially support for this.
This is likely the biggest source of your problems. I appreciate that
creates some networking challenges for people in constrained
environments, but we did that primarily to allow for simplifications in
the rest of the setup.
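(To make the contrast concrete, a minimal local.conf sketch following
the autobuilder's approach rather than slirp, folding in the :pn- scoped
dropbear append suggested earlier. This is untested, and the default tap
networking needs a one-time host setup, e.g. via poky's
scripts/runqemu-gen-tapdevs:

    # local.conf sketch, plain poky assumed
    IMAGE_CLASSES += "testimage"
    # No TEST_RUNQEMUPARAMS += "slirp": runqemu then uses tap devices,
    # matching the autobuilder setup.
    # Scope the ssh server to core-image-minimal only, so the
    # openssh-based ptest images don't conflict with dropbear:
    IMAGE_FEATURES:append:pn-core-image-minimal = " ssh-server-dropbear"
)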
> IMAGE_FEATURES += "ssh-server-dropbear"

I've already explained that this one does likely cause problems. We
simply don't run many tests against minimal images.

> # update kernel to latest available in poky
> PREFERRED_VERSION_linux-yocto = ""

Not sure why this is needed?

> SANITY_TESTED_DISTROS = ""

This one we've discussed. It really should be fixed in a better way but
isn't anywhere near the top of the priority list.

> At least the runtime_test.TestImage tests are passing with slirp now.
>
> Without IMAGE_FEATURES += "ssh-server-dropbear", "bitbake
> core-image-ptest-all" now succeeds and "bitbake -c testimage
> core-image-ptest-all" is running the tests, seemingly in series.
> At least there are no multiple qemu instances running in parallel and
> no failures related to the slirp ssh port being reserved by a single
> qemu instance. But the tests are reporting only skips, so maybe the
> autobuilder scripts have some settings which I don't have correctly set:
>
> Cannot run ptests without @expectedFailure as ptests are expected to fail
> QMP released QEMU at 08/23/23 10:26:03 and took 0.13 seconds from connect
> Cannot run ptests without @expectedFailure as ptests are expected to fail
> QMP connected to QEMU at 08/23/23 10:26:04 and took 0.60 seconds
> QMP released QEMU at 08/23/23 10:26:04 and took 0.13 seconds from connect
> Cannot run ptests without @expectedFailure as ptests are expected to fail
> RESULTS:
> RESULTS - parselogs.ParseLogsTest.test_parselogs: PASSED (4.30s)
> RESULTS - ping.PingTest.test_ping: PASSED (0.04s)
> RESULTS - ptest.PtestRunnerTest.test_ptestrunner_expectfail: PASSED (1.55s)
> RESULTS - ssh.SSHTest.test_ssh: PASSED (1.01s)
> RESULTS - ptest.PtestRunnerTest.test_ptestrunner_expectsuccess: SKIPPED (0.00s)
> SUMMARY:
> core-image-ptest-libtry-tiny-perl () - Ran 5 tests in 7.208s
> core-image-ptest-libtry-tiny-perl - OK - All required tests passed
> (successes=3, skipped=1, failures=0, errors=0)
>
> The ptest execution seems to be skipped for all images.

I think Alex covers this. You can compare it with what is shown in the
autobuilder output. You can also compare your testresults.json file with
the autobuilder's results using "resulttool report".
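(As a sketch of that comparison; the path below assumes the default
testresults.json location under the build directory's tmp/log/oeqa, so
adjust it for a custom TMPDIR:

    # Summarise a local test run with resulttool (from poky's scripts/)
    resulttool report tmp/log/oeqa/testresults.json

The same report generated from the autobuilder's published results can
then be compared side by side with the local one.)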
Cheers,

Richard