Hi all,
we're currently in the process of setting up a shared sstate cache between our workers and have recently run into issues with `version-going-backwards` package_qa errors being raised by builds consuming the shared sstate.
In our setup, all builders contribute to the shared sstate cache, and each builder uses a local PRserv database that is updated whenever changes are merged to the mainline branch. Similarly, the buildhistory is updated whenever commits are merged to the mainline branch and is shared with the builders.
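For illustration, a minimal sketch of the relevant builder configuration might look roughly like the following (the sstate URL, the PRserv port and the buildhistory settings are placeholder values, not our actual setup):

    # local.conf fragment (illustrative values only)
    # Shared sstate cache that all builders read from and contribute to
    SSTATE_MIRRORS = "file://.* https://sstate.example.com/PATH;downloadfilename=PATH"
    # Local PRserv instance, refreshed whenever mainline is updated
    PRSERV_HOST = "localhost:8585"
    # Buildhistory, likewise synchronised from mainline
    INHERIT += "buildhistory"
    BUILDHISTORY_COMMIT = "1"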
We've been able to pinpoint the root cause to the following scenario. Consider a sequence of builds involving two commits (named #1 and #2), both of which affect some recipe R.
1. In the first build both commits are included. The PR allocated to the
packages of R is r0.1. The resulting sstate cache is uploaded to a
shared location so it can be used by subsequent builds.
2. Commit #1 is merged. The build is run again, this time without commit
#2. The PR allocated to the packages of R is r0.2. The resulting sstate
cache is uploaded to a shared location so it can be used by subsequent
builds.
3. The first build is repeated after merging in the latest branch content. Given that this build includes both #1 (from the branch) and #2, the task hashes are the same as in step 1, so the sstate entry from step 1 is reused and the packages of R come back with PR r0.1. The shared buildhistory, however, already records r0.2 from step 2, causing a version-going-backwards error (see the sketch below for one way to confirm this).
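One way to confirm the mismatch is to compare the PR recorded in the shared buildhistory against the PR carried by the pkgdata restored from sstate. A rough sketch, assuming a recipe literally named "R" (the exact paths depend on MACHINE and the build directory layout):

    # PR last recorded in the shared buildhistory (r0.2 after step 2)
    grep '^PR' buildhistory/packages/*/R/latest
    # PR carried by the pkgdata restored from the reused sstate object
    # (r0.1, i.e. the value allocated in step 1)
    grep 'PKGR' tmp/pkgdata/*/runtime/R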
Unfortunately, this scenario is fairly common in our setup (we typically bundle commits for validation and pick the ones which pass, so future validation bundles end up combining merged and unmerged commits), and we're now looking for ways to mitigate it.
So far we've come up with the following options to mitigate this issue:
1. Stop using PRserv: This isn't our preferred option, as we'd like to be able to judge from package names whether content has changed or might differ.
2. Stop using sstate for the packagedata task: We haven't evaluated this yet, but the general idea is that an up-to-date PR number is allocated from PRserv in `packagedata`. Given that we do not have a global PRserv at play, a new PR would be allocated in the third build and the version would no longer go backwards. This would not work if a global PRserv database were used, but that is not something we are planning to do (a rough sketch of this approach follows below).
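For completeness, a very rough and untested sketch of option 2, using BitBake's [nostamp] task flag to force do_packagedata to always rerun rather than be restored from sstate (the class name is made up, and we haven't evaluated the build time impact or other side effects):

    # no-packagedata-sstate.bbclass (hypothetical, untested)
    # Force do_packagedata to run in every build so that the PR is
    # (re-)allocated from the local PRserv instead of coming back
    # from a shared sstate object.
    do_packagedata[nostamp] = "1"

    # local.conf
    INHERIT += "no-packagedata-sstate"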
Is this issue generally known, and are there any other recommendations on how to deal with it?
Thanks,
Philip
--
Philip Lorenz
BMW Car IT GmbH, Software-Plattform, -Integration Connected Company,
Lise-Meitner-Straße 14, 89081 Ulm
-------------------------------------------------------------------------
BMW Car IT GmbH
Management: Chris Brandt and Michael Böttrich
Domicile and Court of Registry: München HRB 134810
-------------------------------------------------------------------------