On Fri, 2024-10-04 at 04:11 -0700, philip.dawson via Lists.Yoctoproject.Org 
wrote:
> > It sounds like you're on the right track but there is a piece of
> > information which I can highlight: do_fetch is not an sstate
> > accelerated task.
> > 
> > As such, I'd not expect a do_fetch task to come from sstate. What
> > happens is that a recipe's other sstate tasks (populate_sysroot,
> > package_write_XXX, package, package_qa, packagedata) come from
> > sstate
> > and if that happens, tasks like fetch, configure, compile and so on
> > don't need to happen.
> > 
> > I'd therefore look at the populate_sysroot task and compare the
> > sigs
> > there.
> The tasks with sstate artifacts all have different hashes from the
> cached versions. It's a little hard to tell, as I'm trying to grab
> logs off a CI system that cleans up build folders often, but I think
> this follows a task dependency chain back to do_fetch.
>  
> Given that the do_fetch in the stamps folder and the do_fetch in the
> cache have different task hashes, wouldn't that imply that all the
> dependent tasks have different task hashes and need to re-run as
> well? (through unpack/configure etc. to the package/deploy tasks with
> sstate artifacts, beyond just siginfo files).

Yes, if the hashes for do_fetch are different, that would be a problem.
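For what it's worth, the cascade being described can be sketched with a toy model of how task hashes chain: a task's hash covers its own basehash plus the hashes of the tasks it depends on, so a changed do_fetch hash invalidates everything downstream. This is a simplified illustration with made-up names, not bitbake's actual signature code:

```python
import hashlib

def task_hash(basehash, dep_hashes):
    """Toy model: a task's hash is a digest of its basehash plus the
    hashes of every task it depends on (sorted for determinism)."""
    h = hashlib.sha256(basehash.encode())
    for dep in sorted(dep_hashes):
        h.update(dep.encode())
    return h.hexdigest()

# do_fetch has no task dependencies, so its hash comes from its basehash alone.
fetch_a = task_hash("fetch-basehash-v1", [])
fetch_b = task_hash("fetch-basehash-v2", [])  # something perturbed do_fetch

# do_unpack depends on do_fetch, so the change cascades to it,
# and from there on through configure/compile/package and so on.
unpack_a = task_hash("unpack-basehash", [fetch_a])
unpack_b = task_hash("unpack-basehash", [fetch_b])

# fetch_a != fetch_b, and therefore unpack_a != unpack_b as well,
# even though do_unpack's own basehash is unchanged.
```

In this model, as in bitbake, a dependent task only matches sstate when its own basehash and every upstream task hash match.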

> > Also, FWIW do_fetch can have task dependencies although you may be
> > right that yours does not.
> Yes, in this case the siginfo shows it doesn't, but I will bear it
> in mind.
> Is there something else that could be affecting the do_fetch task
> hash given that the sig info reports "Tasks this task depends on: []"
> and the base hash is unchanged?

Hash Equivalence is the only thing which comes to mind, and that
shouldn't affect non-sstate tasks without dependencies.

I don't understand how you have two siginfo files with different sigs
and the same data.

Now that I think about it, there was a bug fixed recently:

https://git.yoctoproject.org/poky/commit/bitbake?id=c6b883106bc414312e58fe8c682b3ccc1257f114

It could be that. Do you have that fix in your build?

That was triggered by differing paths to the layers on disk which could
differ between your machine and CI.

Cheers,

Richard
