On 12.02.2025 at 16:07, Bruce Ashfield wrote:
On Wed, Feb 12, 2025 at 9:36 AM Stefan Herbrechtsmeier via
lists.openembedded.org
<stefan.herbrechtsmeier-oss=weidmueller....@lists.openembedded.org> wrote:
On 11.02.2025 at 22:46, Richard Purdie wrote:
On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier via
lists.openembedded.org wrote:
From: Stefan Herbrechtsmeier <stefan.herbrechtsme...@weidmueller.com>
Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsme...@weidmueller.com>
---
.../python/python3-bcrypt-crates.inc | 84 -------------------
.../python/python3-bcrypt_4.2.1.bb | 4 +-
2 files changed, 1 insertion(+), 87 deletions(-)
delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
So let me ask the silly question. This removes the crates.inc file and
doesn't appear to add any kind of new list of locked down modules.
The list is generated on the fly like gitsm and doesn't require an
extra step.
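For context, the removed python3-bcrypt-crates.inc contained a static
list of crate:// SRC_URI entries with pinned checksums, roughly of this
shape (the crate names, versions and checksums here are illustrative,
not the actual file contents):

    SRC_URI += " \
        crate://crates.io/autocfg/1.1.0 \
        crate://crates.io/bitflags/2.4.0 \
    "
    SRC_URI[autocfg-1.1.0.sha256sum] = "<pinned sha256>"
    SRC_URI[bitflags-2.4.0.sha256sum] = "<pinned sha256>"

With this series, an equivalent list is derived from Cargo.lock at
fetch time instead of being committed to the recipe.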
This means that inspection tools just using the metadata can't see
"into" this recipe any longer for component information.
We support and use Python code inside the variables and thereby
need to preprocess the metadata in any case.
What do you mean by "component information"?
This was
something that some people felt strongly was a necessary part of
recipe metadata, for license, security and other manifest activities.
Why can't they use the SBOM for this?
Are we basically saying that information is now only available after
the build takes place?
It is only available after a special task run.
I'm very worried that the previous discussions didn't reach a
conclusion and this is moving the "magic" out of bitbake and into some
vendor classes without addressing the concerns previously raised about
transparency into the manifests of what is going on behind the scenes.
I am trying to address the concerns, but I didn't realize that the
missing information in the recipe is a blocker.
This version gives the user the possibility to influence the
dependencies via patches or an alternative lock file. It creates a
vendor folder for easy patching and debugging. It integrates the
dependencies into the SBOM for security tracking.
I skipped the license topic for now because the package managers
don't handle license integrity. We have to keep that information in
the recipe, but hopefully the license information doesn't change
with each update.
I don't understand the requirement for plain inspection. In my
opinion, external tools should always use a defined output and
shouldn't depend on project-internal details. I have adapted the
existing users of SRC_URI to include the dynamic SRC_URIs.
I appreciate some of the requirements are conflicting.
For the record, in some recent meetings I was promised that help would
be forthcoming in guiding this discussion. I therefore left
things alone in the hope that would happen. It simply hasn't, probably
due to time/work issues, which I can sympathise with but it does mean
I'm left doing a bad job of trying to respond to your patches whilst
trying to do too many other things badly too. That leaves us both very
frustrated.
I really want to see you succeed in reworking this and I appreciate the
time and effort put into the patches. To make this successful, I know
there are key stakeholders who need to buy into it and right now,
they're more likely just to keep doing their own things as it is easier
since this isn't going the direction they want. A key piece of making
this successful is negotiating something which can work for a
significant portion of them. I'm spelling all this out since I do at
least want to make the situation clear.
Yes, I'm very upset the OE community is putting me in this position
despite me repeatedly asking for help and that isn't your fault, which
just frustrates me more.
My problem is the double standard. We have supported a fetcher which
dynamically resolves dependencies, without a manual update step, for
years. Nobody suggests making the gitsm fetcher obsolete and
requiring users to run an update task after a SRC_URI change to
create a .inc file with the SRC_URIs of all the recursive
submodules. Nobody complains about the missing components in the
recipe.
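To illustrate the comparison: a gitsm recipe pins only the top-level
revision, and the fetcher resolves the submodules recorded in that tree
on the fly (repository name and revision below are made up):

    SRC_URI = "gitsm://github.com/example/project;protocol=https;branch=main"
    SRCREV = "0123456789abcdef0123456789abcdef01234567"

None of the submodule URLs or revisions appear in the recipe metadata.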
There's no double standard; I'd simply say that design decisions of
the past don't mean that there aren't better ways to do something new.
Richard went out of his way to explain the status and what sort of
review needs to happen. I'll add that while getting frustrated with it
is natural, pushing back on people doing reviews isn't going to help
get things merged; it will do the opposite.
There have been plenty of complaints and issues with the gitsm
fetcher, but the reality is that if someone wants to get at the base
components of what it is doing, they can do so. I've had to take
several of my maintained recipes out of gitsm and back to the base git
fetches. The submodules were simply fetching code that didn't build
and there was no way to fetch it. The gitsm fetcher is also
relatively lightly used, much less complicated, and doesn't need much
extra infrastructure to support it.
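For illustration, unrolling such a recipe into its components looks
roughly like this (names, paths and revisions are made up):

    SRC_URI = " \
        git://github.com/example/project;protocol=https;branch=main;name=project \
        git://github.com/example/libfoo;protocol=https;branch=main;name=libfoo;destsuffix=git/third_party/libfoo \
    "
    SRCREV_project = "0123456789abcdef0123456789abcdef01234567"
    SRCREV_libfoo = "fedcba9876543210fedcba9876543210fedcba98"
    SRCREV_FORMAT = "project_libfoo"

Every component is then visible in the metadata, and each SRCREV can be
overridden individually.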
Thanks for your insights. There are two main solutions to the problem:
add patch support to gitsm, so that you could use the git submodule
command and create a patch, or generate a .inc file and manipulate the
SRCREVs. I assume you would prefer a .inc file. What do you think is
the downside of a patch?
Either we keep hard requirements and introduce git submodule
support which satisfies those requirements, or we accept the advantages
of a simple user interface and minimize the disadvantages.
Unfortunately in my experience the simple interfaces hiding complexity
don't help when things go wrong. That's how I ended up where I am with
my go recipes, and why I ended up tearing my gitsm recipe back into
its components. There was no way to influence / fix the build
otherwise, and they didn't support bleeding edge development very well.
Do you have a good example of a problematic go recipe to test my approach against?
I'm definitely one of the people Richard is mentioning as a
stakeholder, and one that could likely just ignore all of this .. but
I'm attempting to wade into it again.
I am very grateful for that.
None of us have the hands on, daily experience with the components at
play as you do right now, so patience on your part will be needed as
we ask many not-so-intelligent questions.
That's no problem.
It doesn't matter if we run the resolve function inside a resolve,
fetch or update task. The question is: do we want to support
dynamic SRC_URIs, or do we want a manual update task? The task
needs to be run manually after a SRC_URI change and can produce a
lot of noise in the update commit. In any case, manual editing
of the SRC_URI isn't practical, and users will use the package
manager to update dependencies and their recursive dependencies.
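For example, with cargo the recursive update is a single command
against the lock file (crate name and version are illustrative):

    # Update everything Cargo.lock pins:
    cargo update
    # Or pin one dependency to a specific fixed version:
    cargo update -p example-crate --precise 1.2.4

Editing the equivalent SRC_URI entries by hand is not realistic.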
I don't understand the series well enough yet to say "why can't we do
both?". If there were a way to abstract / componentize what is
generating those dynamic SRC_URIs such that an external tool
or update task could generate them, and if, when they were already in
place, the dynamic generation wouldn't run at build time, that should
keep both modes working.
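A minimal sketch of that idea, assuming a hypothetical SRC_URI_CRATES
variable that the generated .inc would set (none of these names come
from the series):

    # "include" silently skips a missing file, unlike "require", so the
    # pre-generated dependency list can be optional:
    include ${BPN}-crates.inc

    python __anonymous() {
        # Hypothetical guard: fall back to dynamic resolution only when
        # the static list was not provided by the include above.
        if not d.getVar("SRC_URI_CRATES"):
            d.setVar("RESOLVE_DEPENDENCIES_DYNAMICALLY", "1")
    }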
If it is desired I can add both variants.
I admit to not understanding why we'd be overly concerned about noise
in the commits (for the dependencies) if they are split into separate
files in the recipe. More information is always better when I'm
dealing with the updates. I just scroll past it if I'm not interested
and filter it if I am.
The problem is identifying the relevant parts. Let's say you update a
dependency because of a security issue. Afterwards you update the
project, with a lot of dependency changes. You have to review all the
noise to determine whether your updated dependency went backward in
its version. It is much easier to use a patch: after the project
update, the patch will either fail or apply. If it fails, you have a
direct focus on the affected dependency. If you backported the patch
from the project, you can simply drop it with the next update.
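For instance, a backported security fix pinning a single crate could be
carried as a small Cargo.lock patch (crate name, versions and checksums
are made up):

    --- a/Cargo.lock
    +++ b/Cargo.lock
    @@ -10,5 +10,5 @@
     [[package]]
     name = "example-crate"
    -version = "1.2.3"
    +version = "1.2.4"
     source = "registry+https://github.com/rust-lang/crates.io-index"
    -checksum = "<old sha256>"
    +checksum = "<new sha256>"

If the next project update already contains the fix, the patch stops
applying and can simply be dropped.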
I feel the pain (and your pain) of this after supporting complicated
go/mixed language recipes through multiple major releases (and through
go's changing dependency model + bleeding edge code, etc) and needing
to track what has changed, so I definitely encourage you to keep
working on this.
As a compromise, we could add a new feature to generate .inc cache
files before the main bitbake run. This would eliminate the manual
update run and the commit noise, as well as the special fetch, unpack
and patch tasks.
Can you elaborate on what you mean by "before the main bitbake run"?
Would it still be under a single bitbake invocation, or would it be
multiple runs? (I support multiple runs, so don't take that as a
leading question.)
I can't answer this question and need Richard's guidance to implement
such a feature. I would assume that bitbake already tracks file changes
and can update its state. The behavior should be similar to a change in
a .inc file: bitbake would detect that an "include_cache" file is
missing and run an update_cache task on the recipe. Afterwards, bitbake
would detect a file change on the "include_cache" file and parse it. We
need a way to mark patches which shouldn't be applied while the
"include_cache" file is missing, because the dependencies are missing.
We need to run the fetch, unpack and patch tasks before the
update_cache task to generate the .inc file.
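To make the proposal concrete, the recipe side could look something
like this (the include_cache directive, task name and patch marker are
hypothetical; nothing like this exists in bitbake today):

    # Hypothetical: parsed like "include" when the file exists; when it
    # is missing, bitbake would schedule do_fetch -> do_unpack ->
    # do_patch -> do_update_cache to generate it, then re-parse.
    include_cache ${BPN}-deps.inc

    # Hypothetical marker for patches that touch the resolved
    # dependencies and therefore cannot apply before the cache exists:
    SRC_URI += "file://fix-dependency.patch;apply-after-cache=1"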