Hi Stefan,
On 4/28/25 11:14 AM, Stefan Hajnoczi wrote:
On Thu, Apr 24, 2025 at 2:35 PM Pierrick Bouvier
<pierrick.bouv...@linaro.org> wrote:
Feedback
========
The goal of this series is to spark a conversation around the following topics:
- Would you be open to such an approach? (expose all the code, and restrict
which commands get registered at runtime to the ones relevant for a given
target; see the sketch after this list)
- Are there unexpected consequences for libvirt or other consumers if we
expose more definitions than we do now?
- Would you recommend another approach instead? I experimented with having
per-target generated files, but we still need to expose quite a lot in
headers, so my opinion is that it's much more complicated for zero benefit.
Also, the code size impact is negligible, so the simpler, the better.
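To make the first question concrete, here is a minimal sketch in plain C of
what "register only at runtime" could look like. It is purely illustrative:
the command names, the target mask and register_commands() are invented for
this example and are not QEMU or QAPI-generator code.

/* Purely illustrative sketch (invented names, not QEMU code): every
 * handler is compiled into the binary, but only the commands matching the
 * target selected at runtime get registered. */
#include <stddef.h>
#include <stdio.h>

typedef void (*cmd_handler)(void);

static void do_query_gic_capabilities(void) { puts("Arm-only command"); }
static void do_query_sev(void)              { puts("x86-only command"); }
static void do_query_status(void)           { puts("common command"); }

/* Which targets a command applies to (illustrative flags, not the schema). */
enum target_mask {
    TARGET_ARM = 1 << 0,
    TARGET_X86 = 1 << 1,
    TARGET_ANY = TARGET_ARM | TARGET_X86,
};

struct cmd {
    const char *name;
    cmd_handler fn;
    unsigned targets;
};

/* The whole table exists in every build: no per-target #ifdef is needed. */
static const struct cmd all_cmds[] = {
    { "query-gic-capabilities", do_query_gic_capabilities, TARGET_ARM },
    { "query-sev",              do_query_sev,              TARGET_X86 },
    { "query-status",           do_query_status,           TARGET_ANY },
};

/* Register only the commands relevant to the target chosen at runtime. */
static void register_commands(unsigned current_target)
{
    for (size_t i = 0; i < sizeof(all_cmds) / sizeof(all_cmds[0]); i++) {
        if (all_cmds[i].targets & current_target) {
            printf("registered: %s\n", all_cmds[i].name);
        }
    }
}

int main(void)
{
    register_commands(TARGET_ARM); /* e.g. the binary acts as aarch64 */
    return 0;
}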
Do you anticipate that Linux distributions will change how they
package QEMU? For example, should they ship a single qemu-all package
in addition to or as a replacement for the typical model today where
qemu-system-aarch64, qemu-system-x86_64, etc. are shipped as separate
packages?
Different distributions will have different opinions.
If we decide one day (which is *not* in the short-term future) to replace
the existing binaries with a single one, that discussion will probably
happen then.
My personal "anticipation" is that if we unify all targets in a single
binary (which is not happening tomorrow), distributions can always
create a qemu-system-common package, and depend on it for all targets.
Thus, every qemu-system-X will simply include the expected symlink (or
wrapper script, or whatever) to the single binary.
Or they can recompile the single binary for every subpackage they want
in case they want to absolutely reduce the code size for a single
target, even though the sum of binaries will be infinitely bigger than
using the single one.
In any case, it's not something that will happen soon, unless everyone in
the community becomes convinced of the advantage of building QEMU as a
single binary instead of per-target binaries.
Even if this never converges, there are still benefits from what is being
done right now:
- Faster multi-target builds: fewer compilation units means less time.
- Smaller multi-target build footprint: this seems relevant, as disk space
on GitLab CI is a recurrent complaint.
- Clearer code: I hope C developers are objectively (i.e. not out of
personal preference) convinced that less ifdef soup is better; a small
before/after sketch follows this list.
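To illustrate that last point (invented identifiers and values, not code
from the QEMU tree), this is the shape of the change: today the behaviour
is picked by the preprocessor, so the file has to be compiled once per
target; compiled once, the same decision moves to runtime.

/* Illustrative only: the identifiers and values below are made up. */
#include <stdio.h>
#include <string.h>

/* Today's pattern: the same source is built once per target, and the
 * behaviour is chosen with the preprocessor ("ifdef soup"), so each
 * object file is target-specific. */
#if defined(TARGET_AARCH64)
static const char *default_cpu_compile_time = "cortex-a57";
#elif defined(TARGET_X86_64)
static const char *default_cpu_compile_time = "qemu64";
#else
static const char *default_cpu_compile_time = "unknown";
#endif

/* Single-binary pattern: built exactly once, linked for every target,
 * with the same decision taken at runtime instead. */
static const char *default_cpu(const char *target)
{
    if (strcmp(target, "aarch64") == 0) {
        return "cortex-a57";
    }
    if (strcmp(target, "x86_64") == 0) {
        return "qemu64";
    }
    return "unknown";
}

int main(void)
{
    printf("compile-time choice: %s\n", default_cpu_compile_time);
    printf("runtime choice:      %s\n", default_cpu("aarch64"));
    return 0;
}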
It would be nice to hear from packager maintainers in this discussion
so that there is a consensus between developers and package
maintainers.
Sure.
Maybe there is a misunderstanding, but at this point, we are not trying
to invent anything new. We are just looking for a way to build the
QAPI-generated code only once, so that it's possible to link together
object files coming from two different targets.
My mistake was not to mention introspection in the cover letter, but
thanks to Markus and Daniel, I understood the consequences of that, and
my position is to keep the current schema and serialization methods
*exactly* as they are, so consumers don't see any change. The only places
where we need to make changes are scripts/qapi and qapi/.
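As a toy illustration of why building the generated code once matters for
linking (this is not the real generator output; visit_type_Foo() is just a
stand-in name):

/* qapi_example.c -- toy stand-in for a generated file, not real output. */
#include <stdio.h>

void visit_type_Foo(void)
{
#if defined(TARGET_AARCH64)
    puts("aarch64 flavour of the same public symbol");
#else
    puts("generic flavour of the same public symbol");
#endif
}

/*
 * Built once per target today:
 *   cc -c -DTARGET_AARCH64 qapi_example.c -o qapi_example.aarch64.o
 *   cc -c                  qapi_example.c -o qapi_example.x86_64.o
 * Linking both objects into one binary fails because visit_type_Foo() is
 * defined twice. Generating and compiling the file a single time, without
 * target conditionals, gives one object that every target can share, while
 * the schema and wire format that consumers see stay exactly the same.
 */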
Stefan
Regards,
Pierrick