03.01.2025 15:04, Michael Tokarev wrote:

Things turned out to be more interesting than expected.
More findings are at https://gitlab.com/qemu-project/qemu/-/issues/1129 .

Thanks to ardb, we found the actual root cause of this issue.  It was the
way qemu code generation works in 7.2, and some arm64-specific constructs
generated by the compiler.  And we found a real solution, which lets me
re-enable the capstone disassembler on arm64 for qemu.

The root cause was that qemu used (large) tables of pointers to static inline
functions.  The compiler had to generate code to keep these functions visible
for possible use from a (different) shared object.  On AArch64, that
indirection goes through the GOT, which has a size limit.  We're already very
close to this limit, and adding capstone (which, btw, uses a similar construct
too!) makes it overflow.
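
To illustrate the mechanism (this is just a minimal sketch I wrote for this
mail, not qemu's actual code): in position-independent code, forming the
address of a default-visibility function goes through a GOT slot, while a
hidden symbol can be addressed directly, pc-relative:

  #include <stdio.h>

  /* Default visibility: in a -fPIC build the compiler must assume this
   * definition may be overridden from another shared object, so forming
   * its address in code goes through a GOT slot (adrp + ldr on aarch64). */
  int visible_handler(int x) { return x + 1; }

  /* Hidden visibility: the definition is known to be local, so its address
   * is formed with plain pc-relative adrp + add, no GOT slot needed. */
  __attribute__((visibility("hidden")))
  int hidden_handler(int x) { return x - 1; }

  typedef int (*handler_fn)(int);

  /* Tiny stand-in for qemu's big tables of function pointers; with
   * hundreds of entries, the extra GOT slots add up quickly. */
  static handler_fn handlers[2];

  static void init_handlers(void)
  {
      handlers[0] = visible_handler;   /* GOT-indirect address */
      handlers[1] = hidden_handler;    /* direct pc-relative address */
  }

  int main(void)
  {
      init_handlers();
      printf("%d\n", handlers[1](handlers[0](10)));
      return 0;
  }

Comparing the relocations of the two variants (e.g. readelf -r on an object
built with -fPIC) shows the difference nicely.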

By telling the compiler there's no need to make these symbols visible (which
is obviously the case - after all, we're building a single executable!), we
can significantly reduce the GOT (so it isn't over the limit anymore), *and*
make the whole thing a bit faster too, by letting the compiler use more
efficient function calls.  The latter is true for all platforms, not
just aarch64, and not just the static build but all of qemu.
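
For reference, there are a few equivalent ways to tell gcc/clang about this:
a per-symbol attribute, a pragma around a group of declarations, or the
-fvisibility=hidden flag for a whole translation unit.  Something along these
lines (again just a sketch, not the exact form used in the patch):

  /* mark everything declared in this region as not exported: */
  #pragma GCC visibility push(hidden)

  int helper_one(int x);
  int helper_two(int x);

  #pragma GCC visibility pop

  /* or per symbol: */
  __attribute__((visibility("hidden"))) int helper_three(int x);

With the symbols hidden, calls and address references bind at link time,
so there's no PLT/GOT indirection left to pay for.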

Apparently using static inline here in qemu was not exactly the best
idea, and later code changed this part anyway (for other reasons).

I've a simple 2-line patch (one line removing the --disable-capstone configure
option and another adding visibility(hidden)), which fixes all this for good,
without workarounds.

It is building now; once I'm satisfied with the results I'll make Yet Another
upload.  I'm sorry for all the noise qemu is making, but fixing this
static-pie thing is really important, especially with the new kernel (with
additional KASLR randomization) already uploaded to bookworm.

I wasn't aware the proposed qemu had build issues on debian bookworm - if
I had been, I'd have tried to address this sooner, without all this rush.

Thanks,

/mjt
