On Fri, Jul 28, 2023 at 12:01:29PM +0200, Thomas Huth wrote:
> On 28/07/2023 11.50, Thomas Huth wrote:
> > On 28/07/2023 11.32, Marc-André Lureau wrote:
> > > Hi
> > >
> > > On Fri, Jul 28, 2023 at 12:59 PM Daniel P. Berrangé
> > > <berra...@redhat.com> wrote:
> > > >
> > > > On Fri, Jul 28, 2023 at 10:35:35AM +0200, Thomas Huth wrote:
> > > > > On 27/07/2023 12.39, Daniel P. Berrangé wrote:
> > > > > > On Wed, Jul 26, 2023 at 08:21:33PM +0200, Thomas Huth wrote:
> > > > > > > On 26/07/2023 18.19, Daniel P. Berrangé wrote:
> > > > > ...
> > > > > > > Anyway, before we unify the compiler package name suffix
> > > > > > > between the two jobs, I really would like to see whether the
> > > > > > > mingw Clang builds QEMU faster in the 64-bit job ... but so
> > > > > > > far I failed to convince meson to accept the Clang from the
> > > > > > > mingw package ... does anybody know how to use Clang with
> > > > > > > MSYS2 properly?
> > > > > >
> > > > > > AFAIK it shouldn't be anything worse than
> > > > > >
> > > > > >   CC=clang ./configure ....
> > > > > >
> > > > > > if that doesn't work then it's a bug IMHO
> > > > >
> > > > > No, it's not that easy ... As Marc-André explained to me, MSYS2
> > > > > maintains a completely separate environment for Clang, i.e. you
> > > > > have to select this different environment with
> > > > > $env:MSYSTEM = 'CLANG64' and then install the packages that have
> > > > > the "mingw-w64-clang-x86_64-" prefix.
> > > > >
> > > > > After lots of trial and error, I was able to get a test build
> > > > > here:
> > > > >
> > > > >   https://gitlab.com/thuth/qemu/-/jobs/4758605925
> > > > >
> > > > > I had to disable Spice and use --disable-werror in that build to
> > > > > make it succeed, but at least it shows that Clang seems to be a
> > > > > little bit faster - the job finished in 58 minutes. So if we can
> > > > > get the warnings fixed, this might be a solution for the
> > > > > timeouts here...
> > > >
> > > > Those packing warnings look pretty serious
> > > >
> > > >   C:/GitLab-Runner/builds/thuth/qemu/include/block/nvme.h:1781:16:
> > > >   warning: unknown attribute 'gcc_struct' ignored
> > > >   [-Wunknown-attributes]
> > > >
> > > > This means Clang is using the MSVC struct packing ABI for
> > > > bitfields, which is different from the GCC struct packing ABI. If
> > > > any of those structs use bitfields and are exposed as guest
> > > > hardware ABI, or in migration vmstate, then this is potentially
> > > > broken compilation.
> > > >
> > >
> > > Yes ... gcc >= 4.7 and clang >= 12 have -mms-bitfields enabled by
> > > default, but we can't undo that MS struct packing on clang
> > > apparently:
> > > https://discourse.llvm.org/t/how-to-undo-the-effect-of-mms-bitfields/72271
> >
> > I wonder whether we really still need the gcc_struct in QEMU...
> > As far as I understand, this was mainly required for bitfields in
> > packed structs in the past
>
> Ok, never mind, according to this post:
>
>   https://lists.gnu.org/archive/html/qemu-devel/2011-08/msg00964.html
>
> this affects all structs, not only the ones with bitfields.
>
> And it seems like we also still have packed structs with bitfields in
> the code base, see e.g. "struct ip" in net/util.h, so using Clang on
> Windows likely currently can't work?
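For context on where that warning comes from: the gcc_struct attribute
reaches those headers through QEMU's QEMU_PACKED macro. A rough sketch of
the pattern - reconstructed from memory rather than quoted from
include/qemu/compiler.h, so treat the exact conditions as illustrative:

  #include <stdint.h>

  /* On Windows hosts, request the GCC struct-layout rules explicitly;
   * Clang does not implement gcc_struct on Windows targets and drops it
   * with the -Wunknown-attributes warning quoted above. */
  #if defined(_WIN32)
  # define QEMU_PACKED __attribute__((gcc_struct, packed))
  #else
  # define QEMU_PACKED __attribute__((packed))
  #endif

  /* The worrying pattern: bitfields inside a packed struct that forms
   * a guest-visible or migration-visible layout. */
  struct example_hdr {
      uint8_t  version:4;
      uint8_t  hdr_len:4;
      uint16_t total_len;
  } QEMU_PACKED;

When the attribute is ignored, such structs silently get the MS layout
instead of the GCC one.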
Just because it has bitfields doesn't mean it will definitely be
different.

I'm not sure whether it is an entirely accurate comparison, but I
modified the native Linux build to use 'gcc_struct' and again to use
'ms_struct', then fed all the .o files to 'pahole' and compared the
output. There was only a single difference:

  union VTD_IR_TableEntry {
      struct {
          uint32_t present:1;           /*  0: 0  4 */
          uint32_t fault_disable:1;     /*  0: 1  4 */
          uint32_t dest_mode:1;         /*  0: 2  4 */
          uint32_t redir_hint:1;        /*  0: 3  4 */
          uint32_t trigger_mode:1;      /*  0: 4  4 */
          uint32_t delivery_mode:3;     /*  0: 5  4 */
          uint32_t __avail:4;           /*  0: 8  4 */
          uint32_t __reserved_0:3;      /*  0:12  4 */
          uint32_t irte_mode:1;         /*  0:15  4 */
          uint32_t vector:8;            /*  0:16  4 */
          uint32_t __reserved_1:8;      /*  0:24  4 */
          uint32_t dest_id;             /*  4     4 */
          uint16_t source_id;           /*  8     2 */

          /* Bitfield combined with previous fields */

          uint64_t sid_q:2;             /*  8:16  8 */
          uint64_t sid_vtype:2;         /*  8:18  8 */
          uint64_t __reserved_2:44;     /*  8:20  8 */
-     } irte;                           /*  0    18 */
+     } irte;                           /*  0    16 */
      uint64_t data[2];                 /*  0    16 */
  };

from the intel_iommu.c file.

IOW, ms_struct added 2 bytes of padding after the uint16_t source_id
field despite the 'packed' attribute, but gcc_struct collapsed the
uint16_t into the uint64_t bitfield unit, since only 48 of its bits were
consumed.

IIUC, this could be made portable by changing

  uint16_t source_id;           /*  8     2 */

to

  uint64_t source_id:16;        /*  8     2 */

NB, this was a --target-list=x86_64-softmmu build only, so it hasn't
covered the whole codebase. Still, it shows the gcc_struct annotation
might not be as critical as we imagined.

NB, a limitation of the pahole analysis is that it only reports structs
that are actually declared as variables somewhere - either
stack-allocated or heap-allocated is fine, as long as there's a
declaration of usage somewhere.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
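P.S. To make the pahole result reproducible outside a full QEMU build,
here is a small standalone test case - my own reduction, with
illustrative names, not code from the QEMU tree - showing the same
divergence with GCC on Linux, which supports both layout modes via
-mms-bitfields / -mno-ms-bitfields:

  /* layout_demo.c: a packed struct where a plain uint16_t immediately
   * precedes a uint64_t bitfield group, as in VTD_IR_TableEntry.
   *
   * Build and run twice:
   *   gcc -mno-ms-bitfields layout_demo.c && ./a.out   # GCC layout rules
   *   gcc -mms-bitfields    layout_demo.c && ./a.out   # MS layout rules
   */
  #include <stdint.h>
  #include <stdio.h>

  /* Mirrors the problematic tail of the irte struct */
  struct irte_like {
      uint32_t present:1;
      uint32_t vector:8;
      uint32_t reserved0:23;
      uint32_t dest_id;
      uint16_t source_id;        /* plain integer before the bitfields */
      uint64_t sid_q:2;
      uint64_t sid_vtype:2;
      uint64_t reserved1:44;
  } __attribute__((packed));

  /* The proposed portable variant: fold source_id into the bitfields */
  struct irte_portable {
      uint32_t present:1;
      uint32_t vector:8;
      uint32_t reserved0:23;
      uint32_t dest_id;
      uint64_t source_id:16;
      uint64_t sid_q:2;
      uint64_t sid_vtype:2;
      uint64_t reserved1:44;
  } __attribute__((packed));

  int main(void)
  {
      /* Expected: irte_like is 16 bytes with -mno-ms-bitfields but 18
       * with -mms-bitfields, because the MS rules start a fresh 8-byte
       * unit for the uint64_t bitfields after source_id instead of
       * sharing its storage; irte_portable is 16 bytes either way. */
      printf("irte_like:     %zu bytes\n", sizeof(struct irte_like));
      printf("irte_portable: %zu bytes\n", sizeof(struct irte_portable));
      return 0;
  }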