On Wed, 13 Apr 2022 at 17:09, Paolo Bonzini <pbonz...@redhat.com> wrote:
>
> The i386 target consolidates all vector registers so that instead of
> XMMReg, YMMReg and ZMMReg structs there is a single ZMMReg that can
> fit all of SSE, AVX and AVX512.
>
> When TCG copies data from and to the SSE registers, it uses the
> full 64-byte width. This is not a correctness issue because TCG
> never lets guest code see beyond the first 128 bits of the ZMM
> registers, however it causes uninitialized stack memory to
> make it to the CPU's migration stream.
>
> Fix it by only copying the low 16 bytes of the ZMMReg union into
> the destination register.
>
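As a side note for anyone following along, here is a rough stand-alone
sketch of the failure mode described above. The union layout and helper
below are simplified stand-ins rather than QEMU's actual definitions, and
the sketch assumes a little-endian host (the endianness wrinkle is what
the patch's comment deals with):

#include <stdint.h>
#include <string.h>

/* Simplified 64-byte ZMM-sized register; an SSE operation only ever
 * produces (and the guest only ever observes) the low 16 bytes. */
typedef union ZMMReg {
    uint8_t  _b_ZMMReg[64];
    uint64_t _q_ZMMReg[8];
} ZMMReg;

static void sse_helper(ZMMReg *dst, const ZMMReg *src)
{
    ZMMReg tmp;                              /* stack temporary */

    /* The operation only writes the low 16 bytes of tmp... */
    memcpy(tmp._b_ZMMReg, src->_b_ZMMReg, 16);

    /* ...so a full-width copy back,
     *     *dst = tmp;
     * would drag 48 uninitialized stack bytes into the register file
     * and from there into the migration stream.  Copying only the
     * 16 bytes the operation actually produced avoids that: */
    memcpy(dst->_b_ZMMReg, tmp._b_ZMMReg, 16);
}
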
> +/*
> + * Copy the relevant parts of a Reg value around. In the case where
> + * sizeof(Reg) > SIZE, these helpers operate only on the lower bytes of
> + * a 64 byte ZMMReg, so we must copy only those and keep the top bytes
> + * untouched in the guest-visible destination register.
> + * Note that the "lower bytes" are placed last in memory on big-endian
> + * hosts, which store the vector backwards in memory. In that case the
> + * copy *starts* at B(SIZE - 1) and ends at B(0), the opposite of
> + * the little-endian case.
> + */
> +#ifdef HOST_WORDS_BIGENDIAN
> +#define MOVE(d, r) memcpy(&((d).B(SIZE - 1)), &(d).B(SIZE - 1), SIZE)

This still has the typo where it's copying d to d, not r to d.

> +#else
> +#define MOVE(d, r) memcpy(&(d).B(0), &(r).B(0), SIZE)
> +#endif

Otherwise
Reviewed-by: Peter Maydell <peter.mayd...@linaro.org>

thanks
-- PMM
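P.S. In case it helps anyone reading the archives, a self-contained
sketch of how I read the byte-index layout on the two host endiannesses.
The Reg struct and the B() macro below are simplified stand-ins, not
QEMU's actual definitions, and the big-endian MOVE line already includes
the r-to-d fix noted above:

#include <stdint.h>
#include <string.h>

#define SIZE 16                  /* bytes touched by an SSE operation */

typedef struct Reg {
    uint8_t _b[64];              /* full 64-byte ZMM-sized storage */
} Reg;

#ifdef HOST_WORDS_BIGENDIAN
/* The architecturally lowest byte is stored last, so B(0) is _b[63]
 * and the 16 low bytes occupy _b[48].._b[63]; the copy therefore has
 * to start at the address of B(SIZE - 1), i.e. _b[48]. */
#define B(n) _b[63 - (n)]
#define MOVE(d, r) memcpy(&((d).B(SIZE - 1)), &(r).B(SIZE - 1), SIZE)
#else
/* The lowest byte is stored first, so B(0) is _b[0] and the copy can
 * simply start there. */
#define B(n) _b[(n)]
#define MOVE(d, r) memcpy(&(d).B(0), &(r).B(0), SIZE)
#endif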