On Tue, Dec 23, 2014 at 11:45:00PM +0000, Peter Maydell wrote:
> On 23 December 2014 at 23:29, Rabin Vincent <ra...@rab.in> wrote:
> > +static size_t round4(size_t size)
> > +{
> > +    return ((size + 3) / 4) * 4;
> > +}
>
> Is this different from ROUND_UP(size, 4) ?
> If we can use the standard macro from the headers we should;
> if there's a real difference we should comment about what it is.

No, I'll use ROUND_UP.
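
(Just to double-check that the swap is a no-op: for a power-of-two
alignment the divide-and-multiply form and an add-and-mask form agree,
so ROUND_UP(size, 4) is a drop-in replacement.  Illustration of the
equivalence only, not the actual osdep.h definition:)

    #include <stddef.h>

    /* Illustration only: both forms round size up to the next
     * multiple of 4, e.g. 0->0, 1->4, 4->4, 5->8. */
    static size_t round4_div(size_t size)
    {
        return ((size + 3) / 4) * 4;
    }

    static size_t round4_mask(size_t size)
    {
        return (size + 3) & ~(size_t)3;
    }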

> > +int arm_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs,
> > +                             int cpuid, void *opaque)
> > +{
> > +    aarch64_elf_prstatus prstatus = {.pid = cpuid};
> > +    ARMCPU *cpu = ARM_CPU(cs);
> > +
> > +    memcpy(&(prstatus.regs), cpu->env.xregs, sizeof(cpu->env.xregs));
> > +    prstatus.pc = cpu->env.pc;
> > +    prstatus.pstate = cpu->env.pstate;
>
> You need to use the correct accessor function for pstate, not
> all the bits are kept in env.pstate. Call pstate_read().

OK.

> Can we get here when a 64-bit CPU is in AArch32 mode? (eg,
> 64 bit guest OS running a 32 bit compat process at the
> point of taking the memory dump). If so, what sort of
> core file should we be writing? I'd say still 64-bit.
>
> Assuming the answer is "still 64 bit core dump" you need
> to do something here to sync the 32 bit TCG state into the
> 64 bit xregs array. (KVM can take care of itself.)

I have now tested this by triggering a dump while a 32-bit process is
incrementing a register in a tight loop, and the following, which I
lifted from the exception handling code, appears to work:

    if (!is_a64(&cpu->env)) {
        int i;

        for (i = 0; i < 15; i++) {
            prstatus.regs[i] = cpu->env.regs[i];
        }
    }

> > +int cpu_get_dump_info(ArchDumpInfo *info,
> > +                      const struct GuestPhysBlockList *guest_phys_blocks)
> > +{
> > +    info->d_machine = ELF_MACHINE;
> > +    info->d_class = (info->d_machine == EM_ARM) ? ELFCLASS32 : ELFCLASS64;
> > +
> > +#ifdef TARGET_WORDS_BIGENDIAN
> > +    info->d_endian = ELFDATA2MSB;
> > +#else
> > +    info->d_endian = ELFDATA2LSB;
> > +#endif
>
> Note that in fact ARM is never going to be TARGET_WORDS_BIGENDIAN,
> even if the guest is big-endian, because the #define represents
> the bus endianness, not whether the CPU happens to currently be
> doing byte-swizzling. Do you need to key d_endian off the CPU's
> current endianness setting? The current endianness of EL1?
> Something else?

IIUC we don't currently support anything other than little endian in
system emulation?  Attempting to boot a BE ARMv7 vexpress kernel hits
the unimplemented setend instruction pretty quickly, and I don't see
any machine initializing bswap_code to big endian.

According to the ELF specification for ARM, the choice between
ELFDATA2LSB and ELFDATA2MSB "will be governed by the default data order
in the execution environment".  Since we dump the full system memory, I
would interpret this to be the "lowest" execution environment.  So I
guess for ARM this would mean setting big endian if (SCTLR.EE ||
SCTLR.B), and for AArch64 if SCTLR_EL1.E0E is set?

(I had assumed that post-analysis tools would refuse to open a dump if
the endianness does not match, but this does not seem to be the case.
I tested by generating dumps with d_endian hardcoded to both
ELFDATA2LSB and ELFDATA2MSB, and gdb appears to open them and show the
registers and memory without complaining.)
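
For concreteness, here is an untested sketch of what I mean by keying
d_endian off SCTLR.  The CPU state field name (cp15.sctlr_el[1]) is an
assumption on my part and may not match the tree exactly; the bit
positions are from the ARM ARM (SCTLR.B bit 7, SCTLR_EL1.E0E bit 24,
SCTLR.EE bit 25):

    /* Untested sketch, assuming the usual target-arm/cpu.h and elf.h
     * context.  Picks the dump endianness from the guest's SCTLR as
     * described above. */
    static int arm_dump_endianness(ARMCPU *cpu)
    {
        CPUARMState *env = &cpu->env;
        uint64_t sctlr = env->cp15.sctlr_el[1];   /* field name assumed */

        if (is_a64(env)) {
            /* AArch64: EL0 data accesses are big-endian if SCTLR_EL1.E0E */
            return (sctlr & (1ULL << 24)) ? ELFDATA2MSB : ELFDATA2LSB;
        } else {
            /* AArch32: big-endian if SCTLR.EE or (legacy BE-32) SCTLR.B */
            return (sctlr & ((1ULL << 25) | (1ULL << 7)))
                   ? ELFDATA2MSB : ELFDATA2LSB;
        }
    }

cpu_get_dump_info() could then set info->d_endian from the CPU state
rather than from the TARGET_WORDS_BIGENDIAN ifdef.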