On 05/03/2018 11:42 AM, Thomas Huth wrote:
> On 03.05.2018 11:33, Daniel P. Berrangé wrote:
>> On Wed, May 02, 2018 at 01:05:21PM +0100, Peter Maydell wrote:
>>> On 2 May 2018 at 12:58, Daniel P. Berrangé <berra...@redhat.com> wrote:
>>>> I'm curious what is the compelling benefit of having a single fat QEMU
>>>> binary that includes all architectures at once?
>>>
>>> The motivation is "I want to model a board with an SoC that has
>>> both Arm cores and Microblaze cores". One binary seems the most
>>> sensible way to do that, since otherwise we'd end up with some
>>> huge multiplication of binaries for all the possible architecture
>>> combinations. It also would reduce the number of times we end up
>>> recompiling and shipping any particular PCI device. From the
>>> perspective of QEMU as an emulation environment, it's a nice
>>> simplification.
>>
>> Ah, that's interesting - I should have known there was weird hardware
>> like that out there :-)
>
> It's not that weird. A lot of "normal" machines have a service processor
> (aka BMC - board management controller) on board - and this service
> processor is completely different from the main CPU. For example, the main
> CPU could be an x86 or PPC, and the service processor an embedded ARM
> chip. To emulate a complete board, you'd need both CPU types in one QEMU
> binary. Or you need to come up with some fancy interface between two
> QEMU instances...
... like a QEMU chardev backend implementing an IPMI BT interface between a QEMU Aspeed SoC machine and a QEMU PowerNV machine? It works :) But you need real HW to find the timing issues ...

There are plenty of other "wires" needed to fully model an OpenPOWER system: LPC, MBOX, etc. The dialogues between the ppc64 host firmware and the BMC firmware are very diverse.

Coming back to the initial motivation that Peter pointed out, would the goal be to run vCPUs of different architectures? It would certainly be interesting to model such a platform, especially if we can synchronize the execution in some way and find timing issues.

C.
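For reference, the host-side half of such a split already exists in mainline QEMU as the external-BMC IPMI support. A rough sketch, where the machine type, socket port, and reconnect interval are illustrative choices, and where the matching Aspeed-side bridge (a chardev backend wiring the BMC firmware's BT registers to the same socket) is the experimental piece discussed in this thread, not a mainline device:

```shell
# Host-side QEMU instance: expose an IPMI BT interface whose BMC logic
# lives in a *separate* process, reached over a chardev socket.
# (Port 9002 and the x86 machine are arbitrary examples.)
qemu-system-x86_64 -machine q35 \
    -chardev socket,id=ipmi0,host=localhost,port=9002,reconnect=10 \
    -device ipmi-bmc-extern,chardev=ipmi0,id=bmc0 \
    -device isa-ipmi-bt,bmc=bmc0

# BMC-side: a second QEMU instance running an Aspeed SoC machine with
# the BMC firmware; the chardev backend bridging it to the socket
# above is the part that would need to be written/upstreamed.
```

The same `ipmi-bmc-extern` path is what QEMU uses to talk to simulators such as OpenIPMI's `ipmi_sim`, so the protocol spoken on the socket is already defined on the host side.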