Hi Peter,

Thanks for the reply; yes, I am interested in (2). I will follow up on the suggested mailing list.
Just in case you know: is GICv3 support available in QEMU, or is there a plan for it?

Regards,
Tirumalesh.

On 01-Mar-2014, at 5:58 am, Peter Maydell <peter.mayd...@linaro.org> wrote:

> On 28 February 2014 07:08, Chalamarla, Tirumalesh
> <tirumalesh.chalama...@caviumnetworks.com> wrote:
>> Is anyone trying out cross-compiling and running QEMU
>> on an aarch64 host? If so, is there a development branch where this
>> work is progressing?
>>
>> Could someone please let me know the plan/time frame for
>> QEMU on arm64 hosts running arm64 guests?
>
> It's not entirely clear which of the various possible QEMU aarch64
> setups you're interested in. Summary:
>
> (1) using QEMU on aarch64 hosts to emulate other CPU architectures
> (e.g. x86, MIPS): this went into QEMU about six months ago and was
> in the last release of QEMU (1.7).
>
> (2) using QEMU on aarch64 hosts as the userspace component
> of a VM using KVM kernel support to run an aarch64 guest: this
> should work with current QEMU, though some functionality (for
> instance, migration) is not yet implemented.
>
> (3) using QEMU to emulate individual Linux AArch64 binaries, running
> on any host (typically x86): this works in current upstream master, but
> some instructions (parts of SIMD) are not yet implemented. I hope
> we'll get the SIMD coverage completed within the next few weeks,
> in time to put it into QEMU 2.0.
>
> (4) using QEMU to emulate an entire AArch64 system that can
> boot a guest kernel, typically running on an x86 host: we're
> working on this right now; we have work-in-progress code which
> will boot a kernel and are working on cleaning it up to upstream
> quality. I expect we'll have this done within a month or two, but
> it won't make it into the QEMU 2.0 release (slightly too late).
>
> I'm guessing you're interested in (2) or maybe (1). For (2),
> kvm...@lists.cs.columbia.edu is a good list to follow to
> monitor what's currently going on.
>
> thanks
> -- PMM
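[Editor's note: for readers unfamiliar with setups (2) and (3) above, the sketch below shows roughly what the corresponding command lines look like. The flags, machine name (`-machine virt`), and file paths are illustrative assumptions based on QEMU's generic ARM virtual board, not taken from the thread; consult the QEMU documentation for your version. The commands are wrapped in functions so nothing runs when the snippet is sourced.]

```shell
#!/bin/sh
# Hedged sketch only: flag names and paths are assumptions, not from the thread.

# Setup (2): QEMU as the userspace side of a KVM VM, run on the
# aarch64 host itself. "-cpu host" passes the host CPU through to
# the guest; "Image" is a guest kernel built for the virt board.
kvm_guest_cmd() {
    qemu-system-aarch64 -enable-kvm -machine virt -cpu host \
        -kernel Image -append "console=ttyAMA0" \
        -nographic
}

# Setup (3): user-mode emulation of a single AArch64 Linux binary
# on a (typically x86) host. "-L" points at an aarch64 sysroot so
# the dynamic loader and shared libraries can be found.
user_mode_cmd() {
    qemu-aarch64 -L /usr/aarch64-linux-gnu ./hello_aarch64
}
```

Setup (2) requires KVM support in the host kernel; setup (3) needs no kernel support at all, which is why it works on any host architecture.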