On 11/03/2020 22:44, Paolo Bonzini wrote:
> On 11/03/20 22:21, Maxime Villard wrote:
>>> Yes, you don't know how long that run would take. I don't know about
>>> NVMM but for KVM it may even never leave if the guest is in HLT state.
>> Ok, I see, tha
On 11/03/2020 21:42, Paolo Bonzini wrote:
> On 11/03/20 21:14, Maxime Villard wrote:
>>> The problem is that qcpu->stop is checked _before_ entering the
>>> hypervisor and not after, so there is a small race window.
>> Ok. I don't understand what's su
On 11/03/2020 19:03, Paolo Bonzini wrote:
> On 10/03/20 20:14, Maxime Villard wrote:
>> Maybe, whpx_vcpu_kick() causes a WHvRunVpExitReasonCanceled in the
>> WHvRunVirtualProcessor() call that follows, which in turn causes "ret=1"
>> to leave th
On 10/03/2020 11:58, Paolo Bonzini wrote:
> On 10/03/20 07:45, Maxime Villard wrote:
>>> It reproduces the existing logic found in whpx-all.c, and if there is
>>>
>>
>> It's buggy there too and it has to be fixed in the hypervisor so it
>> can'
On 02/03/2020 20:35, Paolo Bonzini wrote:
>
>
> On Mon 2 Mar 2020, 20:28 Maxime Villard <m...@m00nbsd.net> wrote:
>
>
> >> +        nvmm_vcpu_pre_run(cpu);
> >> +
> >> +        if (atomic_read(&cpu->
On 02/03/2020 19:13, Paolo Bonzini wrote:
> On 06/02/20 22:32, Kamil Rytarowski wrote:
>> +get_qemu_vcpu(CPUState *cpu)
>> +{
>> +    return (struct qemu_vcpu *)cpu->hax_vcpu;
>> +}
>
> Please make hax_vcpu a void * and rename it to "accel_data".
NVMM reproduces the existing logic in the oth
On 02/03/2020 19:05, Kamil Rytarowski wrote:
> On 02.03.2020 18:12, Paolo Bonzini wrote:
>> On 03/02/20 12:56, Kamil Rytarowski wrote:
>>> On 03.02.2020 12:41, Philippe Mathieu-Daudé wrote:
> @@ -1768,6 +1785,7 @@ disabled with --disable-FEATURE, default is
> enabled if available:
Hi
On 03/02/2020 12:51, Philippe Mathieu-Daudé wrote:
+static void
+nvmm_io_callback(struct nvmm_io *io)
+{
+    MemTxAttrs attrs = { 0 };
+    int ret;
+
+    ret = address_space_rw(&address_space_io, io->port, attrs, io->data,
+        io->size, !io->in);
+    if (ret != MEMTX_OK) {
+
Hi,
I am developing QEMU support for an accelerator, and I'm facing the following
situation:
The accelerator has a MemoryListener with a region_add callback. QEMU calls
region_add a certain number of times. At one point it wants to map pc.bios,
but the HVA it wants pc.bios mapped at happens to b