On 2014/12/19 19:12, Paolo Bonzini wrote:
> On 19/12/2014 03:32, Chen, Tiejun wrote:
>> Are you suggesting something like the code below?
>> if (enable_apicv)
>> ...
>> else {
>> kvm_x86_ops->hwapic_irr_update = NULL;
> Yes.
But this means we have to revise hardware_setup() to get 'kvm' inside,
This would not eve
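A minimal sketch of the direction under discussion, assuming the hooks are cleared once in hardware_setup() (function body simplified; note that irqchip_in_kernel(kvm) cannot be tested there, which is exactly the 'kvm' concern raised above):

/* Sketch only: clear the APICv hooks once at module init when
 * enable_apicv is off, so per-interrupt paths need no extra check.
 * hardware_setup() has no struct kvm, so the irqchip_in_kernel(kvm)
 * half of the condition cannot be evaluated here. */
static __init int hardware_setup(void)
{
	/* ... existing setup ... */
	if (!enable_apicv) {
		kvm_x86_ops->hwapic_irr_update = NULL;
		kvm_x86_ops->hwapic_isr_update = NULL;
	}
	return 0;
}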
Chen, Tiejun wrote:
> On 2014/12/21 20:46, Jamie Heilman wrote:
> >With v3.19-rc1 when I run qemu-system-x86_64 -machine pc,accel=kvm I
> >get:
> >
> >KVM: entry failed, hardware error 0x8021
>
> Looks like an MSR-write issue is causing the failed entry.
>
> >If you're running a guest on an Intel mac
On 22/12/2014 07:39, Zhang Haoyu wrote:
> Hi,
>
> When I perform P2V from native servers with win2008 to kvm vm,
> some cases failed due to the physical disk was using GPT for partition,
> and QEMU doesn't support GPT by default.
>
> And, I see in below site that OVMF can be used to enable UEFI
On 22/12/2014 05:48, Wu, Feng wrote:
> Do you mean we don't support lowest priority interrupts? As I mentioned
> before,
> lowest priority interrupts are widely used in Linux, so I think supporting
> lowest priority
> interrupts is very important for Linux guest OS. Do you have any
> ideas/sugg
On 22/12/2014 10:01, Chen, Tiejun wrote:
> I can send out as a patch if we have on any objections.
No problem, I will apply it to kvm/queue.
Paolo
On 2014/12/22 17:28, Paolo Bonzini wrote:
>
>
> On 22/12/2014 07:39, Zhang Haoyu wrote:
>> Hi,
>>
>> When I perform P2V from native servers with win2008 to kvm vm,
>> some cases failed due to the physical disk was using GPT for partition,
>> and QEMU doesn't support GPT by default.
>>
>> And, I
Hi,
I cannot receive qemu-dev/kvm-dev mails sent by myself,
but mails from others can be received,
any help?
Thanks,
Zhang Haoyu
On 22/12/2014 10:40, Zhang Haoyu wrote:
>> 2) the FAT driver is not free, which prevents distribution in Fedora and
>> several other distributions
>>
> Sorry, I cannot follow you,
> the "FAT" mentioned above means FAT filesystem?
> what's the relationship between OVMF and FAT?
>
> I want to use
On 22/12/2014 10:48, Zhang Haoyu wrote:
> Hi,
>
> I cannot receive qemu-dev/kvm-dev mails sent by myself,
> but mails from others can be received,
> any help?
For qemu-devel, you need to configure mailman to send messages even if
they are yours. For the kvm mailing list I'm not sure how it wo
On 2014/12/22 17:52, Paolo Bonzini wrote:
>
>
> On 22/12/2014 10:40, Zhang Haoyu wrote:
>>> 2) the FAT driver is not free, which prevents distribution in Fedora and
>>> several other distributions
>>>
>> Sorry, I cannot follow you,
>> the "FAT" mentioned above means FAT filesystem?
>> what's the
On 22/12/2014 04:47, Zhang Haoyu wrote:
> Hi,
>
> Does "is_guest_mode() == true" mean in L1 (guest hypervisor)?
No, it means in L2.
Paolo
On 21/12/2014 04:31, Andy Lutomirski wrote:
> I'm looking at the vdso timing code, and I'm puzzled by the pvclock
> code. My motivation is comprehensibility, performance, and
> correctness.
>
> # for i in `seq 10`; do ./timing_test_64 10 vclock_gettime 0; done
> 1000 loops in 0.69138s = 69.
> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> Sent: Monday, December 22, 2014 5:28 PM
> To: Wu, Feng; Zhang, Yang Z; Thomas Gleixner; Ingo Molnar; H. Peter Anvin;
> x...@kernel.org; Gleb Natapov; dw...@infradead.org; j...@8bytes.org;
> Alex Williamson; Jiang Li
On 22/12/2014 12:04, Wu, Feng wrote:
> > Can you support them only if the destination is a single CPU?
>
> Sorry, I don't quite understand this. I still don't understand the "single
> CPU" here.
> Lowest priority interrupts always have a cpumask which contains multiple CPUs.
Yes, and those need
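A minimal sketch of what restricting acceleration to a single-CPU destination could look like (pi_can_accelerate is a hypothetical name, not from the patches):

#include <linux/cpumask.h>

/* Sketch: lowest-priority delivery arbitrates among all CPUs in the
 * destination mask; when the mask contains exactly one CPU there is
 * nothing to arbitrate, so a posted interrupt could be used directly. */
static bool pi_can_accelerate(const struct cpumask *dest)
{
	return cpumask_weight(dest) == 1;
}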
> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> Sent: Monday, December 22, 2014 7:07 PM
> To: Wu, Feng; Zhang, Yang Z; Thomas Gleixner; Ingo Molnar; H. Peter Anvin;
> x...@kernel.org; Gleb Natapov; dw...@infradead.org; j...@8bytes.org;
> Alex Williamson; Jiang Li
On 22/12/2014 12:17, Wu, Feng wrote:
>> Yes, and those need not be accelerated. But what if you set
>> affinity to a single CPU?
>
> How do I set affinity to a single CPU if the guest configures a lowest
> priority interrupt? Thanks a lot!
I mean if the guest (via irqbalance and /proc/irq/) configur
For the 32-bit ARM architecture, the compiler will not generate an 8-byte
load/store as one instruction.
And before mmio_read_buf is invoked, there is a piece of code:
"
len = run->mmio.len;
if (len > sizeof(unsigned long))
return -EINVAL;
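To make the argument concrete, a sketch of the guard quoted above in context (the caller and variable declarations are assumed):

/* Sketch: on 32-bit ARM sizeof(unsigned long) == 4, so this check
 * rejects any 8-byte MMIO access before mmio_read_buf() is reached;
 * len can only be 1, 2, or 4 past this point. */
len = run->mmio.len;
if (len > sizeof(unsigned long))
	return -EINVAL;

data = mmio_read_buf(run->mmio.data, len);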
Warning message:
arch/arm/kvm/arm.c:85:17:
error: symbol 'kvm_get_running_vcpus' redeclared with different type
(originally declared at ./arch/arm/include/asm/kvm_host.h:190) -
different address spaces
Signed-off-by: Peng Fan
CC: Gleb Natapov
CC: Paolo Bonzini
CC: Christoffer Dall
CC: Marc Zy
On 2014/12/22 17:54, Paolo Bonzini wrote:
>
>
> On 22/12/2014 10:48, Zhang Haoyu wrote:
>> Hi,
>>
>> I cannot receive qemu-dev/kvm-dev mails sent by myself,
>> but mails from others can be received,
>> any help?
>
> For qemu-devel, you need to configure mailman to send messages even if
> they a
On 22/12/14 11:23, Peng Fan wrote:
> For the 32-bit ARM architecture, the compiler will not generate an 8-byte
> load/store as one instruction.
Ever heard of ldrd/strd?
> And before mmio_read_buf is invoked, there is a piece of code:
> "
> len = run->mmio.len;
>
On 22/12/2014 12:40, Zhang Haoyu wrote:
> On 2014/12/22 17:54, Paolo Bonzini wrote:
>>
>>
>> On 22/12/2014 10:48, Zhang Haoyu wrote:
>>> Hi,
>>>
>>> I cannot receive qemu-dev/kvm-dev mails sent by myself,
>>> but mails from others can be received,
>>> any help?
>>
>> For qemu-devel, you need to
On Mon, Dec 08, 2014 at 04:50:34PM +0800, Amos Kong wrote:
> When I hot-unplug a busy virtio-rng device or try to access
> hwrng attributes in a non-SMP guest, it gets stuck.
>
> My hotplug tests:
>
> | test 0:
> | hotunplug rng device from qemu monitor
> |
> | test 1:
> | guest) # dd if=/dev/hw
On 2014/12/22 20:05, Paolo Bonzini wrote:
>
>
> On 22/12/2014 12:40, Zhang Haoyu wrote:
>> On 2014/12/22 17:54, Paolo Bonzini wrote:
>>>
>>>
>>> On 22/12/2014 10:48, Zhang Haoyu wrote:
>>>> Hi,
>>>> I cannot receive qemu-dev/kvm-dev mails sent by myself,
>>>> but mails from others can be re
Since most virtual machines raise this message once, it is a bit annoying.
Make it KERN_DEBUG severity.
Cc: sta...@vger.kernel.org
Fixes: 7a2e8aaf0f6873b47bc2347f216ea5b0e4c258ab
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
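As an illustration of the change described (the actual message text is not visible in this truncated preview; the line below is a placeholder, not the real hunk):

-	printk_once(KERN_WARNING "kvm: example once-per-VM message\n");
+	printk_once(KERN_DEBUG "kvm: example once-per-VM message\n");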
On Sat, Dec 20, 2014 at 07:31:19PM -0800, Andy Lutomirski wrote:
> I'm looking at the vdso timing code, and I'm puzzled by the pvclock
> code. My motivation is comprehensibility, performance, and
> correctness.
>
> # for i in `seq 10`; do ./timing_test_64 10 vclock_gettime 0; done
> 1000 loop
On Sat, Dec 20, 2014 at 07:31:19PM -0800, Andy Lutomirski wrote:
> I'm looking at the vdso timing code, and I'm puzzled by the pvclock
> code. My motivation is comprehensibility, performance, and
> correctness.
>
> # for i in `seq 10`; do ./timing_test_64 10 vclock_gettime 0; done
> 1000 loop
2014-12-19 17:50 GMT+08:00 Paolo Bonzini :
>
>
> On 19/12/2014 04:58, Wincy Van wrote:
>> 2014-12-17 18:46 GMT+08:00 Paolo Bonzini :
>>>
>>>
>>> On 17/12/2014 04:46, Wincy Van wrote:
>>>> Hi, all:
>>>> The patchset (https://lkml.org/lkml/2014/3/18/309) fixed migration of
>>>> Windows guests, b
On Mon, Dec 22, 2014 at 11:34:30AM -0200, Marcelo Tosatti wrote:
> > It would be even nicer, though, if we could do much the same thing but
> > without worrying about which vcpu we're on.
> >
> > Thoughts? Am I missing some considerations here?
>
> Maybe we can find another optimization opportun
On Mon, Dec 22, 2014 at 12:23:36PM +0100, Paolo Bonzini wrote:
>
>
> On 22/12/2014 12:17, Wu, Feng wrote:
> >> Yes, and those need not be accelerated. But what if you set
> >> affinity to a single CPU?
> >
> > How do I set affinity to a single CPU if the guest configures a lowest
> > priority interru
I've just begun learning this piece of code.
Thanks for correcting me.
On 12/22/2014 07:51 PM, Marc Zyngier wrote:
> On 22/12/14 11:23, Peng Fan wrote:
>> For the 32-bit ARM architecture, the compiler will not generate an 8-byte
>> load/store as one instruction.
>
> Ever heard of ldrd/strd?
>
Hi Paolo,
so I installed an old SUSE guest (SLES10, kernel is 2.6.16 + enterprise
stuff) and it was booting and all was fine, but this week it isn't anymore.
Host kernel is 3.19-rc1 + tip/master. I had missed some kvm config options
initially, so I ran
$ make kvmconfig
and it added those (see diff at th
On 22/12/2014 15:34, Borislav Petkov wrote:
> Any ideas how to debug this further? :)
Bisection? :)
Paolo
On Mon, Dec 22, 2014 at 5:34 AM, Marcelo Tosatti wrote:
> On Sat, Dec 20, 2014 at 07:31:19PM -0800, Andy Lutomirski wrote:
>> I'm looking at the vdso timing code, and I'm puzzled by the pvclock
>> code. My motivation is comprehensibility, performance, and
>> correctness.
>>
>> # for i in `seq 10`
On 22/12/2014 17:03, Andy Lutomirski wrote:
> This is wrong. The guest *kernel* might not see the intermediate
> state (presumably because it disables migration while
> reading pvti), but the guest vdso can't do that and could very easily
> observe pvti while it's being written.
No.
On Mon, Dec 22, 2014 at 2:49 PM, Paolo Bonzini wrote:
>
>
> On 22/12/2014 17:03, Andy Lutomirski wrote:
>> This is wrong. The guest *kernel* might not see the intermediate
>> state (presumably because it disables migration while
>> reading pvti), but the guest vdso can't do that and co
On 23/12/2014 00:00, Andy Lutomirski wrote:
> On Mon, Dec 22, 2014 at 2:49 PM, Paolo Bonzini wrote:
>>
>>
>> On 22/12/2014 17:03, Andy Lutomirski wrote:
>>> This is wrong. The guest *kernel* might not see the intermediate
>>> state (presumably because it disables migration while
>>>
On Mon, Dec 22, 2014 at 3:14 PM, Paolo Bonzini wrote:
>
>
> On 23/12/2014 00:00, Andy Lutomirski wrote:
>> On Mon, Dec 22, 2014 at 2:49 PM, Paolo Bonzini wrote:
>>>
>>>
>>> On 22/12/2014 17:03, Andy Lutomirski wrote:
>>>> This is wrong. The guest *kernel* might not see the intermediate
>>>> stat
Paolo Bonzini wrote on 2014-12-19:
>
>
> On 19/12/2014 02:46, Zhang, Yang Z wrote:
>>> If the IRQ is posted, its affinity is controlled by the guest (irq
>>> <---> vCPU <---> pCPU); changing its affinity in the host has no effect.
>>
>> That's the problem: the user is able to change it in the host, but it
In Linux 3.18 and below, GCC hoists the lsl instructions in the
pvclock code all the way to the beginning of __vdso_clock_gettime,
slowing the non-paravirt case significantly. For unknown reasons,
presumably related to the removal of a branch, the performance issue
is gone as of
e76b027e6408 x86,
The pvclock vdso code was too heavily abstracted to understand easily
and excessively paranoid. Simplify it for a huge speedup.
This opens the door for additional simplifications, as the vdso no
longer accesses the pvti for any vcpu other than vcpu 0.
Before, vclock_gettime using kvm-clock took about 64
This is a dramatic simplification and speedup of the vdso pvclock read
code. Is it correct?
Andy Lutomirski (2):
x86, vdso: Use asm volatile in __getcpu
x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader
arch/x86/include/asm/vgtod.h | 6 ++--
arch/x86/vdso/vclock_gettime.c
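For readers following along, a minimal sketch of the version-retry protocol such a pvclock reader is built around; the struct matches the published pvclock ABI, while the helpers and barriers are simplified illustrations rather than the patch itself:

#include <stdint.h>

/* Simplified per-vCPU pvclock record, following the published pvclock
 * ABI layout. */
struct pvclock_vcpu_time_info {
	uint32_t version;          /* odd while the host is updating */
	uint32_t pad0;
	uint64_t tsc_timestamp;
	uint64_t system_time;
	uint32_t tsc_to_system_mul;
	int8_t   tsc_shift;
	uint8_t  flags;
	uint8_t  pad[2];
};

static inline uint64_t rdtsc_ordered(void)
{
	uint32_t lo, hi;
	/* lfence keeps the rdtsc from being hoisted above earlier loads;
	 * the real vdso picks the barrier per CPU vendor. */
	asm volatile("lfence; rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

static uint64_t pvclock_read_ns(const volatile struct pvclock_vcpu_time_info *pvti)
{
	uint32_t version;
	uint64_t delta, ns;

	do {
		version = pvti->version;
		asm volatile("" ::: "memory");   /* order the field reads */
		delta = rdtsc_ordered() - pvti->tsc_timestamp;
		if (pvti->tsc_shift >= 0)
			delta <<= pvti->tsc_shift;
		else
			delta >>= -pvti->tsc_shift;
		ns = pvti->system_time +
		     (uint64_t)(((unsigned __int128)delta *
				 pvti->tsc_to_system_mul) >> 32);
		asm volatile("" ::: "memory");
	} while ((version & 1) || version != pvti->version);

	return ns;
}

The retry loop mirrors a seqcount: an odd version, or a version that changed across the read, means the host updated pvti mid-read, which is exactly the race debated later in this thread.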
Paolo Bonzini wrote on 2014-12-23:
>> The problem is we still need to support PI with lowest priority
>> delivery mode
>> even if the guest does not configure irq affinity via /proc/irq/. Don't we?
>
> Yes, but we can get the basic support working first.
>
> Feng and I talked on IRC and agreed to star
On Mon, 12/22 20:21, Zhang Haoyu wrote:
>
> On 2014/12/22 20:05, Paolo Bonzini wrote:
> >
> >
> > On 22/12/2014 12:40, Zhang Haoyu wrote:
> >> On 2014/12/22 17:54, Paolo Bonzini wrote:
> >>>
> >>>
> >>> On 22/12/2014 10:48, Zhang Haoyu wrote:
> >>>> Hi,
> >>>>
> >>>> I cannot receive qemu-dev/kv
I can't get it to work at all. It fails with:
KVM internal error. Suberror: 1
emulation failure
EAX=000dee58 EBX=00000000 ECX=00000000 EDX=00000cfd
ESI=00000059 EDI=00000000 EBP=00000000 ESP=00006fc4
EIP=000f17f4 EFL=00010012 [----A--] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010 00c09
On 2014/12/23 9:36, Fam Zheng wrote:
> On Mon, 12/22 20:21, Zhang Haoyu wrote:
>>
>> On 2014/12/22 20:05, Paolo Bonzini wrote:
>>>
>>>
>>> On 22/12/2014 12:40, Zhang Haoyu wrote:
>>>> On 2014/12/22 17:54, Paolo Bonzini wrote:
>>>>>
>>>>>
>>>>> On 22/12/2014 10:48, Zhang Haoyu wrote:
>>>>>> Hi,
>>>>>>
On 2014/12/22 17:23, Jamie Heilman wrote:
> Chen, Tiejun wrote:
>> On 2014/12/21 20:46, Jamie Heilman wrote:
>>> With v3.19-rc1 when I run qemu-system-x86_64 -machine pc,accel=kvm I
>>> get:
>>> KVM: entry failed, hardware error 0x8021
>> Looks like an MSR-write issue is causing the failed entry.
> If you're runnin
On 2014/12/23 9:50, Chen, Tiejun wrote:
> On 2014/12/22 17:23, Jamie Heilman wrote:
>> Chen, Tiejun wrote:
>>> On 2014/12/21 20:46, Jamie Heilman wrote:
>>>> With v3.19-rc1 when I run qemu-system-x86_64 -machine pc,accel=kvm I
>>>> get:
>>>> KVM: entry failed, hardware error 0x8021
>>> Looks like an MSR-write issue
In most cases when calling hwapic_isr_update() we first check that
kvm_apic_vid_enabled() == 1, but actually the call chain is:
kvm_apic_vid_enabled()
-> kvm_x86_ops->vm_has_apicv()
-> vmx_vm_has_apicv(), or '0' in the svm case
-> return enable_apicv && irqchip_in_kernel(kvm)
So it's a little costly to r
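A minimal sketch of the cheaper pattern this thread converges on (call site simplified; it assumes the hook was cleared at setup when APICv is off, as discussed earlier in this digest):

/* Sketch: a NULL test on the hook replaces re-walking the
 * kvm_apic_vid_enabled() chain on every ISR update. */
if (kvm_x86_ops->hwapic_isr_update)
	kvm_x86_ops->hwapic_isr_update(vcpu->kvm, apic_find_highest_isr(apic));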
On 23/12/2014 01:39, Andy Lutomirski wrote:
> This is a dramatic simplification and speedup of the vdso pvclock read
> code. Is it correct?
>
> Andy Lutomirski (2):
> x86, vdso: Use asm volatile in __getcpu
> x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader
Patch 1 is ok,
Chen, Tiejun wrote:
> On 2014/12/23 9:50, Chen, Tiejun wrote:
> >On 2014/12/22 17:23, Jamie Heilman wrote:
> >>Chen, Tiejun wrote:
> >>>On 2014/12/21 20:46, Jamie Heilman wrote:
> >>>>With v3.19-rc1 when I run qemu-system-x86_64 -machine pc,accel=kvm I
> >>>>get:
> >>>>
> >>>>KVM: entry failed, har
On 2014/12/23 15:26, Jamie Heilman wrote:
> Chen, Tiejun wrote:
>> On 2014/12/23 9:50, Chen, Tiejun wrote:
>>> On 2014/12/22 17:23, Jamie Heilman wrote:
>>>> Chen, Tiejun wrote:
>>>>> On 2014/12/21 20:46, Jamie Heilman wrote:
>>>>>> With v3.19-rc1 when I run qemu-system-x86_64 -machine pc,accel=kvm I
>>>>>> get:
>>>>>> KVM: entry