Possible approaches to limit csw overhead

2014-11-17 Thread Andrey Korolyov
Hello, I have a rather practical question: is it possible to limit the amount of vm-initiated events for a single VM? As an example, a VM which experienced OOM and is effectively stuck dead generates a lot of unnecessary context switches, triggering do_raw_spin_lock very often and therefore increasing overal

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Jan Kiszka, le Mon 17 Nov 2014 07:28:23 +0100, a écrit : > > AIUI, the external interrupt is 0xf6, i.e. Linux' IRQ_WORK_VECTOR. I > > however don't see any of them, neither in L0's /proc/interrupts, nor in > > L1's /proc/interrupts... > > I suppose this is a SMP host and guest? L0 is a hyperthre

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Gleb Natapov
On Sun, Nov 16, 2014 at 11:18:28PM +0100, Samuel Thibault wrote: > Hello, > > Jan Kiszka, le Wed 12 Nov 2014 00:42:52 +0100, a écrit : > > On 2014-11-11 19:55, Samuel Thibault wrote: > > > jenkins.debian.net is running inside a KVM VM, and it runs nested > > > KVM guests for its installation attem

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit : > Do you know how gnumach timekeeping works? Does it have a timer that fires > each 1ms? > Which clock device is it using? It uses the PIT every 10ms, in square mode (PIT_C0|PIT_SQUAREMODE|PIT_READMODE = 0x36). Samuel -- To unsubscribe f

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Jan Kiszka
On 2014-11-17 10:03, Samuel Thibault wrote: > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit : >> Do you know how gnumach timekeeping works? Does it have a timer that fires >> each 1ms? >> Which clock device is it using? > > It uses the PIT every 10ms, in square mode > (PIT_C0|PIT_SQUAR

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Jan Kiszka, le Mon 17 Nov 2014 10:04:37 +0100, a écrit : > On 2014-11-17 10:03, Samuel Thibault wrote: > > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit : > >> Do you know how gnumach timekeeping works? Does it have a timer that fires > >> each 1ms? > >> Which clock device is it using?

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Gleb Natapov
On Mon, Nov 17, 2014 at 10:10:25AM +0100, Samuel Thibault wrote: > Jan Kiszka, le Mon 17 Nov 2014 10:04:37 +0100, a écrit : > > On 2014-11-17 10:03, Samuel Thibault wrote: > > > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit : > > >> Do you know how gnumach timekeeping works? Does it have

Re: [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits()

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 02:34, Chen, Tiejun wrote: > On 2014/11/14 18:06, Paolo Bonzini wrote: >> >> >> On 14/11/2014 10:31, Tiejun Chen wrote: >>> In some real scenarios 'start' may not be less than 'end' like >>> maxphyaddr = 52. >>> >>> Signed-off-by: Tiejun Chen >>> --- >>> arch/x86/kvm/mmu.h | 2 ++

Re: [PATCH 0/3] KVM: simplification to the memslots code

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 02:56, Takuya Yoshikawa wrote: >> > here are a few small patches that simplify __kvm_set_memory_region >> > and associated code. Can you please review them? > Ah, already queued. Sorry for being late to respond. While they are not in kvm/next, there's time to add Reviewed-by's an

Re: [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits()

2014-11-17 Thread Chen, Tiejun
On 2014/11/17 17:22, Paolo Bonzini wrote: On 17/11/2014 02:34, Chen, Tiejun wrote: On 2014/11/14 18:06, Paolo Bonzini wrote: On 14/11/2014 10:31, Tiejun Chen wrote: In some real scenarios 'start' may not be less than 'end' like maxphyaddr = 52. Signed-off-by: Tiejun Chen --- arch/x86/

Re: [PATCH 0/3] KVM: simplification to the memslots code

2014-11-17 Thread Takuya Yoshikawa
On 2014/11/17 18:23, Paolo Bonzini wrote: > > > On 17/11/2014 02:56, Takuya Yoshikawa wrote: here are a few small patches that simplify __kvm_set_memory_region and associated code. Can you please review them? >> Ah, already queued. Sorry for being late to respond. > > While they are

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Gleb Natapov, le Mon 17 Nov 2014 11:21:22 +0200, a écrit : > On Mon, Nov 17, 2014 at 10:10:25AM +0100, Samuel Thibault wrote: > > Jan Kiszka, le Mon 17 Nov 2014 10:04:37 +0100, a écrit : > > > On 2014-11-17 10:03, Samuel Thibault wrote: > > > > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écr

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Also, I have made gnumach show a timer counter, it does get PIT interrupts every 10ms as expected, not more often. Samuel

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Michael S. Tsirkin
On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote: > On Sun, Nov 16, 2014 at 08:56:04PM +0200, Michael S. Tsirkin wrote: > > On Sun, Nov 16, 2014 at 06:18:18PM +0200, Gleb Natapov wrote: > > > Hi Michael, > > > > > > I am playing with vhost multiqueue capability and have a question abo

Re: Seeking a KVM benchmark

2014-11-17 Thread Wanpeng Li
Hi Paolo, On 11/11/14, 1:28 AM, Paolo Bonzini wrote: On 10/11/2014 15:23, Avi Kivity wrote: It's not surprising [1]. Since the meaning of some PTE bits change [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPID

Re: Seeking a KVM benchmark

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 12:17, Wanpeng Li wrote: >> >>> It's not surprising [1]. Since the meaning of some PTE bits change [2], >>> the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush >>> if EFER changed between two invocations of the same VPID, which isn't >>> the case. > > If the

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Gleb Natapov
On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote: > On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote: > > On Sun, Nov 16, 2014 at 08:56:04PM +0200, Michael S. Tsirkin wrote: > > > On Sun, Nov 16, 2014 at 06:18:18PM +0200, Gleb Natapov wrote: > > > > Hi Michael, > > > >

RE: [RFC v2 0/9] KVM-VFIO IRQ forward control

2014-11-17 Thread Wu, Feng
> -Original Message- > From: linux-kernel-ow...@vger.kernel.org > [mailto:linux-kernel-ow...@vger.kernel.org] On Behalf Of Alex Williamson > Sent: Thursday, September 11, 2014 1:10 PM > To: Christoffer Dall > Cc: Eric Auger; eric.au...@st.com; marc.zyng...@arm.com; > linux-arm-ker...@list

[v2][PATCH] kvm: x86: mmio: fix setting the present bit of mmio spte

2014-11-17 Thread Tiejun Chen
In non-ept 64-bit of PAE case maxphyaddr may be 52bit as well, so we also need to disable mmio page fault. Here we can check MMIO_SPTE_GEN_HIGH_SHIFT directly to determine if we should set the present bit, and bring a little cleanup. Signed-off-by: Tiejun Chen --- v2: * Correct codes comments *

Re: [v2][PATCH] kvm: x86: mmio: fix setting the present bit of mmio spte

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 12:31, Tiejun Chen wrote: > In non-ept 64-bit of PAE case maxphyaddr may be 52bit as well, There is no such thing as 64-bit PAE. On 32-bit PAE hosts, PTEs have bit 62 reserved, as in your patch: > + /* Magic bits are always reserved for 32bit host. */ > + mask |= 0x3ull <

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Michael S. Tsirkin
On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote: > On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote: > > On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote: > > > On Sun, Nov 16, 2014 at 08:56:04PM +0200, Michael S. Tsirkin wrote: > > > > On Sun, Nov 16, 201

Re: Seeking a KVM benchmark

2014-11-17 Thread Wanpeng Li
Hi Paolo, On 11/17/14, 7:18 PM, Paolo Bonzini wrote: On 17/11/2014 12:17, Wanpeng Li wrote: It's not surprising [1]. Since the meaning of some PTE bits change [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPI

Re: Seeking a KVM benchmark

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 13:00, Wanpeng Li wrote: > Sorry, maybe I didn't state my question clearly. As Avi mentioned above > "In VMX we have VPIDs, so we only need to flush if EFER changed between > two invocations of the same VPID", so there is only one VPID if the > guest is UP, my question is if there n

Re: Seeking a KVM benchmark

2014-11-17 Thread Wanpeng Li
Hi Paolo, On 11/17/14, 8:04 PM, Paolo Bonzini wrote: On 17/11/2014 13:00, Wanpeng Li wrote: Sorry, maybe I didn't state my question clearly. As Avi mentioned above "In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPID", so there is only one VPI

Re: Seeking a KVM benchmark

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 13:14, Wanpeng Li wrote: >> >>> Sorry, maybe I didn't state my question clearly. As Avi mentioned above >>> "In VMX we have VPIDs, so we only need to flush if EFER changed between >>> two invocations of the same VPID", so there is only one VPID if the >>> guest is UP, my question is

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Gleb Natapov
On Mon, Nov 17, 2014 at 01:58:20PM +0200, Michael S. Tsirkin wrote: > On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote: > > On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote: > > > On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote: > > > > On Sun, Nov 16, 201

Re: [kvm-unit-tests PATCH 0/6] arm: enable MMU

2014-11-17 Thread Paolo Bonzini
On 30/10/2014 16:56, Andrew Jones wrote: > This first patch of this series fixes a bug caused by attempting > to use spinlocks without enabling the MMU. The next three do some > prep for the fifth, and also fix arm's PAGE_ALIGN. The fifth is > prep for the sixth, which finally turns the MMU on for

Re: [RFC v2 0/9] KVM-VFIO IRQ forward control

2014-11-17 Thread Eric Auger
Hi Feng, I will submit a PATCH v3 release end of this week. Best Regards Eric On 11/17/2014 12:25 PM, Wu, Feng wrote: > > >> -Original Message- >> From: linux-kernel-ow...@vger.kernel.org >> [mailto:linux-kernel-ow...@vger.kernel.org] On Behalf Of Alex Williamson >> Sent: Thursday, Se

RE: [RFC v2 0/9] KVM-VFIO IRQ forward control

2014-11-17 Thread Wu, Feng
> -Original Message- > From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On > Behalf Of Eric Auger > Sent: Monday, November 17, 2014 9:42 PM > To: Wu, Feng; Alex Williamson; Christoffer Dall > Cc: eric.au...@st.com; marc.zyng...@arm.com; > linux-arm-ker...@lists.infradead

[PATCH 1/3] kvm: add a memslot flag for incoherent memory regions

2014-11-17 Thread Ard Biesheuvel
Memory regions may be incoherent with the caches, typically when the guest has mapped a host system RAM backed memory region as uncached. Add a flag KVM_MEMSLOT_INCOHERENT so that we can tag these memslots and handle them appropriately when mapping them. Signed-off-by: Ard Biesheuvel --- include

[PATCH 2/3] arm, arm64: KVM: allow forced dcache flush on page faults

2014-11-17 Thread Ard Biesheuvel
From: Laszlo Ersek To allow handling of incoherent memslots in a subsequent patch, this patch adds a parameter 'ipa_uncached' to cache_coherent_guest_page() so that we can instruct it to flush the page's contents to DRAM even if the guest has caching globally enabled. Signed-off-by: Laszlo Ersek

[PATCH 3/3] arm, arm64: KVM: handle potential incoherency of readonly memslots

2014-11-17 Thread Ard Biesheuvel
Readonly memslots are often used to implement emulation of ROMs and NOR flashes, in which case the guest may legally map these regions as uncached. To deal with the incoherency associated with uncached guest mappings, treat all readonly memslots as incoherent, and ensure that pages that belong to r

Re: [PATCH v4 3/6] hw_random: use reference counts on each struct hwrng.

2014-11-17 Thread Amos Kong
On Wed, Nov 12, 2014 at 02:11:23PM +1030, Rusty Russell wrote: > Amos Kong writes: > > From: Rusty Russell > > > > current_rng holds one reference, and we bump it every time we want > > to do a read from it. > > > > This means we only hold the rng_mutex to grab or drop a reference, > > so accessi

Re: [PATCH 3/3] arm, arm64: KVM: handle potential incoherency of readonly memslots

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 15:58, Ard Biesheuvel wrote: > Readonly memslots are often used to implement emulation of ROMs and > NOR flashes, in which case the guest may legally map these regions as > uncached. > To deal with the incoherency associated with uncached guest mappings, > treat all readonly memslot

Re: [PATCH 3/3] arm, arm64: KVM: handle potential incoherency of readonly memslots

2014-11-17 Thread Marc Zyngier
Hi Paolo, On 17/11/14 15:29, Paolo Bonzini wrote: > > > On 17/11/2014 15:58, Ard Biesheuvel wrote: >> Readonly memslots are often used to implement emulation of ROMs and >> NOR flashes, in which case the guest may legally map these regions as >> uncached. >> To deal with the incoherency associat

Re: [PATCH 3/3] arm, arm64: KVM: handle potential incoherency of readonly memslots

2014-11-17 Thread Laszlo Ersek
On 11/17/14 16:29, Paolo Bonzini wrote: > > > On 17/11/2014 15:58, Ard Biesheuvel wrote: >> Readonly memslots are often used to implement emulation of ROMs and >> NOR flashes, in which case the guest may legally map these regions as >> uncached. >> To deal with the incoherency associated with unc

Re: [PATCH 3/3] arm, arm64: KVM: handle potential incoherency of readonly memslots

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 16:39, Marc Zyngier wrote: > ARM is broadly similar, but there's a number of gotchas: > - uncacheable (guest level) + cacheable (host level) -> uncacheable: the > read request is going to be directly sent to RAM, bypassing the caches. > - Userspace is going to use a cacheable view o

Where is the VM live migration code?

2014-11-17 Thread Jidong Xiao
Hi, I saw this page: http://www.linux-kvm.org/page/Migration. It looks like Migration is a feature provided by KVM? But when I look at the Linux kernel source code, i.e., virt/kvm, and arch/x86/kvm, I don't see the code for this migration feature. So I wonder where is the source code for the li

Re: [Qemu-devel] Where is the VM live migration code?

2014-11-17 Thread Zhang Haoyu
> Hi, > > I saw this page: > > http://www.linux-kvm.org/page/Migration. > > It looks like Migration is a feature provided by KVM? But when I look > at the Linux kernel source code, i.e., virt/kvm, and arch/x86/kvm, I > don't see the code for this migration feature. > Most of live migration code

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Zhang Haoyu
> On Mon, Nov 17, 2014 at 01:58:20PM +0200, Michael S. Tsirkin wrote: > > On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote: > > > On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote: > > > > On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote: > > > > > On Sun, N

Re: [Qemu-devel] Where is the VM live migration code?

2014-11-17 Thread Jidong Xiao
On Mon, Nov 17, 2014 at 5:29 PM, Zhang Haoyu wrote: >> Hi, >> >> I saw this page: >> >> http://www.linux-kvm.org/page/Migration. >> >> It looks like Migration is a feature provided by KVM? But when I look >> at the Linux kernel source code, i.e., virt/kvm, and arch/x86/kvm, I >> don't see the code

Re: [RFC PATCH 0/6] ARM64: KVM: PMU infrastructure support

2014-11-17 Thread Anup Patel
On Tue, Nov 11, 2014 at 2:48 PM, Anup Patel wrote: > Hi All, > > I have second thoughts about rebasing KVM PMU patches > to Marc's irq-forwarding patches. > > The PMU IRQs (when virtualized by KVM) are not exactly > forwarded IRQs because they are shared between Host > and Guest. > > Scenario1 > -

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Jason Wang
On 11/17/2014 07:58 PM, Michael S. Tsirkin wrote: > On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote: >> > On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote: >>> > > On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote: > > > On Sun, Nov 16, 2014 at 08:56:0

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Jason Wang
On 11/18/2014 09:37 AM, Zhang Haoyu wrote: >> On Mon, Nov 17, 2014 at 01:58:20PM +0200, Michael S. Tsirkin wrote: >>> On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote: On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote: > On Mon, Nov 17, 2014 at 09:44:23AM +0200

can I make this work… (Foundation for accessibility project)

2014-11-17 Thread Eric S. Johansson
this is a rather different use case than what you've been thinking of for KVM. It could mean a significant improvement in the quality of life of disabled programmers like myself. It's difficult to convey what it's like to try to use computers with speech recognition for something other than writing

Re: vhost + multiqueue + RSS question.

2014-11-17 Thread Gleb Natapov
On Tue, Nov 18, 2014 at 11:41:11AM +0800, Jason Wang wrote: > On 11/18/2014 09:37 AM, Zhang Haoyu wrote: > >> On Mon, Nov 17, 2014 at 01:58:20PM +0200, Michael S. Tsirkin wrote: > >>> On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote: > On Mon, Nov 17, 2014 at 12:38:16PM +0200, Mich