had some performance testing, but it
could always benefit from more testing, on other systems.
Changes since v4:
- remove some code that did not help performance
- implement all the cleanups suggested by Mel Gorman
- lots more testing, by Chegu Vinod and myself
- rebase against -tip instead
Changes since v1:
- fix divide by zero found by Chegu Vinod
- improve comment, as suggested by Peter Zijlstra
- do stats calculations in task_numa_placement in local variables
Mel Gorman writes:
>
> Weighing in at 63 patches makes the term "basic" in the series title
> a misnomer.
>
> This series still has roughly the same goals as previous versions. It
> reduces overhead of automatic balancing through scan rate reduction
> and the avoidance of TLB flushes.
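"Scan rate reduction" refers to backing off the NUMA hinting-fault scanner when its work is no longer changing task placement. The series does this inside the scheduler; the fragment below is only a hedged, self-contained sketch of that general idea, with the function name, thresholds, and bounds invented rather than taken from the patches.

/*
 * Hedged sketch of "scan rate reduction" (invented names and numbers):
 * when recent NUMA hinting faults caused few or no page migrations,
 * back the per-task scan period off toward a maximum; when placement
 * is still changing, speed it back up.
 */
static unsigned int demo_update_scan_period(unsigned int period_ms,
                                            unsigned long faults,
                                            unsigned long migrated)
{
        const unsigned int min_ms = 1000, max_ms = 60000;

        if (faults == 0 || migrated * 8 < faults)
                period_ms *= 2;         /* little useful work: scan less often */
        else
                period_ms /= 2;         /* pages still moving: scan more often */

        if (period_ms < min_ms)
                period_ms = min_ms;
        if (period_ms > max_ms)
                period_ms = max_ms;
        return period_ms;
}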
On 7/1/2013 10:49 PM, Rusty Russell wrote:
Chegu Vinod writes:
On 6/30/2013 11:22 PM, Rusty Russell wrote:
Chegu Vinod writes:
Hello,
Lots (~700+) of the following messages are showing up in the dmesg of a
3.10-rc1 based kernel (Host OS is running on a large socket count box
with HT-on).
[ 82.270682] PERCPU: allocation failed, size=42 align=16, alloc from
reserved chunk failed
[ 82.272633] kvm_intel: Could not
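The failure being reported is a module's static per-CPU data no longer fitting into the reserved per-CPU chunk on a machine with a very large number of possible CPUs. The module below is only an illustration of why loading a module triggers that allocation at all; the struct and names are made up, and it is not the kvm_intel code.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU state; kvm_intel keeps something analogous. */
struct demo_pcpu_state {
        void *scratch;
        int in_use;
};

/*
 * A static DEFINE_PER_CPU in a module is carved out of the small
 * "reserved" per-CPU chunk at module load time; when that chunk is
 * exhausted, loading the module fails with a message like the one
 * quoted above.
 */
static DEFINE_PER_CPU(struct demo_pcpu_state, demo_state);

static int __init demo_init(void)
{
        this_cpu_write(demo_state.in_use, 0);   /* touch the per-CPU area */
        return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");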
 	}
+	/* With huge pages disabled, account this single page and return. */
+	if (unlikely(disable_hugepages)) {
+		vfio_lock_acct(1);
+		return 1;
+	}
+
 	/* Lock all the consecutive pages from pfn_base */
 	for (i = 1, vaddr += PAGE_SIZE; i < npage; i++, vaddr += PAGE_SIZE) {
 		unsigned long pfn = 0;
.
http://article.gmane.org/gmane.comp.emulators.kvm.devel/99713 )
Signed-off-by: Chegu Vinod
---
arch/x86/include/asm/kvm_host.h | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4979778..bc57
On 4/22/2013 1:50 PM, Jiannan Ouyang wrote:
On Mon, Apr 22, 2013 at 4:44 PM, Peter Zijlstra wrote:
On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote:
IIRC one of the reasons was that the performance improvement wasn't
as obvious. Rescheduling VCPUs takes a fair amount of time, quite
proba
			continue;
		if (vcpu == me)
			continue;
		if (waitqueue_active(&vcpu->wq))
.
Reviewed-by: Chegu Vinod
+	if (current->state == TASK_RUNNING)
+		vcpu->preempted = true;
 	kvm_arch_vcpu_put(vcpu);
 }
.
Reviewed-by: Chegu Vinod
On 1/25/2013 11:05 AM, Rik van Riel wrote:
Many spinlocks are embedded in data structures; having many CPUs
pounce on the cache line the lock is in will slow down the lock
holder, and can cause system performance to fall off a cliff.
The paper "Non-scalable locks are dangerous" is a good reference.
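The code under discussion tunes the kernel's own spinlock implementation; the fragment below is not that code. It is only a small, self-contained sketch (invented names, C11 atomics, an x86-only pause hint) of the cache-line argument: waiters spin on a local read and back off between acquisition attempts instead of hammering the line the lock holder needs.

#include <stdatomic.h>

struct demo_lock {
        atomic_int locked;              /* 0 = free, 1 = held */
};

#define DEMO_LOCK_INIT { 0 }

static inline void cpu_relax_hint(void)
{
        __builtin_ia32_pause();         /* x86 PAUSE; purely a spin hint */
}

static void demo_spin_lock(struct demo_lock *l)
{
        unsigned int delay = 1;

        for (;;) {
                /* Read-only spin keeps the cache line shared, not bouncing. */
                while (atomic_load_explicit(&l->locked, memory_order_relaxed))
                        cpu_relax_hint();

                /* One exclusive attempt; on failure, back off before retrying. */
                if (!atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
                        return;

                for (unsigned int i = 0; i < delay; i++)
                        cpu_relax_hint();
                if (delay < 1024)
                        delay <<= 1;    /* exponential backoff, capped */
        }
}

static void demo_spin_unlock(struct demo_lock *l)
{
        atomic_store_explicit(&l->locked, 0, memory_order_release);
}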
On 1/8/2013 2:26 PM, Rik van Riel wrote:
<...>
Performance is within the margin of error of v2, so the graph
has not been updated.
Please let me know if you manage to break this code in any way,
so I can fix it...
Attached below is some preliminary data with one of the AIM7 micro-benchmark
workloads.
On 11/28/2012 5:09 PM, Chegu Vinod wrote:
On 11/27/2012 6:23 AM, Chegu Vinod wrote:
.
Tested-by: Chegu Vinod
On 11/27/2012 2:30 AM, Raghavendra K T wrote:
On 11/26/2012 07:05 PM, Andrew Jones wrote:
On Mon, Nov 26, 2012 at 05:37:54PM +0530, Raghavendra K T wrote:
From: Peter Zijlstra
In case of undercommitted scenarios, especially in large guests,
yield_to overhead is significantly high. When run queue
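The underlying observation is that in an undercommitted guest most directed yields find nothing useful to switch to, so the yield primitive should be able to report that cheaply and the spin-loop handler should stop scanning as soon as it does. The sketch below only illustrates that shape with invented types and names; it is not the patch under discussion.

struct demo_rq   { int nr_running; };                   /* per-CPU run queue */
struct demo_vcpu { struct demo_rq *rq; int id; };

#define DEMO_ESRCH 3    /* "nothing to yield to" in this sketch */

/* Bail out cheaply when either run queue holds a single task. */
static int demo_yield_to(struct demo_vcpu *src, struct demo_vcpu *dst)
{
        if (src->rq->nr_running == 1 || dst->rq->nr_running == 1)
                return -DEMO_ESRCH;

        /* A real implementation would deboost src and run dst here. */
        return 1;
}

/* Spin-loop handler: stop scanning once a yield reports undercommit. */
static void demo_vcpu_on_spin(struct demo_vcpu *me,
                              struct demo_vcpu **vcpus, int nr)
{
        for (int i = 0; i < nr; i++) {
                struct demo_vcpu *v = vcpus[i];
                int ret;

                if (v == me)
                        continue;
                ret = demo_yield_to(me, v);
                if (ret > 0)
                        break;          /* yield succeeded */
                if (ret == -DEMO_ESRCH)
                        break;          /* undercommitted: stop scanning */
        }
}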
On 9/21/2012 4:59 AM, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly,
Yes.
because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
An idea to solve this is:
1) As Avi had proposed we can modify hardware
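One direction raised here is adjusting the hardware pause-loop-exiting window so that undercommitted guests take fewer spin exits in the first place. The fragment below is only a rough sketch of that kind of adjustment; the constants and names are invented for illustration and are not from this thread.

#include <stdbool.h>

#define DEMO_PLE_WINDOW_MIN     4096u
#define DEMO_PLE_WINDOW_MAX     (512u * 1024u)
#define DEMO_PLE_GROW           2u
#define DEMO_PLE_SHRINK         2u

/*
 * Grow the spin-exit window while exits keep failing to find a useful
 * yield target (the undercommitted case), and shrink it again once a
 * yield succeeds, so real contention is still detected quickly.
 */
static unsigned int demo_update_ple_window(unsigned int window,
                                           bool yield_succeeded)
{
        if (yield_succeeded)
                window /= DEMO_PLE_SHRINK;
        else
                window *= DEMO_PLE_GROW;

        if (window < DEMO_PLE_WINDOW_MIN)
                window = DEMO_PLE_WINDOW_MIN;
        if (window > DEMO_PLE_WINDOW_MAX)
                window = DEMO_PLE_WINDOW_MAX;
        return window;
}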