On Tue, Jan 22, 2008 at 12:00:00PM -0800, Christoph Lameter wrote:
> Patches that I would recommend testing individually, if you can
> (get the series via git pull
> git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git performance):
With these patches applied to 2.6.24-rc8, the ...

On Tue, 22 Jan 2008, Matthew Wilcox wrote:
> I also don't understand the dependency tree -- you seem to be saying
> that we could apply patch 6 without patches 1-5 and test that.
You could do that. Many patches can be moved at will; some require minor
mods to apply.
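
As a sketch, isolating one patch (say, patch 6) with git could look like
the following; the branch name comes from the pull URL above, and the SHA
is a placeholder for whichever commit you actually want to test:

    # Fetch the series without merging it.
    git fetch git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git performance

    # List the commits in the series to pick one out.
    git log --oneline ..FETCH_HEAD

    # Apply just the chosen patch to the tree under test; this is where
    # the "minor mods" come in if it touches context from earlier patches.
    git cherry-pick <sha-of-patch-6>
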
On Tue, Jan 22, 2008 at 12:00:00PM -0800, Christoph Lameter wrote:
> It would be great if you could get stable results on these (with multiple
> differently compiled kernels! Apply some patch that should have no
> performance impact but adds some code, to verify). I saw an overall slight
> performance ...
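
A minimal sketch of such a no-op-but-code-adding patch, purely as a
hypothetical illustration (the function name is invented; the point is
only that the image gains a little code, shifting layout, without
changing any behavior):

    #include <linux/compiler.h>

    /*
     * Never called; __used keeps the compiler from discarding it, so
     * the text layout shifts while runtime behavior stays identical.
     */
    static void __used layout_variance_pad(void)
    {
            barrier();
    }
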
On Fri, 18 Jan 2008, Matthew Wilcox wrote:
> > I repeatedly saw patches from Intel making minor changes to SLAB that
> > increase performance by 0.5% or so (like the recent removal of a BUG_ON
> > for performance reasons). Do these not regress again when you build a
> > newer kernel release?
>

On Fri, 18 Jan 2008, Matthew Wilcox wrote:
> I've found one backtrace which seems to be relevant. I believe this is
> due to 8/10.
Looks like NULL pointer dereferences. The rest may just be the consequence
of the oops. Not enough information, though, to figure out exactly what is
going on and the cu ...
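
For what it is worth, one standard way to get more out of such an oops,
assuming the reporter still has the matching vmlinux built with
CONFIG_DEBUG_INFO (the address below is a made-up example), is to map the
faulting instruction pointer back to a source line:

    # Print the function, file and line for the oops RIP.
    addr2line -f -e vmlinux ffffffff8027a1b4
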
On Wed, Jan 16, 2008 at 02:01:08PM -0800, Christoph Lameter wrote:
> Dec 6th? I was on vacation then and it seems that I was unable to
> reproduce the oopses. Can I get some backtraces or other information
> that would allow me to diagnose the problem?
I've found one backtrace which seems to be relevant. I believe this is
due to 8/10.

On Wed, Jan 16, 2008 at 02:28:44PM -0800, Christoph Lameter wrote:
> On Wed, 16 Jan 2008, Matthew Wilcox wrote:
> > About 0.1-0.2%; 0.3% is considered significant.
>
> The results are that stable? A kernel compilation which slightly
> rearranges cachelines due to code and data changes typically leads to a
> larger variance on my 8 way box (gets even larger under NUMA).

On Wed, 16 Jan 2008, Matthew Wilcox wrote:
> About 0.1-0.2%; 0.3% is considered significant.
The results are that stable? A kernel compilation which slightly
rearranges cachelines due to code and data changes typically leads to a
larger variance on my 8 way box (gets even larger under NUMA). I ...
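
To make the significance question concrete, here is a standalone sketch
(the run scores are invented) of the arithmetic: if the standard deviation
across repeated runs of the *same* kernel is already a few tenths of a
percent of the mean, a 0.3% delta between two kernels is within the noise.

    #include <math.h>
    #include <stdio.h>

    /* Mean, sample stddev, and stddev as a percentage of the mean
     * for n benchmark scores from one kernel build. */
    static void variance_report(const double *x, int n)
    {
            double sum = 0.0, ss = 0.0, mean, sd;
            int i;

            for (i = 0; i < n; i++)
                    sum += x[i];
            mean = sum / n;
            for (i = 0; i < n; i++)
                    ss += (x[i] - mean) * (x[i] - mean);
            sd = sqrt(ss / (n - 1));
            printf("mean %.1f, stddev %.2f (%.2f%% of mean)\n",
                   mean, sd, 100.0 * sd / mean);
    }

    int main(void)
    {
            /* Hypothetical scores from five runs of one kernel build. */
            double runs[] = { 1000.0, 1004.0, 997.0, 1002.0, 999.0 };

            variance_report(runs, 5);
            return 0;
    }
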
On Wed, Jan 16, 2008 at 02:01:08PM -0800, Christoph Lameter wrote:
> On Wed, 16 Jan 2008, Matthew Wilcox wrote:
> > I sent you a mail on December 6th ... here are the contents of that
> > mail:
>
> Dec 6th? I was on vacation then and it seems that I was unable to
> reproduce the oopses. Can I get some backtraces or other information
> that would allow me to diagnose the problem?

On Wed, 16 Jan 2008, Matthew Wilcox wrote:
> I sent you a mail on December 6th ... here are the contents of that
> mail:
Dec 6th? I was on vacation then and it seems that I was unable to
reproduce the oopses. Can I get some backtraces or other information
that would allow me to diagnose the problem?

On Wed, Jan 16, 2008 at 12:39:31PM -0800, Christoph Lameter wrote:
> Ahhh.. Good to hear that the issue on x86_64 gets better. I am still
> waiting for a test with the patchset that I did specifically to address
> your regression: http://lkml.org/lkml/2007/10/27/245 (where I tried to
> come up with ...

On Wed, 16 Jan 2008, Matthew Wilcox wrote:
> We tested 2.6.24-rc5 + 76be895001f2b0bee42a7685e942d3e08d5dd46c
>
> For 2.6.24-rc5 before that patch, slub had a performance penalty of
> 6.19%. With the patch, slub's performance penalty was reduced to 4.38%.
> This is great progress. Can you think ...