The proposed patch was tested with a 2.6.22-based kernel, and
compile tested with a 2.6.24-based tree from 31 January 2008
(85004cc367abc000aa36c0d0e270ab609a68b0cb).
Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]>
---
block/blk-core.c | 12
1 files changed
Andrew Vasquez wrote:
> On Tue, 05 Feb 2008, Alan D. Brunelle wrote:
>
>> commit 9b73e76f3cf63379dcf45fcd4f112f5812418d0a
>> Merge: 50d9a12... 23c3e29...
>> Author: Linus Torvalds <[EMAIL PROTECTED]>
>> Date: Fri Jan 25 17:19:08 2008 -0800
>>
Andrew Vasquez wrote:
> On Tue, 05 Feb 2008, Andrew Vasquez wrote:
>
>> On Tue, 05 Feb 2008, Alan D. Brunelle wrote:
>>
>>> commit 9b73e76f3cf63379dcf45fcd4f112f5812418d0a
>>> Merge: 50d9a12... 23c3e29...
>>> Author: Linus Torvalds <[EMAIL PROTECTED]>
Alan D. Brunelle wrote:
>
> Hopefully, the first column is self-explanatory - these are the settings
> applied to the queue_affinity, completion_affinity and rq_affinity tunables.
Because the standard deviations are so large, coupled with the very close
averages
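For concreteness, tunables like these are just sysfs attributes; a minimal
user-space sketch for setting one (the helper name is made up, and the exact
attribute paths under /sys/block/<dev>/queue/ are an assumption based on the
description above, since these were experimental patches):

	#include <stdio.h>

	/* Hypothetical helper: write a value into one of the affinity
	 * tunables, e.g. /sys/block/sdc/queue/completion_affinity.
	 * Attribute names/paths assumed from the patch description. */
	static int set_queue_tunable(const char *dev, const char *knob,
				     int value)
	{
		char path[256];
		FILE *f;

		snprintf(path, sizeof(path), "/sys/block/%s/queue/%s",
			 dev, knob);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%d\n", value);
		return fclose(f);
	}

	/* e.g. set_queue_tunable("sdc", "rq_affinity", 1); */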
Back on the 32-way, in this set of tests we're running 12 disks spread out
through the 8 cells of the 32-way. Each disk will have an Ext2 FS placed on it,
a clean Linux kernel source untar()ed onto it, then a full make (-j4) and then
a make clean performed. The 12 series are done in parallel - s
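As a rough harness sketch of that per-disk sequence (device names, mount
points and the tarball path are all hypothetical; the real runs were surely
scripted differently):

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/wait.h>
	#include <unistd.h>

	/* One worker per disk: mkfs, mount, untar, make -j4, make clean. */
	static void run_disk(int i)
	{
		char cmd[512];

		snprintf(cmd, sizeof(cmd),
			 "mkfs.ext2 -q /dev/sd%c && mount /dev/sd%c /mnt/d%d && "
			 "tar -xf /src/linux.tar -C /mnt/d%d && "
			 "make -C /mnt/d%d/linux -j4 && "
			 "make -C /mnt/d%d/linux clean",
			 'b' + i, 'b' + i, i, i, i, i);
		exit(system(cmd) == 0 ? 0 : 1);
	}

	int main(void)
	{
		for (int i = 0; i < 12; i++)
			if (fork() == 0)
				run_disk(i);	/* all 12 series in parallel */
		while (wait(NULL) > 0)
			;			/* reap every worker */
		return 0;
	}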
Whilst running a series of file system related loads on our 32-way*, I dropped
down to a 16-way w/ only 24 disks, and ran two kernels: the original set of
Jens' patches and then his subsequent kthreads-based set. Here are the results:
Original:
A Q C | MBPS Avg Lat StdDev | Q-local Q-remote
Comparative results between the original affinity patch and the kthreads-based
patch on the 32-way running the kernel make sequence.
It may be easier to compare/contrast with the graphs provided at
http://free.linux.hp.com/~adb/jens/kernmk.png (kernmk.agr also provided, if you
want to run xmgr
Taking a step back, I went to a very simple test environment:
o 4-way IA64
o 2 disks (on separate RAID controller, handled by separate ports on the same
FC HBA - generates different IRQs).
o Using write-cached tests - keep all IOs inside of the RAID controller's
cache, so no perturbations due
Jens Axboe wrote:
> Hi,
>
> Here's a variant using kernel threads only, the nasty arch bits are then
> not needed. Works for me, no performance testing (that's a hint for Alan
> to try and queue up some testing for this variant as well :-)
>
>
I'll get to that, working my way through the first b
ing with this code, and it seems relatively
stable given this.
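The gist of the kthreads-only variant, as I read Jens' description (a very
rough sketch with hypothetical names, not his actual patch): completed
requests get handed to a per-CPU kernel thread, created with kthread_create()
and pinned with kthread_bind(), instead of being bounced through an
arch-specific IPI path:

	#include <linux/blkdev.h>
	#include <linux/kthread.h>
	#include <linux/list.h>
	#include <linux/percpu.h>
	#include <linux/spinlock.h>
	#include <linux/wait.h>

	struct completion_worker {
		struct task_struct *task;	/* bound to one CPU */
		struct list_head work;		/* completed requests */
		spinlock_t lock;
		wait_queue_head_t wait;
	};

	static DEFINE_PER_CPU(struct completion_worker, cworker);

	static int completion_thread(void *data)
	{
		struct completion_worker *w = data;

		while (!kthread_should_stop()) {
			struct request *rq = NULL;

			wait_event_interruptible(w->wait,
				!list_empty(&w->work) || kthread_should_stop());

			spin_lock_irq(&w->lock);
			if (!list_empty(&w->work)) {
				rq = list_entry(w->work.next, struct request,
						donelist);
				list_del_init(&rq->donelist);
			}
			spin_unlock_irq(&w->lock);

			if (rq)
				rq->q->softirq_done_fn(rq); /* run completion */
		}
		return 0;
	}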
The application used was doing 64KiB asynchronous direct reads, and had a
minimum average per-IO latency of 42.426310 milliseconds, an average of
42.486557 milliseconds (std dev 0.0041561), and a maximum of 42.561360
milliseconds.
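For reference, a minimal sketch of that kind of workload (libaio, one 64KiB
O_DIRECT read; the device path is a placeholder and error handling is mostly
elided):

	/* Build with: gcc -o aioread aioread.c -laio */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <libaio.h>
	#include <stdlib.h>

	int main(void)
	{
		io_context_t ctx = 0;
		struct iocb cb, *cbs[1] = { &cb };
		struct io_event ev;
		void *buf;
		int fd = open("/dev/sdc", O_RDONLY | O_DIRECT);

		if (fd < 0 || posix_memalign(&buf, 4096, 65536))
			return 1;	/* O_DIRECT needs aligned buffers */
		io_setup(1, &ctx);
		io_prep_pread(&cb, fd, buf, 65536, 0);	/* 64KiB at offset 0 */
		io_submit(ctx, 1, cbs);
		io_getevents(ctx, 1, 1, &ev, NULL);	/* wait for completion */
		io_destroy(ctx);
		return 0;
	}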
The test case chosen may not be a very good start, but anyway, here are some
initial test results with the "nasty arch bits". This was performed on a 32-way
ia64 box with 1 terabyte of RAM, and 144 FC disks (contained in 24 HP MSA1000
RAID controllers attached to 12 dual-port adapters). Each te
Mathieu Desnoyers wrote:
>> remember that we have seen and discussed something like this before,
>> it's still a puzzle to me...
>>
>>
> I do wonder about that performance _increase_ with blktrace enabled. I
>
> Interesting question indeed.
>
> In those tests, when blktrace is running, are the
Andrew Morton wrote:
(cc lkml restored, with permission)
On Wed, 14 Nov 2007 10:48:10 -0500 "Alan D. Brunelle" <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
On Mon, 15 Oct 2007 16:13:15 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
Since you ha
Arjan van de Ven wrote:
On Wed, 14 Nov 2007 18:18:05 +0100
Ingo Molnar <[EMAIL PROTECTED]> wrote:
* Andrew Morton <[EMAIL PROTECTED]> wrote:
ooh, more performance testing. Thanks
* The overwriter task (on an 8GiB file), average over 10 runs:
o 2.6.24 - 300.8822
Oh, and the runs were done in single-user mode...
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Here are the results for the latest tests, some notes:
o The machine actually has 8GiB of RAM, so the tests may still end up
using (some) page cache. (But at least it was the same for both kernels!
:-) )
o Sorry the results took so long - the updated tree size caused the
runs to take > 12
Alan D. Brunelle wrote:
Read large file:
Kernel     Min     Avg     Max   Std Dev   %user   %system   %iowait
---------------------------------------------------------------------
base :   201.6   215.1   275.5      22.8   0.26%     4.69%    33.54%
arjan:   198.0   210.3   261.5      18.5   0.33%    10.24%    54.00%
Ray Lee wrote:
Out of curiosity, what are the mount options for the freshly created
ext3 fs? In particular, are you using noatime, nodiratime?
Ray
Nope, just mount. However, the tool I'm using to read the large file &
overwrite the large file does open with O_NOATIME for reads...
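That is, something along these lines (a sketch; O_NOATIME needs _GNU_SOURCE
and is only honored for the file's owner or with CAP_FOWNER):

	#define _GNU_SOURCE
	#include <fcntl.h>

	static int open_for_read(const char *path)
	{
		/* Skip the atime update on reads, without mounting noatime. */
		return open(path, O_RDONLY | O_NOATIME);
	}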
The tool
the right order
o Sent up mapped-from and mapped-to device information
Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]>
---
block/ll_rw_blk.c            |    4
drivers/md/dm.c              |    4 ++--
include/linux/blktrace_api.h |    3 ++-
3 files changed, 8 insertions(+), 3 deletions(-)
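For context, the remap hook in question looks roughly like this from a
caller's side (my recollection of that era's blk_add_trace_remap(); treat the
exact signature and argument order as an assumption): a stacking driver such
as dm notes both the mapped-from and mapped-to locations when it redirects a
bio:

	#include <linux/blktrace_api.h>

	/* Hypothetical caller: bio has already been retargeted at the new
	 * device/sector; old_dev/old_sector are where it was first aimed. */
	static void trace_bio_remap(struct request_queue *q, struct bio *bio,
				    dev_t old_dev, sector_t old_sector)
	{
		blk_add_trace_remap(q, bio, old_dev, old_sector,
				    bio->bi_sector);
	}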
: Alan D. Brunelle <[EMAIL PROTECTED]>
---
block/ll_rw_blk.c      |   24 +++-
drivers/md/bitmap.c    |    3 +--
drivers/md/dm-table.c  |    3 +--
drivers/md/linear.c    |    3 +--
drivers/md/md.c        |    4 ++--
drivers/md/multipath.c |    3 +--
drivers/md/raid0.c
Jens Axboe wrote:
On Tue, May 01 2007, Alan D. Brunelle wrote:
Jens Axboe wrote:
On Mon, Apr 30 2007, Alan D. Brunelle wrote:
The results from a single run of an AIM7 DBase load on a 16-way ia64 box
(64GB RAM + 144 FC disks) showed a slight regression (~0.5%) by adding
in this patch.
Jens Axboe wrote:
On Mon, May 21 2007, Alan D. Brunelle wrote:
Jens Axboe wrote:
On Tue, May 01 2007, Alan D. Brunelle wrote:
Jens Axboe wrote:
On Mon, Apr 30 2007, Alan D. Brunelle wrote:
The results from a single run of an AIM7 DBase load on a 16-way ia64 box
Interestingly enough, this patch also seems to remove some noise during
the run - see the chart at http://free.linux.hp.com/~adb/cfq/rkb_s.png
Alan D. Brunelle
HP / Open Source and Linux Organization / Scalability and Performance Group
Hi Jens -
The attached patch speeds it up even more - I'm finding a >9% reduction
in %system with no loss in IO performance. This just sets the cached
leftmost element when the first entry is looked up.
Alan
From: Alan D. Brunelle <[EMAIL PROTECTED]>
Update cached leftmost every time it is
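This reads like the classic cached-leftmost rbtree trick; a minimal sketch of
the concept (hypothetical names, not the actual CFQ patch):

	#include <linux/rbtree.h>

	struct cached_root {
		struct rb_root root;
		struct rb_node *leftmost;  /* cached rb_first(), or NULL */
	};

	/* O(1) in the common case: only walk the tree on a cold cache. */
	static struct rb_node *cached_rb_first(struct cached_root *t)
	{
		if (!t->leftmost)
			t->leftmost = rb_first(&t->root);
		return t->leftmost;
	}

	/* Keep the cache valid across removals; insertion must likewise
	 * update ->leftmost when a new node sorts first. */
	static void cached_rb_erase(struct cached_root *t, struct rb_node *n)
	{
		if (t->leftmost == n)
			t->leftmost = rb_next(n);
		rb_erase(n, &t->root);
	}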
Jens Axboe wrote:
On Wed, Apr 25 2007, Jens Axboe wrote:
On Wed, Apr 25 2007, Jens Axboe wrote:
On Wed, Apr 25 2007, Alan D. Brunelle wrote:
Hi Jens -
The attached patch speeds it up even more - I'm finding a >9% reduction
in %system with no loss in IO performance. This just sets th
k.git
Thanks,
Alan
From: Alan D. Brunelle <[EMAIL PROTECTED]>
Fix unplug/insert trace inversion problem.
Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]>
---
block/ll_rw_blk.c |8
include/linux/blkdev.h |1 +
2 files changed, 5 insertions(+), 4 deletions
it is something to keep an eye on
as the regression showed itself across the complete run.
Alan D. Brunelle
Jens Axboe wrote:
On Mon, Apr 30 2007, Alan D. Brunelle wrote:
The results from a single run of an AIM7 DBase load on a 16-way ia64 box
(64GB RAM + 144 FC disks) showed a slight regression (~0.5%) by adding
in this patch. (Graph can be found at
http://free.linux.hp.com/~adb/cfq
e stable, we'll try to get some "real"
Oracle benchmark runs done to gauge the impact of the markers changes on
performance...
Alan D. Brunelle
Hewlett-Packard / Open Source and Linux Organization / Scalability and
Performance Group
Mathieu Desnoyers wrote:
* Alan D. Brunelle ([EMAIL PROTECTED]) wrote:
Taking Linux 2.6.23-rc6 + 2.6.23-rc6-mm1 as a basis, I took some sample
runs of the following on both it and after applying Mathieu Desnoyers
11-patch sequence (19 September 2007).
* 32-way IA64 + 132GiB + 10 FC
o Added in boot-time argument to set the default IO scheduler (example
below). (From as-iosched.txt)
o Added in sysfs mount instructions. (From deadline-iosched.txt)
Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]>
---
Documentation/block/as-iosched.txt | 21 +-
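For reference, the two mechanisms those doc updates describe are the
elevator= boot parameter and the per-queue sysfs attribute, e.g. (device name
illustrative):

	elevator=as                                (kernel command line)
	echo as > /sys/block/sda/queue/scheduler   (runtime, sysfs mounted)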