Here are some tests I ran on a file server I have at home. It's running
kernel 2.6.27.41-170.2.117.fc10.i686, has 1.5 GB of RAM, and has a 2 Gb FC HBA
connected to an Apple Xserve RAID, which is using hardware RAID 5 across 7
disks for this particular device.

I ran the tests with bonnie++. To summarize, cfq had the fastest block read
and block write speeds. One thing puzzling me at the moment is why the
character-write latency for both deadline and anticipatory was reported in
microseconds (us) instead of milliseconds (ms). It's 3am as I write this, so
I'm too tired to figure that out; I'll sleep on it.
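
For reference, the schedulers were switched between runs roughly like this. It is
only a sketch: the device name (sdb), the mount point, and the exact bonnie++
options are assumptions on my part, not the precise commands I typed:

for sched in cfq noop deadline anticipatory; do
    # select the elevator for the FC LUN (sdb is a placeholder for the real device)
    echo "$sched" > /sys/block/sdb/queue/scheduler
    cat /sys/block/sdb/queue/scheduler      # confirm the change took
    # 4096 MB test size to match the 4G "Size" column below; -u is needed when running as root
    bonnie++ -d /mnt/xraid -s 4096 -u nobody
done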

Here's the output from the bonnie++ runs:

cfq:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
test.ph.cox.net  4G   113  96 53828  38 36092  22   398  91 106166  34 483.6  37
Latency               111ms    3078ms    2113ms     127ms     514ms     528ms
Version  1.96       ------Sequential Create------ --------Random Create--------
test.ph.cox.net     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  8206  71 +++++ +++ 10033  69  8631  73 +++++ +++  9832  66
Latency              2104us     505us     802us    3977us      67us     856us


noop:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
test.ph.cox.net  4G   118  95 51698  37 30495  20   423  90 93542  28 488.4  28
Latency               112ms    3049ms    2044ms     179ms     515ms     555ms
Version  1.96       ------Sequential Create------ --------Random Create--------
test.ph.cox.net     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  8673  69 +++++ +++ 10841  63  8952  69 +++++ +++ 11101  64
Latency               833us     462us     838us     972us      87us     848us


deadline:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
test.ph.cox.net  4G   122  98 52869  41 29895  21   398  93 90557  27 499.9  30
Latency             71554us    2727ms    2052ms   28918us     509ms     537ms
Version  1.96       ------Sequential Create------ --------Random Create--------
test.ph.cox.net     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  8668  71 +++++ +++ 10842  64  8934  70 +++++ +++ 11124  66
Latency               895us     460us     843us     923us      73us     525us


anticipatory:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
test.ph.cox.net  4G   118  97 53695  37 33008  21   408  93 90683  27 322.5  22
Latency             74519us    3030ms    2051ms   48513us     508ms     555ms
Version  1.96       ------Sequential Create------ --------Random Create--------
test.ph.cox.net     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  8386  69 +++++ +++ 10858  65  8937  70 +++++ +++ 10636  63
Latency               874us     442us     850us    1340us      66us     900us
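
If anyone wants to repeat Matt's dd test (quoted below) across all four
schedulers on a newer kernel, something like the following should do it. Again
just a sketch: sdX is a placeholder, and wrapping the run in time(1) is my
assumption about how Matt collected his numbers, not his exact procedure:

for sched in noop cfq anticipatory deadline; do
    echo "$sched" > /sys/block/sdX/queue/scheduler
    # write a 10 GiB file; GNU dd prints elapsed time and throughput when it finishes
    time dd if=/dev/zero of=testfile bs=1024k count=10240
    rm -f testfile
done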

On Thu, Dec 16, 2010 at 11:47 PM, Matt Simmons <standalone.sysad...@gmail.com> wrote:

> I just decided to do some quick tests on my system, and here's what I
> got...
>
> The setup is kernel 2.6.18-92.1.13.elPAE (it's an older machine, so
> this may be part of the problem). It's got a 4GB FC HBA connected to
> an EMC AX4 that is definitely not optimized.
>
> Anyway, the hardware doesn't change between trials. To get these
> numbers, I just ran successive commands of:
> dd if=/dev/zero of=testfile bs=1024k count=10240
> which gave an 11GB file.
>
> noop
> 107.3
> 105.1
> 112.2
> 107.2
> Avg: 107.95
>
> cfq
> 83.4
> 78.7
> 86.7
> 93.5
> Avg: 85.575
>
> anticipatory
> 106.5
> 100.6
> 99.30
> 106.7
> Avg: 103.275
>
> deadline
> 97.1
> 93.3
> 97.7
> 90.7
> Avg: 94.7
>
> The winner was CFQ, followed by Deadline.
>
> If someone wants to run that on a "modern" kernel against a similar
> setup, it would be interesting to see how the numbers change.
>
> --Matt
>
>
> On Thu, Dec 16, 2010 at 1:48 PM, Ski Kacoroski <ckacoro...@nsd.org> wrote:
> > Hi,
> >
> > We have a communigate pro email server.  For some time now we have had
> > issues where it was limited to about 800 IOPs and we could not figure
> > out what was going on.  Well it fell off the cliff yesterday and we
> > tried everything.  Today on a whim, I changed the I/O scheduler from CFQ
> > to NOOP and bang, the IOPs jumped to 3500 (maxed out our SAN).  The
> > reason I think it made such a big difference is the communigate uses one
> > large process with many internal threads instead of several processes.
> > Anyway, if you are having processes that seem to be I/O bound on linux,
> > try changing the I/O scheduler as it may help.  It is easy to do:
> >
> > To see the current scheduler:
> > cat /sys/block/<device>/queue/scheduler
> > noop anticipatory deadline [cfq]
> >
> > To change it:
> > echo noop > /sys/block/<device>/queue/scheduler
> > cat /sys/block/<device>/queue/scheduler
> > [noop] anticipatory deadline cfq
> >
> > cheers,
> >
> > ski
> >
> > --
> > "When we try to pick out anything by itself, we find it
> >  connected to the entire universe"            John Muir
> >
> > Chris "Ski" Kacoroski, ckacoro...@nsd.org, 206-501-9803
> > or ski98033 on most IM services
>
>
>
> --
> LITTLE GIRL: But which cookie will you eat FIRST?
> COOKIE MONSTER: Me think you have misconception of cookie-eating process.