On Sat, 7 Feb 2009, Omer Faruk Sen wrote:
I have installed a new server to test performance results. The BIOS and RAID
BIOS are the latest on this server (the RAID controller is an Intel SRCSASBB8I).
Hi Omer--
Comparing I/O and file system performance is fraught with peril, especially
since disk performance varies a great deal depending on where on the disk
you're writing. Ellard and Seltzer have a nice NFS benchmarking paper whose
"Benchmarking traps" section is worth noting in this regard:
http://www.eecs.harvard.edu/~margo/papers/freenix03/
The source of the performance difference you're seeing could come from a
number of places:
- I/O to different parts of the disk due to partition layout can perform
quite differently -- try diskinfo -t to get an idea of the variance you can
see. For example, I get this on my local 3ware array:
Transfer rates:
outside: 102400 kbytes in 4.056540 sec = 25243 kbytes/sec
middle: 102400 kbytes in 2.531003 sec = 40458 kbytes/sec
inside: 102400 kbytes in 3.947725 sec = 25939 kbytes/sec
That's a massive performance difference, presumably entirely a result of
disk layout. The best way to address this is to make sure you're using the
same part of the disk for all tests -- so if you're installing different
OSes on different partitions, use a single shared partition for the test.
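The shared-partition approach above can be sketched as follows; the device
and mount point names here are hypothetical examples, not from this thread:

```shell
# Keep one scratch slice free of any OS install, then from EACH OS make a
# file system on it, mount it, and run the identical dd, so every result
# comes from the same physical region of the disk.
newfs /dev/mfid0s1e                     # FreeBSD example; mkfs.ext3 on Linux
mount /dev/mfid0s1e /mnt/bench
dd if=/dev/zero of=/mnt/bench/bigfile bs=8192 count=100000
```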
- File systems can behave quite differently, especially if they vary in size
and layout strategies. Do the dd(1) directly to a disk partition, making
sure to use the right device on Linux (I believe you want the character
device in order to bypass the buffer cache and compare apples to apples).
So if you go ahead and dd(1) directly to the same partition using
unbuffered I/O and no file system, it should become clearer whether we're
looking at a performance loss due to a device driver difference, a file
system difference, or just a disk layout difference.
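A rough sketch of that raw-device comparison; the device names are examples
only, and writing to a raw partition DESTROYS any data on it, so use a
scratch slice:

```shell
# FreeBSD (5.x and later): disk devices are already unbuffered, so a plain
# dd to the device bypasses the file system and buffer cache.
dd if=/dev/zero of=/dev/mfid0s1d bs=8192 count=100000

# Linux: block devices go through the page cache, so ask GNU dd for
# O_DIRECT to get a comparable unbuffered number.
dd if=/dev/zero of=/dev/sda4 bs=8192 count=100000 oflag=direct
```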
Robert N M Watson
Computer Laboratory
University of Cambridge
# dmesg |grep -i mfi
mfi0: <LSI MegaSAS 1078> port 0x2000-0x20ff mem
0xb8b00000-0xb8b3ffff,0xb8b40000-0xb8b7ffff irq 16 at device 0.0 on
pci10
mfi0: Megaraid SAS driver Ver 3.00
mfi0: 1870 (287314652s/0x0020/info) - Shutdown command received from host
mfi0: 1871 (boot + 3s/0x0020/info) - Firmware initialization started
(PCI ID 0060/1000/1013/8086)
mfi0: 1872 (boot + 3s/0x0020/info) - Firmware version 1.20.72-0562
mfi0: 1873 (boot + 3s/0x0020/info) - Board Revision
mfi0: 1874 (boot + 15s/0x0002/info) - Inserted: PD 08(e0xff/s8)
mfi0: 1875 (boot + 15s/0x0002/info) - Inserted: PD 08(e0xff/s8) Info:
enclPd=ffff, scsiType=0, portMap=00,
sasAddr=5000c5000bcb90b1,0000000000000000
mfi0: 1876 (boot + 15s/0x0002/info) - Inserted: PD 09(e0xff/s9)
mfi0: 1877 (boot + 15s/0x0002/info) - Inserted: PD 09(e0xff/s9) Info:
enclPd=ffff, scsiType=0, portMap=01,
sasAddr=5000c5000bcb962d,0000000000000000
mfi0: 1878 (boot + 15s/0x0002/info) - Inserted: PD 0a(e0xff/s10)
mfi0: 1879 (boot + 15s/0x0002/info) - Inserted: PD 0a(e0xff/s10) Info:
enclPd=ffff, scsiType=0, portMap=02,
sasAddr=5000c5000bcb8f8d,0000000000000000
mfi0: [ITHREAD]
mfi0: 1880 (287314711s/0x0020/info) - Time established as 02/07/09
9:38:31; (52 seconds since power on)
mfid0: <MFI Logical Disk> on mfi0
mfid0: 556928MB (1140588544 sectors) RAID volume ''
I have installed RHEL 5.3 x64 on this server; here is a simple dd
performance test:
[r...@localhost ~]# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 1.8473 seconds, 443 MB/s
[r...@localhost ~]# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 2.05412 seconds, 399 MB/s
[r...@localhost ~]# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 1.94096 seconds, 422 MB/s
[r...@localhost ~]# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 2.1092 seconds, 388 MB/s
[r...@localhost ~]# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 1.84414 seconds, 444 MB/s
[r...@localhost ~]# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 2.09829 seconds, 390 MB/s
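One caveat with numbers like these: an 819 MB buffered write can land
largely in the page cache, so the reported rate partly measures RAM rather
than the array. A hedged variant for the Linux runs (GNU dd; conv=fsync
forces a flush before dd reports its timing) gives a figure closer to what
the disks sustain:

```shell
# conv=fsync makes dd fsync(2) the output file before printing its
# statistics, so the cache flush is included in the measured rate.
dd if=/dev/zero of=bigfile bs=8192 count=100000 conv=fsync
```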
After that I tested with FreeBSD 6.3 and 7.1:
6.3:
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.534685 secs (231760388 bytes/sec)
# cd /var/tmp/
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.051668 secs (268443342 bytes/sec)
# cd /
# cd /var/tmp/
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.001266 secs (272951481 bytes/sec)
# cd /home/
cd: can't cd to /home/
# cd /usr/
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.678156 secs (222720290 bytes/sec)
# cd /boot
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 2.985471 secs (274395563 bytes/sec)
# cd /usr/local
# pwd
/usr/local
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.501986 secs (233924406 bytes/sec)
7.1:
Filesystem Size Used Avail Capacity Mounted on
/dev/mfid0s1a 9.7G 2.6G 6.3G 29% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/mfid0s1d 478G 4.0K 440G 0% /opt
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.441476 secs (238037393 bytes/sec)
# cd /
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.132512 secs (261515371 bytes/sec)
# cd /usr/local/
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.296514 secs (248504951 bytes/sec)
# cd /usr
# dd if=/dev/zero of=bigfile bs=8192 count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 3.069655 secs (266870386 bytes/sec)
As you can see, there is a big difference even in a simple dd test. Are
there additional steps I can follow to increase performance? By the way, if
I use write-through cache on this RAID card, FreeBSD 6.3 and 7.1 give only
~13 MB/s, which is very, very bad. Linux shows only a slight decrease on
the dd test with write-through cache, but FreeBSD drops from 230-270 MB/s
to only 13 MB/s.
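A plausible explanation for the write-through number (an assumption, not
something confirmed in this thread): with write-through cache every 8 KiB
write must reach the media before dd issues the next one, so throughput is
bounded by per-write latency rather than bandwidth. Back-of-the-envelope:

```shell
# Rough arithmetic only: the observed write-through rate implies the array
# is completing roughly this many synchronous 8 KiB writes per second.
bs=8192        # dd block size, bytes
mbps=13        # observed write-through throughput, MB/s
echo "$(( mbps * 1000000 / bs )) writes/sec"    # prints "1586 writes/sec"
```

If that is the bottleneck, re-running the test with a much larger block
size (e.g. bs=1m) should raise the write-through number substantially.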
Regards.
_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscr...@freebsd.org"