Emanuel Strobl wrote:
Dear best guys,
I really love 5.3 in many ways, but here are some unbelievable transfer rates,
measured after I went out and bought a pair of Intel Gigabit Ethernet cards to
solve my performance problem (*laugh*):
(In short, see *** below)
Tests were done with two Intel Gigabit Ethernet cards (82547EI, 32-bit PCI
Desktop Adapter MT) connected directly without a switch/hub, with "device
polling" compiled into a custom kernel, HZ set to 256, and
kern.polling.enable set to "1":
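(For reference, a minimal sketch of that setup; the option and sysctl names
are as in polling(4) for 5.x, with HZ at the value under test:)

    # custom kernel configuration
    options         DEVICE_POLLING  # enable polling support
    options         HZ=256          # clock rate used in these tests

    # at runtime, after booting the new kernel:
    sysctl kern.polling.enable=1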
LOCAL:
(/samsung is ufs2 on /dev/ad4p1, a SAMSUNG SP080N2)
test3:~#7: dd if=/dev/zero of=/samsung/testfile bs=16k
^C10524+0 records in
10524+0 records out
172425216 bytes transferred in 3.284735 secs (52492882 bytes/sec)
-> ~52 MB/s
NFS (UDP, polling):
(/samsung is nfs on test3:/samsung, via em0, x-over, polling enabled)
test2:/#21: dd if=/dev/zero of=/samsung/testfile bs=16k
^C1858+0 records in
1857+0 records out
30425088 bytes transferred in 8.758475 secs (3473788 bytes/sec)
-> ~3.4 MB/s
This example shows that using NFS over Gigabit Ethernet cuts throughput by a
factor of 15, in words: fifteen!
GGATE with MTU 16114 and polling:
test2:/dev#28: ggatec create 10.0.0.2 /dev/ad4p1
ggate0
test2:/dev#29: mount /dev/ggate0 /samsung/
test2:/dev#30: dd if=/dev/zero of=/samsung/testfile bs=16k
^C2564+0 records in
2563+0 records out
41992192 bytes transferred in 15.908581 secs (2639594 bytes/sec)
-> ~2.6 MB/s
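(For reference, a rough sketch of the server side of the ggate tests;
/etc/gg.exports is ggated's default exports file, and the 10.0.0.0/24
netmask is an assumption based on the addresses above:)

    # on test3 (10.0.0.2), export ad4p1 read/write to the client:
    echo "10.0.0.0/24 RW /dev/ad4p1" > /etc/gg.exports
    ggated

    # jumbo frames, set on both ends (16114 is em's maximum MTU):
    ifconfig em0 mtu 16114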
GGATE without polling and MTU 16114:
test2:~#12: ggatec create 10.0.0.2 /dev/ad4p1
ggate0
test2:~#13: mount /dev/ggate0 /samsung/
test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=128k
^C1282+0 records in
1281+0 records out
167903232 bytes transferred in 11.274768 secs (14891945 bytes/sec)
-> ~15 MB/s
.....and with 1m blocksize:
test2:~#17: dd if=/dev/zero of=/samsung/testfile bs=1m
^C61+0 records in
60+0 records out
62914560 bytes transferred in 4.608726 secs (13651182 bytes/sec)
-> ~13.6 MB/s
I can't imagine why there seems to be an absolute limit of 15 MB/s on what can
be transferred over the network.
But it gets even worse; here are two excerpts of NFS (UDP) with jumbo frames
(MTU 16114):
test2:~#23: mount 10.0.0.2:/samsung /samsung/
test2:~#24: dd if=/dev/zero of=/samsung/testfile bs=1m
^C89+0 records in
88+0 records out
92274688 bytes transferred in 13.294708 secs (6940708 bytes/sec)
-> ~7 MB/s
.....and with 64k blocksize:
test2:~#25: dd if=/dev/zero of=/samsung/testfile bs=64k
^C848+0 records in
847+0 records out
55508992 bytes transferred in 8.063415 secs (6884055 bytes/sec)
-> ~6.9 MB/s
And with TCP-NFS (and jumbo frames):
test2:~#30: mount_nfs -T 10.0.0.2:/samsung /samsung/
test2:~#31: dd if=/dev/zero of=/samsung/testfile bs=64k
^C1921+0 records in
1920+0 records out
125829120 bytes transferred in 7.461226 secs (16864403 bytes/sec)
-> ~17 MB/s
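(One knob not varied in these tests is the NFS I/O size; a hedged example
using the -r/-w flags from mount_nfs(8), values purely illustrative:)

    # request 32 KB NFS reads/writes over TCP:
    mount_nfs -T -r 32768 -w 32768 10.0.0.2:/samsung /samsung/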
Again NFS (udp) but with MTU 1500:
test2:~#9: mount_nfs 10.0.0.2:/samsung /samsung/
test2:~#10: dd if=/dev/zero of=/samsung/testfile bs=8k
^C12020+0 records in
12019+0 records out
98459648 bytes transferred in 10.687460 secs (9212633 bytes/sec)
-> ~9.2 MB/s
And TCP-NFS with MTU 1500:
test2:~#12: mount_nfs -T 10.0.0.2:/samsung /samsung/
test2:~#13: dd if=/dev/zero of=/samsung/testfile bs=8k
^C19352+0 records in
19352+0 records out
158531584 bytes transferred in 12.093529 secs (13108794 bytes/sec)
-> ~13 MB/s
GGATE with default MTU of 1500, polling disabled:
test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=64k
^C971+0 records in
970+0 records out
63569920 bytes transferred in 6.274578 secs (10131346 bytes/sec)
-> ~10 MB/s
Conclusion:
***
- It seems that GEOM_GATE is less efficient over Gigabit (em) than NFS via TCP
is.
- em seems to have problems with an MTU greater than 1500.
- UDP seems to have performance disadvantages compared to TCP for NFS, which
AFAIK should be the other way around.
- polling with em (GbE) at HZ=256 is definitely not a good idea; even 10Base-2
could compete.
You should be setting HZ to 1000 or higher.
- NFS over TCP with an MTU of 16114 gives the maximum transfer rate for large
files over Gigabit Ethernet, at 17 MB/s, a quarter of what I'd expect from my
test equipment.
- overall network performance (regarding large file transfers) is horrible
Please, if anybody has the knowledge to dig into these problems, let me know
if I can do any tests to help make ggate and NFS useful in fast 5.3-STABLE
environments.
if_em in 5.3 has a large performance penalty in the common case due to a
programming error. I fixed it in 6-CURRENT and 5.3-STABLE. You might
want to try updating to the RELENG_5 branch to see if you get better
results.
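(A minimal sketch of tracking RELENG_5 with cvsup, assuming the stock example
supfile rather than any particular local setup:)

    # based on /usr/share/examples/cvsup/stable-supfile:
    *default host=cvsup.FreeBSD.org
    *default base=/var/db
    *default prefix=/usr
    *default release=cvs tag=RELENG_5
    *default delete use-rel-suffix
    src-all

    # fetch the sources, then rebuild world and kernel as usual:
    cvsup -g -L 2 stable-supfile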
Scott