Re: Awful forwarding rate [7.2-Release, igb]

2009-05-11 Thread Michael Vince
Also make sure that the route for this specific test isn't going out on
the Internet and coming back in at your outside link's speed of
around 117 Kbits/sec.


I had a similar problem once where I had 3 boxes hooked up and the
speeds were blistering fast for 2 of the tests, but the third test was
horribly slow.
It turned out I had simply forgotten to add the route, so instead of the
packets jumping across the local network, they were taking a pointless
detour out to my first upstream provider and back again :)
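A quick way to catch that kind of thing (a sketch, using the test
target from the mail below) is to ask the routing table and the actual
path directly:

$ route -n get 192.168.111.3    # is the gateway/interface what you expect?
$ traceroute -n 192.168.111.3   # or does the path detour via the upstream?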


Mike


Oleg Baranov wrote:

Hello!

I have extremely low forwarding speed on a 7.2-RELEASE box with dual
Intel 82575 NICs.


Box "B" with dual 82575 nic is connected between A and C using gigabit 
swithes

A <---> B <---> C


iperf run from A to C shows:

$ iperf -w 128k -c 192.168.111.3


Client connecting to 192.168.111.3, TCP port 5001
TCP window size:   129 KByte (WARNING: requested   128 KByte)

[  3] local 192.168.1.15 port 51077 connected with 192.168.111.3 port 
5001

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-11.2 sec   160 KBytes   117 Kbits/sec



the same run from A to B shows:

$ iperf -w 128k -c 192.168.1.153

Client connecting to 192.168.1.153, TCP port 5001
TCP window size:   129 KByte (WARNING: requested   128 KByte)

[  3] local 192.168.1.15 port 60907 connected with 192.168.1.153 port 
5001

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   933 Mbits/sec


and from B to C shows:

$ iperf -w 128k -c 192.168.111.3

Client connecting to 192.168.111.3, TCP port 5001
TCP window size:   129 KByte (WARNING: requested   128 KByte)

[  3] local 192.168.111.254 port 64290 connected with 192.168.111.3 
port 5001

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes   930 Mbits/sec


Boxes B and C both have dual quad-core E5420 CPUs on Supermicro X7DWN+
motherboards.
As A, I tried several machines, including a dual quad-core Phenom system
as well as some portable PCs and workstations residing on the same LAN.


Here is ifconfig from B

$ ifconfig
igb0: flags=8843 metric 0 mtu 
1500

   options=19b
   ether 00:30:48:c8:19:66
   inet 192.168.1.153 netmask 0xffffff00 broadcast 192.168.1.255
   media: Ethernet autoselect (1000baseTX )
   status: active
igb1: flags=8843 metric 0 mtu 
1500

   options=19b
   ether 00:30:48:c8:19:67
   media: Ethernet autoselect (1000baseTX )
   status: active
   lagg: laggdev lagg0
lo0: flags=8049 metric 0 mtu 16384
   inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
   inet6 ::1 prefixlen 128
   inet 127.0.0.1 netmask 0xff000000
lagg0: flags=8843 metric 0 mtu 
1500

   options=19b
   ether 00:30:48:c8:19:67
   inet 192.168.111.254 netmask 0xffffff00 broadcast 192.168.111.255
   media: Ethernet autoselect
   status: active
   laggproto lacp
   laggport: igb1 flags=1c
gif0: flags=8051 metric 0 mtu 1280
   tunnel inet 192.168.1.153 --> 192.168.1.156
   inet 192.168.111.254 --> 192.168.112.254 netmask 0xffffffff


I tried removing the lagg & gif interfaces, booting a GENERIC kernel,
and even setting up the same network config from the LiveFS CD - nothing
helps. Forwarding speed sometimes goes up to 1-2 Mbit/sec, while local
speeds are always above 900 Mbit/sec.

System load is less than 1%, and the logs contain nothing interesting...

Any clues and ideas would be appreciated
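(A few quick things worth checking on box B first - a sketch, using
only the stock tools:

$ sysctl net.inet.ip.forwarding net.inet.ip.fastforwarding
$ netstat -i          # are Ierrs/Oerrs climbing on igb0/igb1?
$ netstat -s -p ip    # look at the 'packets forwarded' counters)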





FAST_IPSEC vs IPSEC performance

2004-11-23 Thread Michael Vince
Hey all.
I have been googling around for information about IPSEC and
FAST_IPSEC for FreeBSD, wondering which gives the best performance,
when I came across this: http://people.freebsd.org/~pjd/netperf/
It has some nice graphs/figures on the performance of IPSEC and FAST_IPSEC
over gigabit ethernet devices, and as far as I can tell,
the regular IPSEC appears to be a fair bit faster in some areas than
FAST_IPSEC.
Does anyone know if this information is correct?
Has anyone done their own benchmarks?

Thanks



Re: IPMI doesn't work...

2005-03-14 Thread Michael Vince
Just out of interest, has anyone got a serial console to work with this
IPMI stuff?
I was looking at alternatives to the regular 9-pin serial port, since
Dell machines normally only have one serial port and I prefer two.

Regards,
Mike
Bruce M Simpson wrote:
On Mon, Mar 14, 2005 at 04:26:16PM -0800, Jeff wrote:
 

I don't think it's the case of the OS turning off the NIC.  We can 
access/monitor/control the chassis via the BMC fine through the bios 
assigned IP address when the computer is off, and when it is booting, 
but lose control when the kernel loads (the bios assigned ip address is, 
of course, different from what OS assigns).  It seems odd to me how the 
BMC shares the NIC, but maybe this is normal...I'm new to IPMI.
   

I can only speak for looking at the Intel gigabit chip datasheets and
our em(4) driver somewhat, but there are registers which control the
'pass through' which IPMI uses. It could be that the bge driver is
unaware of the registers Broadcom added to support IPMI.
In this case we'd need to find out what they are and teach the driver
not to meddle with them.
Regards,
BMS


Re: cisco vpn experience?

2005-04-18 Thread Michael Vince
Yeah, I hooked up a 5.3 FreeBSD box to a big mobile phone company's
$60,000 piece of Cisco VPN equipment. I have a Cisco cert myself, but I
prefer FreeBSD :)
I used racoon/ipsec-tools and FAST_IPSEC compiled into the kernel.

The IPs are spoofed, but just to give you the idea:
Mar 31 16:02:54 mord racoon: INFO: IPsec-SA request for 192.168.64.132 
queued due to no phase1 found.
Mar 31 16:02:54 mord racoon: INFO: initiate new phase 1 negotiation: 
192.168.207.68[500]<=>192.168.64.132[500]
Mar 31 16:02:54 mord racoon: INFO: begin Identity Protection mode.
Mar 31 16:02:54 mord racoon: INFO: received Vendor ID: CISCO-UNITY
Mar 31 16:02:54 mord racoon: INFO: received Vendor ID: DPD
Mar 31 16:02:54 mord racoon: INFO: received Vendor ID: 
draft-ietf-ipsra-isakmp-xauth-06.txt
Mar 31 16:02:54 mord racoon: INFO: ISAKMP-SA established 
192.168.207.68[500]-192.168.64.132[500] 
spi:03091ac91619:5bf5227037f4fa80
Mar 31 16:02:55 mord racoon: INFO: initiate new phase 2 negotiation: 
192.168.207.68[0]<=>192.168.64.132[0]
Mar 31 16:02:55 mord racoon: INFO: IPsec-SA established: ESP/Tunnel 
192.168.64.132->192.168.207.68 spi=30520619(0x1cb25c2)
Mar 31 16:02:55 mord racoon: INFO: IPsec-SA established: ESP/Tunnel 
192.168.207.68->192.168.64.132 spi=626279197(0x28e7c1b1
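For anyone curious, the racoon side of a setup like this boils down to
a remote/sainfo pair in racoon.conf. A minimal sketch - the peer
address is the (spoofed) one above, and the algorithms/keys here are
placeholders, not the production config:

remote 192.168.64.132 {
        exchange_mode main;    # the 'Identity Protection mode' in the log
        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group 2;
        }
}

sainfo anonymous {
        pfs_group 2;
        encryption_algorithm 3des;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}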

Julian Elischer wrote:
Has anyone connected a FreeBSD machine to a "cisco ipsec VPN" as 
exported by
various Cisco routers.

they have special solaris, linux and windows clients..


VPN setup script

2005-04-26 Thread Michael Vince
Hey all,
I have created a VPN setup script for FreeBSD, check it out here
http://roq.com/projects/vpnsetup/index.html
http://www.roq.com/projects/vpnsetup/vpnsetup.pl

It's still in its testing phase, but as far as I can see it's reasonably
complete.

If anyone tries it out I would appreciate feedback.

Cheers,
Michael




Re: Some notes on FAST_IPSEC...

2005-05-13 Thread Michael Vince

   Yeah,
   Does anyone know if someone is going to add ipsec-tools to the ports
   tree?
   Cheers,
   Michael
   [EMAIL PROTECTED] wrote:

At Thu, 12 May 2005 05:25:24 +0000 (UTC),
Bjoern A. Zeeb wrote:


On Thu, 12 May 2005, Qing Li wrote:

Hi,



  I'd like to volunteer for



Tasks to update FAST_IPSec
 Add IPv6 support (2-3 weeks)



  unless someone else has already claimed ownership.

  I can also help out on the racoon side so feel
  free to put my name down on that list.


from skipping through racoon-ml from time to time I think racoon got
announced as 0xdead project and one should switch to ipsec-tools?



Yes, the announcement can be found here:

ftp://ftp.kame.net/pub/mail-list/snap-users/9012

Later,
George



Network performance 6.0 with netperf

2005-10-14 Thread Michael VInce

Hey all,
I've been doing some network benchmarking using netperf and simple
'fetch' runs on a new network setup, to make sure I am getting the most
out of the router and servers. I thought I would post some results in
case someone can help me with my problems, or in case others are just
interested in the results.


The network is currently like this: machines A and B are the Dell
1850s, and C is the 2850 with two CPUs (server C has the Apache2
worker MPM on it); server B is the gateway, and A acts as the client
for the fetch and netperf tests.

A --- B --- C
The two 1850s (A and B) are running FreeBSD 6.0-RC1 amd64, while C is
running 5.4-STABLE i386 from Oct 12.


My main problem is that if I compile SMP into machine C (5.4-STABLE),
the network speed drops to a range of 6 Mbytes/sec to 15 Mbytes/sec.
If I use the GENERIC kernel, performance goes up to what I have shown
below, which is around 65 Mbytes/sec for a 'fetch' get test from the
Apache server and 933 Mbits/sec for netperf.

Does anyone know why network performance would be so bad on SMP?

Does anyone think that if I upgrade the i386 SMP server to 6.0-RC1, the
SMP network performance would improve? This server will be running Java,
so I need it to be stable; that is the reason I am using i386 and Java 1.4.


I am happy with the direct machine-to-machine (non-SMP) performance,
which is pretty much full 1 Gbit/sec speed.
Going through the gateway (server B) drops the speed down a bit in
both directions; in TCP speed tests using netperf I get around
266 Mbits/sec from server A through gateway B to server C, which is
quite adequate for the link I currently have for it.


Doing a 'fetch' get of a 1 GB file from the Apache server gives good
speeds of close to 600 Mbits/sec, but netperf shows its weakness at
266 Mbits/sec.
This is as fast as I need it to be, but does anyone know the weak points
on the router gateway to make it faster? Is this the performance I
should expect from FreeBSD as a router with gigabit ethers?


I have seen 'net.inet.ip.fastforwarding' in some people's router
setups on the list, but nothing about what it does or what it can
affect.
I haven't done any testing with polling yet, but if I can already get
over 900 Mbits/sec on the interfaces, does polling help with passing
packets from one interface to the other?
All machines have PF running; other than that they don't really have
any sysctls or special kernel options.
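(For reference: net.inet.ip.fastforwarding enables an optimized
forwarding path that processes a routed packet to completion in the
receiving context instead of queueing it through the netisr; it only
affects forwarded packets, not locally terminated traffic. Turning it
on is just a sysctl - a sketch of the persistent form:

# /etc/sysctl.conf
net.inet.ip.fastforwarding=1)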


Here are some speed benchmarks using netperf and 'fetch' gets.

Server A to server C, with server C using an SMP kernel (GENERIC
kernel results further below):


B# /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344

TCP STREAM TEST to server-C : +/-2.5% @ 99% conf. : histogram
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

57344  57344   4096     10.06    155.99
tank# fetch -o - > /dev/null http://server-C/file1gig.iso
- 100% of 1055 MB   13 MBps  00m00s


# Using generic non SMP kernel
Server A to server C with server C using GENERIC kernel.
A# fetch -o - > /dev/null http://server-C/file1gig.iso
- 100% of 1055 MB   59 MBps  00m00s


A# ./tcp_stream_script server-C

/usr/local/netperf/netperf -l 60 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344


Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

57344  57344   4096     60.43    266.92


###
Connecting from server-A to B (gateway)
A# ./tcp_stream_script server-B



/usr/local/netperf/netperf -l 60 -H server-B -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344


TCP STREAM TEST to server-B : +/-2.5% @ 99% conf. : histogram
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

57344  57344   4096     61.80    926.82


##
Connecting from server B (gateway) to server C
Fetch and Apache2 test
B# fetch -o - > /dev/null http://server-C/file1gig.iso
- 100% of 1055 MB   74 MBps  00m00s


Netperf test
B# /usr/local/netperf/tcp_stream_script server-C

/usr/local/netperf/netperf -l 60 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344


TCP STREAM TEST to server-C : +/-2.5% @ 99% conf. : histogram
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

57344  57344   4096     62.20    933.94



Cheers,
Mike


Re: Network performance 6.0 with netperf

2005-10-19 Thread Michael VInce

Robert Watson wrote:



On Fri, 14 Oct 2005, Michael VInce wrote:

I been doing some network benchmarking using netperf and just simple 
'fetch' on a new network setup to make sure I am getting the most out 
of the router and servers, I thought I would post some results in 
case some one can help me with my problems or if others are just 
interested to see the results.



Until recently (or maybe still), netperf was compiled with -DHISTOGRAM 
by our port/package, which resulted in a significant performance 
drop.  I believe that the port maintainer and others have agreed to 
change it, but I'm not sure if it's been committed yet, or which 
packages have been rebuilt.  You may want to manually rebuild it to 
make sure -DHISTOGRAM isn't set.


You may want to try setting net.isr.direct=1 and see what performance 
impact that has for you.


Robert N M Watson


I reinstalled netperf to make sure it's the latest.

I have also decided to upgrade server C (the i386 5.4 box) to 6.0-RC1,
and noticed it gave a large improvement in network performance with an
SMP kernel.


As for the network setup ( A --- B --- C ), with server B being the
gateway: doing a basic 'fetch' from the gateway (B) to the Apache
server (C) gives up to 700 Mbits/sec transfer performance, while doing
a fetch from server A, thus going through the gateway, gives slower
but still decent performance of up to 400 Mbits/sec.


B> fetch -o - > /dev/null http://server-c/file1gig.iso
- 100% of 1055 MB   69 MBps  00m00s



A> fetch -o - > /dev/null http://server-c/file1gig.iso
- 100% of 1055 MB   39 MBps  00m00s


Netperf from the gateway directly to the Apache server (C): 916 Mbits/sec
B> /usr/local/netperf/netperf -l 20 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344

Elapsed Throughput - 10^6bits/sec: 916.50

Netperf from the client machine through the gateway to the Apache
server (C): 315 Mbits/sec
A> /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344

Elapsed Throughput - 10^6bits/sec: 315.89

A client-to-gateway netperf test shows the direct connection between
these machines is fast: 912 Mbits/sec
A> /usr/local/netperf/netperf -l 30 -H server-B -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 5734

Elapsed Throughput - 10^6bits/sec: 912.11

The strange thing is that in my last post I was getting faster speeds
from server A to C in the 'fetch' tests on non-SMP kernels and slower
speeds in the netperf tests; now I get slightly slower fetch results
but faster netperf results, with or without SMP on server C.


I was going to test with 'net.isr.dispatch', but that sysctl doesn't
appear to exist; this returns nothing:

sysctl -a | grep 'net.isr.dispatch'

I also tried polling, but it seems that doesn't exist either:
ifconfig em3 inet 192.168.1.1 netmask 255.255.255.224 polling
ifconfig: polling: Invalid argument
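(That 'Invalid argument' is what ifconfig returns when the kernel
lacks polling support. Assuming a 6.x-era system, something like this
has to be in the kernel config before 'ifconfig ... polling' works - a
sketch:

options DEVICE_POLLING   # build the polling framework into the kernel
options HZ=1000          # polling granularity depends on a high-ish HZ

and then, after booting the new kernel:

ifconfig em3 polling)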

When doing netperf tests there was high interrupt usage.
CPU states:  0.7% user,  0.0% nice, 13.5% system, 70.0% interrupt, 15.7% 
idle


Also, server B is using its last two gigabit ethernet ports, which
pciconf -lv lists as '82547EI Gigabit Ethernet Controller', while the
first two are listed as 'PRO/1000 P'.
Does anyone know if the PRO/1000 P would be better?

[EMAIL PROTECTED]:4:0:   class=0x02 card=0x118a8086 chip=0x108a8086 rev=0x03 
hdr=0x00

   vendor   = 'Intel Corporation'
   device   = 'PRO/1000 P'

[EMAIL PROTECTED]:8:0:   class=0x02 card=0x016d1028 chip=0x10768086 rev=0x05 
hdr=0x00

   vendor   = 'Intel Corporation'
   device   = '82547EI Gigabit Ethernet Controller'

Cheers,
Mike






Re: Network performance 6.0 with netperf

2005-10-20 Thread Michael VInce
Here is probably my final round of tests; I thought it could possibly
be useful to others.


I have enabled polling on the interfaces and discovered some of the 
master secret holy grail sysctls that really make this stuff work.

I now get over 900mbits/sec router performance with polling.

Having either net.isr.direct=1 or net.inet.ip.fastforwarding=1 gave
roughly an extra 445 Mbits of performance according to the netperf
tests. Because my tests aren't really lab-strict, I still haven't been
able to see a clear difference between net.isr.direct=1 and 0 while
net.inet.ip.fastforwarding is set to 1. It does appear that
net.isr.direct=1 might be taking over the job of the
net.inet.ip.fastforwarding sysctl, because with
net.inet.ip.fastforwarding=0 and net.isr.direct=1 on the gateway I
still get the 905.48 Mbits/sec routed speed listed below.
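Pulling the 'holy grail' knobs together, the whole gateway tuning
amounts to something like this (a sketch; the em3 address is the one
from my earlier ifconfig attempt):

# kernel config:
options DEVICE_POLLING
options HZ=1000

# /etc/rc.conf - enable polling per interface:
ifconfig_em3="inet 192.168.1.1 netmask 255.255.255.224 polling"

# /etc/sysctl.conf:
net.inet.ip.fastforwarding=1
net.isr.direct=1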


From the client machine (A) through the gateway (B with polling 
enabled) to the server (C)

With net.isr.direct=1 and net.inet.ip.fastforwarding=1
A> /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344

Elapsed Throughput - 10^6bits/sec: 905.48

With net.isr.direct=0 and net.inet.ip.fastforwarding=0
Elapsed Throughput - 10^6bits/sec: 460.15

Apache get 'fetch' test.
A> fetch -o - > /dev/null http://server-C/file1gig.iso
- 100% of 1055 MB   67 MBps  00m00s


Interestingly, when testing from the gateway itself (B) directly to the
server (C), having 'net.isr.direct=1' slowed performance down to 583 Mbits/sec:
B> /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344

Elapsed Throughput - 10^6bits/sec: 583.57

Same test with 'net.isr.direct=0':
Elapsed Throughput - 10^6bits/sec: 868.94

I have to ask: how can this be possible, when the same box used as a
router with net.isr.direct=1 passes traffic at over 900 Mbits/sec?
Having net.inet.ip.fastforwarding=1 doesn't affect performance in
these B-to-C tests.


I believe faster performance may still be possible: another rack of
gear I have, with another amd64 6.0-RC1 Dell 2850 (Kes), gives me up
to 930 Mbits/sec in Apache fetch tests. I believe it's faster there
because it's an amd64 Apache server, or possibly it just has slightly
better quality ether cables; as I mentioned before, the Apache server
(box "C") in the above tests is i386 on 6.0-RC1.


This fetch test is only on a switch with no router between them.
spin> fetch -o - > /dev/null http://kes/500megs.zip
- 100% of  610 MB   93 MBps

So far, from this casual testing, I have discovered these things on
my servers:
- Using 6.0 on SMP servers gives a big boost in network performance
over 5.x SMP, on i386 or amd64.
- FreeBSD as a router on gigabit ethernet with polling gives over 2x
the performance with the right sysctls.
- It needs more testing, but amd64 FreeBSD appears to be better than
i386 for Apache2 network performance on SMP kernels.
- Single-interface speed tests run from the router itself with polling
enabled and 'net.isr.direct=1' appear to suffer.


Regards,
Mike



Re: Network performance 6.0 with netperf

2005-10-20 Thread Michael VInce

Sten Daniel Sørsdal wrote:


Michael VInce wrote:

 


I reinstalled the netperf to make sure its the latest.

I have also decided to upgrade Server-C (the i386 5.4 box) to 6.0RC1 and
noticed it gave a large improvement of network performance with a SMP
kernel.

As with the network setup ( A --- B --- C  ) with server B being the
gateway, doing a basic 'fetch' from the gateway (B) to the Apache server
(C) it gives up to 700mbits/sec transfer performance, doing a fetch from
server A thus going through the gateway gives slower but still decent
performance of up to 400mbits/sec.
   



Are you by any chance using PCI NICs? The PCI bus is limited to
somewhere around 1 Gbit/s.
So if you consider:
Theoretical maximum = ( 1 Gbps - pci_overhead )

 

The 4 ethernet ports on the Dell server are all built-in, so I am
assuming they are on the best bus available.


Mike


Re: Network performance 6.0 with netperf

2005-10-20 Thread Michael VInce



On Thu, Oct 20, 2005 at 04:26:31PM +0200, Brad Knowles wrote:
 


At 10:49 PM +1000 2005-10-20, Michael VInce wrote:

   


> The 4 ethernet ports on the Dell server are all built-in so I am assuming
> they are on the best bus available.
 



	In my experience, the terms "Dell" and "best available" very 
rarely go together.


	Dell has made a name for themselves by shipping the absolutely 
cheapest possible hardware they can, with the thinnest possible 
profit margins, and trying to make up the difference in volume. 
Issues like support, ease of management, freedom from overheating, 
etc... get secondary or tertiary consideration, if they get any 
consideration at all.


But maybe that's just me.

--
Brad Knowles, <[EMAIL PROTECTED]>
   


I think that's unfair.


I have a couple of Dell machines and my biggest complaint with them
has been their use of proprietary bolt patterns for their motherboards
and similar tomfoolery, preventing you from migrating their hardware
as your needs grow.

This also guarantees that your $75 power supply becomes a $200 one
once the warranty ends - good for them, not good for you.

Other than that, I've been pretty happy with their stuff. Sure beats a lot
of other "PC" vendors out there in terms of reliability, heat management,
BIOS updates, etc.

--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights 
Activist


I have to agree, Karl.
Those slots aren't proprietary, they're PCI Express.
When I went to open the machine up to put in a multi-port PCI serial
card, all I saw were those little modern mean-looking PCI Express
slots, which have the ability to scare any techie; there are no old
PCI slots on it. I had to dump my serial card and change over to
USB-to-serial converters, loading ucom and uplcom as kernel modules,
so I could use tip to serial out of USB into the single serial port on
the other Dell machines when the ethernet is down. That ended up
working out great; I will never need clunky old (and pricey)
multi-port PCI serial cards again.
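For reference, the whole USB-to-serial setup amounts to very little -
a sketch, noting that the exact ucom device node name varies by
release:

# load the Prolific USB-to-serial driver; it pulls in ucom
kldload uplcom

# point tip at the new port via an /etc/remote entry, e.g.:
#   ucom:dv=/dev/ucom0:br#9600:pa=none
tip ucom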


If you look at the Intel E7520 chipset of the Dell 1850/2850 (the
2850 is really just a bigger-case machine that holds more drives):

http://www.intel.com/design/chipsets/embedded/e7520.htm

you will see it has only PCI Express plus PCI-X (64-bit/133 MHz); PCI
Express does a minimum of 2.5 Gbit/s per lane in one direction, and it
is a switched bus technology where there is no sharing of lanes.
There are no old-school 32-bit/33 MHz PCI buses.
http://www.pcstats.com/articleview.cfm?articleid=1087&page=3

As for service: I actually ordered two much smaller Dell 750s, but
because they were out of them for a couple of weeks due to some big
company ordering 500, I had a bit of an argument with the Dell guy on
the phone and got 1850s with SCSI RAID 1 out of him for the same
price.
It's Dell that has shown me how good (and maybe a bit evil) big
companies can be.





Re: em(4) patch for test

2005-10-22 Thread Michael VInce

Gleb Smirnoff wrote:


 Colleagues,

 since the if_em problem was taken as a late showstopper for 6.0-RELEASE,
I am asking you to help with testing of the fixes made in HEAD.

 Does your em(4) interface wedge for some time?
 Do you see a lot of errors in 'netstat -i' output? Do these errors
 increase not monotonically, but in peaks?

If the answer is yes, then the attached patch is likely to fix your problem.
If the answer is no, then you are still encouraged to help with testing
and install the patch to check that no regressions are introduced. If you
skip this, then you may encounter regressions after release, so you have
been warned.

 So, in short: please test! Thanks in advance!

The patch is against fresh RELENG_6.
 

Here are some results from my A-B-C test network. This round of tests
was largely focused on my gateway (B), which has 4 built-in em gigabit
ether devices, with polling and packet filter enabled.
Servers A and B are Dell 1850s, 6.0-RC1 amd64; server C is a Dell 2850,
6.0-RC1 i386.

I have pushed many gigs of traffic through B's two interfaces
using netperf and 'fetch' tests of large files.
I have yet to see any errors on the gateway, either before or after
patching.
But after I finished testing I checked netstat -i, and I have errors on
my client (A) and server (C) machines, which appear to have been caused
by earlier testing I was doing using the Apache benchmark 'ab', so I am
going to start testing those. The netperf and 'fetch' tests never
budged any Oerrs or Ierrs stats on any of the machines.
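A convenient way to watch for this while a test is actually running,
rather than diffing netstat -i before and after (a sketch):

B> netstat -w 1 -I em2    # one line per second: in/out packets and errors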


After patch.

B> netstat -i
Name   Mtu  Network  Address             Ipkts  Ierrs     Opkts  Oerrs  Coll
em0*   1500           00:0e:0c:f3:d4:f2       0      0         0      0     0
em1*   1500           00:0e:0c:f3:d4:f3       0      0         0      0     0
em2    1500           00:14:22:f5:ff:4b 39887301     0  24145946      0     0
em3    1500           00:14:22:f5:ff:4c 24144573     0  39885777      0     0



Before patch:
B> netstat -i | grep "Link"
em0*   1500           00:0e:0c:f3:d4:f2       0      0         0      0     0
em1*   1500           00:0e:0c:f3:d4:f3       0      0         0      0     0
em2    1500           00:14:22:f5:ff:4b 67671693     0  51036449      0     0
em3    1500           00:14:22:f5:ff:4c 49964821     0  65868324      0     0



The client test machine (A) and server machine (C), yet to be patched,
do have some Ierrs, but not many. The other thing to note is that the
netperf and 'fetch' tests listed below (done repeatedly) never made
these error stats budge at all. It turns out the errors were caused by
some tests I was doing on these machines a day earlier using Apache
'ab', which really beats up the servers quite well, this being:

'A> ab -k -n 19000 -c 1500 http://server-c/133kbyte_file'
I will post some results using this test after this email.

A> netstat -i | egrep 'em2.*Link|Name'
Name   Mtu  Network  Address              Ipkts  Ierrs      Opkts  Oerrs  Coll
em2    1500           00:14:ff:15:ff:8e 225763513   5025  311375238      0     0


C> netstat -i | egrep 'em0.*Link|Name'
Name   Mtu  Network  Address              Ipkts  Ierrs      Opkts  Oerrs  Coll
em0    1500           00:14:ff:12:4c:03 347060671  13317  251444959      0     0


Tests used and benchmark results for the netperf and fetch tests follow.
Interestingly, it appears to benchmark a tiny bit faster after
patching, though I believe that's just the luck of the draw.

A> fetch -o - > /dev/null http://server-C/file1gig.iso
A> /usr/local/netperf/netperf -l 60 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344


Before patch
Fetch test: 84 MBps
Netperf Elapsed Throughput - 10^6bits/sec: 905.77

After patch
Fetch test: 85 MBps
Netperf Elapsed Throughput - 10^6bits/sec: 909.60

Before patch
B> netstat -s
tcp:
   1070840 packets sent
   55362 data packets (7666434 bytes)
   183 data packets (189863 bytes) retransmitted
   3 data packets unnecessarily retransmitted
   0 resends initiated by MTU discovery
   733560 ack-only packets (178 delayed)
   0 URG only packets
   0 window probe packets
   281674 window update packets
   62 control packets
   1791219 packets received
   53340 acks (for 7668053 bytes)
   512 duplicate acks
   0 acks for unsent data
   1738233 packets (2511401694 bytes) received in-sequence
   102 completely duplicate packets (83167 bytes)
   0 old duplicate packets
   9 packets with some dup. data (5184 bytes duped)
   564 out-of-order packets (610315 bytes)
   1 packet (0 bytes) of data after window
   0 window probes
   186 window update packets
   0 packets received after close
   0 discarded for bad checksums
   0 discarded for bad header offset fields
 

Re: IPSec tcp session stalling

2005-10-22 Thread Michael VInce
Try sending pings of different sizes, or other packet-size-control
utilities, to make really sure it's not MTU related.
Maybe there is an upstream router that's blocking ICMP fragmentation
packets; have you ever seen them? Try forcing the creation of some.
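Something along these lines, sweeping the payload size with the
don't-fragment bit set (1252 bytes of payload + 28 bytes of headers =
1280, the gif MTU in your output):

ping -D -c 3 -s 1200 10.128.6.1   # comfortably under the tunnel MTU
ping -D -c 3 -s 1252 10.128.6.1   # right at the 1280 tunnel MTU
ping -D -c 3 -s 1400 10.128.6.1   # should fail if path MTU handling is sane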


Mike

Volker wrote:


Still having the same problem with an IPSec tunnel between FreeBSD 5.4R
hosts.

Problem description:
scp session tries to transfer a large file through an IPSec tunnel. The
file is being transmitted but scp says 'stalled' after 56K (49152 bytes
file size). The IPSec tunnel itself is still up even after the scp
abort. Other tcp sessions break, too when sending too much traffic
through the tunnel.

I've taken a closer look to it and tried to get something useful out of
the tcpdump but I'm unable to see any errors or I'm misinterpreting
something.

The connection looks like:

extIP: A.B.C.D                      extIP: E.F.G.H
host A -------- (internet) -------- host B
tunnelIP: 10.128.1.6                tunnelIP: 10.128.6.1

host A just has an external interface (em1) connected to a leased line
with a fixed IP address (IP-addr A.B.C.D).
host B has an S-DSL connection at xl0, PPPoE at ng0 (IP-addr. E.F.G.H).

Both hosts are using gif for the IPSec tunnel.

The routing tables (netstat -rnWf inet) are looking good and IMHO the
MTU is fine.

host A:
em1: flags=8843 mtu 1500
   options=b
   inet A.B.C.D netmask 0xfffffff8 broadcast A.B.C.z
   ether 00:c0:9f:46:ec:c7
   media: Ethernet autoselect (100baseTX )
   status: active
gif6: flags=8051 mtu 1280
   tunnel inet A.B.C.D --> E.F.G.H
   inet 10.128.1.6 --> 10.128.6.1 netmask 0xffffffff
   inet6 fe80::2c0:9fff:fe46:ecc6%gif6 prefixlen 64 scopeid 0x4

Routing tables (shortened)
Destination        Gateway            Flags    Refs      Use    Mtu  Netif Expire
default            A.B.C.x            UGS         2   516686   1500    em1
10.128.1.6 127.0.0.1  UH  0   14  16384  lo0
10.128.6.1 10.128.1.6 UH  0 6017   1280 gif6
127.0.0.1  127.0.0.1  UH  031633  16384  lo0
A.B.C.x/29   link#2 UC  00   1500  em1
A.B.C.D  00:c0:9f:46:ec:c7  UHLW0  112   1500  lo0

On host B the interfaces and routing tables are looking like:
xl0: flags=8843 mtu 1500
   options=8
   inet 0.0.0.0 netmask 0xff000000 broadcast 0.255.255.255
   inet6 fe80::260:8ff:fe6c:e73c%xl0 prefixlen 64 scopeid 0x1
   ether 00:60:08:6c:e7:3c
   media: Ethernet 10baseT/UTP (10baseT/UTP )
   status: active
gif1: flags=8051 mtu 1280
   tunnel inet E.F.G.H --> A.B.C.D
   inet6 fe80::260:8ff:fe6c:e73c%gif1 prefixlen 64 scopeid 0x4
   inet 10.128.6.1 --> 10.128.1.6 netmask 0xffffffff
ng0: flags=88d1 mtu 1456
   inet E.F.G.H --> 217.5.98.186 netmask 0xffffffff

Routing tables (shortened)
Destination        Gateway            Flags    Refs      Use    Mtu  Netif Expire
0                  link#1             UC          0        0   1500    xl0 =>
default            217.5.98.186       UGS         1    38474   1456    ng0
10.128.1.6 10.128.6.1 UH  4 2196   1280 gif1
127.0.0.1  127.0.0.1  UH  080424  16384  lo0
217.5.98.186   E.F.G.H   UH  10   1456  ng0
E.F.G.H   lo0UHS 00  16384  lo0

While trying to fetch a file by scp on host A (receiver) from host B
(sender), I captured the following tcpdump on host B:

tcpdump -netttvvi gif1:
 


23 AF 2 1280: IP (tos 0x8, ttl  64, id 13202, offset 0, flags [none], length: 1280) 
10.128.6.1.22 > 10.128.1.6.53160: . 43864:45092(1228) ack 1330 win 33156 

000207 AF 2 1280: IP (tos 0x8, ttl  64, id 52187, offset 0, flags [none], length: 1280) 
10.128.6.1.22 > 10.128.1.6.53160: . 45092:46320(1228) ack 1330 win 33156 

000220 AF 2 1280: IP (tos 0x8, ttl  64, id 33774, offset 0, flags [none], length: 1280) 
10.128.6.1.22 > 10.128.1.6.53160: . 46320:47548(1228) ack 1330 win 33156 

003524 AF 2 52: IP (tos 0x8, ttl  64, id 42063, offset 0, flags [none], length: 52) 
10.128.1.6.53160 > 10.128.6.1.22: . [tcp sum ok] 1330:1330(0) ack 38952 win 33156 

24 AF 2 1280: IP (tos 0x8, ttl  64, id 48541, offset 0, flags [none], length: 1280) 
10.128.6.1.22 > 10.128.1.6.53160: . 47548:48776(1228) ack 1330 win 33156 

011203 AF 2 52: IP (tos 0x8, ttl  64, id 60517, offset 0, flags [none], length: 52) 
10.128.1.6.53160 > 10.128.6.1.22: . [tcp sum ok] 1330:1330(0) ack 41408 win 32542 

58 AF 2 1280: IP (tos 0x8, ttl  64, id 15798, offset 0, flags [none], length: 1280) 
10.128.6.1.22 > 10.128.1.6.53160: . 48776:50004(1228) ack 1330 win 33156 

000246 AF 2 1280: IP (tos 0x8, ttl  64, id 31721, offset 0, flags [none], length: 1280) 
10.128.6.1.22 > 10.128.1.6.53160: . 50004:51232(1228) ack 1330 win 33156 

005147 AF 2 52: IP (tos 0x8, ttl  64, id 22347, offset 0, flags 

Re: IPSec tcp session stalling

2005-10-22 Thread Michael VInce
I am using FAST_IPSEC on a multi-subnet VPN with the guys on the other
side running a Check Point VPN / Firewall.
It's a VPN that sees almost non-stop usage; the people on the other
side have 24 monitoring utils on it and it's never had a problem.
It's on 5.3 i386, and I fear to upgrade it; when it comes to VPNs I
believe in the rule: if it's not broken, don't fix it.

Come to think of it, I haven't had much luck when trying regular IPSEC,
despite the docs saying it's better supported. But then again I never
gave it a good shot; FAST_IPSEC just sounded 'faster'.
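(On Volker's question quoted below: FAST_IPSEC does not require a
hardware crypto device - it sits on the crypto(4) framework, which
falls back to software when no accelerator is present. A sketch of the
kernel config difference, as of the 5.x era:

options   FAST_IPSEC   # instead of: options IPSEC / options IPSEC_ESP
device    crypto       # the crypto framework FAST_IPSEC requires)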


Mike

Volker wrote:


Max & Co:

I've just seen I'm using kernel config 'options IPSEC' on both machines.
Should I try 'options FAST_IPSEC'? Would take some hours for kernel
recompile. Does the code IPSEC / FAST_IPSEC make a difference (even
while having not hardware crypto accelerator)?

May I use FAST_IPSEC even without any hw-crypto devices? While reading
`man fast_ipsec' I would think it depends on a hw-crypto device...

Please tell me if we should check IPSEC / FAST_IPSEC and I'll start a
recompile.

Volker


On 2005-10-23 00:40, Max Laier wrote:
 

To try something else: Could you guys try to disable SACK on the machines 
involved?  I haven't looked at the dumps as of yet, but that's one simple 
test that might help to identify the problem.


sysctl net.inet.tcp.sack.enable=0

On Sunday 23 October 2005 02:23, Volker wrote:

   


Michael,

I'm not sure I'm checking what you suggested the right way, but when
trying to ping hostB from hostA with oversized packets through the
IPSec tunnel by:

# ping -c 10 -s 12000 10.128.6.1

I'm getting replies easily.

While doing that and tcpdump'ing the gif interface, I'm seeing the
fragmented packets coming in properly.

If that's a reliable check for MTU, then the problem should not be MTU
related. Is there any other way to check MTU problems using `ping'?

Thanks,

Volker

On 2005-10-22 20:16, Michael VInce wrote:

 


Try sending different sized pings or other packet size control utils to
really make sure its not MTU related.
Maybe there is an upstream router thats blocking ICMP fragment packets,
have you ever seen them? try forcing the creation of some.

Mike

Volker wrote:

   


Still having the same problem with an IPSec tunnel between FreeBSD 5.4R
hosts.

Problem description:
scp session tries to transfer a large file through an IPSec tunnel. The
file is being transmitted but scp says 'stalled' after 56K (49152 bytes
file size). The IPSec tunnel itself is still up even after the scp
abort. Other tcp sessions break, too when sending too much traffic
through the tunnel.

I've taken a closer look to it and tried to get something useful out of
the tcpdump but I'm unable to see any errors or I'm misinterpreting
something.

The connection looks like:

extIP: A.B.C.D
extIP: E.F.G.H
host A -- (internet) -- host B
tunnelIP: 10.128.1.6   tunnelIP:
10.128.6.1

host A just has an external interface (em1) connected to a leased line
with a fixed IP address (IP-addr A.B.C.D).
host B has an S-DSL connection at xl0, PPPoE at ng0 (IP-addr. E.F.G.H).

Both hosts are using gif for the IPSec tunnel.

The routing tables (netstat -rnWf inet) are looking good and IMHO the
MTU is fine.

host A:
em1: flags=8843 mtu 1500
 options=b
 inet A.B.C.D netmask 0xfff8 broadcast A.B.C.z
 ether 00:c0:9f:46:ec:c7
 media: Ethernet autoselect (100baseTX )
 status: active
gif6: flags=8051 mtu 1280
 tunnel inet A.B.C.D --> E.F.G.H
 inet 10.128.1.6 --> 10.128.6.1 netmask 0x
 inet6 fe80::2c0:9fff:fe46:ecc6%gif6 prefixlen 64 scopeid 0x4

Routing tables (shortened)
DestinationGatewayFlagsRefs  UseMtu
Netif Expire
defaultA.B.C.x  UGS 2   516686   1500  em1
10.128.1.6 127.0.0.1  UH  0   14
16384  lo0
10.128.6.1 10.128.1.6 UH  0 6017
1280 gif6
127.0.0.1  127.0.0.1  UH  031633
16384  lo0
A.B.C.x/29   link#2 UC  00   1500  em1
A.B.C.D  00:c0:9f:46:ec:c7  UHLW0  112   1500  lo0

On host B the interfaces and routing tables are looking like:
xl0: flags=8843 mtu 1500
 options=8
 inet 0.0.0.0 netmask 0xff00 broadcast 0.255.255.255
 inet6 fe80::260:8ff:fe6c:e73c%xl0 prefixlen 64 scopeid 0x1
 ether 00:60:08:6c:e7:3c
 media: Ethernet 10baseT/UTP (10baseT/UTP )
 status: active
gif1: flags=8051 mtu 1280
 tunnel inet E.F.G.H --> A.B.C.D
 inet6 fe80::260:8ff:fe6c:e73c%gif1 prefixlen 64 scopeid 0x4
 inet 10.128.6.1 --> 10.128.1.6 netmask 0x
ng0: flags=88d1 mtu 1456
 inet E.F.G.H --> 217.5.98.186 netmask 0x

Routing tables (shortened)
Destinat

Re: em(4) patch for test

2005-10-23 Thread Michael VInce
Here is my second round of non-scientific benchmarking and tests; I
hope it is useful.
I have been having fun benchmarking these machines, but I am starting
to get sick of it as well :) Still, I find it important to know that
things are going to work right when they are launched to do their real
work.

The final results look good: after patching and running ab tests, I was
unable to get any errors out of the netstat -i output, even when
grilling the server-C machine to a rather high load.



Test 1 (Non patched)

netperf test, non patched machines. load results below with top -S, no 
Ierrs or Oerrs.
A> /usr/local/netperf/netperf -l 60 -H server-C -t TCP_STREAM -i 10,2 -I 
99,5 -- -m 4096 -s 57344 -S 57344


Server-C load snap shot

last pid:  1644;  load averages:  0.94,  0.48,  0.23   up 0+06:30:41  12:29:59

239 processes: 7 running, 130 sleeping, 102 waiting
CPU states:  0.5% user,  0.0% nice,  2.3% system,  9.4% interrupt, 87.9% 
idle

Mem: 125M Active, 1160M Inact, 83M Wired, 208K Cache, 112M Buf, 1893M Free
Swap: 4096M Total, 4096M Free

 PID USERNAME  THR PRI NICE   SIZERES STATE  C   TIME   WCPU COMMAND
  13 root1 171   52 0K 8K CPU1   0   0:00 99.02% idle: cpu1
  14 root1 171   52 0K 8K RUN0 386:01 98.97% idle: cpu0
  12 root1 171   52 0K 8K CPU2   2 389:02 97.75% idle: cpu2
  11 root1 171   52 0K 8K RUN3 385:10 61.87% idle: cpu3
  62 root1 -68 -187 0K 8K WAIT   3   1:03 14.16% irq64: em0
 112 root1 -44 -163 0K 8K CPU2   3   1:35 11.23% swi1: net
1644 root1   40  1640K  1016K sbwait 3   0:09  7.25% netserver
  30 root1 -64 -183 0K 8K CPU3   3   0:19  2.15% irq16: 
uhci0



Server-A load snap shot

last pid:  1550;  load averages:  0.34,  0.32,  0.21   up 0+07:28:38  12:41:33

134 processes: 3 running, 52 sleeping, 78 waiting, 1 lock
CPU states:  0.8% user,  0.0% nice, 10.2% system, 42.1% interrupt, 47.0% 
idle

Mem: 13M Active, 27M Inact, 70M Wired, 24K Cache, 213M Buf, 1810M Free
Swap: 4096M Total, 4096M Free

 PID USERNAME  THR PRI NICE   SIZERES STATETIME   WCPU COMMAND
  11 root1 171   52 0K16K RUN438:50 54.98% idle
  83 root1 -44 -163 0K16K WAIT 2:41 17.63% swi1: net
  59 root1 -68 -187 0K16K RUN  1:48 11.23% irq64: em2
1547 root1   40  3812K  1356K sbwait   0:17  8.84% netperf
  27 root1 -68 -187 0K16K *Giant   0:51  2.78% irq16: 
em0 uhci0



Test 2 (Non patched)

On the Apache beat-up test with: A> 'ab -k -n 25500 -c 900
http://server-c/338kbyte.file'
you can see some error output from netstat -i on both machines:

C> netstat -i | egrep 'Name|em0.*Link'
Name   Mtu  Network  Address             Ipkts  Ierrs     Opkts  Oerrs  Coll
em0    1500           00:14:22:12:4c:03 85133828   2079  63248162      0     0


top highest load.
last pid:  2170;  load averages: 35.56, 16.54,  8.52   up 0+07:28:47  13:28:05

1182 processes:125 running, 954 sleeping, 103 waiting
CPU states:  5.4% user,  0.0% nice, 37.2% system, 32.4% interrupt, 25.0% 
idle

Mem: 372M Active, 1161M Inact, 131M Wired, 208K Cache, 112M Buf, 1595M Free
Swap: 4096M Total, 4096M Free

 PID USERNAME  THR PRI NICE   SIZERES STATE  C   TIME   WCPU COMMAND
  13 root1 171   52 0K 8K CPU1   0   0:00 100.00% idle: 
cpu1

  62 root1 -68 -187 0K 8K WAIT   3   5:57 89.79% irq64: em0
  30 root1 -64 -183 0K 8K CPU0   0   2:11 36.08% irq16: 
uhci0

  12 root1 171   52 0K 8K RUN2 439:16  2.98% idle: cpu2
  14 root1 171   52 0K 8K RUN0 435:36  2.98% idle: cpu0
  11 root1 171   52 0K 8K RUN3 430:35  2.98% idle: cpu3
2146 root1  -80  4060K  2088K piperd 2   0:01  0.15% rotatelogs
 129 root1  200 0K 8K syncer 0   0:08  0.05% syncer
 112 root1 -44 -163 0K 8K WAIT   0   4:50  0.00% swi1: net
 110 root1 -32 -151 0K 8K WAIT   3   1:05  0.00% swi4: 
clock sio

2149 www66 1130 44476K 31276K RUN3   0:08  0.00% httpd


Server-A netstat and highest top. (non patched)
A> netstat -i | egrep 'em2.*Link|Name'
Name   Mtu  Network  Address             Ipkts  Ierrs     Opkts  Oerrs  Coll
em2    1500           00:14:22:15:ff:8e 61005124    690  84620697      0     0


last pid:  1698;  load averages:  0.65,  0.29,  0.13   up 0+08:15:10  13:28:05

136 processes: 6 running, 53 sleeping, 76 waiting, 1 lock
CPU states:  3.4% user,  0.0% nice, 58.6% system, 32.0% interrupt,  6.0

Re: em(4) patch for test

2005-10-23 Thread Michael VInce
I just have to point out that below I made a statement which proves I
should have gone to bed earlier instead of doing benchmarks :). The 901
http states and the ssh state have nothing to do with each other, as
they're on different pf rules.
Mike

Michael VInce wrote:

I watched the gateway (B) pf state table and did an ab test with and
without pf running; I didn't see any difference in results when running
pf with stateful rules - ab's time per request stayed low and transfer
rates stayed high. Most of the time the total state count was exactly
900 (plus 1 for the ssh session), which would make sense considering
the 900 keep-alive concurrency level of the ab test.


pftop output
RULE ACTION DIR LOG Q IF  PR  K     PKTS    BYTES  STATES MAX INFO
   0 Pass   In      Q em2 tcp M 37362067 1856847K     901
     inet from any to server-c port = http







Re: WG511T problem using dhcp

2005-11-09 Thread Michael Vince

dennis binder wrote:


hello,

i'm trying to get a wlan card WG511T from netgear to work
and connect to the internet via an accesspoint.
The accesspoint has an SSID= "WLAN" and provides DHCP.

dmesg brings up the following:
ath0:  mem 0x8800-0x8800 irq 9 at device 0.0 on
cardbus0
ath0: mac 5.9 phy 4.3 5ghz radio 4.6
ath0: Ethernet address: 00:0f:b5:67:1b:4f
ath0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps
ath0: 11g rates: 1Mbps 2Mbps 5.5Mbps 11Mbps 6Mbps 9Mbps 12Mbps 18Mbps 
24Mbps

36Mbps 48Mbps 54Mbps

my /etc/rc.conf  looks like this:
removable_interfaces="ath0 ep0"
ifconfig_ath0="NO"

my /etc/start_if.ath0 is empty. After plugging the card "ifconfig ath0"
prints this:
ath0: flags=8802 mtu 1500
   ether 00:0f:b5:67:1b:4f
   media: IEEE 802.11 Wireless Ethernet autoselect
   status: no carrier
   ssid ""
   channel -1 authmode OPEN powersavemode OFF powersavesleep 100
   rtsthreshold 2312 protmode CTS
   wepmode OFF weptxkey 1

after this I want to manually connect to the internet.
I try
# ifconfig ath0 ssid WLAN
# ifconfig ath0
ath0: flags=8843 mtu 1500
   inet6 fe80::20f:b5ff:fe67:1b4f%ath0 prefixlen 64 scopeid 0x4
   inet 0.0.0.0 netmask 0xff00 broadcast 255.255.255.255
   ether 00:0f:b5:67:1b:4f
   media: IEEE 802.11 Wireless Ethernet autoselect (DS/1Mbps)
   status: no carrier
   ssid WLAN 1:WLAN
   channel -1 authmode OPEN powersavemode OFF powersavesleep 100
   rtsthreshold 2312 protmode CTS
   wepmode OFF weptxkey 1

After running "dhclient ath0" the netgear-card finds the accesspoint ( 
both

leds blinking simultaneously).
"ifconfig ath0" prints "status: active" but the inet-address remains
0.0.0.0.

How can I assign the card a valid ip-address via dhcp ?
Or what is wrong in my setup ?
My second interface fxp0 has no problems getting valid
ip-address via dhcp.

Any hints are very welcome.

Dennis Binder


I am using the same card; in my rc.conf I just have this:
ifconfig_ath0="DHCP WPA"
I am using WPA encryption and have the relevant information in
/etc/wpa_supplicant.conf, but if I were using no encryption I wouldn't
need the WPA part at all.
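For completeness, the matching /etc/wpa_supplicant.conf is tiny - a
sketch, with the SSID from your setup and a placeholder passphrase:

network={
        ssid="WLAN"
        key_mgmt=WPA-PSK
        psk="your-passphrase-here"
}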


Mike





Re: em interrupt storm

2005-11-23 Thread Michael Vince

Kris Kennaway wrote:


On Tue, Nov 22, 2005 at 08:54:49PM -0800, John Polstra wrote:
 


On 23-Nov-2005 Kris Kennaway wrote:
   


I am seeing the em driver undergoing an interrupt storm whenever the
amr driver receives interrupts.  In this case I was running newfs on
the amr array and em0 was not in use:

  28 root1 -68 -187 0K 8K CPU1   1   0:32 53.98% irq16: em0
  36 root1 -64 -183 0K 8K RUN1   0:37 27.75% irq24: amr0

# vmstat -i
interrupt  total   rate
irq1: atkbd0   2  0
irq4: sio0   199  1
irq6: fdc032  0
irq13: npx01  0
irq14: ata0   47  0
irq15: ata1  931  5
irq16: em0   6321801  37187
irq24: amr0                    28023    164
cpu0: timer                   337533   1985
cpu1: timer                   337285   1984
Total                        7025854  41328

When newfs finished (i.e. amr was idle), em0 stopped storming.

MPTable: 
 


This is the dreaded interrupt aliasing problem that several of us have
experienced with this chipset.  High-numbered interrupts alias down to
interrupts in the range 16..19 (or maybe 16..23), a multiple of 8 less
than the original interrupt.

Nobody knows what causes it, and nobody knows how to fix it.
   



This would be good to document somewhere so that people don't either
accidentally buy this hardware, or know what to expect when they run
it.

Kris
 

This is Intel's latest server chipset design, and Dell is putting that
chipset in all their servers.
Luckily I haven't seen the problem on any of my Dell servers (as long
as I am reading this right).


This server has been running for a long time.
vmstat -i
interrupt  total   rate
irq1: atkbd0   6  0
irq4: sio0 23433  0
irq6: fdc010  0
irq8: rtc                 2631238611    128
irq13: npx01  0
irq14: ata0   99  0
irq16: uhci0  1507608958 73
irq18: uhci242005524  2
irq19: uhci1   3  0
irq23: atapci0   151  0
irq46: amr0 41344088  2
irq64: em01513106157 73
irq0: clk 2055605782 99
Total                     7790932823    379

This one just transferred over 8 GB of data in 77 seconds, with around
1000 simultaneous TCP connections under a load of 35. Both seem OK.

vmstat -i
interrupt  total   rate
irq4: sio0   315  0
irq13: npx01  0
irq14: ata0   47  0
irq16: uhci0 2894669  2
irq18: uhci2  977413  0
irq23: ehci0   3  0
irq46: amr0   883138  0
irq64: em0   2890414  2
cpu0: timer   2763566717   1999
cpu3: timer   2763797300   1999
cpu1: timer   2763551479   1999
cpu2: timer   2763797870   1999
Total                    11062359366   8004

Mike





Re: em interrupt storm

2005-11-23 Thread Michael Vince

Scott Long wrote:


Michael Vince wrote:


Kris Kennaway wrote:


On Tue, Nov 22, 2005 at 08:54:49PM -0800, John Polstra wrote:
 


On 23-Nov-2005 Kris Kennaway wrote:
 


I am seeing the em driver undergoing an interrupt storm whenever the
amr driver receives interrupts.  In this case I was running newfs on
the amr array and em0 was not in use:

  28 root1 -68 -187 0K 8K CPU1   1   0:32 53.98% 
irq16: em0
  36 root1 -64 -183 0K 8K RUN1   0:37 27.75% 
irq24: amr0


# vmstat -i
interrupt  total   rate
irq1: atkbd0   2  0
irq4: sio0   199  1
irq6: fdc032  0
irq13: npx01  0
irq14: ata0   47  0
irq15: ata1  931  5
irq16: em0   6321801  37187
irq24: amr0                    28023    164
cpu0: timer                   337533   1985
cpu1: timer                   337285   1984
Total                        7025854  41328

When newfs finished (i.e. amr was idle), em0 stopped storming.

MPTable: 




This is the dreaded interrupt aliasing problem that several of us have
experienced with this chipset.  High-numbered interrupts alias down to
interrupts in the range 16..19 (or maybe 16..23), a multiple of 8 less
than the original interupt.

Nobody knows what causes it, and nobody knows how to fix it.
  




This would be good to document somewhere so that people don't either
accidentally buy this hardware, or know what to expect when they run
it.

Kris
 

This is Intels latest server chipset designs and Dell are putting 
that chipset in all their servers.
Luckily I haven't not seen the problem on any of my Dell servers (as 
long as I am looking at this right).


This server has been running for a long time.
vmstat -i
interrupt  total   rate
irq1: atkbd0   6  0
irq4: sio0 23433  0
irq6: fdc010  0
irq8: rtc 2631238611128
irq13: npx01  0
irq14: ata0   99  0
irq16: uhci0  1507608958 73
irq18: uhci242005524  2
irq19: uhci1   3  0
irq23: atapci0   151  0
irq46: amr0 41344088  2
irq64: em01513106157 73
irq0: clk 2055605782 99
Total 7790932823379

This one just transfered over 8gigs of data in 77seconds with around 
1000 simultaneous tcp connections under a load of 35. Both seem OK.

vmstat -i
interrupt  total   rate
irq4: sio0   315  0
irq13: npx01  0
irq14: ata0   47  0
irq16: uhci0 2894669  2
irq18: uhci2  977413  0
irq23: ehci0   3  0
irq46: amr0   883138  0
irq64: em0   2890414  2
cpu0: timer   2763566717   1999
cpu3: timer   2763797300   1999
cpu1: timer   2763551479   1999
cpu2: timer   2763797870   1999
Total11062359366   8004

Mike




Looks like at least some of your interrupts are being aliased to 
irq16, which just happens to be USB (uhci) in this case.  Note that the 
rate is the same between irq64 and irq16, and the totals are pretty 
close.  If you don't need USB, I'd suggest turning it off.

Scott
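
(If you can spare USB, the usual way on 6.x is simply to leave the 
controllers out of a custom kernel -- a sketch, assuming a 
GENERIC-derived config; comment these out and rebuild:

#device uhci   # UHCI PCI->USB interface
#device ohci   # OHCI PCI->USB interface
#device ehci   # EHCI PCI->USB interface (USB 2.0)
#device usb    # USB Bus (required)
)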


Most of my Dell servers occasionally use the USB ports to serial out via 
tip, using a USB-to-serial cable with the uplcom driver into another 
server's real serial port (sio), so it's not really an option to 
disable USB.


How much do you think it affects performance if the USB device is 
actually rarely used?


I also have a 6-stable machine and noticed that the vmstat -i output 
lists the em and usb interrupts together, but em0 isn't used at all; 
em2 and em3 are the active ones. It doesn't seem reasonable that my USB 
serial usage would be that high for irq16 -- or could it be that em2 
and em3 are also going through irq16?


vmstat -i
interrupt  total   rate
irq4: sio0   228  0
irq14: ata0   47  0
irq16: em0 uhci0  917039 11
irq18: uhci2   54823  0
irq23: ehci0   3  0
irq46: amr045998  0
irq64: em2    898628     11
lapic0: ti

Re: Routing SMP benefit

2006-01-01 Thread Michael Vince

Andre Oppermann wrote:


Markus Oestreicher wrote:
 


Currently running a few routers on 5-STABLE I have read the
recent changes in the network stack with interest.
   



You should run 6.0R.  It contains many improvements over 5-STABLE.

 


A few questions come to my mind:

- Can a machine that mainly routes packets between two em(4)
interfaces benefit from a second CPU and SMP kernel? Can both
CPUs process packets from the same interface in parallel?
   



My testing has shown that a machine can benefit from it but not
much in the forwarding performance.  The main benefit is the
prevention of livelock if you have very high packet loads.  The
second CPU on SMP keeps on doing all userland tasks and running
routing protocols.  Otherwise your BGP sessions or OSPF hellos
would stop and remove you from the routing cloud.

 


- From reading the lists it appears that net.isr.direct
and net.ip.fastforwarding are doing similar things. Should
they be used together or rather not?
   



net.inet.ip.fastforwarding has precedence over net.isr.direct and
enabling both at the same time doesn't gain you anything.  Fastforwarding
is about 30% faster than all other methods available, including
polling.  On my test machine with two em(4) and an AMD Opteron 852
(2.6GHz) I can route 580'000 pps with zero packet loss on -CURRENT.
An upcoming optimization that will go into -CURRENT in the next
few days pushes that to 714'000 pps.  Further optimizations are
underway to make a stock kernel do close to or above 1'000'000 pps
on the same hardware.
 

I have tested on 6R with fastforwarding and net.isr.direct and found 
that by themselves they don't compare, in terms of network performance 
boost, to enabling polling -- but you have made me feel like retesting. 
That was on 6R or stable, though.
When do you think some of these new network improvements in -current 
will go into 6-stable?
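
(For reference, both knobs are plain sysctls on 6.x -- enable one or 
the other, not both:

# sysctl net.inet.ip.fastforwarding=1   (optimized forwarding path)
# sysctl net.isr.direct=1               (dispatch in the interrupt thread)

fastforwarding takes precedence if both are set.)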


Regards,

Mike




Re: tcp performance

2006-01-01 Thread Michael Vince

Zongsheng Zhang wrote:


Hi, *,

For testing throughput of a TCP connection, the following topology is used:
Host-A ---GB Ethernet--- Dummynet ---GB Ethernet--- Host-B

Host-A/B use FreeBSD v6.0. Sysctl parameters of Host-A/B are:
kern.ipc.nmbclusters=32768
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=2097152  # 2M
net.inet.tcp.recvspace=2097152  # 2M

When RTT in Dummynet is set to 0 ms, the throughput (A->B) is about
900 Mbps. The buffer size is enough to fill a link of bandwidth 800 Mbps
at RTT 20 ms. However, if RTT is set to 20 ms, the throughput is only
about 500 Mbps.

Are there other parameters that need adjusting? Does anyone have
suggestions for achieving high throughput?

Thanks in advance.

 

What are you tracking on your hosts -- release, stable, or current? I 
only use release or stable.


For your middle router try net.inet.ip.fastforwarding or net.isr.direct, 
but not both at the same time; then try enabling polling on top.
Personally I found enabling polling worked best combined with 
net.inet.ip.fastforwarding. (For what it's worth, your 2 MB buffers do 
cover the bandwidth-delay product -- 800 Mbit/s x 20 ms = 2 MB -- so 
the window size itself shouldn't be the bottleneck.)
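
(Roughly, polling needs DEVICE_POLLING, and usually HZ=1000, compiled 
into the kernel, then a sysctl to switch it on -- a sketch:

options DEVICE_POLLING
options HZ=1000

# sysctl kern.polling.enable=1

HZ=1000 just makes the polling intervals finer-grained.)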


Andre Oppermann claimed in a recent post that he gets the best 
performance using just net.inet.ip.fastforwarding without polling, but 
that might be just for -current; I am not sure.


You could also try using current, but if I had to guess you already are?

Regards,
Mike



Re: Router + ADM64

2006-01-04 Thread Michael Vince

Jon Otterholm wrote:


Hi!

What is there to gain in performance choosing AMD64 on a Dell PE1850 
(Xeon EMT64) when used as router?


/Jon 


I have one running under amd64 FreeBSD.
With polling enabled I get transfer speeds of up to 112 megabytes/sec. 
The only real downside, as far as I am concerned, is missing out on VPN 
capability, which is broken on the amd64 arch for unknown reasons; I 
can only hope I won't need it.


Mike





Re: Router + ADM64

2006-01-10 Thread Michael Vince
On FreeBSD amd64, if you compile in FAST_IPSEC (or even regular IPSEC) 
and do something like run setkey, you get a panic.

VPN on AMD64 FreeBSD has never worked.
Mike

Gleb Smirnoff wrote:


On Thu, Jan 05, 2006 at 02:38:02PM +1100, Michael Vince wrote:
M> >What is there to gain in performance choosing AMD64 on a Dell PE1850 
M> >(Xeon EMT64) when used as router?
M> 
M> I have one running under amd64 FreeBSD.
M> With polling enabled I get transfer speeds of up to 112 
M> megabytes/sec. The only real downside, as far as I am concerned, is 
M> missing out on VPN capability, which is broken on the amd64 arch for 
M> unknown reasons; I can only hope I won't need it.


Which exact VPN capability are you talking about?

 





Re: packet drop with intel gigabit / marwell gigabit

2006-03-19 Thread Michael Vince
I use netperf, which is a pure network traffic tester; I also use basic 
'ab' (Apache benchmark) tests, which also exercise disk I/O when 
fetching large files.
For the 'em' driver I have seen some posts/CVS commit updates saying it 
now works better without polling than with polling. I think this is in 
-stable, but it might be just in -current. I haven't done any testing 
for a while.
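
(If you want to compare yourself, I believe polling can be toggled per 
interface on recent 6.x, assuming DEVICE_POLLING is compiled in -- a 
sketch:

# ifconfig em0 polling     (switch em0 to polling)
# ifconfig em0 -polling    (back to interrupts)
)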




OxY wrote:


I changed sk to em.
How could I measure speed or benchmark the network performance?

- Original Message - From: "Bjoern A. Zeeb" 
<[EMAIL PROTECTED]>

To: "OxY" <[EMAIL PROTECTED]>
Cc: ; 
Sent: Sunday, March 19, 2006 7:26 PM
Subject: Re: packet drop with intel gigabit / marwell gigabit



On Sun, 19 Mar 2006, OxY wrote:

Hi,

Just on a hunch, can you try putting the card in a different PCI  
slot? There may be interrupt routing issues.



Okay, I will try it in a couple of days.



the card also has a sysctl for intr moderation. See man 4 sk. The
default changed with Pyun's updated driver, I think, but you could
play with that too.

Further, I still have the feeling that your measurements are not
comparable.

--
Bjoern A. Zeeb bzeeb at Zabbadoz dot NeT 








VPN with FAST_IPSEC and ipsec tools

2006-06-15 Thread Michael Vince

Hey all.
I have been trying to set up a VPN between two FreeBSD hosts, but I 
can't get any IKE exchange activity happening via ipsec-tools.


I used this script, http://thebeastie.org/projects/vpnsetup/vpnsetup.pl, 
which I created for myself to help me remember all the knobs; it's been 
about a year since I last did a VPN. I am finding that with FAST_IPSEC 
(I haven't tested yet with the other IPsec) on FreeBSD 6.1-release I 
get no IKE exchange activity at all, not even an attempt by the VPN 
endpoints to connect to each other. It just appears that ipsec-tools 
doesn't identify any of the interesting traffic that I set; I am 
guessing it's something to do with FAST_IPSEC, but I am not sure.


I have set up the GRE tunneling, and that is working fine -- pings and 
traceroutes work when I disable IPsec and ipsec-tools. It's just the 
encryption side that's the problem.
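
(For context, the GRE side is just the usual cloned-interface setup -- 
a sketch with made-up addresses, 1.2.3.4 and 5.6.7.8 being the outer 
endpoints:

# ifconfig gre0 create
# ifconfig gre0 tunnel 1.2.3.4 5.6.7.8
# ifconfig gre0 inet 10.0.0.1 10.0.0.2 netmask 255.255.255.252 up
)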


Can anyone help?

Thanks
Mike



Re: VPN with FAST_IPSEC and ipsec tools

2006-06-19 Thread Michael Vince

   Brian Candler wrote:

On Fri, Jun 16, 2006 at 01:43:54PM +1000, Michael Vince wrote:
  

I have set up the GRE tunneling, and that is working fine -- pings and 
traceroutes work when I disable IPsec and ipsec-tools. It's just the 
encryption side that's the problem.


Ah, I guess this means you're following the instructions in the FreeBSD
handbook, which last time I looked gave a most bizarre and unnecessary way
of setting up IPSEC (GIF tunneling running on top of IPSEC *tunnel* mode). I
raised it on this list before.

Most people are better off just setting up IPSEC tunnel mode. A few use GIF
running on top of IPSEC _transport_ mode (e.g. those running routing
protocols like OSPF over tunnels)

Regards,

Brian.
  

   Yeah, I did build it based on the Handbook howto on VPNs; I had no
   idea it wasn't right.
   Interestingly, I have managed to get this type of setup going with
   Checkpoint.
   Mike


FAST_IPSEC and NAT-T

2006-06-20 Thread Michael Vince

Hey All,
When installing ipsec-tools it says that if you want NAT-T you need to 
install this patch: http://ipsec-tools.sourceforge.net/freebsd6-natt.diff
Can anyone tell me if this patch works with FAST_IPSEC, or is it just 
for the other IPsec?


Cheers,
Mike



Re: FAST_IPSEC and NAT-T

2006-06-20 Thread Michael Vince

VANHULLEBUS Yvan wrote:


On Tue, Jun 20, 2006 at 11:26:15PM +1000, Michael Vince wrote:
 


Hey All,
When installing the ipsec-tools it says if you want NAT-T you need to 
install this patch, http://ipsec-tools.sourceforge.net/freebsd6-natt.diff
Can any one tell me if this patch works with Fast_ipsec or is it just 
for the other ipsec?
   



Hi.

I didn't have time to port it to FAST_IPSEC now, so it currently only
works with IPSEC.

But FAST_IPSEC support is on my TODO list, and shouldn't be too
difficult when I'll have time to work on it, and when we'll
synchronize with other people who are actually working on IPSec
stacks.


Yvan.
 

OK, cool. The thing that really turns me off about that IPsec is that 
when I reboot with it compiled in, it says "expect reduced performance" 
because it's not MPSAFE.


Also, I just tried to compile a kernel with that NAT-T patch on the 
other IPSEC kernel on 6.1-release and it failed.
I can't think of anything I have done wrong on this machine; it's 
pretty fresh. I did cvsup with "RELENG_6_1" beforehand. Maybe there is 
just enough change since the RELENG_6_1_0 release for it to fail, but I 
didn't notice anything serious change. I also used the new pure-C csup 
client instead of cvsup.
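
(For reference, the supfile for that is the standard sort of thing -- a 
sketch, with the host being whichever mirror you prefer:

*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_6_1
*default delete use-rel-suffix
src-all

then just run 'csup /path/to/supfile'.)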


The patch installed fine with no errors, but the kernel failed to 
compile, ending with this:


/usr/src/sys/netinet/udp_usrreq.c:1046: warning: 'udp4_espinudp' defined 
but not used


The kernel config is quite generic, listed below; GENERIC2 is just 
missing a few things like SCSI and RAID bits this machine doesn't need.


include GENERIC2

ident   FIREWALL

options DEVICE_POLLING
options HZ=1000

options IPSEC
options IPSEC_ESP
options IPSEC_DEBUG

#options FAST_IPSEC
#device crypto
#device cryptodev

options ALTQ

options ALTQ_CBQ
options ALTQ_RED
options ALTQ_RIO
options ALTQ_HFSC
options ALTQ_CDNR
options ALTQ_PRIQ


Mike




Re: VPN with FAST_IPSEC and ipsec tools

2006-06-22 Thread Michael Vince

David DeSimone wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Brian Candler <[EMAIL PROTECTED]> wrote:
 


Ah, I guess this means you're following the instructions in the
FreeBSD handbook, which last time I looked gave a most bizarre and
unnecessary way of setting up IPSEC (GIF tunneling running on top of
IPSEC *tunnel* mode).  I raised it on this list before.
   



I ran into the same thing when analyzing the handbook's examples, and
quickly abandoned the handbook when writing my own configs.

 


Most people are better off just setting up IPSEC tunnel mode.  A few
use GIF running on top of IPSEC _transport_ mode (e.g.  those running
routing protocols like OSPF over tunnels)
   



The main reason to use IPSEC tunnel mode and avoid GIF is that such a
config is interoperable with other IPSEC implementations (Cisco,
Checkpoint, etc), and thus is much more useful in the real world.

- -- 
David DeSimone == Network Admin == [EMAIL PROTECTED]


OK, that said, how do you create a network-to-network tunnel-based VPN 
without using the gif or gre devices?
I have been trying to link up two networks between two VPN gateways 
and have kind of given up; all the examples out there use a gif or gre 
tunnel. I simply haven't been able to work out the routes, or how to 
make ipsec-tools trigger on seeing interesting traffic. It's using a 
preshared-key configuration.


I have been using the typical ipsec.conf settings that most examples 
give for tunnel configurations, but still no luck.
At first I thought NAT-T was the reason the IKE wasn't kicking in, but 
after testing with pure Internet IPs and no NAT I realized it wasn't 
that.


If I could just have an example to look at I think it could really help.

Thanks
Mike






Re: VPN with FAST_IPSEC and ipsec tools

2006-06-25 Thread Michael Vince

David DeSimone wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Michael Vince <[EMAIL PROTECTED]> wrote:
 


The main reason to use IPSEC tunnel mode and avoid GIF is that such
a config is interoperable with other IPSEC implementations, and thus
is much more useful in the real world.
 


OK that said, how do you create a network to network tunnel based VPN
without using the gif or gre devices?
   



Ok, here's a typical setup that I've used.

Suppose you have two gateways:

   Gateway 1  IP = 1.2.3.4
   Network behind it: 192.168.1.0/24

   Gateway 2  IP = 5.6.7.8
   Network behind it: 192.168.2.0/24

Most of the examples you'll find will teach you to use IPSEC in
Transport mode.  But Transport mode is only used for one endpoint to
talk to another endpoint.  What you want here (and what other gateways
like Cisco will implement) is Tunnel mode, where the traffic is
encapsulated rather than merely encrypted.

- -- 
David DeSimone == Network Admin == [EMAIL PROTECTED]
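
(In setkey terms, the tunnel-mode policies for a setup like that would 
look roughly like this -- a sketch, following the addressing above:

spdadd 192.168.1.0/24 192.168.2.0/24 any -P out ipsec
       esp/tunnel/1.2.3.4-5.6.7.8/require ;
spdadd 192.168.2.0/24 192.168.1.0/24 any -P in ipsec
       esp/tunnel/5.6.7.8-1.2.3.4/require ;
)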
 

OK, that is a great guide and should be placed in the Handbook, maybe 
over the top of the old one or under a title like "Standard VPN".


I copied and pasted most of it and replaced those IPs with mine, but I 
am still having problems.


After reloading ipsec and racoon, I tried a traceroute from a client 
behind the local gateway to a client behind the remote gateway. It went 
off through the gateway and out over the Internet like a regular 
traceroute, completely missed by the kernel/IPsec rules; nothing 
stopped it, tried to tunnel it, or triggered racoon IKE activity.


I tried putting 'require' in my ipsec rules; it didn't change anything.
Did you have any special routes to tell the kernel/IPsec to start 
encrypting the traffic?

Are you using FAST_IPSEC or the other IPsec? If it's the other one, I 
will have to change. Which version of FreeBSD are you using?
I need to try to isolate what is different.

It's FAST_IPSEC in my kernel, on an amd64 server; I remember amd64 
would instantly trigger a kernel panic with FAST_IPSEC in the past, but 
I am assuming it works now.
I have actually managed to trigger full phase 1 and phase 2 connection 
activity if I use the older-style gif/gre setup with a 
gateway-to-gateway tunnel rule on top (in addition to the regular 
internal-network ipsec rules), together with the special gif/gre routes 
most examples on the net say to use.


Having a gateway-to-gateway tunnel rule was the only thing that ever 
triggered racoon and IPsec activity, which is pretty weird, and is 
doubly or triply different from your example, because I needed gif/gre 
and routes to get racoon/ipsec working:
#spdadd 1.2.3.4/32 5.6.7.8/32 any -P out ipsec 
esp/tunnel/1.2.3.4-5.6.7.8/require ;
#spdadd 5.6.7.8/32 1.2.3.4/32 any -P in ipsec 
esp/tunnel/5.6.7.8-1.2.3.4/require ;


I need to have it all working in the standard way you have laid out. 
Like I said, IPsec is in my kernel and fully loaded, but with your 
example rules it behaves as if it's not there at all.


Mike

sysctl -a | grep ipsec
   ipsecpolicy    34     9K       -    11913  256
   ipsecrequest    4     1K       -      181  256
   ipsec-misc      0     0K       -      258  32,64
   ipsec-saq       0     0K       -      167  128
   ipsec-reg       3     1K       -      153  32
net.inet.ipsec.def_policy: 1
net.inet.ipsec.esp_trans_deflev: 1
net.inet.ipsec.esp_net_deflev: 1
net.inet.ipsec.ah_trans_deflev: 1
net.inet.ipsec.ah_net_deflev: 1
net.inet.ipsec.ah_cleartos: 1
net.inet.ipsec.ah_offsetmask: 0
net.inet.ipsec.dfbit: 0
net.inet.ipsec.ecn: 0
net.inet.ipsec.debug: 0
net.inet.ipsec.esp_randpad: -1
net.inet.ipsec.crypto_support: 0
net.inet6.ipsec6.def_policy: 1
net.inet6.ipsec6.esp_trans_deflev: 1
net.inet6.ipsec6.esp_net_deflev: 1
net.inet6.ipsec6.ah_trans_deflev: 1
net.inet6.ipsec6.ah_net_deflev: 1
net.inet6.ipsec6.ecn: 0
net.inet6.ipsec6.debug: 0
net.inet6.ipsec6.esp_randpad: -1





Re: VPN with FAST_IPSEC and ipsec tools

2006-06-26 Thread Michael Vince

David DeSimone wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

David DeSimone <[EMAIL PROTECTED]> wrote:
 


Hmm...  In examining my kernel configuration I found these options:

   options IPSEC
   options IPSEC_ESP
   options IPSEC_DEBUG
   # options   IPSEC_FILTERGIF
   # options   FAST_IPSEC

So it appears that I am NOT using FAST_IPSEC.
   



I have now recompiled my kernel with the following options:

   # options   IPSEC
   # options   IPSEC_ESP
   # options   IPSEC_DEBUG
   # options   IPSEC_FILTERGIF
   options FAST_IPSEC

   device  crypto

After rebooting, I noticed the startup messages show I am indeed using
FAST_IPSEC.

My other configuration remains unchanged.  I can still establish and use
the tunnels I have set up, so I don't believe this is an IPSEC vs
FAST_IPSEC problem you're seeing.

- -- 
David DeSimone == Network Admin == [EMAIL PROTECTED]


Darn. Maybe you should try upgrading to 6.1-release and see if that 
does anything.

Also, I am using the latest ipsec-tools in the ports tree, 0.6.6.

Mike




Re: VPN with FAST_IPSEC and ipsec tools

2006-06-26 Thread Michael Vince

David DeSimone wrote:

- -- 
David DeSimone == Network Admin == [EMAIL PROTECTED]
 


I got it going!
It's working like a dream now.
I don't have a definite reason why it wasn't working, but my best guess 
is that it boiled down to a silly mistake, as you suggested.


I feel quite silly, as after some testing it appears that what was 
holding it back was simply failing to reload the ipsec rules properly.
Most if not all of the time I was doing /etc/rc.d/ipsec restart, when I 
should have been either using setkey manually or /etc/rc.d/ipsec 
reload. Looking at the ipsec rc script, the restart function doesn't 
have the equivalent effect of 'reload'.

Personally I see this as a trap anyone could fall into.
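
(In other words -- a sketch, assuming the policies live in 
/etc/ipsec.conf:

# /etc/rc.d/ipsec reload          (re-reads the policy file)
# setkey -f /etc/ipsec.conf       (the manual equivalent)
)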

Big thanks to you; if you hadn't been there I probably would have given 
up earlier and had to replace the gateway with something else 
altogether.


Thanks,
Mike



Re: Gigabit ethernet questions?

2006-08-09 Thread Michael Vince
JICUDN, I have been using nc, dd and systat to check TCP performance 
on my servers; the good thing about this is that it requires little 
setup and gives results fast.

For example on host A start the nc server.
nc -4kl 3000 > /dev/null

Then, on host B, start sending data via nc from /dev/zero:
cat /dev/zero | dd bs=1m | nc hosta 3000
^C0+226041 records in
0+226040 records out
14813757440 bytes transferred in 246.184967 secs (60173282 bytes/sec)

You can take out the dd and just use systat -if, depending on what you 
like. (The transfer above, 60173282 bytes/sec, works out to roughly 480 
Mbit/s.)

Mike


Dima Roshin wrote:


Thanks Jon, I did it on both sides, that's much better now:

gate1# sysctl kern.polling.idle_poll=1
kern.polling.idle_poll: 0 -> 1
gate1# sysctl net.inet.ip.fastforwarding
net.inet.ip.fastforwarding: 0
gate1# sysctl net.inet.ip.fastforwarding=1
net.inet.ip.fastforwarding: 0 -> 1
gate1# iperf -c 192.168.0.2 -N -w 196000

Client connecting to 192.168.0.2, TCP port 5001
TCP window size:   192 KByte (WARNING: requested   191 KByte)

[  3] local 192.168.0.1 port 63941 connected with 192.168.0.2 port 5001
[  3]  0.0-10.0 sec   762 MBytes   639 Mbits/sec

But there is still some bottleneck, and I can't understand where.



-Original Message-
From: Jon Otterholm <[EMAIL PROTECTED]>
To: Dima Roshin <[EMAIL PROTECTED]>
Date: Wed, 09 Aug 2006 13:54:18 +0200
Subject: Re: Gigabit ethernet questions?

 


Dima Roshin wrote:
   


Greetings colleagues. I've got two DL-360 (PCI-X bus) servers with 
BCM5704 NetXtreme dual gigabit adapters (bge). Uname is 6.1-RELEASE-p3. 
The bge interfaces of the two servers are connected to each other with 
a cat6 patch cord.
  Here are my settings:
kernel config:
options DEVICE_POLLING
options HZ=1000


sysctl.conf:
kern.polling.enable=1
net.inet.ip.intr_queue_maxlen=5000
kern.ipc.maxsockbuf=8388608
net.inet.tcp.sendspace=3217968
net.inet.tcp.recvspace=3217968
net.inet.tcp.rfc1323=1

bge1: flags=8843 mtu 9000
   options=5b
   inet 192.168.0.1 netmask 0xff00 broadcast 192.168.0.255
   ether 00:17:a4:3a:e1:81
   media: Ethernet autoselect (1000baseTX )
   status: active
(note mtu 9000)

and here are tests results:

netperf:

TCP STREAM TEST to 192.168.0.1
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time      Throughput
bytes   bytes   bytes    secs.     10^6bits/sec

6217968 6217968 6217968  10.22     320.04

UDP UNIDIRECTIONAL SEND TEST to 192.168.0.1
Socket  Message  Elapsed  Messages
Size    Size     Time     Okay      Errors    Throughput
bytes   bytes    secs     #         #         10^6bits/sec

 9216    9216    10.00    118851    1724281   876.20
41600             10.00        0              0.00



iperf:
gate2# iperf -s -N

Server listening on TCP port 5001
TCP window size: 3.07 MByte (default)

[  4] local 192.168.0.2 port 5001 connected with 192.168.0.1 port 52597
[  4]  0.0-10.1 sec   384 MBytes   319 Mbits/sec

Also, I can say that I've managed to achieve about 500 Mbit/s by tuning 
the TCP window with the -w flag in iperf.

How can we explain such low TCP performance? What else is there to 
tune? Has anybody achieved gigabit speed with TCP on FreeBSD?

Thanks.



 
 

You also need kern.polling.idle_poll=1, and maybe 
net.inet.ip.fastforwarding=1, though I never noticed any difference 
with that one enabled. I got about 840 Mbit/s routed through a Dell 
1850 (EM64T, running amd64) with em interfaces (I only used one 
physical interface, though, with two VLAN interfaces).



   


 


