Large scale NAT problems

2003-12-16 Thread Andriy Korud
Hi,
I'm trying to do NAT on a FreeBSD box for 2500 clients on a 35Mbit uplink.
Box is a Xeon 2.8GHz, 1G RAM, 2x Intel PRO/1000 (em) adapters.
FreeBSD 4.9-STABLE, kernel configured for a single processor (HT not used),
with DEVICE_POLLING and HZ=2000, and LARGE_NAT defined.
NAT was done using ipnat, no additional filtering.
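For reference, an ipnat configuration for a setup like this is typically just
a couple of map rules; a hypothetical sketch with placeholder addresses (not
the actual rules used here):

  map em0 10.0.0.0/16 -> 192.0.2.1/32 portmap tcp/udp 10000:40000
  map em0 10.0.0.0/16 -> 192.0.2.1/32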

The problem is that when traffic grows to 10Mbit and the number of active NAT
sessions reaches 7, CPU usage grows exponentially and the system spends all CPU
time handling interrupts.
The system becomes completely unresponsive and unusable, and a hard reset is
the only solution.

And the worst thing is that Linux on a Cel/800 with SOHO cards does that NATing
with 5% CPU load without any problem :-(.

Maybe I should try natd? Might that help?
Any suggestions?

thanks in advance,

Andriy Korud


___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Large scale NAT problems

2003-12-16 Thread Andriy Korud
Quoting Attila Nagy <[EMAIL PROTECTED]>:

> Andriy Korud wrote:
> > The problem is that when traffic grows to 10Mbit and the number of active
> > NAT sessions reaches 7, CPU usage grows exponentially and the system
> > spends all CPU time handling interrupts.
> > The system becomes completely unresponsive and unusable, and a hard reset
> > is the only solution.
> Did you try OpenBSD's pf?
> 
Is it ported to 4.9-STABLE?
How can I configure and try it?

Andriy



Re: Large scale NAT problems

2003-12-16 Thread Andriy Korud
Quoting Q <[EMAIL PROTECTED]>:

> You have set the 'sysctl kern.polling.enable=1' bit right?
> 
> Seeya...Q
> 
Yes, and 'systat -v 1' shows 2000 timer interrupts and 0 em0 interrupts.

Andriy


Re: Large scale NAT problems

2003-12-16 Thread Andriy Korud
Quoting DrumFire <[EMAIL PROTECTED]>:

> On Tue, 16 Dec 2003 11:40:11 +0200
> Andriy Korud <[EMAIL PROTECTED]> wrote:
> 
> First off, try OpenBSD pf (though that works only on a 5.x-RELEASE),
> or try to disable device polling in your kernel configuration.
> 
> I've made some tests with device polling enabled, and I got
> less performance than with it disabled.
> 
With polling disabled the situation was the same; the only difference is that
'systat -v 1' shows ~3000 em0 interrupts/s (with polling enabled: 0 em0
interrupts and 2000 timer interrupts/s).

Andriy



Re: Large scale NAT problems

2003-12-16 Thread Julian Elischer
did you try natd?
(for comparison)

On Tue, 16 Dec 2003, Andriy Korud wrote:

> Hi,
> I'm trying to do NAT on a FreeBSD box for 2500 clients on a 35Mbit uplink.
> Box is a Xeon 2.8GHz, 1G RAM, 2x Intel PRO/1000 (em) adapters.
> FreeBSD 4.9-STABLE, kernel configured for a single processor (HT not used),
> with DEVICE_POLLING and HZ=2000, and LARGE_NAT defined.
> NAT was done using ipnat, no additional filtering.
> 
> The problem is that when traffic grows to 10Mbit and the number of active NAT
> sessions reaches 7, CPU usage grows exponentially and the system spends all
> CPU time handling interrupts.
> The system becomes completely unresponsive and unusable, and a hard reset is
> the only solution.
> 
> And the worst thing is that Linux on a Cel/800 with SOHO cards does that
> NATing with 5% CPU load without any problem :-(.
> 
> Maybe I should try natd? Might that help?
> Any suggestions?
> 
> thanks in advance,
> 
> Andriy Korud
> 
> 



Update 4.6 to 4.8

2003-12-16 Thread Eicke
Hi folks, I am trying to update a system from 4.6 to 4.8.
When I try to run make buildworld the following error appears:

# make buildworld
Makefile:137: *** missing separator.  Stop.

I removed /usr/src and downloaded it again via cvsup, but the error still
appears.
Could you help me?
Regards.
Eicke.



Re: Large scale NAT problems

2003-12-16 Thread Max Laier
On Tuesday 16 December 2003 10:40, Andriy Korud wrote:
> Quoting Attila Nagy <[EMAIL PROTECTED]>:
> > Andriy Korud wrote:
> > > The problem is that when traffic grows to 10Mbit and the number of
> > > active NAT sessions reaches 7, CPU usage grows exponentially and the
> > > system spends all CPU time handling interrupts.
> > > The system becomes completely unresponsive and unusable, and a hard
> > > reset is the only solution.
> >
> > Did you try OpenBSD's pf?
>
> Is it ported to 4.9-STABLE?
> How can I configure and try it?
>
> Andriy

It's in the KAME snapkits, AFAIK.

A port for DragonFlyBSD is on my site:
 (1) http://pf4freebsd.love2party.net/pfil.diff.gz
 (2) http://pf4freebsd.love2party.net/pf_df_test.tar.gz

Apply (1) to the tree, build a GENERIC kernel with at least:

  options PFIL_HOOKS
  options bpf
  options RANDOM_IP_ID  # this is a great default, btw

install the includes (or copy sys/net/pfil.h to /usr/include/net/pfil.h).
Extract (2) and issue:

  make && make install

now you should be able to:

  kldload pfsync
  kldload pflog
  kldload pf
  mknod pf c 73 0 root:wheel

and have fun with pfctl and friends.
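To illustrate, a minimal NAT-only pf.conf (interface name and addresses are
placeholders) could look like:

  nat on em0 from 10.0.0.0/16 to any -> 192.0.2.1

loaded and enabled with:

  pfctl -N -f /etc/pf.conf
  pfctl -e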

This _might_ run on 4.x as well, but I think you'll have to work around a few
minor issues to get it working on 4.9.

-- 
Best regards,   | [EMAIL PROTECTED]
Max Laier   | ICQ #67774661
http://pf4freebsd.love2party.net/   | [EMAIL PROTECTED] #DragonFlyBSD



Re: Update 4.6 to 4.8

2003-12-16 Thread Max Laier
WRONG LIST!!

-- 
Best regards,   | [EMAIL PROTECTED]
Max Laier   | ICQ #67774661
http://pf4freebsd.love2party.net/   | [EMAIL PROTECTED] #DragonFlyBSD



Re: Large scale NAT problems

2003-12-16 Thread Luigi Rizzo
On Tue, Dec 16, 2003 at 04:39:42AM -0800, Julian Elischer wrote:
> did you try natd?
> (for comparison)

I guess it's because ipnat is in the kernel, whereas natd is in userland,
and furthermore natd's session handling is just not up
to the job (small hash tables, huge session expiry times...)
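For reference, the natd path being compared would look roughly like this
(interface name is a placeholder):

  natd -interface em0
  ipfw add divert natd ip from any to any via em0

i.e. every forwarded packet crosses the kernel/userland boundary twice, on top
of the session-table issues above.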

cheers
luigi

> On Tue, 16 Dec 2003, Andriy Korud wrote:
> 
> > Hi,
> > I'm trying to do NAT on a FreeBSD box for 2500 clients on a 35Mbit uplink.
> > Box is a Xeon 2.8GHz, 1G RAM, 2x Intel PRO/1000 (em) adapters.
> > FreeBSD 4.9-STABLE, kernel configured for a single processor (HT not used),
> > with DEVICE_POLLING and HZ=2000, and LARGE_NAT defined.
> > NAT was done using ipnat, no additional filtering.
> > 
> > The problem is that when traffic grows to 10Mbit and the number of active
> > NAT sessions reaches 7, CPU usage grows exponentially and the system
> > spends all CPU time handling interrupts.
> > The system becomes completely unresponsive and unusable, and a hard reset
> > is the only solution.
> > 
> > And the worst thing is that Linux on a Cel/800 with SOHO cards does that
> > NATing with 5% CPU load without any problem :-(.
> > 
> > Maybe I should try natd? Might that help?
> > Any suggestions?
> > 
> > thanks in advance,
> > 
> > Andriy Korud
> > 
> > 


Problems using ipsec transport mode with a gateway

2003-12-16 Thread Regis . HANNA
Hello,

My network configuration is 2 subnets separated by a gateway:

+--------+  1.1.1.0/24  +-----------------+  2.1.1.0/24  +--------------+
| Host 1 |--------------| FreeBSD gateway |--------------| FreeBSD host |
+--------+              +-----------------+              +--------------+
 1.1.1.4               1.1.1.1     2.1.1.1                   2.1.1.4
        non-ciphered data                 ciphered data


I want to protect data between Host 1 and the FreeBSD host, only on the
2.1.1.0/24 subnet, by using ipsec in TRANSPORT mode. I chose transport mode
because of its low overhead and higher performance.

I observe that data from Host 1 to the FreeBSD host are OK, but data from the
FreeBSD host to Host 1 are STOPPED at the FreeBSD gateway. When I use ipsec in
tunnel mode it always works.

The FreeBSD gateway setkey configuration is:

add 2.1.1.1 2.1.1.4 esp 1000 -m transport -E rijndael-cbc "PASSWORDPASSWORD";
add 2.1.1.4 2.1.1.1 esp 1001 -m transport -E rijndael-cbc "PASSWORDPASSWORD";
spdadd 1.1.1.4 2.1.1.4 any -P out ipsec esp/transport/2.1.1.1-2.1.1.4/require;
spdadd 2.1.1.4 1.1.1.4 any -P in ipsec esp/transport/2.1.1.4-2.1.1.1/require;

The FreeBSD host setkey configuration is:

add 2.1.1.1 2.1.1.4 esp 1000 -m transport -E rijndael-cbc "PASSWORDPASSWORD";
add 2.1.1.4 2.1.1.1 esp 1001 -m transport -E rijndael-cbc "PASSWORDPASSWORD";
spdadd 1.1.1.4 2.1.1.4 any -P in ipsec esp/transport/2.1.1.1-2.1.1.4/require;
spdadd 2.1.1.4 1.1.1.4 any -P out ipsec esp/transport/2.1.1.4-2.1.1.1/require;

I use FreeBSD 5.1.
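A guess at the cause, for what it's worth: a transport-mode SA can only
protect traffic whose own source/destination addresses are the SA endpoints,
so a gateway encrypting on behalf of 1.1.1.4 is the textbook case for tunnel
mode (RFC 2401). A tunnel-mode version of the gateway policies would look
roughly like this (the corresponding add lines would use -m tunnel):

spdadd 1.1.1.4/32 2.1.1.4/32 any -P out ipsec esp/tunnel/2.1.1.1-2.1.1.4/require;
spdadd 2.1.1.4/32 1.1.1.4/32 any -P in ipsec esp/tunnel/2.1.1.4-2.1.1.1/require;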

Thank you in advance,
Regis Hanna.



Cisco Aironet 350 PCI in AP Mode?

2003-12-16 Thread Art Mason
Out of curiosity, has there been any success with implementing
infrastructure mode capability in the an driver for the Cisco Aironet
350 WLAN devices?  I like the quality and range of these cards, and
would like to roll my own access points, but every piece of
documentation I've come across up to this point says it's not currently
implemented in the 4.X drivers.  Any word on whether this might make it
into 5.2?  BTW, what is the problem with getting these cards to work in
infrastructure mode, as opposed to ad-hoc?  

Many thanks in advance!

-- 
Art Mason



gre tunnel & ipsec transport mode

2003-12-16 Thread Eric Masson
Hello,

I'm experimenting with dynamic routing protocols in a VPN setup. Ipsec tunnel
mode is not applicable here, as the selectors do not appear in the system
routing table.

So I've tried to use gre tunnels between the LANs and then protect them with
ipsec transport mode between the gateways.

It seems that gre pseudo-interfaces & the ipsec stack don't interact very
well in this setup (4.8-RELEASE-p14 boxes).

I've set the following test case :

192.168.197.* --- Router A --- gre tunnel --- Router B --- 10.168.18.*
                     \                         /
                      +------- Internet ------+

Gre tunnel setup:

Each router has a gre tunnel to its peer and the associated network
route.

Traffic from 192.168.197/24 hosts to 10.168.18/24 hosts flows fine;
tcpdump reports gre packets between the two routers.

Ipsec transport mode setup:

Each router has outgoing & incoming transport ipsec policies (ah+esp)
to its peer for any protocol.

Isakmpd (racoon) is active.

A direct connection from one router to the other (ssh, telnet...) gets the
ipsec SP applied and works fine.

Mixing the two setups:

Ipsec-transformed gre packets leave the originating box for the other tunnel
endpoint (tcpdump reports ah+esp packets flowing outside).

On the destination box, tcpdump shows the incoming ipsec-transformed gre
packets, but these packets don't make their way to the internal interface
and are silently dropped (no log anywhere).

I've tried to look at /sys/net/ip_input.c, /sys/net/in_gif.c &
/sys/net/ip_gre.c to understand the case, as gif tunnels get
encapsulated correctly, but no immediate fix came to mind; I must
say I'm no C guru nor kernel hacker :/

Does anyone have any idea or fix for this case?

TIA

Regards

Eric Masson

-- 
 I don't think it was you; you're far too devious to act that way. Your
 style would rather be to contact the bank directly, hoping I won't get
 my referral gifts!
 -+- JD in  : Little newbie Christmas -+-


suffering from poor network performance...

2003-12-16 Thread Alexander Sendzimir
First, I know very little about networking, especially performance 
tuning. I would really like to learn more but don't know where/how to 
start effectively.

I have a small home network with a PowerBook G4 and FBSD 4.9-STABLE 
connected through a Netgear DS108 hub (10/100). The FBSD box is a dual 
Xeon 500MHz with Intel Etherexpress 100/Pro (MS440GX motherboard). If 
for some reason it makes a difference, there is an RT311 router 
connected to the hub as well. This is the router through which these 
machines see the internet. There are other machines connected to the 
network. However, they are currently turned off.

With my limited knowledge, I'm using ping from each host to the other. 
From the FBSD system to the G4 system I'm getting nearly 60% packet 
loss, and about 20% in the other direction. I'm ready to use tcpdump but 
I'm not sure how I would. How can/should I go about improving network 
performance?

Thanks for the help.
Alex
ifconfig on the PowerBook G4 gives:

lo0: flags=8049 mtu 16384
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
inet 127.0.0.1 netmask 0xff00
gif0: flags=8010 mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8863 mtu 1500
inet6 fe80::20a:95ff:fe77:5140%en0 prefixlen 64 scopeid 0x4
inet 192.168.0.3 netmask 0xff00 broadcast 192.168.0.255
ether 00:0a:95:77:51:40
media: autoselect (100baseTX ) status: active
supported media: none autoselect 10baseT/UTP 100baseTX 1000baseTX
[half/full-duplex media options elided]

ifconfig on the dual Xeon gives:

fxp0: flags=8843 mtu 1500
inet 192.168.0.2 netmask 0xff00 broadcast 192.168.0.255
ether 00:90:27:3e:b2:66
media: Ethernet autoselect (100baseTX)
status: active
lo0: flags=8049 mtu 16384
inet 127.0.0.1 netmask 0xff00

I know both interfaces are configured for half-duplex. Perhaps 
full-duplex would help? How do I enable it under Mac OS X 10.2? Otherwise, I 
know how to do it under FBSD in /etc/rc.conf.

When I ping from one machine to the other I get nearly 60% packet loss 
from the Xeon to the G4 system and about 20% packet loss from the G4 to the 
Xeon. I'm issuing the following commands to get this output (with some output 
trimmed for the list):

> ping -f -c {1000,2000,4000,8000} name-of-host

Xeon -> G4:

1.
PING hardy (192.168.0.3): 56 data bytes
1000 packets transmitted, 500 packets received, 50% packet loss
round-trip min/avg/max/stddev = 0.795/0.859/1.081/0.021 ms
2.
PING hardy (192.168.0.3): 56 data bytes
2000 packets transmitted, 900 packets received, 55% packet loss
round-trip min/avg/max/stddev = 0.501/0.858/1.111/0.022 ms
3.
PING hardy (192.168.0.3): 56 data bytes
4000 packets transmitted, 1600 packets received, 60% packet loss
round-trip min/avg/max/stddev = 0.784/0.859/1.042/0.017 ms
4.
PING hardy (192.168.0.3): 56 data bytes
8000 packets transmitted, 3100 packets received, 61% packet loss
round-trip min/avg/max/stddev = 0.612/0.858/0.996/0.017 ms
G4 -> Xeon:

1.
PING newton (192.168.0.2): 56 data bytes
1240 packets transmitted, 1000 packets received, 19% packet loss
round-trip min/avg/max = 0.251/1.048/16.451 ms
2.
PING newton (192.168.0.2): 56 data bytes
2539 packets transmitted, 2000 packets received, 21% packet loss
round-trip min/avg/max = 0.171/1.057/12.42 ms
3.
PING newton (192.168.0.2): 56 data bytes
5118 packets transmitted, 4000 packets received, 21% packet loss
round-trip min/avg/max = 0.205/1.088/13.318 ms
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Alexander Sendzimir 802 863 5502
 MacTutor of Vermont   info @ mactutor . vt . us
  Colchester, VT 05446 ( not yet active )


Re: suffering from poor network performance...

2003-12-16 Thread Barney Wolff
On Tue, Dec 16, 2003 at 05:58:08PM -0500, Alex wrote:
> First, I know very little about networking, especially performance 
> tuning. I would really like to learn more but don't know where/how to 
> start effectively.

You're seeing icmp rate-limiting.  Don't worry about it.

-- 
Barney Wolff http://www.databus.com/bwresume.pdf
I'm available by contract or FT, in the NYC metro area or via the 'Net.


Re: suffering from poor network performance...

2003-12-16 Thread Charles Swiger
On Dec 16, 2003, at 5:58 PM, Alexander Sendzimir wrote:
> I have a small home network with a PowerBook G4 and FBSD 4.9-STABLE 
> connected through a Netgear DS108 hub (10/100).

If the device works at both 10 and 100 speed, it's a switch, not a hub.

Anyway, the very high rates of packet loss you report suggest a 
physical link-layer problem: can you try swapping out ethernet cables 
or try using another hub for testing?  Also, what does "netstat -i" 
show -- any significant number of errors or collisions? (Not that the 
latter should be present if using a switch, but something is going 
wrong.)

--
-Chuck


Re: suffering from poor network performance...

2003-12-16 Thread Kevin Stevens
On Tue, 16 Dec 2003, Alex wrote:

> I have a small home network with a PowerBook G4 and FBSD 4.9-STABLE
> connected through a Netgear DS108 hub (10/100). The FBSD box is a dual
> Xeon 500MHz with Intel Etherexpress 100/Pro (MS440GX motherboard). If
> for some reason it makes a difference, there is an RT311 router
> connected to the hub as well. This is the router through which these
> machines see the internet. There are other machines connected to the
> network. However, they are currently turned off.

Ok.

> In my limited knowledge I'm using ping from each host to the other.
>  From the FBSD system to the G4 system, I'm getting nearly 60% packet
> loss and about 20% in the other direction. I'm ready to use tcpdump but
> I'm not sure how I would. How can/should I go about improving network
> performance?

tcpdump will only show you packets that ARRIVED - since packet loss is
your problem it probably won't help much.

> ifconfig on the PowerBook G4 gives:
>
> en0: flags=8863 mtu 1500
>  inet6 fe80::20a:95ff:fe77:5140%en0 prefixlen 64 scopeid 0x4
>  inet 192.168.0.3 netmask 0xff00 broadcast 192.168.0.255
>  ether 00:0a:95:77:51:40
>  media: autoselect (100baseTX ) status: active
>  supported media: none autoselect 10baseT/UTP 

Ok...

> ifconfig on the dual Xeon gives:
>
> fxp0: flags=8843 mtu 1500
>  inet 192.168.0.2 netmask 0xff00 broadcast 192.168.0.255
>  ether 00:90:27:3e:b2:66
>  media: Ethernet autoselect (100baseTX)
>  status: active
>
> I know both interfaces are configured for half-duplex. Perhaps

How do you know this?  The G4 showed half-duplex, the Xeon shows that it
is set for autoconfiguration.  In any case how they are configured is less
important than how they are actually running - not always the same thing.

I believe there's a sysctl that can be queried under FreeBSD to provide
actual status.  Sorry, I'm now exclusively on Mac/OSX, so can't check it
for you.

> full-duplex would help? How to enable under Mac OS X 10.2? Otherwise, I
> know how to do it under FBSD in /etc/rc.conf.

You're probably on the right track with a duplex problem.  Most hubs
default to half-duplex, and it's probably the safest choice to use in any
case - most attempts at full-duplexed hubs I've seen have been poor.

First, pull the hub out of the middle and connect the G4 to the Xeon with
a straight-through Ethernet cable.  (All G4 PBs should automatically
handle any crossover required).  Repeat your ping tests, and observe your
duplex config on both machines (should be full duplex).  You should see
practically no packet loss.

Now go back and reconnect each machine to the hub, and verify/confirm half
duplex for each device.  Repeat tests.  If you're still getting packet
loss, power cycle the hub.  If you're STILL getting packet loss, throw the
hub out and buy an 8-port switch for $30, and set the machines to
full-duplex.

KeS


Re: suffering from poor network performance...

2003-12-16 Thread Charles Swiger
On Dec 16, 2003, at 6:32 PM, Barney Wolff wrote:
> You're seeing icmp rate-limiting.  Don't worry about it.

Whoops, I didn't pay particular attention to the "-f" option, but 
you're absolutely right...

--
-Chuck


Re: suffering from poor network performance...

2003-12-16 Thread Kevin Stevens
On Tue, 16 Dec 2003, Charles Swiger wrote:

> If the device works at both 10 and 100 speed, it's a switch, not a hub.

It is sold as a hub.  Most of these "dual-speed" hubs are/were two hubs,
one of each speed, with a two-port internal switch connecting them.  The
physical ports would auto-join to whichever side the connection speed
indicated.  Infuriating to use as tap devices, if you ended up on the
wrong side of the switch from your target, you wouldn't see any broadcast
traffic.  ;)

KeS


Re: suffering from poor network performance...

2003-12-16 Thread Eli Dart

In reply to Alexander Sendzimir <[EMAIL PROTECTED]> :

> First, I know very little about networking, especially performance 
> tuning. I would really like to learn more but don't know where/how to 
> start effectively.

Take a look at the tools ttcp, netperf and iperf.  They build 
straight out of ports.

Also, there are several good network tuning sites out there -- this 
one has most of them listed (take a look at the links page):

http://www-didc.lbl.gov/TCP-tuning/TCP-tuning.html

Note that most of the techniques covered here are for high-bandwidth, 
high-latency links (long fat pipes).  Bumping up your tcp buffers 
might help a bit, but for the most part the machines you have 
should saturate a 100Mbps link with no trouble at all.  If you see a 
bit less than that, realize that you're connected to a hub, and so 
you're doing collision detection.  Fast Ethernet performance falls 
off pretty quickly in the face of competing traffic on a hub.  Note 
also that if you just crank up your tcp buffers to something large 
without thinking about what you're doing, you can actually decrease 
performance.
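On FreeBSD of that vintage the buffers in question are sysctls; a cautious
starting point (values are illustrative only, not a recommendation) might be:

  sysctl net.inet.tcp.sendspace=65536
  sysctl net.inet.tcp.recvspace=65536
  sysctl kern.ipc.maxsockbuf=1048576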

As someone else pointed out, using ping as a measure of network 
performance often doesn't give reliable results, since most operating 
systems (including FreeBSD) rate limit ICMP in various ways to 
protect against DoS attacks.

Hope this helps,

--eli


> 
> Thanks for the help.
> Alex





Re: suffering from poor network performance...

2003-12-16 Thread Barney Wolff
Folks, see sysctl net.inet.icmp.icmplim for why you get packet loss
on a flood ping.  It has nothing to do with duplex, hub/switch or
problems with equipment.  Make it 0 to remove the limit, I believe.
Barney


Re: suffering from poor network performance...

2003-12-16 Thread ander Sendzimir
I'm responding to several people at once. References
to material to read are fine in place of personal
descriptions. However, you know, the 'personal touch'
is always good :-)
	The only thing better than FBSD is the mailing lists.

	Thanks, folks.

	  Alex


On Tuesday, December 16, 2003, at 06:36  PM, Kevin Stevens wrote:

> You're probably on the right track with a duplex problem.  Most hubs
> default to half-duplex, and it's probably the safest choice to use in any
> case - most attempts at full-duplexed hubs I've seen have been poor.

Any recommendations on switches for home use? Equipment to stay away from?

> First, pull the hub out of the middle and connect the G4 to the Xeon with
> a straight-through Ethernet cable.  (All G4 PBs should automatically
> handle any crossover required).  Repeat your ping tests, and observe your
> duplex config on both machines (should be full duplex).  You should see
> practically no packet loss.

Of course. I didn't even think of this.

> Now go back and reconnect each machine to the hub, and verify/confirm half
> duplex for each device.  Repeat tests.  If you're still getting packet
> loss, power cycle the hub.  If you're STILL getting packet loss, throw the
> hub out and buy an 8-port switch for $30, and set the machines to
> full-duplex.

What does power cycling the hub do in this case (Netgear DS108)? 
Finally, what is the difference between half and full duplex?

Kevin, later on you write in response to Charles Swiger:

> On Tue, 16 Dec 2003, Charles Swiger wrote:
>
> > If the device works at both 10 and 100 speed, it's a switch, not a hub.
>
> It is sold as a hub.  Most of these "dual-speed" hubs are/were two hubs,
> one of each speed, with a two-port internal switch connecting them.  The
> physical ports would auto-join to whichever side the connection speed
> indicated.  Infuriating to use as tap devices, if you ended up on the
> wrong side of the switch from your target, you wouldn't see any broadcast
> traffic.  ;)
>
> KeS

Interesting. I didn't know that. What is the difference between a 
switch and a hub? I thought I understood. Perhaps this is not the case. 
Thanks.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Alexander Sendzimir 802 863 5502
 MacTutor of Vermont   info @ mactutor . vt . us
  Colchester, VT 05446 ( not yet active )


Re: suffering from poor network performance...

2003-12-16 Thread Charles Swiger
On Dec 16, 2003, at 7:22 PM, Alexander Sendzimir wrote:
[ ... ]

First, Barney was correct: using "ping -f" will run into the ICMP 
response limitation.  Try using "ping -i 0.01 _hostname_" instead, and 
you may find out that you don't have a problem with packet loss at all 
at this lower speed.

> What does power cycling the hub do in this case (Netgear DS108)?
> Finally, what is the difference between half and full duplex?

Half-duplex means the interface can either send or receive, but not do 
both at the same time.  Full-duplex requires a switch.

[ ... ]
> > It is sold as a hub.  Most of these "dual-speed" hubs are/were two hubs,
> > one of each speed, with a two-port internal switch connecting them.  The
> > physical ports would auto-join to whichever side the connection speed
> > indicated.  Infuriating to use as tap devices, if you ended up on the
> > wrong side of the switch from your target, you wouldn't see any broadcast
> > traffic.  ;)
>
> Interesting. I didn't know that. What is the difference between a
> switch and a hub? I thought I understood. Perhaps this is not the case.
> Thanks.

Hubs are dumb; typically all ports share a single wire-speed chunk of 
bandwidth, they do not regenerate packets, and they are subject to 
significant topology constraints (you can't nest or "tree" them more 
than about two levels deep).

Switches are smarter and often have external management interfaces. 
They keep track of each port individually in terms of speed and duplex 
(ie, permit full-duplex operation), learn MAC addresses into their own 
forwarding tables, and only forward traffic to the destination 
port(s) to which the traffic should go.  Switches generally have a 
store-and-forward mechanism for handling packets, so they eliminate 
collisions and drop errors at the sending port rather than forwarding 
broken traffic to all listeners the way a hub does, thereby 
regenerating packet timing and permitting much larger topologies.  
Switches may implement spanning tree to prevent loops, and often handle 
things like VLAN tagging and port aggregation or trunking for 
switch-to-switch connections.

Switches support internal bandwidth many times greater than individual 
port wirespeed, so that many machines can be sending traffic at "full 
speed".  Two machines talking to each other at "full speed" will 
saturate a hub; if four machines all want to talk on a hub, they each 
get a fraction of the bandwidth.

--
-Chuck


Re: suffering from poor network performance...

2003-12-16 Thread Kevin Stevens
On Dec 16, 2003, at 17:32, Charles Swiger wrote:

> On Dec 16, 2003, at 7:22 PM, Alexander Sendzimir wrote:
> [ ... ]
> First, Barney was correct: using "ping -f" will run into the ICMP
> response limitation.  Try using "ping -i 0.01 _hostname_" instead,
> and you may find out that you don't have a problem with packet loss at
> all at this lower speed.

I wish I had a FreeBSD box to check this on, but from an OS X G5 to an 
Athlon WinXP box (both at 100% CPU from the distributed-folding client):

babelfish:~ root# ping -f -c 1 denizen
PING denizen.pursued-with.net (192.168.168.1): 56 data bytes
.
--- denizen.pursued-with.net ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.079/0.112/1.01 ms
babelfish:~ root#
That's through a cheap Gb switch.  Just a data point.

KeS



Re: suffering from poor network performance...

2003-12-16 Thread Bill Fumerola
[ this isn't really -net material ]

On Tue, Dec 16, 2003 at 07:50:57PM -0800, Kevin Stevens wrote:

> >First, Barney was correct: using "ping -f" will run into the ICMP 
> >response limitation.  Try using "ping -i 0.01 _hostname_", instead, 
> >and you may find out that you don't have a problem with packet loss at 
> >all at this lower speed.
> 
> I wish I had a FreeBSD box to check this on, but from an OS X G5 to an 
> Athlon WinXP box (both at 100% CPU from distribfolding client:

which is completely irrelevant because your winxp machine doesn't have
the aforementioned icmp response limiter.

> That's through a cheap Gb switch.  Just a data point.

... albeit a useless one.

[ SECURE !1 ]$ sudo ping -f choker.corp   
PING choker.corp.yahoo.com (216.145.52.228): 56 data bytes
.^C
--- choker.corp.yahoo.com ping statistics ---
459 packets transmitted, 398 packets received, 13% packet loss
round-trip min/avg/max/stddev = 0.221/0.227/0.302/0.010 ms

$ dmesg | tail -1  (~)
Limiting icmp ping response from 246 to 200 packets/sec
[choker.corp(ttyq0)-fumerola Tue16Dec/20:27:57]
[ SECURE !4 ]$ sudo sysctl net.inet.icmp.icmplim=0 
net.inet.icmp.icmplim: 200 -> 0

[scurvy.corp(ttyp1)-fumerola Tue16Dec/20:28:19]
[ SECURE !3 ]$ sudo ping -f -c 1 choker.corp 
PING choker.corp.yahoo.com (216.145.52.228): 56 data bytes
..
--- choker.corp.yahoo.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss

-- 
- bill fumerola / [EMAIL PROTECTED] / [EMAIL PROTECTED]




Re: suffering from poor network performance...

2003-12-16 Thread Kevin Stevens
On Dec 16, 2003, at 20:32, Bill Fumerola wrote:

> > I wish I had a FreeBSD box to check this on, but from an OS X G5 to an
> > Athlon WinXP box (both at 100% CPU from the distributed-folding client):
>
> which is completely irrelevant because your winxp machine doesn't have
> the aforementioned icmp response limiter.
>
> > That's through a cheap Gb switch.  Just a data point.
>
> ... albeit a useless one.

FOAD, jackass.

KeS



Re: suffering from poor network performance...

2003-12-16 Thread Kevin Stevens
I apologize to the list for my results not being germane to the 
conversation.  I can confirm that OS X also implements an ICMP 
restriction (net.inet.icmp.icmplim) which similarly limits responses 
(default is 250), and would account for the OP's results when testing 
toward the PowerBook.

As for my response to Bill Fumerola, his snotty response was completely 
uncalled for, and if you treat people like that you should expect the 
same in return.  No apology there.

KeS
