%cpu in system - squid performance in FreeBSD 5.3

2005-01-11 Thread Mohan Srinivasan
Following up to a mail from Jeff Behl and Sean Chittenden back in Dec.

http://lists.freebsd.org/pipermail/freebsd-net/2004-December/006074.html

From your description, it looks like moving to a kqueue-based Squid will
help considerably (there appears to be a kqueue-based version of Squid,
though I'm not sure how stable it is). If you take a quick kernel profile,
you will see most of the system CPU being spent in the descriptor polling
driven by select(). In my previous experience with a Squid-based proxy
several years ago, once you pushed more than a couple of hundred
connections into select(), CPU utilization spiked sharply because of
the descriptor polling.

We then hoisted Squid on top of a (homebrew) version of kqueue, which 
caused system CPU to drop dramatically, because all the descriptor polling
was avoided.
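
To make the difference concrete, here is a minimal, generic kqueue(2) event
loop in C. This is an illustrative sketch only, not Squid's actual code, and
the port number is arbitrary. The kernel remembers the registered descriptors
and returns only the ones that are ready, so the per-wakeup cost does not grow
with the number of idle connections the way select()'s full bitmap scan does.

/*
 * Minimal kqueue(2) echo-style event loop (illustrative only, not Squid code).
 * Build on FreeBSD with: cc -o kqdemo kqdemo.c
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct sockaddr_in sin;
    struct kevent change, events[64];
    int lsock, kq, i, n;
    char buf[4096];

    lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock < 0)
        err(1, "socket");
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(8080);        /* arbitrary demo port */
    if (bind(lsock, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        err(1, "bind");
    if (listen(lsock, 128) < 0)
        err(1, "listen");

    kq = kqueue();
    if (kq < 0)
        err(1, "kqueue");
    /* Register the listening socket once; the kernel tracks it from now on. */
    EV_SET(&change, lsock, EVFILT_READ, EV_ADD, 0, 0, NULL);
    if (kevent(kq, &change, 1, NULL, 0, NULL) < 0)
        err(1, "kevent: add listener");

    for (;;) {
        /* Unlike select(), this returns only descriptors with activity. */
        n = kevent(kq, NULL, 0, events, 64, NULL);
        if (n < 0)
            err(1, "kevent: wait");
        for (i = 0; i < n; i++) {
            int fd = (int)events[i].ident;
            if (fd == lsock) {
                int cl = accept(lsock, NULL, NULL);
                if (cl < 0)
                    continue;
                EV_SET(&change, cl, EVFILT_READ, EV_ADD, 0, 0, NULL);
                (void)kevent(kq, &change, 1, NULL, 0, NULL);
            } else {
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0)
                    close(fd);    /* the kevent registration goes away on close */
                else
                    (void)write(fd, buf, (size_t)len);
            }
        }
    }
}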

mohan



Re: %cpu in system - squid performance in FreeBSD 5.3

2005-01-11 Thread Jeff Behl
Yes, I believe the kqueue version of squid would show much better
results.  Unfortunately it fails to compile, and I haven't yet had the time
to muck with it further.  I'll get back to the list when I am able to
get it up and running...

jeff



Re: buildup of Windows time_wait talking to fbsd 4.10

2005-01-11 Thread Len Conrad

We have a Windows mail server that relays its outbound mail to a FreeBSD
gateway.  We changed to a different FreeBSD gateway running 4.10, and Windows
then began having trouble sending to the 4.10 box.  On Windows, "netstat -an"
shows dozens of lines like this:

  Proto  source IP:port     destination IP:port   state
  ======================================================
  TCP    10.1.16.3:1403     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1407     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1415     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1419     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1435     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1462     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1470     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1473     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1478     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1493     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1504     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1507     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1508     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1521     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1526     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1546     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1550     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1568     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1571     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1589     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1592     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1616     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1620     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1629     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1644     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1647     192.168.200.59:25     TIME_WAIT
  TCP    10.1.16.3:1654     192.168.200.59:25     TIME_WAIT
Eventually, the Windows SMTP log shows lines like "cannot connect to remote IP"
or "address already in use"; we think this is because no local TCP ports
are left available.
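
If that theory is right, the arithmetic is simple. The sketch below assumes
the usual Windows 2000/2003-era defaults of an ephemeral port range of
1025-5000 and a 240-second (4-minute) TIME_WAIT; those defaults are
assumptions for illustration, not values confirmed on this particular box.

/* Back-of-the-envelope: how fast can a client open new outbound connections
 * before TIME_WAIT eats the whole ephemeral port range?  The port range and
 * TIME_WAIT values below are assumed typical Windows defaults
 * (MaxUserPort / TcpTimedWaitDelay), not measured on the box above. */
#include <stdio.h>

int
main(void)
{
    const int first_port  = 1025;   /* assumed default ephemeral range start */
    const int last_port   = 5000;   /* assumed default MaxUserPort */
    const int time_wait_s = 240;    /* assumed default TcpTimedWaitDelay (4 min) */

    int ports = last_port - first_port + 1;
    double rate = (double)ports / time_wait_s;

    printf("%d ephemeral ports / %d s TIME_WAIT = about %.1f new connections/s\n",
        ports, time_wait_s, rate);
    printf("Sustained SMTP delivery above that rate exhausts local ports.\n");
    return (0);
}

Under those assumptions, roughly 3976 ports divided by 240 seconds gives
around 16 sustained new connections per second before "address already in
use" starts appearing.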

On the new FreeBSD 4.10 gateway, "sockstat -4" shows no corresponding TCP
connections while the Windows server is showing the above.  On the FreeBSD
4.10 machines, the SMTP logs, syslog, and dmesg show no errors.

When we point the Windows box back at the old FreeBSD 4.7 gateway,
all is cool.

Suggestions on how to proceed with debugging, please.
I'm trying to get the dmesg.boot for the 4.7 and 4.10 boxes now, sorry.
Len
> Just off the top of my head...
> You mentioned the freebsd machine is the gateway.  Do you have a firewall
> on the host blocking connections from the windows machine?
The two mail servers that send outbound mail to the FreeBSD gateways are on
the same subnet, with the same rules.  The firewall is "outside" the subnets
of the mail servers and gateways.

We haven't put a sniffer on it yet; there's none on the Windows boxes,
and tcpview on the FreeBSD boxes.
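
For the FreeBSD side, tcpdump in the base system would do, or, as a minimal
hand-rolled sniffer, something like the libpcap sketch below.  The interface
name "em0" is a placeholder assumed for illustration, and the 10.1.16.3
address in the filter is just the Windows relay's address from the netstat
output above; both would need to match the real gateway.

/* Minimal libpcap sketch: watch for SMTP traffic from the Windows relay
 * arriving at the FreeBSD gateway.  "em0" is a placeholder interface name.
 * Build with: cc -o smtpwatch smtpwatch.c -lpcap
 */
#include <sys/types.h>
#include <pcap.h>
#include <stdio.h>
#include <stdlib.h>

static void
got_packet(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes at %ld.%06ld\n",
        (unsigned)hdr->caplen, (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec);
}

int
main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program fp;
    pcap_t *p;
    /* Filter: TCP port 25 to/from the Windows relay seen in netstat above. */
    const char *filter = "tcp port 25 and host 10.1.16.3";

    p = pcap_open_live("em0", 128, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        exit(1);
    }
    if (pcap_compile(p, &fp, filter, 1, 0) < 0 ||
        pcap_setfilter(p, &fp) < 0) {
        fprintf(stderr, "filter setup failed: %s\n", pcap_geterr(p));
        exit(1);
    }
    /* Print a line per captured packet; Ctrl-C to stop. */
    pcap_loop(p, -1, got_packet, NULL);
    pcap_close(p);
    return (0);
}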

We're going to start changing NIC models/brands.
thanks
Len
_
http://IMGate.MEIway.com : free anti-spam gateway, runs on 1000's of sites


IPv6 TCP transfers are hanging

2005-01-11 Thread Kevin Oberman
I think I have found a problem with TCP when run over IPv6.

I set my TCP MSS to 1460 to allow a full 1500-byte MTU to be
utilized on my systems. (Yes, I realize that this breaks some things,
such as communicating over paths where PMTUD is blocked and one or more
links restrict the MTU to less than 1500 bytes.)

What I am specifically seeing is a packet being sent out with a TCP
payload of 1460 bytes. While this is fine for IPv4, it's too big for IPv6 and,
as you might expect, the far end never receives the packet.
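
The arithmetic behind that: with a 1500-byte MTU, IPv4 (20-byte header) plus
TCP (20 bytes) leaves 1460 bytes of payload, but IPv6's fixed 40-byte header
leaves only 1440, so a 1460-byte segment becomes a 1520-byte IPv6 packet.
A trivial C sketch of the same calculation:

/* Why an MSS of 1460 works for IPv4 but overflows a 1500-byte MTU on IPv6. */
#include <stdio.h>

int
main(void)
{
    const int mtu      = 1500;
    const int tcp_hdr  = 20;    /* TCP header, no options */
    const int ipv4_hdr = 20;    /* IPv4 header, no options */
    const int ipv6_hdr = 40;    /* fixed IPv6 header */
    const int mss      = 1460;  /* the value set by hand above */

    printf("max MSS on IPv4: %d\n", mtu - ipv4_hdr - tcp_hdr);  /* 1460 */
    printf("max MSS on IPv6: %d\n", mtu - ipv6_hdr - tcp_hdr);  /* 1440 */
    printf("IPv6 packet size with MSS %d: %d (> %d, so it is dropped\n"
           "or needs fragmentation/PMTUD, which is blocked here)\n",
        mss, mss + tcp_hdr + ipv6_hdr, mtu);
    return (0);
}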

There is a sysctl for net.inet.tcp.v6mssdflt which is set to 1024. This
should be fine, but it appears that it is not being honored and the V4
value is always used.
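
For what it's worth, the value can also be read programmatically with
sysctlbyname(3), to rule out looking at a stale value; a minimal sketch,
using the MIB name quoted above:

/* Read net.inet.tcp.v6mssdflt via sysctlbyname(3) to confirm its value. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
    int v6mssdflt;
    size_t len = sizeof(v6mssdflt);

    if (sysctlbyname("net.inet.tcp.v6mssdflt", &v6mssdflt, &len, NULL, 0) < 0)
        err(1, "sysctlbyname(net.inet.tcp.v6mssdflt)");
    printf("net.inet.tcp.v6mssdflt = %d\n", v6mssdflt);
    return (0);
}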

Am I mis-analyzing things, or is TCP at least a bit broken when running
over V6? (Or am I at fault for setting the large MSS, because it is
honored for v6 even though there is a separate sysctl for IPv6?)
-- 
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: [EMAIL PROTECTED]   Phone: +1 510 486-8634


gif's

2005-01-11 Thread Tom Skeren
Been pulling my hair out.  Anybody know of a resource for a fairly
complex tunneling scheme?  My needs are such that a central-hub, "star"
style tunneling scheme simply will not be efficient.

Any info would be appreciated.
TMS III


Re: gif's

2005-01-11 Thread Chuck Swiger
Tom Skeren wrote:
> Been pulling my hair out.  Anybody know of a resource for a fairly
> complex tunneling scheme?  My needs are such that a central-hub, "star"
> style tunneling scheme simply will not be efficient.
At some point, complex VPN configurations become more work to set up and
maintain than switching to IPsec or increasing the number of publicly
available services, hopefully switching to more secure protocols at the
same time.

By the last point I mean: many people want a VPN to do file sharing from home
to work, or to access email and such "securely" over the encrypted tunnel, but
they tend to terminate the VPN endpoints inside the network rather than in a
semi-trusted perimeter zone, and the more VPN connections you add, the greater
the exposure of various external networks to the inside and to each other.

Switching to HTTPS+WebDAV (e.g., Subversion) as a file-sharing/publishing
mechanism to replace direct CIFS/Samba access, or accessing mail via IMAPS
rather than firing up Outlook against the company's MS Exchange server over
the VPN, might actually result in a more secure configuration.

--
-Chuck


Re: IPv6 TCP transfers are hanging

2005-01-11 Thread JINMEI Tatuya / 神明達哉
> On Tue, 11 Jan 2005 14:01:29 -0800, 
> "Kevin Oberman" <[EMAIL PROTECTED]> said:

> I think I have found a problem with TCP when run over IPv6.
> I set my TCP MSS to 1460 to allow a full 1500-byte MTU to be
> utilized on my systems. (Yes, I realize that this breaks some things,
> such as communicating over paths where PMTUD is blocked and one or more
> links restrict the MTU to less than 1500 bytes.)

> What I am specifically seeing is a packet being sent out with a TCP
> payload of 1460 bytes. While this is fine for IPv4, it's too big for IPv6
> and, as you might expect, the far end never receives the packet.

Two questions to clarify things:

1. Which version of FreeBSD are you using?
2. How did you set the MSS?

JINMEI, Tatuya
Communication Platform Lab.
Corporate R&D Center, Toshiba Corp.
[EMAIL PROTECTED]