Jail source address selection in 8.1-RELEASE

2010-11-24 Thread Steve Polyack

Hi,
There appears to be a loosely documented sysctl 
'security.jail.param.ip4.saddrsel' which should limit source IP 
selection of jails to their primary jail interface/IP.  The sysctl does 
not appear to do anything, however:


# sysctl security.jail.param.ip4.saddrsel=0
security.jail.param.ip4.saddrsel: -> 0
# echo $?
0
# sysctl security.jail.param.ip4.saddrsel
#
# sysctl -d security.jail.param.ip4.saddrsel
security.jail.param.ip4.saddrsel: Do (not) use IPv4 source address 
selection rather than the primary jail IPv4 address.


Is this tunable only available when the kernel is built with VIMAGE jails? The 
8.1-RELEASE Release Notes suggest it is for VIMAGE jail(8) containers, 
while the 7.3-RELEASE Release Notes suggest that it is available for the 
entire jail(8) subsystem as 'security.jail.ip4_saddrsel', a different OID.
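
For anyone who wants to reproduce the comparison, a minimal check (the global 
OID name is taken from the 7.3 Release Notes; whether it exists on a given 
kernel is exactly what's in question):

# 7.3-style global knob; -d prints a description only if the OID exists:
sysctl -d security.jail.ip4_saddrsel || echo "global OID not present"
# 8.1 per-jail parameter node (present here, but returns no value):
sysctl -d security.jail.param.ip4.saddrsel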


FreeBSD  8.1-RELEASE FreeBSD 8.1-RELEASE #0: Tue Aug  3 16:24:09 EDT 
2010 r...@:/usr/obj/usr/src/sys/GENERIC  amd64





MAC address / per-proto ARP caching in 8.1-RELEASE

2011-03-15 Thread Steve Polyack
Is anyone aware of some sort of facility in either FreeBSD 8.1-RELEASE 
or the em(4) driver which would cause the system to cache MAC addresses / ARP 
entries for hosts on a per-protocol basis?  We've been doing some 
testing with new routers, and almost every time we switch them in or out, 
our FreeBSD machines get stuck sending UDP DNS requests to the MAC 
address of the old default gateway, in complete disregard for the current 
contents of the ARP table.  New TCP connections do not seem to be affected:


[spolyack@web01 ~]$ uname -a
FreeBSD web01 8.1-RELEASE-p2 FreeBSD 8.1-RELEASE-p2 #3: Mon Dec  6 
08:58:21 EST 2010 root@web01:/usr/obj/usr/src/sys/WEB-AMD64  amd64


Here is the ARP table after the router replacement.  The default gateway is 
10.0.1.254, for which the table already has the correct new MAC address:

[spolyack@web01 ~]$ arp -an
? (10.0.1.17) at 00:0c:29:47:74:3a on em2 permanent [ethernet]
? (10.0.1.130) at 00:0c:29:47:74:26 on em0 permanent [ethernet]
? (10.0.1.254) at 00:a0:c9:00:01:01 on em0 expires in 915 seconds [ethernet]
? (10.0.0.17) at 00:0c:29:47:74:30 on em1 permanent [ethernet]
? (10.0.0.15) at 00:0c:29:47:74:30 on em1 permanent [ethernet]
? (10.0.0.11) at 00:0c:29:47:74:30 on em1 permanent [ethernet]
? (10.0.0.2) at 00:1f:a0:10:28:70 on em1 expires in 1162 seconds [ethernet]
? (10.0.0.1) at 00:1f:a0:10:28:70 on em1 expires in 943 seconds [ethernet]

tcpdump shows the DNS requests heading to the *old* router's MAC address 
despite the contents of the ARP table:

[spolyack@web01 ~]$ sudo tcpdump -i em0 -s 256 -vvv -e -n -ttt 'port 53'
tcpdump: listening on em0, link-type EN10MB (Ethernet), capture size 256 
bytes
00:00:00.00 00:0c:29:47:74:26 > 54:75:d0:a3:7c:8c, ethertype IPv4 
(0x0800), length 86: (tos 0x0, ttl 64, id 55590, offset 0, flags [none], 
proto UDP (17), length 72)
10.0.1.130.52419 > 10.0.2.80.53: [bad udp cksum fdc5!] 52051+ A? 
db-testing-lab. (44)


[spolyack@web01 ~]$ arp -an | grep 54:75:d0
[spolyack@web01 ~]$

Using telnet to make a TCP connection to port 53 on the same DNS server 
that the above UDP requests are directed to works just fine:

[spolyack@web01 ~]$ telnet 10.0.2.80 53
Trying 10.0.2.80...
Connected to 10.0.2.80.
Escape character is '^]'.
00:03:43.272134 00:0c:29:47:74:26 > 00:a0:c9:00:01:01, ethertype IPv4 
(0x0800), length 74: (tos 0x10, ttl 64, id 24383, offset 0, flags [DF], 
proto TCP (6), length 60)
10.0.2.130.20130 > 10.0.2.80.53: Flags [S], cksum 0x0353 (incorrect 
-> 0xf74d), seq 2674341615, win 65535, options [mss 1460,nop,wscale 
3,sackOK,TS val 60433610 ecr 0], length 0

...

Even after the above, standard UDP DNS requests are still headed to the 
old default gateway's MAC address.  Capturing ARP requests/responses with 
tcpdump doesn't show any trace of the old MAC address, and the switch 
which connects the server and router has no entries referencing the old 
router's MAC address either.  I'm assuming this isn't related to the new 
flowtable feature included in 8.x, since it appears to operate on a 
4-tuple of src addr, src port, dst addr, and dst port, which isn't going 
to match for new DNS requests.


If anyone has any ideas or suggestions on what additional data I can 
grab, I'd be happy to investigate further, but I've run out of ideas here.
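
In case it helps, this is roughly the capture I've been using to confirm the 
behavior (the old router's MAC, 54:75:d0:a3:7c:8c, is taken from the tcpdump 
output above):

# Watch for any frame still addressed to the old router's MAC:
sudo tcpdump -i em0 -e -n 'ether dst 54:75:d0:a3:7c:8c'
# ...while verifying the kernel's ARP entry for the gateway is already correct:
arp -an | grep '10.0.1.254'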



Re: MAC address / per-proto ARP caching in 8.1-RELEASE

2011-03-15 Thread Steve Polyack

On 03/15/11 14:26, Jeremy Chadwick wrote:

On Tue, Mar 15, 2011 at 09:30:39AM -0400, Steve Polyack wrote:

Is anyone aware of some sort of facility in either FreeBSD
8.1-RELEASE or the em(4) driver which would cause it to cache MAC
addresses / ARP entries for hosts on a per-protocol basis?

[snipping remaining details; readers can read it here instead:]
[http://lists.freebsd.org/pipermail/freebsd-stable/2011-March/061908.html]

The only thing I can think of would be flowtable, but I'm not sure
if it's enabled by default on 8.1-RELEASE-p2.  You can try the following
sysctl to disable it (I would recommend setting this in sysctl.conf and
rebooting; I don't know what happens in the case you set it on a live
system that's already experiencing the MAC issue you describe).

net.inet.flowtable.enable=0

Details:

http://conferences.sigcomm.org/sigcomm/2009/workshops/presto/papers/p37.pdf

It looks like it is enabled by default on 8.1-RELEASE.  Here's the 
net.inet.flowtable tree from the box in question:

[spolyack@web00 ~]$ sysctl net.inet.flowtable
net.inet.flowtable.stats:
table name: ipv4
collisions: 1
allocated: 0
misses: 20013
max_depth: 0
free_checks: 377953
frees: 19993
hits: 69519580
lookups: 69539593

net.inet.flowtable.nmbflows: 50176
net.inet.flowtable.tcp_expire: 86400
net.inet.flowtable.fin_wait_expire: 600
net.inet.flowtable.udp_expire: 300
net.inet.flowtable.syn_expire: 300
net.inet.flowtable.enable: 1
net.inet.flowtable.debug: 0

I'm planning on setting net.inet.flowtable.debug=1 next time we see this 
behavior, but from looking at the code, it looks like we might have to 
rebuild with -DFLOWTABLE_DEBUG to get really useful information.  It's 
too bad that there is not yet a method to dump the contents of the 
flowtable.
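
Until then, a crude way to watch the table churn while reproducing the problem 
is just polling the counters shown above:

# Poll the flowtable counters every 5 seconds; steadily climbing hits while
# frames still leave for a stale MAC would fit the caching theory:
while :; do sysctl net.inet.flowtable.stats; sleep 5; done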


Thanks for the suggestion.


Re: MAC address / per-proto ARP caching in 8.1-RELEASE

2011-03-16 Thread Steve Polyack

On 03/15/11 14:26, Jeremy Chadwick wrote:

On Tue, Mar 15, 2011 at 09:30:39AM -0400, Steve Polyack wrote:

Is anyone aware of some sort of facility in either FreeBSD
8.1-RELEASE or the em(4) driver which would cause it to cache MAC
addresses / ARP entries for hosts on a per-protocol basis?

[snipping remaining details; readers can read it here instead:]
[http://lists.freebsd.org/pipermail/freebsd-stable/2011-March/061908.html]

The only thing I can think of would be flowtable, but I'm not sure
if it's enabled by default on 8.1-RELEASE-p2.  You can try the following
sysctl to disable it (I would recommend setting this in sysctl.conf and
rebooting; I don't know what happens in the case you set it on a live
system that's already experiencing the MAC issue you describe).

net.inet.flowtable.enable=0

Details:

http://conferences.sigcomm.org/sigcomm/2009/workshops/presto/papers/p37.pdf

I gave this a shot again this morning.  It's definitely related to the 
flowtable:


[spolyack@web01 ~]$ time host web00.lab00 ; sudo sysctl net.inet.flowtable.enable=0 ; time host web00.lab00

;; connection timed out; no servers could be reached

real    0m10.017s
user    0m0.000s
sys     0m0.008s

net.inet.flowtable.enable: 1 -> 0

web00.lab00 has address 10.0.1.129

real    0m0.069s
user    0m0.000s
sys     0m0.003s

I'm still curious as to why this only breaks new outgoing UDP traffic; 
new TCP connections aren't affected in the same way at all.  
There also do not seem to be any relevant changes to the flowtable code 
between 8.1-RELEASE and 8.2-RELEASE or 8-STABLE.
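
For anyone who wants to reproduce the split, a quick check along these lines 
(with the flowtable re-enabled; the resolver address is the one from my 
earlier capture, and dig's +notcp/+tcp just select the transport):

# UDP query - expected to hang while the flowtable holds a stale MAC:
dig @10.0.2.80 web00.lab00 +notcp
# Same query over TCP - expected to succeed:
dig @10.0.2.80 web00.lab00 +tcp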





Re: Network stack unstable after arp flapping

2011-04-01 Thread Steve Polyack

On 04/01/11 10:16, Frederique Rijsdijk wrote:

Hi,

We (a hosting provider) are in the process of implementing IPv6 in our network 
(yay). Yesterday one of the final steps in configuring and updating our core 
routers was taken, which did not go entirely as planned. As a result, the 
default gateway MAC addresses for all our machines changed about 800 times in a 
span of about 4 minutes.

Here's a small piece of the logging:

Mar 31 18:36:12 srv01 kernel: arp: x.x.x.1 moved from 00:00:0c:9f:f0:3d to 
00:00:0c:07:ac:3d on bge0
Mar 31 18:36:12 srv01 kernel: arp: x.x.x.1 moved from 00:00:0c:07:ac:3d to 
00:00:0c:9f:f0:3d on bge0
Mar 31 18:36:13 srv01 kernel: arp: x.x.x.1 moved from 00:00:0c:9f:f0:3d to 
00:00:0c:07:ac:3d on bge0
Mar 31 18:36:14 srv01 kernel: arp: x.x.x.1 moved from 00:00:0c:07:ac:3d to 
00:00:0c:9f:f0:3d on bge0
Mar 31 18:36:14 srv01 kernel: arp: x.x.x.1 moved from 00:00:0c:9f:f0:3d to 
00:00:0c:07:ac:3d on bge0
Mar 31 18:36:14 srv01 kernel: arp: x.x.x.1 moved from 00:00:0c:07:ac:3d to 
00:00:0c:9f:f0:3d on bge0
Mar 31 18:36:15 srv01 kernel: arp: x.x.x.1 moved from 00:00:0c:9f:f0:3d to 
00:00:0c:07:ac:3d on bge0

The x.x.x.1 is always the same IP, the gateway of the machine.

The result of that is that loads of FreeBSD machines (6.x, 7.x and 8.x) 
developed serious network issues, mainly no or slow traffic to other 
(FreeBSD) machines across different VLANs in our own network.

The first thing that comes to mind is the network itself, but all Linux machines 
(Ubuntu, Red Hat and CentOS) had no issues at all. Only BSD.

An 'arp -ad' on both machines where problems occurred didn't solve anything. What 
worked better was an /etc/rc.d/netif restart and an /etc/rc.d/routing restart. Some 
machines even had to be rebooted in order to get networking back to normal.

This almost sounds like a bug in the BSD network stack, but I cannot 
imagine that I'm right. The BSD networking stack is considered to be one of the 
best...

Any ideas anyone?

We experienced a similar issue here, but IIRC only on our 8.x systems 
(we don't have any 7.x).  Disabling the flowtable cleared everything up 
immediately.  You can try that and see if it helps.  It seems that the 
flowtable caches and associates the next-hop router MAC address with 
each flow, and unfortunately this association doesn't get purged when the 
kernel senses and logs an ARP change.  The only other solution I've seen 
was to stop all network traffic on the machine until the flows/cache 
entries expired.
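
If you want to try it, something like this should do; the sysctl name is the 
8.x one, and the sysctl.conf entry makes it stick across reboots:

# Disable the flowtable immediately:
sysctl net.inet.flowtable.enable=0
# ...and persistently:
echo 'net.inet.flowtable.enable=0' >> /etc/sysctl.conf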


http://www.freebsd.org/cgi/query-pr.cgi?pr=155604 has more details of my 
run-in with this.  The title should be corrected, though, as I found 
shortly afterward that all traffic is affected.


- Steve


Re: Network stack unstable after arp flapping

2011-04-04 Thread Steve Polyack

On 4/3/2011 9:50 AM, Frederique Rijsdijk wrote:

Steve,

On 01-04-11 16:50, Steve Polyack wrote:

On 04/01/11 10:16, Frederique Rijsdijk wrote:

[ .. ]

Mar 31 18:36:12 srv01 kernel: arp: x.x.x.1 moved from
00:00:0c:9f:f0:3d to 00:00:0c:07:ac:3d on bge0

[ .. ]

The result of that, is that loads of FreeBSD machines (6.x, 7.x and
8.x) developed serious network issues, mainly being no or slow traffic

[ .. ]

Any ideas anyone?

We experienced a similar issue here, but IIRC only on our 8.x systems
(we don't have any 7.x).  Disabling flowtable cleared everything up
immediately.  You can try that and see if it helps.

AFAIK this feature was introduced in 8.x? By the way, you are referring to 
UDP here; we had issues with TCP. It could still be related. Perhaps I'll get 
around to emulating the situation and see if I can reproduce it.


Sorry, I tried to clarify that in the last piece of my post - the title 
on the PR is misleading and should be corrected.  We found out quickly 
after submitting it that all types of traffic (UDP, TCP, ICMP, etc.) are 
independently affected.



Re: Production use of carp?

2011-06-02 Thread Steve Polyack

On 6/2/2011 8:14 PM, John De Boskey wrote:

- Patrick Lamaiziere's Original Message -

On Thu, 2 Jun 2011 16:39:40 -0400, John wrote:


Instead of running carp on the external interfaces as below:

ifconfig_cxgb0="inet 10.24.99.11 netmask 255.255.0.0"  # System 1 physical IP
ifconfig_cxgb0="inet 10.24.99.12 netmask 255.255.0.0"  # System 2 physical IP
ifconfig_carp1="vhid 1 pass zfscarp1 advbase 1 advskew 100 10.24.99.13 netmask 255.255.0.0"  # HA IP used by clients

... we instead connect a direct cross-over cable between the two systems 
providing HA/failover and use a private (backside) network:

I've missed this...

As the purpose of carp is to provide a shared IP on a network, I don't
see why you are trying to use it on a cross-over network between
only two machines. It seems useless to me.

Regards.

I have separate scripts which monitor the external interfaces on
the two systems. If, for instance, one of the public IP addresses,
10.24.99.11, were to go down, the monitor script issues an 'ifconfig
carp1 down', causing the service to shift over to the partner system.
These are actually hooked up with devd. These scripts also shift
the virtual IP from one system to the other (as an ifconfig alias).

I'm trying to avoid the controlling interface being on the external
interfaces. As I said previously, exposing the vhid values to the
external net (company internal net) means we have to be careful
not to allow the same vhid twice, and to avoid the problem of the
switch going down and both systems thinking they should be the
master.

Just to clarify, the reason reusing the same VHID on the same network is 
bad news is that the VHID is used to generate the virtual MAC address 
with which the master system responds to ARP requests.  Unfortunately, 
the password doesn't come into play here.
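
To illustrate (reusing the vhid, password, and address from your example; the 
MAC derivation is the standard VRRP scheme that CARP borrows):

# CARP answers ARP from the virtual MAC 00:00:5e:00:01:<vhid>, so vhid 1
# becomes 00:00:5e:00:01:01. Two carp interfaces with the same vhid in one
# broadcast domain will fight over that MAC regardless of their passwords:
ifconfig carp1 create
ifconfig carp1 vhid 1 pass zfscarp1 10.24.99.13/16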


Why worry about the switch going down?  Shouldn't that eliminate any 
traffic which could cause changes to the exported filesystems?  And what 
if somebody knocks out your crossover cable and both systems think they 
are master AND are accessible (in some sense, since the MAC is going to 
flap between switchports)?

In general, I attach exported filesystem services to different
carp interfaces. I load balance them between the two HA servers.
For instance, /vol/data1 is on carp1, /vol/data2 is on carp2.
Under normal circumstances, /vol/data1 is "owned" by system A,
and /vol/data2 is owned by system B.  Issuing 'ifconfig carp1 down'
on system A causes the export of /vol/data1 to shift over to
system B, at which point maintenance can be done on system A.
The only problem is that taking down system A causes the carp
interfaces on system B to go down/up a few times.

Does that help a bit?

I think your problem lies with the crossover cable.  When the crossover 
interface goes down (as in link state), the CARP interface on the system 
which is still online is going to flap.  Even if you do not specify 
'carpdev cxgb0', the system will locate the proper physical interface to 
associate your CARP interface with.  If this is really what you want, 
you may have to hack the kernel to have CARP ignore link-state changes 
on the associated physical interface.


I would suggest just using CARP on the interfaces which connect to the 
switches.  Just as each set of two storage units will likely share a 
virtual IP which is unique to them, they should also share a unique 
VHID.  Yes, you only have 1-255, but since the VHID only controls the 
virtual MAC, reusing a VHID in a different VLAN should not matter 
(provided you are using a different password or the VLANs are also in 
different multicast domains).  Perhaps you can create a VLAN local to 
the switch just for this purpose?  Set up a vlan(4) interface on each 
system and attach the CARP device to it.
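
Roughly, in rc.conf terms (a sketch only: the VLAN tag and addressing are 
invented, and the CARP address has to sit in the VLAN's subnet so it binds to 
vlan999 rather than the public interface):

cloned_interfaces="vlan999 carp1"
ifconfig_vlan999="inet 192.168.99.1 netmask 255.255.255.0 vlan 999 vlandev cxgb0"
ifconfig_carp1="vhid 1 pass zfscarp1 advbase 1 advskew 100 192.168.99.13 netmask 255.255.255.0"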


Overall, I think you're trying to use CARP to accomplish something other 
than what it was designed for.  Why not just run a ping across the 
crossover link every second and use that to trigger your switchovers?  
You could get away with a pretty low timeout.  Add in some kind of way to 
force a maintenance mode on a box, and I think you'd have a solution that 
does what you want without the pitfalls you are seeing.
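
A minimal sketch of that monitor (the peer address is invented as the far end 
of the crossover; a real version would want flap damping and the 
maintenance-mode override):

#!/bin/sh
# Ping the far end of the crossover once a second; on failure, drop our
# advskew so we outbid the (presumably dead) master for the CARP address.
PEER=192.168.99.2
while sleep 1; do
    if ! ping -c 1 -t 1 "$PEER" > /dev/null 2>&1; then
        logger "heartbeat: $PEER unreachable, forcing CARP takeover"
        ifconfig carp1 advskew 0
    fi
done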


- Steve



Question about NIC link state initialization

2011-06-29 Thread Steve Polyack
I have a handful of systems running FreeBSD 8.1-RELEASE.  An occasional 
fat-finger in /etc/fstab may cause one to end up in single-user mode 
from time to time.  This would normally not be a problem, but some of 
these systems have a LOM (lights-out management) controller which shares 
the system's on-board NICs.  This works great 99% of the time, but when 
the system drops out of init(8) and into single-user mode, the links on 
the interfaces never come up, and therefore the LOM becomes 
inaccessible.  Cue remote hands at the facility to help us remedy the 
problem.


I've been playing around with this configuration on a local system, and 
I've noticed that once at a single-user shell, all one has to do is run 
ifconfig to cause the NICs' links to come up.  You don't even have to 
specify an interface, nor do you have to specify "up".  As soon as I 
hit enter, ifconfig prints the typical interface summary - intermingled 
with this output are the bold kernel log messages "bce0: link state 
changed to UP" and "bce1: link state changed to UP".


So, my question is - why do we have to run ifconfig(8) to bring the 
links up on the attached interfaces?  Shouldn't they come up after the 
driver discovers and initializes the devices?  Keep in mind that I don't 
even have to pass any arguments (such as "up") to ifconfig.  
Furthermore, the behavior is exactly the same for bce(4) and em(4).


Short of patching init(8) (or perhaps the NIC drivers?), I don't see 
another way for me to ensure the links come up even when the system 
drops into single-user mode on boot.


- Steve


Re: Question about NIC link state initialization

2011-06-30 Thread Steve Polyack

On 6/30/2011 1:10 AM, per...@pluto.rain.com wrote:

Steve Polyack  wrote:


... An occasional fat-finger in /etc/fstab may cause one to
end up in single-user mode ... some of these systems have a LOM
(lights-out management) controller which shares the system's
on-board NICs ... when the system drops out of init(8) and into
single-user mode, the links on the interfaces never come up,
and therefore the LOM becomes inaccessible.

... all one has to do is run ifconfig to cause the NIC's links to
come up ... why do we have to run ifconfig(8) to bring the links
up on the attached interfaces?

When trying to troubleshoot a problem that was known or suspected to
involve the network or its hardware, one might not _want_ the NICs
alive.


Short of patching init(8) (or perhaps the NIC drivers?), I don't
see another way for me to ensure the links come up even when the
system drops into single-user mode on boot.

Something in /root/.profile, perhaps?  That should get run when the
single-user shell starts up, if it's started as a "login" shell.


This won't work.  When the system kicks you into single-user mode, you 
are prompted to enter the name of a shell or press enter for /bin/sh.  
If no one is there to press enter or enter the path to an alternate 
shell, then a shell never starts.




Re: Question about NIC link state initialization

2011-06-30 Thread Steve Polyack

On 6/30/2011 6:49 AM, Daniel Feenberg wrote:



On Wed, 29 Jun 2011, per...@pluto.rain.com wrote:


Steve Polyack  wrote:


... An occasional fat-finger in /etc/fstab may cause one to
end up in single-user mode ... some of these systems have a LOM
(lights-out management) controller which shares the system's
on-board NICs ... when the system drops out of init(8) and into
single-user mode, the links on the interfaces never come up,
and therefore the LOM becomes inaccessible.

... all one has to do is run ifconfig to cause the NIC's links to
come up ... why do we have to run ifconfig(8) to bring the links
up on the attached interfaces?


When trying to troubleshoot a problem that was known or suspected to
involve the network or its hardware, one might not _want_ the NICs alive.


Well, maybe, but if the system needs to boot into multi-user mode for 
the LOM to be available, what is the need for the LOM? At that point 
you can do everything you might need through the OS interface. Can I 
ask what is the brand of this so-called LOM? Is there any 
documentation implying something more useful? Do they describe doing a 
bare-metal install of an OS?


They are the Dell Remote Access Controllers (DRACs).  Now, they do have 
their own dedicated NIC, which we use for anything that really needs the 
attention.  However, the shared feature saves us a switchport per server 
we use it on.  When both on-board NICs are cabled (i.e., for lagg(4) 
failover), the DRAC's shared-NIC mode *also* supports automatic 
failover between both on-board NICs.  This doesn't help, however, if the 
operating system never turns on the links to either on-board NIC.


I was able to "fix" the single-user mode behavior (which I agree, isn't 
necessarily broken) and get it to bring up the links by simply patching 
init(8) to call system("/sbin/ifconfig") before prompting for the 
single-user shell.  It works, but I feel dirty.


- Steve


Re: kern/155604: [flowtable] Flowtable excessively caches dest MAC addresses for outgoing traffic

2011-10-17 Thread Steve Polyack
The following reply was made to PR kern/155604; it has been noted by GNATS.

From: Steve Polyack 
To: Gleb Smirnoff 
Cc: bug-follo...@freebsd.org
Subject: Re: kern/155604: [flowtable] Flowtable excessively caches dest MAC
 addresses for outgoing traffic
Date: Mon, 17 Oct 2011 12:04:35 -0400

 On 10/17/2011 11:59 AM, Gleb Smirnoff wrote:
 >Steve,
 >
 >looks like this bug is fixed in 8.2 and later versions of FreeBSD.
 > Can I close the PR?
 >
 IMHO, the fix is merely a workaround (disabling FLOWTABLE by default in 
 the kernel configuration).  If a user turns flowtable on, they will 
 still encounter the same problem as I described.
 
 We don't have a reason to turn the flowtable feature back on, so it 
 doesn't really affect us either way.  I'll leave it up to you as to 
 whether you want to close it or have someone take a deeper look.
 
 Thanks for checking,
 
 -- 
 http://www.intermedix.com
 Steve Polyack, System Engineer
 T: 412-422-3463 x4026
 spoly...@collaborativefusion.com
 


Re: Kernel (7.3) crash due to mbuf leak?

2010-07-30 Thread Steve Polyack

On 07/30/10 14:10, David DeSimone wrote:

After upgrading a couple of our systems from 7.2-RELEASE to 7.3-RELEASE,
we have started to see them running out of mbufs and crashing every
month or so.  The panic string is:

...


The services on these systems are extremely simple:

 SSH (though nobody logs in)
 sendmail
 qmail
 ntpd (client only)
 named (BIND)


Do these systems consume or offer NFS?

-Steve



Re: MPD5 + DUMMYNET + PF HIGH CPU USAGE

2010-09-08 Thread Steve Polyack

 On 09/08/10 13:38, Marcos Vinícius Buzo wrote:

Hi all.

I just started working at a small WISP, in place of a friend who
unfortunately is no longer with us :(
_ We're running FreeBSD 8.1 64-bit with MPD5 for PPPoE, IPFW+Dummynet for
traffic shaping, and PF for NAT and firewall.
_ Our hardware is a Dell PowerEdge R210, X3430 Intel Xeon, 4GB 1066MHz, and a
two-port Broadcom NetXtreme II BCM5716.
_ Our WAN link is 60mbps down/up.

When we have 450+ PPPoE connections and link usage is about 30mbps, things
get strange: CPU usage goes to 80%+ (I'm using cacti+snmp to see this), we
see high-latency pings (sometimes 300ms+), and sometimes mpd5 stops serving
altogether.

I set up another server to work alongside it, which solves the problem for
now. On this server I disabled the flowtable (sysctl
net.inet.flowtable.enable=0), because on the old server, when I run top
-ISH, I see the following:

  22 root     44  -     0K    16K CPU2   2 236:19 100.00% flowcleaner

Is this a bug?

Are the following customizations right?

Here are the custom kernel flags:
...
kern.maxvnodes=100000000
...


100 million vnodes sounds like a lot for a system that is not doing I/O 
on lots of files.  I guess the worst it's going to do is suck up 
some extra memory.  I can't speak much for the flowtable, but with 450+ 
clients, you are surely hitting the limits of the default number of 
entries there:


$ sysctl net.inet.ip.output_flowtable_size
net.inet.ip.output_flowtable_size: 32768
$ sysctl -d net.inet.ip.output_flowtable_size
net.inet.ip.output_flowtable_size: number of entries in the per-cpu 
output flow caches


With 4 CPUs, that tracks a maximum of 128k flows.  With 450+ clients 
behind it, I could see you exceeding that rapidly.  You may want to 
try doubling (or tripling) this value via loader.conf on the main system 
and seeing if that helps (the flowcleaner may not have to work 
constantly if you are not always close to the maximum number of 
flows).  I'm not sure of the specifics of the flow table, so someone 
else could probably chime in with more information on it (I can't 
find any real documentation on the feature).  With such a high number of 
flows, you may just be better off turning it off anyway.
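
If you try it, the knob has to go in loader.conf since the table is sized at 
boot; a doubled value as a sketch:

# /boot/loader.conf - double the per-CPU output flow cache (example value):
net.inet.ip.output_flowtable_size=65536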

