John,

I ran a test using iperf from an external OpenBSD system (client), through a CARP
firewall pair, to an internal OpenBSD system (server). All systems are running
OpenBSD 4.2 with the latest patches.

      external               ---> CARP --->  internal
(iperf -i 1 -t 600 -c carp0)                (iperf -s)

I did _not_ see any slowdown through the MASTER when I rebooted the BACKUP
firewall. For example, I started the reboot of the BACKUP at 5 seconds and
the BACKUP finished rebooting at 102 seconds:

[  3]  1.0- 2.0 sec  81.2 MBytes    681 Mbits/sec
[  3]  2.0- 3.0 sec  82.3 MBytes    690 Mbits/sec
[  3]  3.0- 4.0 sec  83.8 MBytes    703 Mbits/sec
[  3]  4.0- 5.0 sec  86.6 MBytes    727 Mbits/sec -- start reboot
[  3]  5.0- 6.0 sec  86.8 MBytes    728 Mbits/sec
[  3]  6.0- 7.0 sec  86.3 MBytes    724 Mbits/sec
[  3]  7.0- 8.0 sec  82.8 MBytes    695 Mbits/sec
[  3]  8.0- 9.0 sec  86.7 MBytes    728 Mbits/sec
[  3]  9.0-10.0 sec  85.8 MBytes    720 Mbits/sec
[  3] 10.0-11.0 sec  86.1 MBytes    722 Mbits/sec

....cut....

[  3] 96.0-97.0 sec  83.4 MBytes    699 Mbits/sec
[  3] 97.0-98.0 sec  82.4 MBytes    692 Mbits/sec
[  3] 98.0-99.0 sec  81.9 MBytes    687 Mbits/sec
[  3] 99.0-100.0 sec  84.7 MBytes    710 Mbits/sec
[  3] 100.0-101.0 sec  83.3 MBytes    699 Mbits/sec
[  3] 101.0-102.0 sec  83.7 MBytes    702 Mbits/sec -- finished reboot
[  3] 102.0-103.0 sec  83.3 MBytes    699 Mbits/sec
[  3] 103.0-104.0 sec  83.6 MBytes    701 Mbits/sec
[  3] 104.0-105.0 sec  85.3 MBytes    716 Mbits/sec
[  3] 105.0-106.0 sec  83.4 MBytes    699 Mbits/sec

I also did not see any errors in the logs of either system running iperf,
or on the firewalls. The load on the MASTER firewall was around 0.30.
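If it helps to compare runs at a glance, the per-second rates from the iperf client output above can be summarized with a quick awk filter (a minimal sketch; `iperf.log` is a hypothetical capture of the standard `iperf -i 1` client output):

```shell
# Average the per-interval Mbits/sec column from an iperf -i 1 log,
# so dips like the ones John reported stand out against the mean.
awk '/Mbits\/sec/ { sum += $(NF - 1); n++ }
     END { if (n) printf "avg %.1f Mbits/sec over %d intervals\n", sum / n, n }' iperf.log
```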

Are the firewalls' kernels patched? Are there any hardware failures to
report? Are the firewalls overloaded?
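To answer those questions on your boxes, something like the following checklist (run on each firewall; the command set assumes a stock OpenBSD 4.2 install) would show the kernel version, load, interface errors, and pf counters:

```shell
# Rough diagnostic checklist -- run each command by hand on both firewalls:
sysctl kern.version     # confirm which kernel and patch level is running
uptime                  # load averages -- are the firewalls overloaded?
netstat -in             # Ierrs/Oerrs columns hint at NIC or cabling trouble
pfctl -si               # pf state-table counters and error counts
ifconfig carp0          # CARP MASTER/BACKUP status and advskew
```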

You are welcome to check out some of the how-tos I have at
http://calomel.org if you need to.
 
--
  Calomel @ http://calomel.org
  Open Source Research and Reference


On Thu, Apr 10, 2008 at 12:35:17PM +0100, openbsd firewall wrote:
>Hello,
>
>I'm testing an OpenBSD 4.2 firewall with iperf and I'm seeing very
>strange behaviour: when I reboot the backup node, the connection rate drops
>while the backup node is coming back up.
>Iperf log:
>[  3] 233.0-234.0 sec  6.62 MBytes  55.5 Mbits/sec
>[  3] 234.0-235.0 sec  6.62 MBytes  55.5 Mbits/sec
>[  3] 235.0-236.0 sec  6.62 MBytes  55.5 Mbits/sec
>[  3] 236.0-237.0 sec  6.70 MBytes  56.2 Mbits/sec
>[  3] 237.0-238.0 sec    288 KBytes  2.36 Mbits/sec
>[  3] 238.0-239.0 sec  3.40 MBytes  28.5 Mbits/sec
>[  3] 239.0-240.0 sec  0.00 Bytes  0.00 bits/sec
>[  3] 240.0-241.0 sec  3.55 MBytes  29.8 Mbits/sec
>[  3] 241.0-242.0 sec  0.00 Bytes  0.00 bits/sec
>[  3] 242.0-243.0 sec  3.49 MBytes  29.3 Mbits/sec
>[  3] 243.0-244.0 sec  0.00 Bytes  0.00 bits/sec
>[  3] 244.0-245.0 sec  3.49 MBytes  29.3 Mbits/sec
>[  3] 245.0-246.0 sec  2.30 MBytes  19.3 Mbits/sec
>[  3] 246.0-247.0 sec  5.23 MBytes  43.9 Mbits/sec
>[  3] 247.0-248.0 sec  2.60 MBytes  21.8 Mbits/sec
>[  3] 248.0-249.0 sec  5.37 MBytes  45.0 Mbits/sec
>[  3] 249.0-250.0 sec  1.28 MBytes  10.7 Mbits/sec
>[  3] 250.0-251.0 sec  4.69 MBytes  39.3 Mbits/sec
>[  3] 251.0-252.0 sec  4.69 MBytes  39.3 Mbits/sec
>[  3] 252.0-253.0 sec  6.62 MBytes  55.5 Mbits/sec
>[  3] 253.0-254.0 sec  6.62 MBytes  55.5 Mbits/sec
>[  3] 254.0-255.0 sec  6.62 MBytes  55.5 Mbits/sec
>
>That drop in the connection rate happens exactly when the rebooted node is
>coming back up! Iperf is being run between one machine behind one firewall
>interface and another machine behind another firewall interface. One machine
>is running OpenBSD and the other Linux.
>Is there any reason for this behaviour? I do not expect the backup node to
>have any influence over the flow through the active node.
>
>Related to this is a problem with pfsync. Sometimes I get a bad state after
>the backup firewall comes back, and then iperf gets totally messed up,
>sometimes recovering and sometimes not. It makes no difference whether
>pfsync is configured with multicast or with syncpeer.
>Log from the active node:
>Apr 10 06:57:03 inferno /bsd: pfsync: received bulk update request
>Apr 10 06:57:04 inferno /bsd: pfsync: bulk update complete
>Apr 10 06:57:04 inferno pflogd[23092]: invalid size 484 (116/116), packet
>dropped
>Apr 10 06:57:11 inferno pflogd[23092]: invalid size 144 (116/116), packet
>dropped
>Apr 10 06:57:16 inferno last message repeated 3 times
>Apr 10 06:57:31 inferno pflogd[23092]: invalid size 484 (116/116), packet
>dropped
>Apr 10 06:57:31 inferno /bsd: pf: BAD state: TCP xx.xx.xx.4:5001
>xx.xx.xx.4:5001 xx.xx.xx.5:43558 [lo=2191798936 high=2191798936 win=5840
>modulator=0] [lo=911995449 high=912001289 win=65535 modulator=0] 4:4 A
>seq=2191798936 (2191798936) ack=911995449 len=1460 ackskew=0
>pkts=1267241:671313 dir=in,fwd
>Apr 10 06:57:31 inferno /bsd: pf: State failure on: 1
>Apr 10 06:57:31 inferno /bsd: pf: BAD state: TCP xx.xx.xx.4:5001
>xx.xx.xx.4:5001 xx.xx.xx.5:43558 [lo=2191798936 high=2191798936 win=5840
>modulator=0] [lo=911995449 high=912001289 win=65535 modulator=0] 4:4 A
>seq=2191800396 (2191800396) ack=911995449 len=1460 ackskew=0
>pkts=1267241:671313 dir=in,fwd
>Apr 10 06:57:31 inferno /bsd: pf: State failure on: 1
>
>If I destroy the pfsync interface on the master node, this problem doesn't
>occur (which is what I expected to happen).
>
>Any clue what is happening here?
>
>Thanks,
>John
