2007/7/13, Adriaan <[EMAIL PROTECTED]>:
On 7/13/07, TuxR <[EMAIL PROTECTED]> wrote:
> Hello.
>
> I am trying to use OpenBSD under high load and have problems with PF.
>
> When there are very many connections to the server, at some point further
> connections just fail.
>
> I use a simple test application that creates 1000 connections to the
> server for 1000 iterations. The highest iteration count I have observed
> with pf enabled was '12', but with 'pfctl -d' the whole cycle completes
> successfully ('1000').
>
> The simple test application I use does roughly the following:
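> (Host and port below are placeholders for the real server, and the
> open-files limit must allow 1000 open sockets.)
>
> #!/usr/bin/env python
> # Open CONNS TCP connections to the server, close them all, and repeat
> # ITERS times, reporting the first iteration that fails.
> import socket
>
> HOST = "192.0.2.1"   # placeholder: address of the server behind pf
> PORT = 80            # placeholder: the service under test
> CONNS = 1000         # connections per iteration
> ITERS = 1000         # iterations
>
> for i in range(ITERS):
>     socks = []
>     try:
>         for _ in range(CONNS):
>             s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>             s.settimeout(10)
>             s.connect((HOST, PORT))
>             socks.append(s)
>     except socket.error as e:
>         print("iteration %d failed after %d connections: %s"
>               % (i, len(socks), e))
>         break
>     finally:
>         for s in socks:
>             s.close()
>     print("iteration %d: all %d connections OK" % (i, CONNS))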
>
> I have also seen the same thing when testing with 'ab' from the apache2
> distribution. With 'ab -c 100 -n 100': a maximum of 9 iterations with pf
> enabled and 100 without.
>
> With "keep state" enabled, connections are closed instantly. With
> "keep state" disabled, the behaviour is the following: at some point
> the program waits for a reply but never gets one, and the connection is
> eventually closed because of the timeout.
>
> I have seen no problems in the tcpdump output. There were also no
> blocked packets on the pflog0 interface ('block log all' rule).
>
> I am sure that the state limit is not exceeded. Right now I have
>
> set limit states 50
> set limit src-nodes 5
> set limit frags 32000
>
> And `pfctl -si` shows normal values.
>
> The 'antispoof' and 'scrub' options make no difference. 'set optimization'
> only makes it worse.
>
> I see the same behaviour in real use: when there are many
> connections, at some point they just get closed.
>
> Any help will be appreciated. Many thanks.
>
> P.S. Sorry for my bad English.
>
Study the excellent three-part series by an OpenBSD developer at
http://undeadly.org/cgi?action=article&sid=20060927091645&mode=expanded
If, after following his advice, your firewall still does not perform
adequately, come back here with a posting of:
1) dmesg output, to see what kind of hardware you are using
2) vmstat -i output, to show the interrupt rate of the NICs
   (using 'systat vmstat' will give you a 'live' view of the interrupt
   rate and other resources)
3) netstat -m output, to see the mbuf stats
4) your pf.conf
Others may have additional suggestions of course ;)
=Adriaan=
Adriaan, thank you for the reply.
I believe this is not a hardware problem. The system is not under high
CPU load during the tests.
Hmmm... Of course, I have read Daniel Hartmeier's excellent articles.
It runs on a Fujitsu Siemens RX200 1U server, 1 GB RAM, 2x Intel Xeon
3.00 GHz (but for now we are using a non-SMP kernel).
# dmesg
OpenBSD 4.1 (GENERIC) #1435: Sat Mar 10 19:07:45 MST 2007
[EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Xeon(TM) CPU 3.00GHz ("GenuineIntel" 686-class) 3 GHz
cpu0:
FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,
SSE3,MWAIT,DS-CPL,VMX,EST,CNXT-ID,CX16,xTPR
real mem = 1072652288 (1047512K)
avail mem = 971362304 (948596K)
using 4278 buffers containing 53764096 bytes (52504K) of memory
mainbus0 (root)
bios0 at mainbus0: AT/286+ BIOS, date 10/20/06, BIOS32 rev. 0 @
0xfd66a, SMBIOS rev. 2.34 @ 0x3fee8000 (67 entries)
bios0: FUJITSU SIEMENS PRIMERGY RX200 S3
pcibios0 at bios0: rev 2.1 @ 0xfd590/0xa70
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfde30/432 (25 entries)
pcibios0: PCI Interrupt Router at 000:31:0 ("Intel 82371FB ISA" rev 0x00)
pcibios0: PCI bus #12 is the last bus
bios0: ROM list: 0xc/0x8000 0xc8000/0x5800 0xe2800/0x1400!
acpi at mainbus0 not configured
ipmi0 at mainbus0: version 1.5 interface KCS iobase 0xca2/2 spacing 1
cpu0 at mainbus0
cpu0: Enhanced SpeedStep disabled by BIOS
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 vendor "Intel", unknown product 0x25d8 rev 0x92
ppb0 at pci0 dev 2 function 0 "Intel 5000 PCIE" rev 0x92
pci1 at ppb0 bus 1
ppb1 at pci1 dev 0 function 0 "Intel 6321ESB PCIE" rev 0x01
pci2 at ppb1 bus 2
ppb2 at pci2 dev 0 function 0 "Intel 6321ESB PCIE" rev 0x01
pci3 at ppb2 bus 3
ppb3 at pci2 dev 1 function 0 "Intel 6321ESB PCIE" rev 0x01
pci4 at ppb3 bus 4
ppb4 at pci1 dev 0 function 3 "Intel 6321ESB PCIE-PCIX" rev 0x01
pci5 at ppb4 bus 5
mpi0 at pci5 dev 5 function 0 "Symbios Logic SAS1068" rev 0x01: irq 11
scsibus0 at mpi0: 63 targets
sd0 at scsibus0 targ 0 lun 0: SCSI3 0/direct fixed
sd0: 238471MB, 238472 cyl, 16 head, 127 sec, 512 bytes/sec, 488390625 sec total
sd1 at scsibus0 targ 1 lun 0: SCSI3 0/direct fixed
sd1: 238471MB, 238472 cyl, 16 head, 127 sec, 512 bytes/sec, 488390625 sec total
ppb5 at pci0 dev 3 function 0 "Intel 5000 PCIE" rev 0x92
pci6 at ppb5 bus 6
ppb6 at pci0 dev 4 function 0 "Intel 5000 PCIE"