Re: FreeBSD 7.1 taskq em performance

2009-04-27 Thread Ray Kinsella
Joseph,

I would recommend that you start with pmcstat(8) and figure out where the
bottleneck is.
Given that you have two threads and your CPU is at 100%,
my a priori guess would be contention on a spinlock,
so I would also try LOCK_PROFILING to get a handle on this.
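A minimal sketch of that workflow, assuming hwpmc(4) supports this CPU
(the flags and sysctls shown are illustrative):

# system-wide sampling with a top-like view of where the cycles go
kldload hwpmc
pmcstat -T -S instructions

# lock profiling needs a kernel built with 'options LOCK_PROFILING'
sysctl debug.lock.prof.enable=1
# ... reproduce the load ...
sysctl debug.lock.prof.enable=0
sysctl debug.lock.prof.stats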

Regards

Ray Kinsella


On Fri, Apr 24, 2009 at 11:42 PM, Joseph Kuan  wrote:

> Hi all,
>  I have been hitting a barrier with FreeBSD 7.1 network performance. I
> have written an application which contains two kernel threads that take
> mbufs directly from one network interface and forward them to another
> network interface. The idea is to simulate different network environments.
>
>  I have been using FreeBSD 6.4 amd64 and tested with an Ixia box
> (specialised hardware firing at a very high packet rate). The PC was a
> Core2 2.6 GHz with a dual-port Intel PCIe Gigabit network card. It can
> manage up to 1.2 million pps.
>
>  I have a higher-spec PC with FreeBSD 7.1 amd64, a quad-core 2.3 GHz CPU
> and a PCIe Gigabit network card. It can only achieve up to 600k pps.
> I notice that 'taskq em0' and 'taskq em1' are at a solid 100% CPU, which
> is not the case on FreeBSD 6.4.
>
>  Any advice?
>
>  Many thanks in advance
>
>  Joe


Current problem reports assigned to freebsd-net@FreeBSD.org

2009-04-27 Thread FreeBSD bugmaster
Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.


S Tracker  Resp.  Description

o kern/133902  net[tun] Killing tun0 iface ssh tunnel causes Panic Strin
o kern/133736  net[udp] ip_id not protected ...
o kern/133613  net[wpi] [panic] kernel panic in wpi(4)
o kern/133595  net[panic] Kernel Panic at pcpu.h:195
o kern/133572  net[ppp] [hang] incoming PPTP connection hangs the system
o kern/133490  net[bpf] [panic] 'kmem_map too small' panic on Dell r900 
o kern/133328  net[bge] [panic] Kernel panics with Windows7 client
o kern/133235  net[netinet] [patch] Process SIOCDLIFADDR command incorre
o kern/133218  net[carp] [hang] use of carp(4) causes system to freeze
o kern/133204  net[msk] msk driver timeouts
o kern/133060  net[ipsec] [pfsync] [panic] Kernel panic with ipsec + pfs
o kern/132991  net[bge] if_bge low performance problem
o kern/132984  net[netgraph] swi1: net 100% cpu usage
f bin/132911   netip6fw(8): argument type of fill_icmptypes is wrong and
o kern/132889  net[ndis] [panic] NDIS kernel crash on load BCM4321 AGN d
o kern/132885  net[wlan] 802.1x broken after SVN rev 189592
o conf/132851  net[fib] [patch] allow to setup fib for service running f
o kern/132832  net[netinet] [patch] tcp_output() might generate invalid 
o bin/132798   net[patch] ggatec(8): ggated/ggatec connection slowdown p
o kern/132734  net[ifmib] [panic] panic in net/if_mib.c
o kern/132722  net[ath] Wifi ath0 associates fine with AP, but DHCP or I
o kern/132715  net[lagg] [panic] Panic when creating vlan's on lagg inte
o kern/132705  net[libwrap] [patch] libwrap - infinite loop if hosts.all
o kern/132672  net[ndis] [panic] ndis with rt2860.sys causes kernel pani
o kern/132669  net[xl] 3c905-TX send DUP! in reply on ping (sometime)
o kern/132625  net[iwn] iwn drivers don't support setting country
o kern/132554  net[ipl] There is no ippool start script/ipfilter magic t
o kern/132354  net[nat] Getting some packages to ipnat(8) causes crash
o kern/132285  net[carp] alias gives incorrect hash in dmesg
o kern/132277  net[crypto] [ipsec] poor performance using cryptodevice f
o conf/132179  net[patch] /etc/network.subr: ipv6 rtsol on incorrect wla
o kern/132107  net[carp] carp(4) advskew setting ignored when carp IP us
o kern/131781  net[ndis] ndis keeps dropping the link
o kern/131776  net[wi] driver fails to init
o kern/131753  net[altq] [panic] kernel panic in hfsc_dequeue
o bin/131567   net[socket] [patch] Update for regression/sockets/unix_cm
o kern/131549  netifconfig(8) can't clear 'monitor' mode on the wireless
o kern/131536  net[netinet] [patch] kernel does allow manipulation of su
o bin/131365   netroute(8): route add changes interpretation of network 
o kern/131162  net[ath] Atheros driver bugginess and kernel crashes
o kern/131153  net[iwi] iwi doesn't see a wireless network
f kern/131087  net[ipw] [panic] ipw / iwi - no sent/received packets; iw
f kern/130820  net[ndis] wpa_supplicant(8) returns 'no space on device'
o kern/130628  net[nfs] NFS / rpc.lockd deadlock on 7.1-R
o conf/130555  net[rc.d] [patch] No good way to set ipfilter variables a
o kern/130525  net[ndis] [panic] 64 bit ar5008 ndisgen-erated driver cau
o kern/130311  net[wlan_xauth] [panic] hostapd restart causing kernel pa
o kern/130109  net[ipfw] Can not set fib for packets originated from loc
f kern/130059  net[panic] Leaking 50k mbufs/hour
o kern/129750  net[ath] Atheros AR5006 exits on "cannot map register spa
f kern/129719  net[nfs] [panic] Panic during shutdown, tcp_ctloutput: in
o kern/129580  net[ndis] Netgear WG311v3 (ndis) causes kenel trap at boo
o kern/129517  net[ipsec] [panic] double fault / stack overflow
o kern/129508  net[carp] [panic] Kernel panic with EtherIP (may be relat
o kern/129352  net[xl] [patch] xl0 watchdog timeout
o kern/129219  net[ppp] Kernel panic when using kernel mode ppp
o kern/129197  net[panic] 7.0 IP stack related panic
o kern/129135  net[vge] vge driver on a VIA mini-ITX not working
o bin/128954   netifconfig(8) deletes valid routes
o kern/128917  net[wpi] [panic] if_wpi and wpa+tkip causing kernel panic
o kern/128884  net[msk] if_msk page fault while in kernel mode
o kern/128840  net[igb] page fault under load with i

Re: [dummynet] Several queues connected to one pipe: "dummynet: OUCH! pipe should have been idle!"

2009-04-27 Thread Maxim Ignatenko
2009/4/27 Luigi Rizzo :
>
> ok, there seems to be no change related to dummynet between these
> two versions, so I am not sure where to look.
> Could you double-check what the last working version is?
>
 Yes, r191201 has this problem too (it seems I hadn't updated for a
long time).
Now I have updated to r190864 (just before the last change to ip_dummynet.c)
and everything works fine. Should I now check r190865?

Thanks.


Re: [dummynet] Several queues connected to one pipe: "dummynet: OUCH! pipe should have been idle!"

2009-04-27 Thread Luigi Rizzo
On Mon, Apr 27, 2009 at 04:51:18PM +0300, Maxim Ignatenko wrote:
> 2009/4/27 Luigi Rizzo :
> >
> > ok, there seems to be no change related to dummynet between these
> > two versions, so I am not sure where to look.
> > Could you double-check what the last working version is?
> >
>  Yes, r191201 has this problem too (it seems I hadn't updated for a
> long time).
> Now I have updated to r190864 (just before the last change to ip_dummynet.c)
> and everything works fine. Should I now check r190865?

yes it would be great if you could identify a specific change that
caused the problem.
There is one thing particularly tricky in one of the dummynet
changes, because some fields changed between 32/64 bits and
signed/unsigned. I may have inadvertently introduced some
conversion bug.
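
The manual bisection being asked for looks roughly like this (a sketch,
assuming /usr/src is an svn checkout and a GENERIC kernel config):

# update the tree to the revision under test and rebuild the kernel
svn update -r190865 /usr/src
cd /usr/src
make buildkernel installkernel KERNCONF=GENERIC
shutdown -r now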

thanks a lot for the feedback

cheers
luigi


Re: [dummynet] Several queues connected to one pipe: "dummynet: OUCH! pipe should have been idle!"

2009-04-27 Thread Maxim Ignatenko
2009/4/27 Luigi Rizzo :
> On Mon, Apr 27, 2009 at 04:51:18PM +0300, Maxim Ignatenko wrote:
>> 2009/4/27 Luigi Rizzo :
>> >
>> > ok, there seems to be no change related to dummynet between these
>> > two versions, so I am not sure where to look.
>> > Could you double-check what the last working version is?
>> >
>>  Yes, r191201 has this problem too (it seems I hadn't updated for a
>> long time).
>> Now I have updated to r190864 (just before the last change to ip_dummynet.c)
>> and everything works fine. Should I now check r190865?
>
> yes it would be great if you could identify a specific change that
> caused the problem.
> There is one thing particularly tricky in one of the dummynet
> changes, because some fields changed between 32/64 bits and
> signed/unsigned. I may have inadvertently introduced some
> conversion bug.
>

On r190865 the problem appeared again.

> thanks a lot for the feedback
>

You're welcome :)

Thanks.


Re: IPFW MAX RULES COUNT PERFORMANCE

2009-04-27 Thread Daniel Dias Gonçalves

Julian,

Could you give an example of rules with tables?

Julian Elischer wrote:

Daniel Dias Gonçalves wrote:

Very good thinking, congratulations, but my need is different.
The objective is a captive portal where each authentication dynamically
creates a rule to ALLOW or COUNT the authenticated IP; what I'm testing
is the maximum number of rules supported, and therefore of simultaneous
users.

Understand?


I think so.


do not add rules.
have a single rule that looks in a table
and add entries to the table when needed.


Thanks,

Daniel

Julian Elischer wrote:

Daniel Dias Gonçalves wrote:

Hi,

My system is a FreeBSD 7.1R.
When I add IPFW COUNT rules for 254 IPs on my network, one of my 
interfaces shows increased latency, causing large delays in the 
network; when I delete the COUNT rules, everything returns to normal. 
What can it be?


My script:


of course adding 512 rules, *all of which have to be evaluated* will 
add latency.


you have several ways to improve this situation.

1/ use a different tool.
By using the netgraph netflow module you can get
accounting information that may be more useful and less impactful.

2/ you could make your rules smarter..

use skipto rules to make the average packet traverse fewer rules..

off the top of my head.. (not tested..)

Assuming you have machines 10.0.0.1-10.0.0.254
the rules below have an average packet traversing 19 rules and not 
256 for the SYN packet, and 2 rules for others..
you may not be able to do the keep-state trick if you use state for 
other stuff, but in that case the worst case will still be 19 rules.


2 check-state
5 skipto 1 ip from not 10.0.0.0/24 to any
10 skipto 2020 ip from not 10.0.0.0/25 to any  # 0-127
20 skipto 1030 ip from not 10.0.0.0/26 to any  # 0-63
30 skipto 240 ip from not 10.0.0.0/27  to any  # 0-31
40 skipto 100 ip from not 10.0.0.0/28  to any  # 0-15
[16 count rules for 0-15]
80 skipto 1 ip from any to any
100 [16 count rules for 16-31] keep-state
140 skipto 1 ip from any to any
240 skipto 300 ip from not 10.0.0.32/28 to any
[16 rules for 32-47] keep-state
280 skipto 1 ip from any to any
300 [16 count rules for 48-63] keep-state
340 skipto 1 ip from any to any
1030 skipto 1240 ip from not 10.0.0.64/27 to any
1040 skipto 1100 ip from not 10.0.0.64/28 to any
   [16 count rules for 64-79] keep-state
1080 skipto 1 ip from any to any
1100 [16 rules for 80-95] keep-state
1140 skipto 1 ip from any to any
1240 skipto 1300 ip from not 10.0.0.96/28 to any
[16 count rules for 96-111] keep-state
1280 skipto 1 ip from any to any
1300 [16 rules for 112-127] keep-state
1340 skipto 1 ip from any to any
2020 skipto 3030 ip from not 10.0.0.128/26 to any
2030 skipto 2240 ip from not 10.0.0.128/27 to any
2040 skipto 2100 ip from not 10.0.0.128/28 to any
[16 count rules for 128-143] keep-state
2080 skipto 1 ip from any to any
2100 [16 rules for 144-159] keep-state
2140 skipto 1 ip from any to any
2240 skipto 2300 ip from not 10.0.0.160/28 to any
[16 count rules for 160-175] keep-state
2280 skipto 1 ip from any to any
2300 [16 count rules for 176-191] keep-state
2340 skipto 1 ip from any to any
3030 skipto 3240 ip from not 10.0.0.192/27 to any
3040 skipto 3100 ip from not 10.0.0.192/28 to any
[16 count rules for 192-207] keep-state
3080 skipto 1 ip from any to any
3100 [16 rules for 208-223] keep-state
3140 skipto 1 ip from any to any
3240 skipto 3300 ip from not 10.0.0.224/28 to any
[16 count rules for 224-239] keep-state
3280 skipto 1 ip from any to any
3300 [16 count rules for 240-255] keep-state
3340 skipto 1 ip from any to any

1 #other stuff

in fact you could improve it further with:
1/ either going down to a netmask of /29 (8 rules per set)
or
2/ instead of having count rules make them skipto
so you would have:
3300 skipto 1 ip from 10.0.0.240 to any
3301 skipto 1 ip from 10.0.0.241 to any
3302 skipto 1 ip from 10.0.0.242 to any
3303 skipto 1 ip from 10.0.0.243 to any
3304 skipto 1 ip from 10.0.0.244 to any
3305 skipto 1 ip from 10.0.0.245 to any
3306 skipto 1 ip from 10.0.0.246 to any
3307 skipto 1 ip from 10.0.0.247 to any
3308 skipto 1 ip from 10.0.0.248 to any
3309 skipto 1 ip from 10.0.0.249 to any
3310 skipto 1 ip from 10.0.0.250 to any
3311 skipto 1 ip from 10.0.0.251 to any
3312 skipto 1 ip from 10.0.0.252 to any
3313 skipto 1 ip from 10.0.0.253 to any
3314 skipto 1 ip from 10.0.0.254 to any
3315 skipto 1 ip from 10.0.0.255 to any

thus on average, a packet would traverse half the rules (8).

3/ both of the above, so on average they would traverse 4 rules plus 
one extra skipto.


you should be  able to do the above in a script.
I'd love to see it..

(you can also do skipto tablearg in -current (maybe 7.2 too)
which may also be good.. (or not))
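
For reference, a sketch of the tablearg variant (hypothetical numbers; the
value stored with the table entry becomes the skipto target):

ipfw table 1 add 10.0.0.5 1000
ipfw add 100 skipto tablearg ip from 'table(1)' to any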


julian




Re: IPFW MAX RULES COUNT PERFORMANCE

2009-04-27 Thread Daniel Dias Gonçalves
What may be happening? I have polling enabled on all interfaces; could 
that have an influence?


em0:  port 0x7000-0x703f mem 
0xdfa0-0xdfa1 irq 16 at device 8.0 on pci4
em1:  port 0x7400-0x743f mem 
0xdfa2-0xdfa3 irq 17 at device 8.1 on pci4
em2:  port 0x8000-0x803f mem 
0xdfb0-0xdfb1 irq 16 at device 8.0 on pci5
em3:  port 0x8400-0x843f mem 
0xdfb2-0xdfb3 irq 17 at device 8.1 on pci5
em4:  port 0x9000-0x903f mem 
0xdfc0-0xdfc1 irq 16 at device 8.0 on pci7
em5:  port 0x9400-0x943f mem 
0xdfc2-0xdfc3 irq 17 at device 8.1 on pci7
em6:  port 0xa000-0xa03f mem 
0xdfd0-0xdfd1 irq 16 at device 8.0 on pci8
em7:  port 0xa400-0xa43f mem 
0xdfd2-0xdfd3 irq 17 at device 8.1 on pci8
fxp0:  port 0xb000-0xb03f mem 
0xdfe2-0xdfe20fff,0xdfe0-0xdfe1 irq 16 at device 4.0 on pci14


If I disable polling, no network interface works and the system begins 
to display "em4 watchdog timeout".


Ian Smith wrote:

On Fri, 24 Apr 2009, Daniel Dias Gonçalves wrote:

 > The latency on interface em6 increased from an average of 10ms to 200~300ms
 > Hardware:
 > CPU: Intel(R) Xeon(TM) CPU 3.20GHz (3200.13-MHz 686-class CPU)
 >  Logical CPUs per core: 2
 > FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
 > cpu0:  on acpi0
 > p4tcc0:  on cpu0
 > cpu1:  on acpi0
 > p4tcc1:  on cpu1
 > cpu2:  on acpi0
 > p4tcc2:  on cpu2
 > cpu3:  on acpi0
 > p4tcc3:  on cpu3
 > SMP: AP CPU #1 Launched!
 > SMP: AP CPU #3 Launched!
 > SMP: AP CPU #2 Launched!
 > 
 > real memory  = 9663676416 (9216 MB)

 > avail memory = 8396738560 (8007 MB)

In that case, there really is something else wrong.  By my measurements, 
rummaging through most of >1000 rules on an old 166MHz Pentium to get to 
the icmp allow rules (ridiculous, I know) added about 2ms to local net 
pings via that box, i.e. 1ms per pass for about 900 rules, mostly counts.


cheers, Ian




Re: IPFW MAX RULES COUNT PERFORMANCE

2009-04-27 Thread Daniel Dias Gonçalves

Moving on to another example.
If I wanted each authentication (username and password) in the captive 
portal to set up rules limiting the speed of the user's IP, how would I 
do that? Can I create two rules (in/out) for each user, associated with 
a pipe? When I simulate this with a script adding hundreds of rules, the 
latency also increases; how do I resolve this?
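
What that per-user shaping usually looks like, as a sketch (hypothetical
pipe and rule numbers, and a 512Kbit/s cap for the user 10.0.0.5):

ipfw pipe 1001 config bw 512Kbit/s
ipfw pipe 1002 config bw 512Kbit/s
ipfw add 60101 pipe 1001 ip from any to 10.0.0.5 in
ipfw add 60102 pipe 1002 ip from 10.0.0.5 to any out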


Adrian Chadd wrote:

You'd almost certainly be better off hacking up an extension to ipfw
which lets you count a /24 in one rule.

As in, the count rule would match on the subnet/netmask, have 256 32
(or 64 bit) integers allocated to record traffic in, and then do an
O(1) operation using the last octet of the v4 address to map it into
this 256 slot array to update counters for.

It'd require a little tool hackery to extend ipfw in userland/kernel
space to do it but it would work and be (very almost) just as fast as
a single rule.
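
A userland sketch of that data structure (hypothetical names, not actual
ipfw code):

#include <stdint.h>

/* per-rule state: one counter slot per host in the /24 */
struct subnet24_counters {
	uint64_t pkts[256];
	uint64_t bytes[256];
};

/* O(1) update keyed on the last octet of the (host-order) IPv4 address */
static void
count_packet(struct subnet24_counters *c, uint32_t dst, uint32_t pktlen)
{
	uint32_t slot = dst & 0xff;

	c->pkts[slot]++;
	c->bytes[slot] += pktlen;
}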

2c,



Adrian

2009/4/23 Daniel Dias Gonçalves :
  

Hi,

My system is a FreeBSD 7.1R.
When I add IPFW COUNT rules for 254 IPs on my network, one of my interfaces
shows increased latency, causing large delays in the network; when I delete
the COUNT rules, everything returns to normal. What can it be?

My script:

ipcount.php
-- CUT --

-- CUT --

net.inet.ip.fw.dyn_keepalive: 1
net.inet.ip.fw.dyn_short_lifetime: 5
net.inet.ip.fw.dyn_udp_lifetime: 10
net.inet.ip.fw.dyn_rst_lifetime: 1
net.inet.ip.fw.dyn_fin_lifetime: 1
net.inet.ip.fw.dyn_syn_lifetime: 20
net.inet.ip.fw.dyn_ack_lifetime: 300
net.inet.ip.fw.static_count: 262
net.inet.ip.fw.dyn_max: 1
net.inet.ip.fw.dyn_count: 0
net.inet.ip.fw.curr_dyn_buckets: 256
net.inet.ip.fw.dyn_buckets: 1
net.inet.ip.fw.default_rule: 65535
net.inet.ip.fw.verbose_limit: 0
net.inet.ip.fw.verbose: 1
net.inet.ip.fw.debug: 0
net.inet.ip.fw.one_pass: 1
net.inet.ip.fw.autoinc_step: 100
net.inet.ip.fw.enable: 1
net.link.ether.ipfw: 1
net.link.bridge.ipfw: 0
net.link.bridge.ipfw_arp: 0

Thanks,

Daniel





  




Re: [dummynet] Several queues connected to one pipe: "dummynet: OUCH! pipe should have been idle!"

2009-04-27 Thread Maxim Ignatenko
2009/4/27 Oleg Bulyzhin :
>
> Perhaps you ran into this:
>
> http://docs.freebsd.org/cgi/getmsg.cgi?fetch=879027+0+archive/2009/svn-src-all/20090419.svn-src-all
>
> You can try changing the type of dn_pipe.numbytes to int64_t (instead of dn_key).
> (ip_dummynet.h:341)
>

This is exactly what the patch Luigi sent me does. And yes, it helped.

Thanks.


Re: [dummynet] Several queues connected to one pipe: "dummynet: OUCH! pipe should have been idle!"

2009-04-27 Thread Oleg Bulyzhin
On Mon, Apr 27, 2009 at 05:44:22PM +0300, Maxim Ignatenko wrote:
> 2009/4/27 Luigi Rizzo :
> > On Mon, Apr 27, 2009 at 04:51:18PM +0300, Maxim Ignatenko wrote:
> >> 2009/4/27 Luigi Rizzo :
> >> >
> >> > ok, there seems to be no change related to dummynet between these
> >> > two versions, so I am not sure where to look.
> >> > Could you double-check what the last working version is?
> >> >
> >>  Yes, r191201 has this problem too (it seems I hadn't updated for a
> >> long time).
> >> Now I have updated to r190864 (just before the last change to ip_dummynet.c)
> >> and everything works fine. Should I now check r190865?
> >
> > yes it would be great if you could identify a specific change that
> > caused the problem.
> > There is one thing particularly tricky in one of the dummynet
> > changes, because some fields changed between 32/64 bits and
> > signed/unsigned. I may have inadvertently introduced some
> > conversion bug.
> >
> 
> On r190865 the problem appeared again.
> 
> > thanks a lot for the feedback
> >
> 
> You're welcome :)
> 
> Thanks.

Perhaps you ran into this:

http://docs.freebsd.org/cgi/getmsg.cgi?fetch=879027+0+archive/2009/svn-src-all/20090419.svn-src-all

You can try changing the type of dn_pipe.numbytes to int64_t (instead of dn_key).
(ip_dummynet.h:341)
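
I.e., a one-line change along these lines (a sketch; the surrounding
comment is approximate):

-	dn_key numbytes;	/* bits I can transmit (more or less) */
+	int64_t numbytes;	/* bits I can transmit (more or less) */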

-- 
Oleg.


=== Oleg Bulyzhin -- OBUL-RIPN -- OBUL-RIPE -- o...@rinet.ru ===




Re: [dummynet] Several queues connected to one pipe: "dummynet: OUCH! pipe should have been idle!"

2009-04-27 Thread Luigi Rizzo
On Mon, Apr 27, 2009 at 11:08:54PM +0400, Oleg Bulyzhin wrote:
> On Mon, Apr 27, 2009 at 05:44:22PM +0300, Maxim Ignatenko wrote:
...
> > > yes it would be great if you could identify a specific change that
> > > caused the problem.
> > > There is one thing particularly tricky in one of the dummynet
> > > changes, because some fields changed between 32/64 bits and
> > > signed/unsigned. I may have inadvertently introduced some
> > > conversion bug.
> > >
> > 
> > On r190865 the problem appeared again.
> > 
> > > thanks a lot for the feedback
> > >
> > 
> > You're welcome :)
> > 
> > Thanks.
> 
> Perhaps you ran into this:
> 
> http://docs.freebsd.org/cgi/getmsg.cgi?fetch=879027+0+archive/2009/svn-src-all/20090419.svn-src-all
> 
> You can try changing the type of dn_pipe.numbytes to int64_t (instead of dn_key).
> (ip_dummynet.h:341)

Good catch, Oleg; sorry I missed your email above.

cheers
luigi


Re: IPFW MAX RULES COUNT PERFORMANCE

2009-04-27 Thread Adrian Chadd
You may want to investigate using pf; I'm not sure whether it handles
this better.

Me, I'd investigate writing a "tree" ipfw rule type. I.e., instead of
having a list of rules, all evaluated one at a time, I'd create a rule
implementing a subrule match on ip/netmask with some kind of action
(allow, deny, count, pipe, etc) rather than having it all be evaluated
O(n) style.
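
For what it's worth, the pf equivalent of the one-rule-plus-table approach
sketched earlier in the thread would be roughly this (hypothetical table
name):

# pf.conf
table <clients> persist
pass quick from <clients> to any keep state
pass quick from any to <clients> keep state

# at runtime, per authenticated client
pfctl -t clients -T add 10.0.0.5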

2c,


Adrian

2009/4/28 Daniel Dias Gonçalves :
> Moving on to another example.
> If I wanted each authentication (username and password) in the captive
> portal to set up rules limiting the speed of the user's IP, how would I
> do that? Can I create two rules (in/out) for each user, associated with
> a pipe? When I simulate this with a script adding hundreds of rules, the
> latency also increases; how do I resolve this?
>
> Adrian Chadd wrote:
>>
>> You'd almost certainly be better off hacking up an extension to ipfw
>> which lets you count a /24 in one rule.
>>
>> As in, the count rule would match on the subnet/netmask, have 256
>> 32-bit (or 64-bit) integers allocated to record traffic in, and then do
>> an O(1) operation using the last octet of the v4 address to map it into
>> this 256-slot array and update the counters.
>>
>> It'd require a little tool hackery to extend ipfw in userland/kernel
>> space to do it but it would work and be (very almost) just as fast as
>> a single rule.
>>
>> 2c,
>>
>>
>>
>> Adrian
>>
>> 2009/4/23 Daniel Dias Gonçalves :
>>
>>>
>>> Hi,
>>>
>>> My system is a FreeBSD 7.1R.
>>> When I add IPFW COUNT rules for 254 IPs on my network, one of my
>>> interfaces
>>> shows increased latency, causing large delays in the network; when I
>>> delete the COUNT rules, everything returns to normal. What can it be?
>>>
>>> My script:
>>>
>>> ipcount.php
>>> -- CUT --
>>> <?php
>>> $c=0;
>>> $a=50100;
>>> for($x=0;$x<=0;$x++) {
>>>      for($y=1;$y<=254;$y++) {
>>>              $ip = "192.168.$x.$y";
>>>              system("/sbin/ipfw -q add $a count { tcp or udp } from any to $ip/32");
>>>              system("/sbin/ipfw -q add $a count { tcp or udp } from $ip/32 to any");
>>>              #system("/sbin/ipfw delete $a");
>>>              $c++;
>>>              $a++;
>>>      }
>>> }
>>> echo "\n\nTotal: $c\n";
>>> ?>
>>> -- CUT --
>>>
>>> net.inet.ip.fw.dyn_keepalive: 1
>>> net.inet.ip.fw.dyn_short_lifetime: 5
>>> net.inet.ip.fw.dyn_udp_lifetime: 10
>>> net.inet.ip.fw.dyn_rst_lifetime: 1
>>> net.inet.ip.fw.dyn_fin_lifetime: 1
>>> net.inet.ip.fw.dyn_syn_lifetime: 20
>>> net.inet.ip.fw.dyn_ack_lifetime: 300
>>> net.inet.ip.fw.static_count: 262
>>> net.inet.ip.fw.dyn_max: 1
>>> net.inet.ip.fw.dyn_count: 0
>>> net.inet.ip.fw.curr_dyn_buckets: 256
>>> net.inet.ip.fw.dyn_buckets: 1
>>> net.inet.ip.fw.default_rule: 65535
>>> net.inet.ip.fw.verbose_limit: 0
>>> net.inet.ip.fw.verbose: 1
>>> net.inet.ip.fw.debug: 0
>>> net.inet.ip.fw.one_pass: 1
>>> net.inet.ip.fw.autoinc_step: 100
>>> net.inet.ip.fw.enable: 1
>>> net.link.ether.ipfw: 1
>>> net.link.bridge.ipfw: 0
>>> net.link.bridge.ipfw_arp: 0
>>>
>>> Thanks,
>>>
>>> Daniel
>>>
>>>
>>
>>
>>
>>
>
>


Re: IPFW MAX RULES COUNT PERFORMANCE

2009-04-27 Thread Ian Smith
On Mon, 27 Apr 2009, Daniel Dias Gonçalves wrote:
 > What may be happening? I have polling enabled on all interfaces; could
 > that have an influence?
 > 
 > em0:  port 0x7000-0x703f mem
 > 0xdfa0-0xdfa1 irq 16 at device 8.0 on pci4
 > em1:  port 0x7400-0x743f mem
 > 0xdfa2-0xdfa3 irq 17 at device 8.1 on pci4
 > em2:  port 0x8000-0x803f mem
 > 0xdfb0-0xdfb1 irq 16 at device 8.0 on pci5
 > em3:  port 0x8400-0x843f mem
 > 0xdfb2-0xdfb3 irq 17 at device 8.1 on pci5
 > em4:  port 0x9000-0x903f mem
 > 0xdfc0-0xdfc1 irq 16 at device 8.0 on pci7
 > em5:  port 0x9400-0x943f mem
 > 0xdfc2-0xdfc3 irq 17 at device 8.1 on pci7
 > em6:  port 0xa000-0xa03f mem
 > 0xdfd0-0xdfd1 irq 16 at device 8.0 on pci8
 > em7:  port 0xa400-0xa43f mem
 > 0xdfd2-0xdfd3 irq 17 at device 8.1 on pci8
 > fxp0:  port 0xb000-0xb03f mem
 > 0xdfe2-0xdfe20fff,0xdfe0-0xdfe1 irq 16 at device 4.0 on pci14
 > 
 > If I disable polling, no network interface works and the system begins
 > to display "em4 watchdog timeout".

Sorry, no ideas about polling, but this doesn't smell like just an IPFW 
issue.  I was pointing out that despite 20 times the CPU clock rate, 
probably at least 30 times the CPU throughput and likely 10 times the tick 
rate, you appear to be suffering something like 30 to 900 times the 
latency increase to be expected from traversing 'too many' ipfw rules.

 > Ian Smith wrote:
 > > On Fri, 24 Apr 2009, Daniel Dias Gonçalves wrote:
 > > 
 > >  > The latency on interface em6 increased from an average of 10ms to
 > > 200~300ms
 > >  > Hardware:
 > >  > CPU: Intel(R) Xeon(TM) CPU 3.20GHz (3200.13-MHz 686-class CPU)
 > >  >  Logical CPUs per core: 2
 > >  > FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
 > >  > cpu0:  on acpi0
 > >  > p4tcc0:  on cpu0
 > >  > cpu1:  on acpi0
 > >  > p4tcc1:  on cpu1
 > >  > cpu2:  on acpi0
 > >  > p4tcc2:  on cpu2
 > >  > cpu3:  on acpi0
 > >  > p4tcc3:  on cpu3
 > >  > SMP: AP CPU #1 Launched!
 > >  > SMP: AP CPU #3 Launched!
 > >  > SMP: AP CPU #2 Launched!
 > >  >  > real memory  = 9663676416 (9216 MB)
 > >  > avail memory = 8396738560 (8007 MB)
 > > 
 > > In that case, there really is something else wrong.  By my measurements,
 > > rummaging through most of >1000 rules on an old 166MHz Pentium to get to the
 > > icmp allow rules (ridiculous, I know) added about 2ms to local net pings
 > > via that box, i.e. 1ms per pass for about 900 rules, mostly counts.

cheers, Ian

Re: IPFW MAX RULES COUNT PERFORMANCE

2009-04-27 Thread Julian Elischer

Daniel Dias Gonçalves wrote:

Julian,

Could you give an example of rules with tables?


I'm sorry, I forgot that you want to count packets from each client;
tables won't work for that.


for counting I suggest the skipto technique I showed earlier,
but for just allowing, you can add allowable addresses to
a table,
e.g. ipfw table 1 add 1.2.3.4

and test it with

allow ip from 'table(1)' to any




[Julian's earlier message, quoted in full here, snipped; see the complete
skipto example earlier in this digest.]