Issues with USB wireless adapter in hostap mode
Hey,

I bought a Hawking HWUG1A USB wireless adapter, the one with the external antenna port. I'm using it with OpenBSD 4.2 in hostap mode, and I'm having issues where it drops clients off the network unless they're constantly pinging something. It also tops out at about 100KB/sec, even though the adapter links up in 802.11g mode. The same host can transfer several megabytes/sec over the same USB ports, so the bus itself isn't the bottleneck. The interface is bridged; the setup was converted directly from a standalone WAP bridged over a spare network port.

It shows up as:

    rum0 at uhub3 port 1
    rum0: Ralink 802.11 bg WLAN, rev 2.00/0.01, addr 2
    rum0: MAC/BBP RT2573 (rev 0x2573a), RF RT2528, address 00:0e:3b:09:81:65

Configured as:

    media autoselect mediaopt hostap nwid dormando mode 11g chan 3 txpower 95 up

I've twiddled the mode (I think), txpower, channel, etc. I'm still fairly clueless about wireless. What should I try next? The client-dropping thing is pretty annoying :)

Thanks,
-Dormando
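In case it helps someone spot an obvious mistake, the relevant config boils down to a one-line /etc/hostname.rum0 (reconstructed from the ifconfig options above; the bridge member configuration lives in a separate file and isn't shown):

    # /etc/hostname.rum0 -- hostap config, same options as the ifconfig line above
    media autoselect mediaopt hostap nwid dormando mode 11g chan 3 txpower 95 up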
Re: hardware needed for network stack performance work
(Sorry for screwing up the thread; I'm on the daily digest list. Please CC responses to me as well.)

Hey,

I have two machines that are recent, high-end, PCI-Express, single-CPU dual-core boxes. Unfortunately one's AMD and one's Intel, and I can't find the actual specs at the moment. I purchased these to experiment with fast PF firewalls but haven't had the time to actually use them. I also have PCI-Express fiber and dual-copper SysKonnect and Intel NICs to go with them. It's a long way to Germany from here...

If no other company can or will step up, I'll whine, cry, and throw puppy-dog eyes at my superiors until we can get Henning the hardware he needs, as quickly as possible. I've occasionally been pushing over 1.6 gigabits of transfer through PF recently, and eventually want to start using OpenBGPD routers and the like. I've used many other hardware platforms for routing and firewalling. They all suck, trust me.

We can talk details off-list, but I want to throw this out into the open this time. I have no idea how many corporate types get wind of posts to openbsd-misc, but it's not that big an investment to send some hardware, and it feels really good to get the right equipment into the right hands. Take your accountant out to a nice lunch and they'll be more understanding about the extra paperwork. In my case, these machines (unfortunately not ideal ones?) are in the wrong hands, my hands, and should probably change hands.

-Dormando
Huge PF/BGP setups with OpenBSD
Yo all,

I'm finally starting a project where I need to build a front-end network that will let us push up to (eventually) 10 gigabits of outbound internet traffic, made up of non-jumbo-frame packets. Currently we push between 150,000 and 200,000 pps. Our current firewalls, running 3.8 on i386 with em cards, are maxing out now. I have gigabit fiber ethernet feeds and can get 10-gigabit drops as well. I need redundancy, and I'd like to run BGP. We use PF round-robin for high-speed L4 load balancing, but nothing else too special.

Everything else is open right now; I'll be buying multiple hardware platforms, CPUs, motherboards, and network cards, and testing them all thoroughly for packet rates with and without PF rulesets.

My question is: how the hell do I scale this? What good approaches are there for getting a front-end network to scale, be redundant, maybe run BGP, and not be a huge pain in the ass to manage? I'd much rather keep sending resources to OpenBSD than shell out for a pair of huge, expensive routers.

Any good input is greatly appreciated; trolling not so much. Yes, I've read all of the PF docs, the PF series on undeadly, the OpenBGPD slides, etc.

Thanks,
-Dormando
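For reference, the L4 balancing I mention is nothing fancier than a round-robin rdr rule, roughly along these lines (the addresses, port, and macros here are made up for illustration, not our real config):

    # hypothetical round-robin rdr for L4 balancing (pf.conf, 3.x-era syntax);
    # $ext_if and $ext_ip are assumed macros for the outside interface/address
    web_servers = "{ 10.0.0.10, 10.0.0.11, 10.0.0.12 }"
    rdr on $ext_if proto tcp from any to $ext_ip port 80 -> $web_servers round-robin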
Very high interrupts on a supermicro machine.
Hey all,

Attached is a dmesg from one of a pair of Supermicro-based firewalls I recently bought. I had set them up as a CARP/pfsync redundant pair of front-end firewalls for our network. However, once they reached 15,000 interrupts per second (~110 megabits of our site traffic), they passed 90% CPU usage in interrupts and stopped being useful.

The machines have two built-in bge NICs. I swapped an Intel PRO/1000MT Dual Port Server NIC into the PCI-X 133MHz slot, but it made absolutely no difference in the interrupt load. The firewalls currently in production are FreeBSD machines on Supermicro hardware with two built-in em NICs, running past 40k interrupts without passing 50% CPU load in interrupts. The only error I can see in the dmesg is this:

    pcibios0: no compatible PCI ICU found: ICU vendor 0x8086 product 0x2640
    pcibios0: Warning, unable to fix up PCI interrupt routing
    pcibios0: PCI bus #5 is the last bus

... which, as far as I can tell, is "harmless", but is it potentially causing the higher interrupt load?

Any hints as to where I should look next would be great. I'm about to install the latest -current snapshot on the machine to see if there's a recent fix.

I'm about 95% sure this is the motherboard we're using:
http://www.supermicro.com/products/motherboard/P4/E7221/P8SCT.cfm
I'll check with the order guy and confirm against the PO.

There's a 3.4GHz P4 CPU in it, the two built-in NICs, and a single PCI-X 133MHz slot, which I used for the Intel dual-port server NIC. SATA hard drive, for what it's worth. Running OpenBSD 3.7 as a PF firewall. I've tried changing a bunch of BIOS options, disabling interrupts, etc. I haven't compiled my own kernel or built the OS or anything.

Thanks,
-Dormando

[demime 1.01d removed an attachment of type application/octet-stream which had a name of supermicro-dmesg]
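(For anyone wondering where the interrupt numbers come from: I'm sampling them with the stock tools, roughly as below. The counters from vmstat -i are cumulative, so you take two samples and diff them to get a rate.)

    # watch interrupt and CPU stats live, refreshing every second
    systat vmstat 1
    # or take two cumulative snapshots 10s apart and diff the counts
    vmstat -i; sleep 10; vmstat -i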
Re: Very high interrupts on a supermicro machine.
On 10/17/05, dormando <[EMAIL PROTECTED]> wrote:
> Hey all, [...]

My apologies for MIME'ing the dmesg :( I post here once a year or so.

It looks like the latest snapshot from the FTP does a lot better with interrupts (about 150k pps before getting into the danger area), and interrupts never go above 8k/sec. The dmesg for the latest snapshot has this instead of the pcibios error:

pcibios0 at bios0: rev 3.0 @ 0xf/0xcb84
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfca20/336 (19 entries)
pcibios0: PCI Exclusive IRQs: 5 9 10 12
pcibios0: PCI Interrupt Router at 000:31:0 ("Intel 82801FB LPC" rev 0x00)
pcibios0: PCI bus #5 is the last bus

3.7 dmesg follows:

OpenBSD 3.7 (GENERIC) #50: Sun Mar 20 00:01:57 MST 2005
    [EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Pentium(R) 4 CPU 3.40GHz ("GenuineIntel" 686-class) 3.40 GHz
cpu0: FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,PNI,MWAIT,EST,CNXT-ID
cpu0: Enhanced SpeedStep 1700 MHz (1420 mV): unknown EST cpu, no changes possible
real mem  = 2144837632 (2094568K)
avail mem = 1951232000 (1905500K)
using 4278 buffers containing 107343872 bytes (104828K) of memory
mainbus0 (root)
bios0 at mainbus0: AT/286+(d2) BIOS, date 04/07/05, BIOS32 rev. 0 @ 0xfa000
apm0 at bios0: Power Management spec V1.2
apm0: AC on, battery charge unknown
pcibios0 at bios0: rev 3.0 @ 0xf/0xcb84
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfca20/336 (19 entries)
pcibios0: PCI Exclusive IRQs: 5 9 10 12
pcibios0: no compatible PCI ICU found: ICU vendor 0x8086 product 0x2640
pcibios0: Warning, unable to fix up PCI interrupt routing
pcibios0: PCI bus #5 is the last bus
bios0: ROM list: 0xc/0x9400! 0xcc000/0x1800
cpu0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 "Intel E7221 MCH Host" rev 0x05
ppb0 at pci0 dev 1 function 0 "Intel E7221 PCIE" rev 0x05
pci1 at ppb0 bus 1
ppb1 at pci1 dev 0 function 0 "Intel PCIE-PCIE" rev 0x09
pci2 at ppb1 bus 2
em0 at pci2 dev 1 function 0 "Intel PRO/1000MT DP (82546EB)" rev 0x03: irq 5, address: 00:04:23:bf:11:6c
em1 at pci2 dev 1 function 1 "Intel PRO/1000MT DP (82546EB)" rev 0x03: irq 12, address: 00:04:23:bf:11:6d
vendor "Intel", unknown product 0x0326 (class system subclass interrupt, rev 0x09) at pci1 dev 0 function 1 not configured
vga1 at pci0 dev 2 function 0 "Intel E7221 Video" rev 0x05: aperture at 0xd040, size 0x800
wsdisplay0 at vga1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
ppb2 at pci0 dev 28 function 0 "Intel 82801FB PCIE" rev 0x03
pci3 at ppb2 bus 3
bge0 at pci3 dev 0 function 0 "Broadcom BCM5721" rev 0x11, unknown BCM5750 (0x4101): irq 5 address 00:30:48:84:cd:ca
brgphy0 at bge0 phy 1: BCM5750 10/100/1000baseT PHY, rev. 0
ppb3 at pci0 dev 28 function 1 "Intel 82801FB PCIE" rev 0x03
pci4 at ppb3 bus 4
bge1 at pci4 dev 0 function 0 "Broadcom BCM5721" rev 0x11, unknown BCM5750 (0x4101): irq 12 address 00:30:48:84:cd:cb
brgphy1 at bge1 phy 1: BCM5750 10/100/1000baseT PHY, rev. 0
uhci0 at pci0 dev 29 function 0 "Intel 82801FB USB" rev 0x03: irq 9
usb0 at uhci0: USB revision 1.0
uhub0 at usb0
uhub0: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhci1 at pci0 dev 29 function 1 "Intel 82801FB USB" rev 0x03: irq 10
usb1 at uhci1: USB revision 1.0
uhub1 at usb1
uhub1: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
uhci2 at pci0 dev 29 function 2 "Intel 82801FB USB" rev 0x03: irq 10
usb2 at uhci2: USB revision 1.0
uhub2 at usb2
uhub2: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub2: 2 ports with 2 removable, self powered
uhci3 at pci0 dev 29 function 3 "Intel 82801FB USB" rev 0x03: irq 5
usb3 at uhci3: USB revision 1.0
uhub3 at usb3
uhub3: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub3: 2 ports with 2 removable, self powered
ehci0 at pci0 dev 29 function 7 "Intel 82801FB USB" rev 0x03: irq 9
ehci0: EHCI version 1.0
ehci0: companion controllers, 2 ports each: uhci0 uhci1 uhci2 uhci3
usb4 at ehci0: USB revision 2.0
uhub4 at usb4
uhub4: Intel EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub4: single transaction translator
uhub4: 8 ports with 8 removable, self powered
ppb4 at pci0 dev 30 function 0 "Intel 82801BA AGP" rev 0xd3
pci5 at ppb4 bus 5
pcib0 at pci0 dev 31 function 0 "Intel 82801FB LPC" rev 0x03
pciide0 at pci0 dev 31 function 1 "Intel 82801FB IDE" rev 0x03: DMA, channel 0 configured to compatibility, channel 1 configured to compatibility
pciide0: channel 0 disabled (no drives)
pciide0: channel 1 disabled (no drives)
pciide1 at pci0 dev 31 function 2 "Intel 82801FR SATA" r
Re: Very high interrupts on a supermicro machine.
Hey,

Thanks. I do see the congestion values climbing pretty quickly. However, I need to get this running in production as fast as possible. The performance I'm seeing so far is a lot less than I'd like, but the hardware is probably not ideal (bleeding-edge chipset, etc). Interrupt CPU % is very high on a machine with a very high-end bus.

I do notice that with the new OpenBSD the number of interrupts reported per NIC doesn't go above 8,000/sec on this hardware. Once it hits that ceiling the CPU % spent in interrupts keeps going up, but the number of interrupts does not. Hmm.

So should I turn off the congestion handling regardless?

Thanks,
-Dormando

On 10/18/05, Schvberle Daniel <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I was trying to bench routing pps with pf on, and Henning gave me
> some advice which I think might help you too. For my benching purposes
> it helped break the 200k pps barrier with -current, but no guarantees
> that it'll do you any good or that it won't hurt you.
>
> The high drop rates are an anti-DDoS measure - yeah, that pretty much
> makes benching impossible... You could change IF_INPUT_ENQUEUE in
> sys/net/if.h so that it looks like:
>
> #define IF_INPUT_ENQUEUE(ifq, m) {              \
>         if (IF_QFULL(ifq)) {                    \
>                 IF_DROP(ifq);                   \
>                 m_freem(m);                     \
>         } else                                  \
>                 IF_ENQUEUE(ifq, m);             \
> }
>
> i.e. remove these two lines:
>
>         if (!(ifq)->ifq_congestion)             \
>                 if_congestion(ifq);             \
>
> That means the congestion flag will never be set.
> Or you add a return; as the first statement in if_congestion() in if.c.
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> > On Behalf Of dormando
> > Sent: Monday, October 17, 2005 8:29 PM
> > To: misc@openbsd.org
> > Subject: Very high interrupts on a supermicro machine.
> >
> > Hey all,
> >
> > Attached is a dmesg of one of a pair of supermicro based firewalls I
> > recently bought. [...]
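For my own notes, the second variant Daniel mentions (the return; in if.c) would look roughly like this. This is only a sketch of the idea; I haven't tried it, and the actual function body in sys/net/if.c differs:

    /*
     * sys/net/if.c (sketch) -- benching hack per Daniel's suggestion:
     * bail out before the congestion flag is ever set, so the
     * anti-DDoS short-circuiting Henning describes never kicks in.
     * Not for production use.
     */
    void
    if_congestion(struct ifqueue *ifq)
    {
            return;         /* the congestion flag never gets set */

            /* original body (which marks ifq->ifq_congestion) would
             * follow here, now unreachable */
    }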
Re: Very high interrupts on a supermicro machine.
So,

My latest update: Theo mentioned that the single-CPU kernels don't make use of the APIC interrupt controller, just the ISA one. I booted my single-P4 systems into the bsd.mp kernel, and behold, there's a major difference in speed!

Now the systems no longer claim 95%+ CPU held in interrupts; they claim to be 100% idle most of the time, bouncing into 1-6% sys CPU every few seconds and holding at 0% interrupt CPU. Traffic went from lossy at 120 megabits to maxed out at 150 megabits, ~70k pps per interface. At that point traffic very obviously flatlined, but it did not dip or fail. I saw no visible CPU load; interrupts were around 7.8k/sec per active NIC. It looked almost as if I had set an altq limit of 150 megabits.

Any idea how to profile where my packets are spending most of their time? I'm not so great at this level of troubleshooting, but I would love to get better at it.

Right now I have two machines in a semi-CARP cluster: a 3.7-stable box and a -current box from Oct 15th. 3.7 doesn't have the tunable Henning mentioned, but 3.8 and -current do. I set net.inet.ip.ifq.maxlen=250 on the -current box and traffic went up to 160 megabits and flatlined again.

The next thing I'm trying tomorrow morning is switching the internal interface to one of the bge NICs. The systems have two built-in bge NICs and one PCI-X 133MHz Intel dual-port 1000MT server NIC. Right now the internal/external interfaces are on the Intel card and the pfsync interface is on bge1.

-Dormando

On 10/19/05, Henning Brauer <[EMAIL PROTECTED]> wrote:
> eh, this is really only good for benching, because otherwise we stop
> traversing the pf ruleset for very short amounts of time if we are
> about to exhaust CPU. This allows already established connections to
> live on, and the OP to log in to the box via console and take
> countermeasures. If you already had an ssh session to the box, it has
> a good chance to survive, and you can even take countermeasures over that.
>
> What you really want to do for high speed routers is increase
> net.inet.ip.ifq.maxlen. I currently use 250 on some routers, which
> seems good, but I need to do more tests before I can make qualified
> assumptions about good values.
>
> This is the max length of a queue in the input path, and the default of
> 50 packets is too small for high speed routers with modern GigE cards
> that can put about that many into the queue with one single interrupt.
> Or even more.
>
> In the end I think we need a better default based on some factors like
> ip forwarding enabled, summarized link speed, RAM in the box, or
> somesuch. Ryan and I discussed that on the ferry earlier this year and
> have some good ideas, now we just need some time to work on it ;(
>
> * Schvberle Daniel <[EMAIL PROTECTED]> [2005-10-18 18:36]:
> > Hi,
> >
> > I was trying to bench routing pps with pf on, and Henning gave me
> > some advice which I think might help you too. [...]
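For anyone who wants to try the same tunable, it's just a sysctl; roughly what I did on the -current box (250 is Henning's working value above, not a blessed default):

    # bump the IP input queue length at runtime
    # (older releases want the `sysctl -w` spelling)
    sysctl net.inet.ip.ifq.maxlen=250
    # and persist it across reboots
    echo 'net.inet.ip.ifq.maxlen=250' >> /etc/sysctl.conf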
Re: Very high interrupts on a supermicro machine.
Did you make any other configuration changes?

Right now my box is doing ~28,000 pps per direction per interface (out public, in public, out internal, in internal), totalling around 112k pps. It doesn't seem to want to go any higher than that. I've just tried moving the internal connection off of the dual-port PCI-X card and onto the onboard NIC, and it hasn't made a difference. I'd be a little confused if two SysKonnect cards had double the performance of what I have in this machine right now...

On 10/20/05, Michael Blodgett <[EMAIL PROTECTED]> wrote:
> I'm in a similar situation: 3GHz single CPU on a Supermicro board with
> two SysKonnect cards. Previously, at around 50,000 pps the CPU would
> bounce between 20-80% utilization. Switched to the MP kernel, I'm now
> topping out at 215,000 pps showing basically no utilization.
>
> mike blodgett
> CSL, UW-Madison CS Dept.
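(In case the methodology matters for comparing numbers: I derive the per-interface pps figures by diffing the cumulative interface packet counters over time, something like the following.)

    # Ipkts/Opkts are cumulative; sample twice and divide the
    # difference by the interval to get pps for that interface
    netstat -i | grep em0; sleep 10; netstat -i | grep em0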