Re: Packet loss with traffic shaper and routing
[EMAIL PROTECTED] wrote:
> Hello.
>
> I did that and compiled the kernel. Then I restarted the system and
> enabled sysctl kern.polling.enable=1.
>
> It seems that it has no effect on the system. Maybe the bge driver
> doesn't like polling?

At least from a quick glance at the polling(4) manpage, I cannot see that
bge is among the supported devices. If you want to use polling, I suppose
that you need to enable it via ifconfig, too:

     polling
             If the driver has user-configurable polling(4) support,
             select the polling mode on the interface.

> At this moment, I'm getting more than 50% interrupts and 20% packets
> lost. I also disabled HT in the BIOS and the interrupts are now passing
> the 80% mark. I don't know what else to do. Aren't these cards supposed
> to work at 100Mbit or 1Gbit? They are failing with 12Mbit of traffic on
> a 100Mbit LAN. Something is wrong and I am having a hard time trying to
> identify the problem.
>
> Thanks for the hints, anything else would be greatly appreciated.

Several wild guesses from my own experience here:

- SMP + networking in 5.x does not work too well; using em(4) I
  experienced VERY poor performance (only ~5MB/s over a Gbit link).

- Try upgrading to 6.x (as others have already suggested). I experienced
  all kinds of weird problems with 5.x, and although there is no proof
  that the problems were actually related to 5.x, 6.x seems to work
  better.

- What is the value of nmbclusters? Have you checked netstat -m? Do you
  see requests for network memory denied?

- 50% interrupts on such a fast machine is quite high. I currently see
  about 30% interrupt load using two em(4) cards, shaping for about
  ~2000 clients on a 3.8GHz Xeon.

Kind regards
--
Ferdinand Goldmann
Tel.: +43/732/2468/9398   Fax.: +43/732/2468/9397
EMail: [EMAIL PROTECTED]
PGP: D4CF 8AA4 4B2A 7B88 65CA 5EDC 0A9B FA9A 13EA B993
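For reference, a minimal sketch of the checks suggested above. These are
standard FreeBSD commands; the interface name em0 is only an example, and
the per-interface ifconfig flag only applies where the driver has
polling(4) support:

    # enable kernel-wide polling and turn it on for a supported interface
    sysctl kern.polling.enable=1
    ifconfig em0 polling

    # inspect the mbuf cluster limit and look for denied memory requests
    sysctl kern.ipc.nmbclusters
    netstat -m | grep -i denied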
DHCP Over PPPoE
> Hi all,
>
> I have a setup like this:
>
> Linux Machine 1
>   Eth0 - DHCP server
>
> Linux Machine 2
>   Eth1 - got an IP from the DHCP server
>   Eth0 - PPPoE server
>   ppp0 interface formed
>
> Linux Machine 3
>   Eth0 - PPPoE client
>   Eth1 - IP is 192.168.40.1
>   ppp0 interface formed
>   DHCP relay is running on Linux Machine 3
>
> Windows Machine 4
>   Expecting an IP of 192.168.40. after renewing the IP address of the
>   Windows machine
>
> But there is no result.
>
> Without the PPPoE interfaces, the Windows machine gets an IP in the
> 192.168.40. range.
>
> Won't the DHCP protocol work over a PPP interface?
>
> If anyone knows, please reply.
>
> Rgds
> Joby
Re: [fbsd] Network performance in a dual CPU system
Hi Marcos,

On Fri, Feb 10, 2006 at 08:46:00AM -0500, Marcos Bedinelli wrote:
> Hello all,
>
> We have a 2.4GHz Intel Xeon machine running FreeBSD 6.0-RELEASE-p2.
> Due to heavy network traffic, CPU utilization on that machine is 100%:
>
> ===
>
> mull [~]$ top -S
> last pid: 94989;  load averages: 3.69, 4.02, 4.36   up 25+07:21:34  14:51:43
> 105 processes: 2 running, 46 sleeping, 57 waiting
> CPU states:  0.0% user,  0.0% nice,  0.3% system, 99.4% interrupt,  0.3% idle
> Mem: 20M Active, 153M Inact, 84M Wired, 4K Cache, 60M Buf, 237M Free
> Swap: 999M Total, 999M Free
>
>   PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
>    60 root        1 -44 -163     0K     8K WAIT   355.6H 72.17% swi1: net
>    39 root        1 -68 -187     0K     8K WAIT    52.3H  5.22% irq28: bge0
>    40 root        1 -68 -187     0K     8K WAIT    28.3H  2.25% irq29: bge1
>    11 root        1 171   52     0K     8K RUN    166.6H  0.00% idle
>    63 root        1 -16    0     0K     8K -      121:55  0.00% yarrow
>    61 root        1 -32 -151     0K     8K WAIT    46:21  0.00% swi4: clock sio
> [...]
>
> ===
>
> Does anyone know whether a dual CPU system can help us improve the
> situation? I was wondering if the software interrupt threads would be
> divided between the two processors.

I am a few weeks late; I just saw this very interesting thread. What
solution did you finally employ to circumvent your high interrupt load?

Regards,
--
Jeremie Le Hen
< jeremie at le-hen dot org >< ttz at chchile dot org >
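As a side note, two quick ways to see where interrupt load like the above
is coming from (standard FreeBSD tools; shown only as a sketch of the
kind of checks involved):

    # per-device interrupt counts and rates
    vmstat -i

    # number of CPUs the kernel sees, plus per-thread system activity
    sysctl hw.ncpu
    top -S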
Re: [fbsd] Network performance in a dual CPU system
On Thu, 27 Apr 2006, Jeremie Le Hen wrote:

>>   PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
>>    60 root        1 -44 -163     0K     8K WAIT   355.6H 72.17% swi1: net
>>    39 root        1 -68 -187     0K     8K WAIT    52.3H  5.22% irq28: bge0
>>    40 root        1 -68 -187     0K     8K WAIT    28.3H  2.25% irq29: bge1
>>    11 root        1 171   52     0K     8K RUN    166.6H  0.00% idle
>>    63 root        1 -16    0     0K     8K -      121:55  0.00% yarrow
>>    61 root        1 -32 -151     0K     8K WAIT    46:21  0.00% swi4: clock sio
>> [...]
>>
>> Does anyone know whether a dual CPU system can help us improve the
>> situation? I was wondering if the software interrupt threads would be
>> divided between the two processors.
>
> I am a few weeks late; I just saw this very interesting thread. What
> solution did you finally employ to circumvent your high interrupt load?

I missed the original thread, but in answer to the question: if you set
net.isr.direct=1, then FreeBSD 6.x will run the netisr code in the
ithread of the network device driver. This will allow the IP forwarding
and related paths to run in two threads instead of one, potentially
allowing greater parallelism. Of course, you also potentially contend
more locks, you may increase the time it takes for the ithread to
respond to new interrupts, etc, so it's not quite cut and dry, but with
a workload like the one shown above, it might make quite a difference.

Robert N M Watson
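For anyone who wants to try this, the knob can be inspected and flipped
at runtime, and made persistent in the usual way (a sketch, with the
value as suggested above):

    # current setting
    sysctl net.isr.direct

    # dispatch protocol processing directly from the driver's ithread
    sysctl net.isr.direct=1

    # to keep it across reboots, add the line below to /etc/sysctl.conf
    # net.isr.direct=1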
Re: VLAN interfaces and routing
On Wed, Apr 26, 2006 at 01:55:11PM +0100, William wrote:
> The switch is a Cisco 3550, trunking is set up on the port and I've
> allowed the VLANs I'm interested in using.
>
> The end result is being able to communicate with all devices on said
> VLANs, which is fantastic, but my next objective is to have the box
> talk to other networks via a default route. I've tried applying the
> default route with defaultrouter= in rc.conf, and also adding it
> manually using route once the box has booted up, but it always results
> in no replies back from other networks; even netstat -r seems to hang.

The hang is probably just because your DNS server is unreachable. Use
"netstat -rn" instead, or just rm /etc/resolv.conf. (It annoys me that
traceroute and some versions of ping and telnet default to trying DNS
lookups, when if there's a network problem the DNS server is probably
not available.)

Manually adding the route ought to be fine. Can you ping your default
gateway? Does 'ifconfig -a' show the correct settings? The default
gateway must be on a directly-connected network, of course, i.e. within
the range of one of the subnets shown by 'ifconfig -a'. When you ping
the default gateway, does the ARP cache get updated? (arp -an)

HTH,
Brian.
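Put together, the checks above look roughly like this. The gateway
address 192.168.10.254 and the vlan10 interface name are placeholders
for whatever the actual setup uses:

    # show the routing table numerically, avoiding DNS lookups
    netstat -rn

    # confirm the VLAN interface has the expected address and subnet
    ifconfig vlan10

    # ping the default gateway directly, then check the ARP cache
    ping -c 3 192.168.10.254
    arp -an

    # persistent default route in /etc/rc.conf (placeholder address)
    # defaultrouter="192.168.10.254"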
Re: [fbsd] Re: [fbsd] Network performance in a dual CPU system
Hi Robert,

On Thu, Apr 27, 2006 at 02:54:21PM +0100, Robert Watson wrote:
>
> On Thu, 27 Apr 2006, Jeremie Le Hen wrote:
>
> >>   PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
> >>    60 root        1 -44 -163     0K     8K WAIT   355.6H 72.17% swi1: net
> >>    39 root        1 -68 -187     0K     8K WAIT    52.3H  5.22% irq28: bge0
> >>    40 root        1 -68 -187     0K     8K WAIT    28.3H  2.25% irq29: bge1
> >>    11 root        1 171   52     0K     8K RUN    166.6H  0.00% idle
> >>    63 root        1 -16    0     0K     8K -      121:55  0.00% yarrow
> >>    61 root        1 -32 -151     0K     8K WAIT    46:21  0.00% swi4: clock sio
> >> [...]
> >>
> >> Does anyone know whether a dual CPU system can help us improve the
> >> situation? I was wondering if the software interrupt threads would
> >> be divided between the two processors.
> >
> > I am a few weeks late; I just saw this very interesting thread. What
> > solution did you finally employ to circumvent your high interrupt
> > load?
>
> I missed the original thread, but in answer to the question: if you set
> net.isr.direct=1, then FreeBSD 6.x will run the netisr code in the
> ithread of the network device driver. This will allow the IP forwarding
> and related paths to run in two threads instead of one, potentially
> allowing greater parallelism. Of course, you also potentially contend
> more locks, you may increase the time it takes for the ithread to
> respond to new interrupts, etc, so it's not quite cut and dry, but with
> a workload like the one shown above, it might make quite a difference.

Actually you already replied in the original thread, explaining mostly
the same thing. The whole thread [1] brought up multiple valuable
network performance tuning knobs, such as polling, fastforwarding and
net.isr.direct, but there is no happy end to the thread. Given that this
is a real-world situation, I wanted to know how Marcos resolved his
problem.

BTW, what I understand is that net.isr.direct=1 prevents multiplexing
all packets onto the netisr thread and instead makes the ithread do the
job. In this case, what happens to the netisr thread? Does it still have
some work to do, or is it removed?

Thank you.

Regards,

[1] http://lists.freebsd.org/pipermail/freebsd-net/2006-February/thread.html#9725
--
Jeremie Le Hen
< jeremie at le-hen dot org >< ttz at chchile dot org >
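For completeness, the other two knobs mentioned above map to sysctls as
follows (net.isr.direct is shown earlier in the thread; this is only a
sketch, and polling additionally requires a kernel built with
options DEVICE_POLLING plus driver support):

    # device polling: service interfaces from the clock and idle loop
    # instead of taking an interrupt per packet burst
    sysctl kern.polling.enable=1

    # fast forwarding: forward packets on a shortened IP input path
    sysctl net.inet.ip.fastforwarding=1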
Re: DHCP Over PPPoE
On Thu, Apr 27, 2006 at 02:38:03PM +0530, JOBY THAMPAN wrote:
> > Hi all,
> >
> > I have a setup like this:
> >
> > Linux Machine 1
> >   Eth0 - DHCP server
> >
> > Linux Machine 2
> >   Eth1 - got an IP from the DHCP server
> >   Eth0 - PPPoE server
> >   ppp0 interface formed
> >
> > Linux Machine 3
> >   Eth0 - PPPoE client
> >   Eth1 - IP is 192.168.40.1
> >   ppp0 interface formed
> >   DHCP relay is running on Linux Machine 3
> >
> > Windows Machine 4
> >   Expecting an IP of 192.168.40. after renewing the IP address of the
> >   Windows machine
> >
> > But there is no result.
> >
> > Without the PPPoE interfaces, the Windows machine gets an IP in the
> > 192.168.40. range.
> >
> > Won't the DHCP protocol work over a PPP interface?

Yes. But those are Linux machines, and this is a FreeBSD mailing list,
so you are asking in the wrong place.

Regards,
Brian.
Re: [fbsd] Re: [fbsd] Network performance in a dual CPU system
On Thu, 27 Apr 2006, Jeremie Le Hen wrote:

>> I missed the original thread, but in answer to the question: if you
>> set net.isr.direct=1, then FreeBSD 6.x will run the netisr code in the
>> ithread of the network device driver. This will allow the IP
>> forwarding and related paths to run in two threads instead of one,
>> potentially allowing greater parallelism. Of course, you also
>> potentially contend more locks, you may increase the time it takes for
>> the ithread to respond to new interrupts, etc, so it's not quite cut
>> and dry, but with a workload like the one shown above, it might make
>> quite a difference.
>
> Actually you already replied in the original thread, explaining mostly
> the same thing.

:-)

> BTW, what I understand is that net.isr.direct=1 prevents multiplexing
> all packets onto the netisr thread and instead makes the ithread do the
> job. In this case, what happens to the netisr thread? Does it still
> have some work to do, or is it removed?

Yes -- basically, what this setting does is turn a deferred dispatch of
the protocol-level processing into a direct function invocation. So
instead of inserting the new IP packet into an IP processing queue from
the ethernet code and waking up the netisr, which calls the IP input
routine, we directly call the IP input routine. This has a number of
potentially positive effects:

- Avoid the queue/dequeue operation
- Avoid a context switch
- Allow greater parallelism, since protocol-layer processing is not
  limited to the netisr thread

It also has some downsides:

- Perform more work in the ithread -- since any given thread is limited
  to a single CPU's worth of processing resources, if the link-layer and
  protocol-layer processing add up to more than one CPU, you slow them
  down
- Increase the time it takes to pull packets out of the card -- we
  process each packet to completion rather than pulling them out in sets
  and batching them. This pushes drop-on-overload into the card instead
  of the IP queue, which has some benefits and some costs.

The netisr is still there, and will still be used for certain sorts of
things. In particular, we use the netisr when doing arbitrary
decapsulation, as this places an upper bound on thread stack use. For
example: if you have an IP in IP in IP in IP tunneled packet, and you
always used direct dispatch, then you'd potentially get a deeply nested
stack. By looping it back into the queue and picking it up from the top
level of the netisr dispatch, we avoid nesting the stacks, which could
lead to stack overflow. We don't context switch in that loop, so we
avoid context switch costs. We also use the netisr for loopback network
traffic. So, in short, the netisr is still there, it just has less work
scheduled in it.

Another potential model for increasing parallelism in the input path is
to have multiple netisr threads -- this raises an interesting question
relating to ordering. Right now, we use source ordering -- that is, we
order packets in the network subsystem essentially in the order they
come from a particular source. So we guarantee that if four packets come
in on em0, they get processed in the order they are received from em0.
They may arbitrarily interlace with packets coming from other
interfaces, such as em1, lo0, etc. The reason for the strong source
ordering is that some protocols, TCP in particular, respond really badly
to misordering, which they detect as a loss and force a retransmit for.

If we introduce multiple netisrs naively by simply having the different
threads work from the same IP input queue, then we can potentially pull
packets from the same source into different workers and process them at
different rates, resulting in misordering being introduced. While we'd
process packets with greater parallelism, and hence possibly faster,
we'd toast the end-to-end protocol properties and make everyone really
unhappy.

There are a few common ways people have addressed this -- it's actually
very similar to the link parallelism problem. For example, using bonded
ethernet links, packets are assigned to a particular link based on a
hash of their source address, so that individual streams from the same
source remain in order with respect to themselves. An obvious approach
would be to assign particular ifnets to particular netisrs, since that
would maintain our current source-ordering assumptions but allow the
ithreads and netisrs to float to different CPUs. A catch in this
approach is load balancing: if two ifnets are assigned to the same
netisr, then they can't run in parallel. This line of thought can, and
does, continue. :-) The direct dispatch model maintains source ordering
in a manner similar to having a per-source netisr, which works pretty
well, and also avoids context switches. The main downside is reducing
parallelism between the ithread and the netisr, which for some
configurations can be a big deal (i.e., if ithread uses 60% cpu, and
netisr uses 60%
Re: DHCP Over PPPoE
A few things:

1/ This is a FreeBSD list, so we are not very familiar with Linux.
2/ PPPoE uses PPP, which is a point-to-point protocol and does not
   support broadcast.
3/ DHCP is a broadcast protocol and does not support point-to-point
   networks.
4/ PPP(oE) has its own IP allocation mechanism (see the sketch after the
   quoted message below).

JOBY THAMPAN wrote:
> Hi all,
>
> I have a setup like this:
>
> Linux Machine 1
>   Eth0 - DHCP server
>
> Linux Machine 2
>   Eth1 - got an IP from the DHCP server
>   Eth0 - PPPoE server
>   ppp0 interface formed
>
> Linux Machine 3
>   Eth0 - PPPoE client
>   Eth1 - IP is 192.168.40.1
>   ppp0 interface formed
>   DHCP relay is running on Linux Machine 3
>
> Windows Machine 4
>   Expecting an IP of 192.168.40. after renewing the IP address of the
>   Windows machine
>
> But there is no result.
>
> Without the PPPoE interfaces, the Windows machine gets an IP in the
> 192.168.40. range.
>
> Won't the DHCP protocol work over a PPP interface?
>
> If anyone knows, please reply.
>
> Rgds
> Joby
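To illustrate point 4: with rp-pppoe on the Linux side, the PPPoE server
itself can hand out the peer addresses, so no DHCP relay is needed
across the ppp0 link. This is only a hedged sketch -- the interface
name, addresses and session count are placeholders, not taken from the
setup above:

    # the PPPoE server uses 192.168.40.254 as its own end of each link
    # and assigns clients addresses starting at 192.168.40.100, for up
    # to 64 concurrent sessions
    pppoe-server -I eth0 -L 192.168.40.254 -R 192.168.40.100 -N 64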
Re: New version of iwi(4) - Call for testers [regression!]
On Tuesday 18 April 2006 23:29, Max Laier wrote:
> On Monday 10 April 2006 14:32, Hajimu UMEMOTO wrote:
> Latest version:
> http://people.freebsd.org/~mlaier/new_iwi/20060418.both_nofw.tgz
>
> Thanks to Sam, this should work in IBSS (adhoc) mode now.
>
> Why don't you commit it into HEAD, yet? :)
>
> Will do that after this *LAST* iteration of testing. Please test now -
> you have been warned.

FYI, this has been committed to HEAD now. Please test there and let me
know if you find any remaining problems. MFC scheduled in 4 weeks from
now.

--
/"\  Best regards,                      | [EMAIL PROTECTED]
\ /  Max Laier                          | ICQ #67774661
 X   http://pf4freebsd.love2party.net/  | [EMAIL PROTECTED]
/ \  ASCII Ribbon Campaign              | Against HTML Mail and News