Re: CARP Failover
Sorry for the delay in responding.

> The two firewalls cannot hear each other's CARP announcements. Test
> with tcpdump; do you see the CARP packets coming from the other
> firewall? If not, you have a switching problem, like the two firewalls
> are not in the same VLAN together.

That was exactly the problem. The interfaces on the different nodes were
not in the same VLAN.

-- Mike

Of course, you might discount this possibility, but remember that one in
a million chances happen 99% of the time.
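For anyone hitting the same symptom, here is a minimal sketch of the
tcpdump test described above (the interface name em0 and the carp0
pseudo-interface are placeholders for whatever your setup actually uses):

  # CARP advertisements are IP protocol 112, sent to multicast 224.0.0.18.
  # Run this on each firewall; you should see announcements from the peer.
  tcpdump -npi em0 ip proto 112

  # If nothing arrives, compare the CARP configuration on both sides.
  ifconfig carp0

If the advertisements never show up, the problem is almost always in the
switching layer (wrong VLAN, blocked multicast), as it turned out to be here.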
Re: em driver, 82574L chip, and possibly ASPM
So has everyone that wanted to get something testing been able to do so?

Jack

On Tue, Feb 1, 2011 at 7:03 PM, Mike Tancsa wrote:
> On 2/1/2011 5:03 PM, Sean Bruno wrote:
> > On Tue, 2011-02-01 at 13:43 -0800, Jack Vogel wrote:
> >> To those who are going to test, here is the if_em.c, based on head,
> >> with my changes, I have to leave for the afternoon, and have not had
> >> a chance to build this, but it should work. I will check back in the
> >> later evening.
> >>
> >> Any blatant problems Sean, feel free to fix them :)
> >>
> >> Jack
> >
> > I suspect that line 1490 should be:
> > if (more_rx || (ifp->if_drv_flags & IFF_DRV_OACTIVE)) {
>
> I have hacked up a RELENG_8 version which I think is correct including
> the above change
>
> http://www.tancsa.com/if_em-8.c
>
> --- if_em.c.orig    2011-02-01 21:47:14.0 -0500
> +++ if_em.c         2011-02-01 21:47:19.0 -0500
> @@ -30,7 +30,7 @@
>    POSSIBILITY OF SUCH DAMAGE.
>
>
>  **/
> -/*$FreeBSD: src/sys/dev/e1000/if_em.c,v 1.21.2.20 2011/01/22 01:37:53
> jfv Exp $*/
> +/*$FreeBSD$*/
>
>  #ifdef HAVE_KERNEL_OPTION_HEADERS
>  #include "opt_device_polling.h"
> @@ -93,7 +93,7 @@
>  /*
>   * Driver version:
>   */
> -char em_driver_version[] = "7.1.9";
> +char em_driver_version[] = "7.1.9-test";
>
>  /*
>   * PCI Device ID Table
> @@ -927,11 +927,10 @@
>      if (!adapter->link_active)
>          return;
>
> -    /* Call cleanup if number of TX descriptors low */
> -    if (txr->tx_avail <= EM_TX_CLEANUP_THRESHOLD)
> -        em_txeof(txr);
> -
>      while (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) {
> +        /* First cleanup if TX descriptors low */
> +        if (txr->tx_avail <= EM_TX_CLEANUP_THRESHOLD)
> +            em_txeof(txr);
>          if (txr->tx_avail < EM_MAX_SCATTER) {
>              ifp->if_drv_flags |= IFF_DRV_OACTIVE;
>              break;
> @@ -1411,8 +1410,7 @@
>          if (!drbr_empty(ifp, txr->br))
>              em_mq_start_locked(ifp, txr, NULL);
>  #else
> -        if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd))
> -            em_start_locked(ifp, txr);
> +        em_start_locked(ifp, txr);
>  #endif
>          EM_TX_UNLOCK(txr);
>
> @@ -1475,11 +1473,10 @@
>      struct ifnet    *ifp = adapter->ifp;
>      struct tx_ring  *txr = adapter->tx_rings;
>      struct rx_ring  *rxr = adapter->rx_rings;
> -    bool    more;
> -
>
>      if (ifp->if_drv_flags & IFF_DRV_RUNNING) {
> -        more = em_rxeof(rxr, adapter->rx_process_limit, NULL);
> +        bool    more_rx;
> +        more_rx = em_rxeof(rxr, adapter->rx_process_limit, NULL);
>
>          EM_TX_LOCK(txr);
>          em_txeof(txr);
> @@ -1487,12 +1484,10 @@
>          if (!drbr_empty(ifp, txr->br))
>              em_mq_start_locked(ifp, txr, NULL);
>  #else
> -        if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd))
> -            em_start_locked(ifp, txr);
> +        em_start_locked(ifp, txr);
>  #endif
> -        em_txeof(txr);
>          EM_TX_UNLOCK(txr);
> -        if (more) {
> +        if (more_rx || (ifp->if_drv_flags & IFF_DRV_OACTIVE)) {
>              taskqueue_enqueue(adapter->tq, &adapter->que_task);
>              return;
>          }
> @@ -1604,7 +1599,6 @@
>          if (!IFQ_DRV_IS_EMPTY(&ifp->if_snd))
>              em_start_locked(ifp, txr);
>  #endif
> -        em_txeof(txr);
>          E1000_WRITE_REG(&adapter->hw, E1000_IMS, txr->ims);
>          EM_TX_UNLOCK(txr);
>  }
> @@ -3730,17 +3724,17 @@
>          txr->queue_status = EM_QUEUE_HUNG;
>
>      /*
> -     * If we have enough room, clear IFF_DRV_OACTIVE
> +     * If we have a minimum free, clear IFF_DRV_OACTIVE
>       * to tell the stack that it is OK to send packets.
>       */
> -    if (txr->tx_avail > EM_TX_CLEANUP_THRESHOLD) {
> +    if (txr->tx_avail > EM_MAX_SCATTER)
>          ifp->if_drv_flags &= ~IFF_DRV_OACTIVE;
> -        /* Disable watchdog if all clean */
> -        if (txr->tx_avail == adapter->num_tx_desc) {
> -            txr->queue_status = EM_QUEUE_IDLE;
> -            return (FALSE);
> -        }
> -    }
> +
> +    /* Disable watchdog if all clean */
> +    if (txr->tx_avail == adapter->num_tx_desc) {
> +        txr->queue_status = EM_QUEUE_IDLE;
> +        return (FALSE);
> +    }
>
>      return (TRUE);
>  }
> @@ -5064,8 +5058,8 @@
>      char namebuf[QUEUE_NAME_LEN];
>
>      /* Driver Statistics */
> -    SYSCTL_ADD_UINT(ctx, child, OID_AUTO, "li
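For anyone else who wants to try the test driver, a rough sketch of
dropping the patched if_em.c into a RELENG_8 tree (paths assume a standard
/usr/src checkout; if em is compiled into your kernel, as in GENERIC,
rebuild and install the kernel instead of just the module):

  # back up the stock driver, then replace it with the RELENG_8 test version
  cp /usr/src/sys/dev/e1000/if_em.c /usr/src/sys/dev/e1000/if_em.c.orig
  fetch -o /usr/src/sys/dev/e1000/if_em.c http://www.tancsa.com/if_em-8.c

  # rebuild just the em module and install it
  cd /usr/src/sys/modules/em
  make clean && make && make install

  # reboot into the new driver (or unload/reload if_em if it is a module)
  shutdown -r now

This is only a sketch, not a blessed procedure; verify that the driver
version reported at boot changes to 7.1.9-test before drawing conclusions
from the test.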
Re: A flood of bacula traffic causes igb interface to go offline.
On Tue, 2011-02-01 at 12:50 -0800, Mike Carlson wrote:
> Hey net@,
>
> I have a FreeBSD 8.2-RC2 system running on a HP DL180 G6, using the
> onboard Intel controller, and it is our primary Bacula storage node and
> director node.
>
> We have 96 clients that are scheduled to run at 8:30pm. After about 9-10
> minutes of activity (mrtg graphs show about 50-60MB/sec incoming
> traffic), the igb1 interface is no longer able to communicate with the
> Cisco switch.
>
> The interesting part is, the interface is still "up", there is nothing
> in the kernel message buffer, and nothing relevant in the log file (just
> syslogd and ldap errors because they cannot reach their respective
> network servers). The system does not respond on the network again until
> I either reboot, or run 'ifconfig igb1 down ; ifconfig igb1 up'. There is
> no firewall loaded/configured.
>
> Thankfully, I have a KVM over IP, so when this happens I can at least
> run script(1) and capture some useful information.
>
> ifconfig igb1
> igb1: flags=8843 metric 0 mtu 1500
>         options=1bb
>         ether 1c:c1:de:e9:fb:af
>         inet 128.15.136.105 netmask 0xff00 broadcast 128.15.136.255
>         inet 128.15.136.108 netmask 0xff00 broadcast 128.15.136.255
>         inet 128.15.136.102 netmask 0xff00 broadcast 128.15.136.255
>         media: Ethernet autoselect (1000baseT)
>         status: active
>
> I can ping the internal IP (but I realize that is probably a useless
> test...)
>
> root@write /etc]> ping 128.15.136.105
> PING 128.15.136.105 (128.15.136.105): 56 data bytes
> 64 bytes from 128.15.136.105: icmp_seq=0 ttl=64 time=0.024 ms
> 64 bytes from 128.15.136.105: icmp_seq=1 ttl=64 time=0.015 ms
> ^C
> --- 128.15.136.105 ping statistics ---
> 2 packets transmitted, 2 packets received, 0.0% packet loss
> round-trip min/avg/max/stddev = 0.015/0.019/0.024/0.005 ms
>
> Attempting to ping the router:
>
> root@write /etc]> ping 128.15.136.254
> PING 128.15.136.254 (128.15.136.254): 56 data bytes
> ping: sendto: Host is down
> ping: sendto: Host is down
> ping: sendto: Host is down
> ping: sendto: Host is down
> ^C
> --- 128.15.136.254 ping statistics ---
> 9 packets transmitted, 0 packets received, 100.0% packet loss
>
> The only thing that seems to solve this problem is to either reboot, or
> do an "ifconfig down/up":
>
> root@write /etc]> ifconfig igb1 down
> root@write /etc]> ifconfig igb1 up
> root@write /etc]> ping 128.15.136.254
> PING 128.15.136.254 (128.15.136.254): 56 data bytes
> 64 bytes from 128.15.136.254: icmp_seq=1 ttl=255 time=1.015 ms
> 64 bytes from 128.15.136.254: icmp_seq=2 ttl=255 time=0.217 ms
> 64 bytes from 128.15.136.254: icmp_seq=3 ttl=255 time=0.278 ms
> 64 bytes from 128.15.136.254: icmp_seq=4 ttl=255 time=0.238 ms
> ^C
> --- 128.15.136.254 ping statistics ---
> 5 packets transmitted, 4 packets received, 20.0% packet loss
> round-trip min/avg/max/stddev = 0.217/0.437/1.015/0.334 ms
>
> I was able to run tcpdump during all of this, and it showed *nothing*
> between the system and the switch until I ran ifconfig igb1 down/up, and
> then you see the CDP and Spanning Tree traffic.
>
> The networking team here has told me there are no errors on the switch,
> or the port I am on, and they even moved me from one port to another,
> but this is still happening on a fairly regular basis now that I've
> added more backup clients.
>
> Is this a possible bug with my hardware and the intel driver? I have a
> pcap file and more system information that might provide a lot more
> information, but I don't want to send that out to a mailing list.
You may want to pay attention to the current discussions regarding the
intel driver (em and igb).

Can you post the output of lspci -vvv ?

Sean
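(For anyone following along: lspci is not in the FreeBSD base system, so
the same information can be pulled either from the base pciconf utility
or from lspci installed via the sysutils/pciutils port. A rough sketch,
assuming the port is available:

  # base system: list devices with capabilities and BARs
  pciconf -lvbc

  # or, after installing sysutils/pciutils from ports/packages
  lspci -vvv

Either should show the PCI Express capability block for the em/igb
devices being discussed; lspci -vvv in particular reports the ASPM
link-control bits.)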
Re: A flood of bacula traffic causes igb interface to go offline.
On 02/02/2011 10:07 AM, Sean Bruno wrote:
> On Tue, 2011-02-01 at 12:50 -0800, Mike Carlson wrote:
> [...]
>
> You may want to pay attention to the current discussions regarding the
> intel driver (em and igb). Can you post the output of lspci -vvv ?
>
> Sean

Hey Sean, thanks for the reply!

Yeah, I've noticed a lot of the em/igb emails going around, and I'm
curious if it is related or something different altogether.

Here is the output from pciconf -lvbc:

hostb0@pci0:0:0:0:    class=0x06 card=0x330b103c chip=0x34068086 rev=0x13 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'QuickPath Architecture I/O Hub to ESI Port'
    class      = bridge
    subclass   = HOST-PCI
    cap 05[60] = MSI supports 2 messages, vector masks
    cap 10[90] = PCI-Express 2 root port max data 128(128) link x4(x4)
    cap 01[e0] = powerspec 3  supports D0 D3  current D0
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 0 corrected
    ecap 000d[150] = unknown 1
    ecap 000b[160] = unknown 0
pcib1@pci0:0:1:0:    class=0x060400 card=0x330b103c chip=0x34088086 rev=0x13 hdr=0x01
    vendor     = 'Intel Corporation'
    device     = 'QuickPath Architecture I/O Hub PCI Express Root Port 1'
    class      = bridge
    subclass   = PCI-PCI
    cap 0d[40] = PCI Bridge card=0x330b103c
    cap 05[60] = MSI supports 2 messages, vector masks
    cap 10[90] = PCI-Express 2 root port max data 256(256) link x4(x4)
    cap 01[e0] = powerspec 3  supports D0 D3  current D0
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 0 corrected
    ecap 000d[150] = unknown 1
    ecap 000b[160] = unknown 0
pcib2@pci0:0:3:0:c
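One thing that might help narrow this down: the next time igb1 wedges,
capture some driver and mbuf state over the KVM before bouncing the
interface. A rough sketch (igb1 / dev.igb.1 are just the names used in
this thread; adjust for the actual unit):

  # interrupt rates per queue; a stuck or storming MSI-X vector shows here
  vmstat -i

  # mbuf and cluster usage; "requests denied" climbing suggests exhaustion
  netstat -m

  # per-device counters and debug sysctls exported by igb(4)
  sysctl dev.igb.1

Comparing those numbers against a healthy run makes it much easier to
tell a driver/queue hang apart from a switch-side problem.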
Re: em driver, 82574L chip, and possibly ASPM
On 2/2/2011 12:37 PM, Jack Vogel wrote:
> So has everyone that wanted to get something testing been able to do so?

I have been testing in the back and will deploy to my production box this
afternoon. As I am not able to reproduce it easily, it will be a bit
before I can say the issue is gone. Jan, however, was able to trigger it
with greater ease?

        ---Mike

> Jack
>
> On Tue, Feb 1, 2011 at 7:03 PM, Mike Tancsa wrote:
>> On 2/1/2011 5:03 PM, Sean Bruno wrote:
>> [...]
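Before drawing conclusions from the production deployment, it is probably
worth confirming the test build is actually the one running. A quick
sketch (the unit number 0 is just an example; the em driver includes its
version string in the device description):

  # should report the 7.1.9-test version if the new module took effect
  sysctl dev.em.0.%desc
  dmesg | grep '^em0'

If it still reports 7.1.9, the old module or a statically compiled driver
is still in use.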
Re: kern/146792: [flowtable] flowcleaner 100% cpu's core load
The following reply was made to PR kern/146792; it has been noted by GNATS.

From: Mark Boolootian
To: bug-follo...@freebsd.org, n...@gtelecom.ru
Cc:
Subject: Re: kern/146792: [flowtable] flowcleaner 100% cpu's core load
Date: Wed, 2 Feb 2011 11:37:16 -0800

Hi folks,

I hit this problem on a pair of anycast name servers. What I'm running:

FreeBSD ns1.example.com 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Mon Jul 19
02:36:49 UTC 2010
r...@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64

Here's a peek at ps:

ns1b# ps auxwww | head
USER     PID %CPU %MEM   VSZ   RSS  TT  STAT STARTED        TIME COMMAND
root      11 100.0  0.0     0    32  ??  RL   11Jan11 59960:15.43 [idle]
root      21 100.0  0.0     0    16  ??  RL   11Jan11  1112:01.24 [flowcleaner]
root       0   0.0  0.0     0    96  ??  DLs  11Jan11     0:02.94 [kernel]
root       1   0.0  0.0  3204   556  ??  ILs  11Jan11     0:00.01 /sbin/init --
root       2   0.0  0.0     0    16  ??  DL   11Jan11     0:52.87 [g_event]
root       3   0.0  0.0     0    16  ??  DL   11Jan11     0:10.10 [g_up]
root       4   0.0  0.0     0    16  ??  DL   11Jan11     0:15.18 [g_down]
root       5   0.0  0.0     0    16  ??  DL   11Jan11     0:00.00 [mpt_recovery0]

The box is running Quagga with a single OSPF adjacency. It has about 500
routes.

Both anycast instances of ns1 hit this problem, but neither instance of
ns2, which are configured identically, saw the trouble. The ns1 name
servers are much busier than ns2.

It appears that one instance of ns1 died almost a week ago, which went
unnoticed :-( This morning, the second instance died. At that point, it
was hard not to notice :-)

Traffic on the mailing list suggests that 'sysctl
net.inet.flowtable.enable=0' is a work-around. We'll pursue that path and
hope for a bug fix in the not-too-distant future.

thanks,
mark
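For anyone who just needs their box back while the bug is chased down, a
minimal sketch of the work-around mentioned above (the sysctl name is
taken from this thread; the kernel-config step only applies if you build
custom kernels):

  # disable the flowtable at runtime
  sysctl net.inet.flowtable.enable=0

  # make it persistent across reboots
  echo 'net.inet.flowtable.enable=0' >> /etc/sysctl.conf

Removing "options FLOWTABLE" from a custom kernel configuration and
rebuilding is the more permanent alternative, at the cost of whatever
route-lookup caching the flowtable was providing.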
Re: kern/146792: [flowtable] flowcleaner 100% cpu's core load
The following reply was made to PR kern/146792; it has been noted by GNATS.

From: "Bjoern A. Zeeb"
To: bug-follo...@freebsd.org, n...@gtelecom.ru
Cc:
Subject: Re: kern/146792: [flowtable] flowcleaner 100% cpu's core load
Date: Wed, 2 Feb 2011 20:36:04 +0000 (UTC)

Hi,

things could be improved in HEAD and stable/8 (as of r217168 [1]).
Please test and report back.

/bz

[1] http://svn.freebsd.org/viewvc/base?view=revision&revision=217168

--
Bjoern A. Zeeb                          You have to have visions!
Going to jail sucks -- All my daemons like it!
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails.html
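A rough sketch of picking up that revision on a stable/8 box for testing
(this assumes an existing svn checkout of stable/8 in /usr/src; the
KERNCONF value is whatever kernel configuration you normally build):

  # update the stable/8 source tree to at least r217168
  cd /usr/src
  svn update -r 217168

  # rebuild and install the kernel, then reboot into it
  make buildkernel KERNCONF=GENERIC
  make installkernel KERNCONF=GENERIC
  shutdown -r now

After rebooting, re-running the workload that previously drove
flowcleaner to 100% CPU is the quickest way to confirm whether r217168
helps.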
IPFW extension for traffic classification based on statistical properties
Hi all,

We believe this may be of some interest to list members, and apologise in
advance for any duplicates you may receive.

We are pleased to announce DIFFUSE v0.2, our second release of a system
enabling FreeBSD's IPFW firewall subsystem to classify IP traffic based
on statistical traffic properties and separate flow classification and
treatment.

This release contains a number of bug fixes as well as a number of new
features. Most notably version 0.2 contains tools to build classifier
models, and a feature module and classifier model to classify Skype
traffic. Furthermore, there is a Linux version of DIFFUSE now.

The project site is http://caia.swin.edu.au/urp/diffuse and the source
code can be downloaded directly from
http://caia.swin.edu.au/urp/diffuse/downloads.html.

The software was developed as part of the DIFFUSE research project at
Swinburne University's Centre for Advanced Internet Architectures. The
project has been made possible in part by a grant from the Cisco
University Research Program Fund at Community Foundation Silicon Valley.

We welcome your feedback and hope you enjoy playing with the code and
tools.

Cheers,
Sebastian Zander and Grenville Armitage
http://caia.swin.edu.au
About "panic: bufwrite: buffer is not busy???"
On 02.02.2011 00:50, Gleb Smirnoff wrote:
> E> Uptime: 8h3m51s
> E> Dumping 4087 MB (3 chunks)
> E> chunk 0: 1MB (150 pages) ... ok
> E> chunk 1: 3575MB (915088 pages) 3559 3543panic: bufwrite: buffer is not busy???
> E> cpuid = 3
> E> Uptime: 8h3m52s
> E> Automatic reboot in 15 seconds - press a key on the console to abort
>
> Can you add the KDB_TRACE option to the kernel? Your boxes for some
> reason can't dump core, but with this option we will at least have a
> trace.

I see Mike Tancsa's box has the "bufwrite: buffer is not busy???" problem
too. Does anyone have a thought on how to fix the generation of
crashdumps?

Eugene Grosbein
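For reference, a rough sketch of the pieces involved in getting at least a
trace, and ideally a crashdump, out of a box like this (the kernel config
name MYKERNEL and the amd64 path are assumptions; adjust to the actual
setup):

  # add the debugger/trace options Gleb mentions to the kernel config:
  # KDB is the debugger framework, KDB_TRACE prints a stack trace on
  # panic, DDB gives an interactive debugger on the console
  printf 'options KDB\noptions KDB_TRACE\noptions DDB\n' \
      >> /usr/src/sys/amd64/conf/MYKERNEL

  # make sure a dump device is configured so savecore can collect the dump
  echo 'dumpdev="AUTO"'       >> /etc/rc.conf
  echo 'dumpdir="/var/crash"' >> /etc/rc.conf

  # rebuild and install the kernel with the new config, then reboot
  cd /usr/src
  make buildkernel KERNCONF=MYKERNEL
  make installkernel KERNCONF=MYKERNEL
  shutdown -r now

With KDB_TRACE in place, the next "bufwrite: buffer is not busy???" panic
should at least leave a backtrace on the console even if the dump itself
still fails.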