regression in dc(4) from 7.2 to RELENG_8
one of our users has reported a regression in dc(4) on RELENG_8, the cards work fine on 7.2 and previous versions, but no longer function at all with RELENG_8 as of about a week ago.
http://forum.pfsense.org/index.php/topic,24964.msg129488.html#msg129488

dmesg from it working, from 7.2:
cbb0: at device 11.0 on pci0
cardbus0: on cbb0
pccard0: <16-bit PCCard bus> on cbb0
cbb0: [ITHREAD]
cbb1: at device 11.1 on pci0
cardbus1: on cbb1
pccard1: <16-bit PCCard bus> on cbb1
cbb1: [ITHREAD]
dc0: port 0x1080-0x10ff mem 0x8800-0x880007ff,0x88001000-0x880017ff irq 11 at device 0.0 on cardbus0
miibus1: on dc0
tdkphy0: PHY 0 on miibus1
tdkphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc0: Ethernet address: 00:xx:xx:xx:xx:56
dc0: [ITHREAD]
dc1: port 0x1100-0x117f mem 0x88002000-0x880027ff,0x88003000-0x880037ff irq 11 at device 0.0 on cardbus1
miibus2: on dc1
tdkphy1: PHY 0 on miibus2
tdkphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc1: Ethernet address: 00:xx:xx:xx:xx:66
dc1: [ITHREAD]

Not working, RELENG_8:
cbb0: at device 11.0 on pci0
cardbus0: on cbb0
pccard0: <16-bit PCCard bus> on cbb0
cbb0: [FILTER]
cbb1: at device 11.1 on pci0
cardbus1: on cbb1
pccard1: <16-bit PCCard bus> on cbb1
cbb1: [FILTER]
cardbus0: Unable to allocate resource to read CIS.
cardbus0: Unable to allocate resources for CIS
cardbus0: Unable to allocate resource to read CIS.
cardbus0: Unable to allocate resources for CIS
dc0: port 0x1080-0x10ff mem 0x8800-0x880007ff,0x88001000-0x880017ff irq 11 at device 0.0 on cardbus0
dc0: No station address in CIS!
device_attach: dc0 attach returned 6
cardbus1: Unable to allocate resource to read CIS.
cardbus1: Unable to allocate resources for CIS
cardbus1: Unable to allocate resource to read CIS.
cardbus1: Unable to allocate resources for CIS
dc1: port 0x1080-0x10ff mem 0x88002000-0x880027ff,0x88003000-0x880037ff irq 11 at device 0.0 on cardbus1
dc1: No station address in CIS!
device_attach: dc1 attach returned 6

We can apply patches to our builds for this person and others to test and confirm the fix, before it's committed into FreeBSD.

Chris
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: kern/146534: [icmp6] wrong source address in echo reply
Old Synopsis: [icmpv6] wrong source address in echo reply
New Synopsis: [icmp6] wrong source address in echo reply

Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Fri May 14 08:54:45 UTC 2010
Responsible-Changed-Why: Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=146534
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re:Re: convert Windows NDIS drivers for use with FreeBSD
Yes, I use ndisgen(8) instead. With the netathw.inf and athw.sys files as input, it reports:
segmentation fault (core dumped)
CONVERSION FAILED

On 2010-05-08 20:20:46, "Paul B Mahol" wrote:
>On 5/7/10, jiani1012 wrote:
>> Hi all,
>> I am using xp3264-7.7.0.329-whql.zip file from Atheros.
>> #cd /sys/modules/ndis
>> #make install
>> #cd /sys/modules/if_ndis
>> #make install
>> #ndiscvt -i netathwx.inf -s athwx.sys -o ndis_driver_data.h
>> (syntax error)
>> When trying to convert the ones athwx.sys and netathwx.inf I am getting
>> the error:
>> > ndiscvt: line 5117: : syntax error.
>> > CONVERSION FAILED
>> same for netathw.inf athw.sys
>> How to do it?
>> Thank you in advance!
>>
>> Jeny
>
>Why are you not using ndisgen(8)?
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: regression in dc(4) from 7.2 to RELENG_8
on 14/05/2010 09:42 Chris Buechler said the following:
> one of our users has reported a regression in dc(4) on RELENG_8, the
> cards work fine on 7.2 and previous versions, but no longer function at
> all with RELENG_8 as of about a week ago.
> http://forum.pfsense.org/index.php/topic,24964.msg129488.html#msg129488

Perhaps this might be a cardbus issue (or even a more general issue) rather than a dc(4) issue.
But first please try this patch reversed:

--- a/sys/dev/dc/if_dc.c
+++ b/sys/dev/dc/if_dc.c
@@ -331,7 +331,6 @@ static driver_t dc_driver = {
 static devclass_t dc_devclass;
-DRIVER_MODULE(dc, cardbus, dc_driver, dc_devclass, 0, 0);
 DRIVER_MODULE(dc, pci, dc_driver, dc_devclass, 0, 0);
 DRIVER_MODULE(miibus, dc, miibus_driver, miibus_devclass, 0, 0);

> dmesg from it working, from 7.2:
> cbb0: at device 11.0 on pci0
> cardbus0: on cbb0
> pccard0: <16-bit PCCard bus> on cbb0
> cbb0: [ITHREAD]
> cbb1: at device 11.1 on pci0
> cardbus1: on cbb1
> pccard1: <16-bit PCCard bus> on cbb1
> cbb1: [ITHREAD]
> dc0: port 0x1080-0x10ff mem
> 0x8800-0x880007ff,0x88001000-0x880017ff irq 11 at device 0.0 on
> cardbus0
> miibus1: on dc0
> tdkphy0: PHY 0 on miibus1
> tdkphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
> dc0: Ethernet address: 00:xx:xx:xx:xx:56
> dc0: [ITHREAD]
> dc1: port 0x1100-0x117f mem
> 0x88002000-0x880027ff,0x88003000-0x880037ff irq 11 at device 0.0 on
> cardbus1
> miibus2: on dc1
> tdkphy1: PHY 0 on miibus2
> tdkphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
> dc1: Ethernet address: 00:xx:xx:xx:xx:66
> dc1: [ITHREAD]
>
> Not working, RELENG_8:
> cbb0: at device 11.0 on pci0
> cardbus0: on cbb0
> pccard0: <16-bit PCCard bus> on cbb0
> cbb0: [FILTER]
> cbb1: at device 11.1 on pci0
> cardbus1: on cbb1
> pccard1: <16-bit PCCard bus> on cbb1
> cbb1: [FILTER]
> cardbus0: Unable to allocate resource to read CIS.
> cardbus0: Unable to allocate resources for CIS
> cardbus0: Unable to allocate resource to read CIS.
> cardbus0: Unable to allocate resources for CIS
> dc0: port 0x1080-0x10ff mem
> 0x8800-0x880007ff,0x88001000-0x880017ff irq 11 at device 0.0 on
> cardbus0
> dc0: No station address in CIS!
> device_attach: dc0 attach returned 6
> cardbus1: Unable to allocate resource to read CIS.
> cardbus1: Unable to allocate resources for CIS
> cardbus1: Unable to allocate resource to read CIS.
> cardbus1: Unable to allocate resources for CIS
> dc1: port 0x1080-0x10ff mem
> 0x88002000-0x880027ff,0x88003000-0x880037ff irq 11 at device 0.0 on
> cardbus1
> dc1: No station address in CIS!
> device_attach: dc1 attach returned 6
>
>
> We can apply patches to our builds for this person and others to test
> and confirm the fix, before it's committed into FreeBSD.
>
> Chris
>
> ___
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
>

--
Andriy Gapon
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: sockstat / netstat output 8.x vs 7.x
On Tue, 11 May 2010 13:24:02 -0700 Julian Elischer wrote: JE> On 5/11/10 12:20 PM, Wes Peters wrote: >> The output header is instructive: >> >> USER COMMANDPID FD PROTO LOCAL ADDRESS FOREIGN ADDRESS >> www httpd 18423 3 tcp4 6 *:80 *:* >> www httpd 18423 4 tcp4 *:* *:* >> www httpd 25184 3 tcp4 6 *:80 *:* >> www httpd 25184 4 tcp4 *:* *:* >> >> Same as 7, it's the foreign address. This is normally only useful for >> connected sockets. >> >> On Tue, May 11, 2010 at 11:14 AM, Mike Tancsa wrote: >>> [trying on freebsd-net since no response on stable] >>> >>> I noticed that apache on RELENG_8 and RELENG_7 shows up with output I cant >>> seem to understand from sockstat -l and netstat -naW >>> >>> On RELENG_7, sockstat -l makes sense to me >>> >>> www httpd 83005 4 tcp4 *:443 *:* >>> www httpd 82217 3 tcp4 *:80 *:* >>> www httpd 82217 4 tcp4 *:443 *:* >>> www httpd 38942 3 tcp4 *:80 *:* >>> www httpd 38942 4 tcp4 *:443 *:* >>> root httpd 1169 3 tcp4 *:80 *:* >>> root httpd 1169 4 tcp4 *:443 *:* >>> >>> >>> various processes listening on all bound IP addresses on ports 80 and 443. >>> >>> On RELENG_8 however, it shows up with an extra entry (at the end) >>> >>> www httpd 29005 4 tcp4 *:* *:* >>> www httpd 29004 3 tcp4 6 *:80 *:* >>> www httpd 29004 4 tcp4 *:* *:* >>> www httpd 29003 3 tcp4 6 *:80 *:* >>> www httpd 29003 4 tcp4 *:* *:* >>> www httpd 66731 3 tcp4 6 *:80 *:* >>> www httpd 66731 4 tcp4 *:* *:* >>> root httpd 72197 3 tcp4 6 *:80 *:* >>> root httpd 72197 4 tcp4 *:* *:* >>> >>> >>> *:80 makes sense to me... process is listening on all IPs for port 80. >>> What >>> does *:* mean then ? JE> I believe it has created a socket but not used it for anything JE> it may be the 6 socket... otherwise I don't see what a "tcp4 6" is JE> meant to be. Comparing RELENG_8 and RELENG_7 outputs it might be for https, which looks like is not configured on RELENG_8 host. I think socket() was called but no any other actions with the socket was performed. >>> >>> Netstat gives a slightly different version of it >>> >>> Active Internet connections (including servers) >>> Proto Recv-Q Send-Q Local Address Foreign Address (state) >>> tcp4 0 0 *.1984 *.*LISTEN >>> tcp4 0 0 *.**.*CLOSED >>> tcp46 0 0 *.80 *.*LISTEN >>> >>> state closed ? You can reproduce this with this simple program: zhuzha:~/src/test_socket% cat test.c #include #include #include #include #include int main(int argc, char **argv) { int sockfd; if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) errx(1, "socket error"); sleep(60); return 0; } zhuzha:~/src/test_socket% make cc -g -O0 -Wall test.c -o test zhuzha:~/src/test_socket% ./test& [1] 56076 zhuzha:~/src/test_socket% sockstat|grep test golubtest 56076 3 tcp4 *:* *:* zhuzha:~/src/test_socket% netstat -na |grep CLOSED tcp4 0 0 *.**.*CLOSED -- Mikolaj Golub ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
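For reference, a compilable version of Mikolaj's test program. The header file names did not survive archiving, so the includes below are an assumption (any standard set that declares socket(2), errx(3) and sleep(3) will do); the program body is as posted.

#include <sys/types.h>
#include <sys/socket.h>

#include <err.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        int sockfd;

        /* Create a TCP socket but never bind(2), listen(2) or connect(2) it. */
        if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
                errx(1, "socket error");

        /* Keep the process alive so sockstat/netstat can observe the socket. */
        sleep(60);
        return (0);
}

Running it and then checking sockstat and "netstat -na" reproduces the "*:* *:*" and CLOSED entries shown above for a socket that has been created but never used.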
Re: Intel 10Gb
On Tue, May 11, 2010 at 9:51 AM, Andrew Gallatin wrote: > Murat Balaban [mu...@enderunix.org] wrote: >> >> Much of the FreeBSD networking stack has been made parallel in order to >> cope with high packet rates at 10 Gig/sec operation. >> >> I've seen good numbers (near 10 Gig) in my tests involving TCP/UDP >> send/receive. (latest Intel driver). >> >> As far as BPF is concerned, above statement does not hold true, >> since there is some work that needs to be done here in terms >> of BPF locking and parallelism. My tests show that there >> is a high lock contention around "bpf interface lock", resulting >> in input errors at high packet rates and with many bpf devices. > > If you're interested in 10GbE packet sniffing at line rate on the > cheap, have a look at the Myri10GE "sniffer" interface. This is a > special software package that takes a normal mxge(4) NIC, and replaces > the driver/firmware with a "myri_snf" driver/firmware which is > optimized for packet sniffing. > > Using this driver/firmware combo, we can receive minimal packets at > line rate (14.8Mpps) to userspace. You can even access this using a > libpcap interface. The trick is that the fast paths are OS-bypass, > and don't suffer from OS overheads, like lock contention. See > http://www.myri.com/scs/SNF/doc/index.html for details. But your timestamps will be atrocious at 10G speeds. Myricom doesn't timestamp packets AFAIK. If you want reliable timestamps you need to look at companies like Endace, Napatech, etc. We do a lot of packet capture and work on bpf(4) all the time. My biggest concern for reliable 10G packet capture is timestamps. The call to microtime up in catchpacket() is not going to cut it (it barely cuts it for GIGE line rate speeds). I'd be interested in doing the multi-queue bpf(4) myself (perhaps I should ask? I don't know if non-summer-of-code folks are allowed?). I believe the goal is not so much throughput but cache affinity. It would be nice if say the listener application (libpcap) could bind itself to the same core that the driver's queue is receiving packets on so everything from catching to post-processing all work with a very warm cache (theoretically). I think that's the idea. It would also allow multiple applications to subscribe to potentially different queues that are doing some form of load balancing. Again, Intel's 82599 chipset supports flow based queues (albeit the size of the flow table is limited). Note, zero-copy bpf(4) is your friend in all use cases at 10G speeds! :) -aps PS I am not sure but Intel also supports writing packets directly in cache (yet I thought the 82599 driver actually does a prefetch anyway which had me confused on why that helps) ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
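The userland half of that affinity idea is already available on FreeBSD through cpuset(2). A minimal sketch of how a capture process could pin itself to the core servicing a given NIC queue; the queue-to-core mapping ("core" below) is assumed to come from out-of-band knowledge (e.g. how the driver's MSI-X interrupts were bound), since no standard interface exports it today.

#include <sys/param.h>
#include <sys/cpuset.h>

#include <err.h>

static void
bind_to_core(int core)
{
        cpuset_t mask;

        CPU_ZERO(&mask);
        CPU_SET(core, &mask);

        /* Restrict the current process (-1 == current pid) to the chosen CPU. */
        if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
            sizeof(mask), &mask) != 0)
                err(1, "cpuset_setaffinity");
}

Calling this before opening the bpf/pcap descriptor at least gives the driver receive path, the bpf buffers and the libpcap consumer a chance of staying on one warm cache, which is the point made above.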
Re: Intel 10Gb
Alexander Sack wrote: <...> >> Using this driver/firmware combo, we can receive minimal packets at >> line rate (14.8Mpps) to userspace. You can even access this using a >> libpcap interface. The trick is that the fast paths are OS-bypass, >> and don't suffer from OS overheads, like lock contention. See >> http://www.myri.com/scs/SNF/doc/index.html for details. > > But your timestamps will be atrocious at 10G speeds. Myricom doesn't > timestamp packets AFAIK. If you want reliable timestamps you need to > look at companies like Endace, Napatech, etc. I see your old help ticket in our system. Yes, our timestamping is not as good as a dedicated capture card with a GPS reference, but it is good enough for most people. > PS I am not sure but Intel also supports writing packets directly in > cache (yet I thought the 82599 driver actually does a prefetch anyway > which had me confused on why that helps) You're talking about DCA. We support DCA as well (and I suspect some other 10G NICs do to). There are a few barriers to using DCA on FreeBSD, not least of which is that FreeBSD doesn't currently have the infrastructure to support it (no IOATDMA or DCA drivers). DCA is also problematic because support from system/motherboard vendors is very spotty. The vendor must provide the correct tag table in BIOS such that the tags match the CPU/core numbering in the system. Many motherboard vendors don't bother with this, and you cannot enable DCA on a lot of systems, even though the underlying chipset supports DCA. I've done hacks to force-enable it in the past, with mixed results. The problem is that DCA depends on having the correct tag table, so that packets can be prefetched into the correct CPU's cache. If the tag table is incorrect, DCA is a big pessimization, because it blows the cache in other CPUs. That said, I would *love* it if FreeBSD grew ioatdma/dca support. Jack, does Intel have any interest in porting DCA support to FreeBSD? Drew ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: Intel 10Gb
On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin wrote: > Alexander Sack wrote: > <...> >>> Using this driver/firmware combo, we can receive minimal packets at >>> line rate (14.8Mpps) to userspace. You can even access this using a >>> libpcap interface. The trick is that the fast paths are OS-bypass, >>> and don't suffer from OS overheads, like lock contention. See >>> http://www.myri.com/scs/SNF/doc/index.html for details. >> >> But your timestamps will be atrocious at 10G speeds. Myricom doesn't >> timestamp packets AFAIK. If you want reliable timestamps you need to >> look at companies like Endace, Napatech, etc. > > I see your old help ticket in our system. Yes, our timestamping > is not as good as a dedicated capture card with a GPS reference, > but it is good enough for most people. I was told btw that it doesn't timestamp at ALL. I am assuming NOW that is incorrect. Define *most* people. I am not knocking the Myricom card. In fact I so wish you guys would just add the ability to latch to a 1PPS for timestamping and it would be perfect. We use I think an older version of the card internally for replay. Its a great multi-purpose card. However with IPG at 10G in the nanoseconds, anyone trying to do OWDs or RTT will find it difficult compared to an Endace or Napatech card. Btw, I was referring to bpf(4) specifically, so please don't take my comments as a knock against it. >> PS I am not sure but Intel also supports writing packets directly in >> cache (yet I thought the 82599 driver actually does a prefetch anyway >> which had me confused on why that helps) > > You're talking about DCA. We support DCA as well (and I suspect some > other 10G NICs do to). There are a few barriers to using DCA on > FreeBSD, not least of which is that FreeBSD doesn't currently have the > infrastructure to support it (no IOATDMA or DCA drivers). Right. > DCA is also problematic because support from system/motherboard > vendors is very spotty. The vendor must provide the correct tag table > in BIOS such that the tags match the CPU/core numbering in the system. > Many motherboard vendors don't bother with this, and you cannot enable > DCA on a lot of systems, even though the underlying chipset supports > DCA. I've done hacks to force-enable it in the past, with mixed > results. The problem is that DCA depends on having the correct tag > table, so that packets can be prefetched into the correct CPU's cache. > If the tag table is incorrect, DCA is a big pessimization, because it > blows the cache in other CPUs. Right. > That said, I would *love* it if FreeBSD grew ioatdma/dca support. > Jack, does Intel have any interest in porting DCA support to FreeBSD? Question for Jack or Drew, what DOES FreeBSD have to do to support DCA? I thought DCA was something you just enable on the NIC chipset and if the system is IOATDMA aware, it just works. Is that not right (assuming cache tags are correct and accessible)? i.e. I thought this was hardware black magic than anything specific the OS has to do. -aps ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: Intel 10Gb
Alexander Sack wrote: > On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin wrote: >> Alexander Sack wrote: >> <...> Using this driver/firmware combo, we can receive minimal packets at line rate (14.8Mpps) to userspace. You can even access this using a libpcap interface. The trick is that the fast paths are OS-bypass, and don't suffer from OS overheads, like lock contention. See http://www.myri.com/scs/SNF/doc/index.html for details. >>> But your timestamps will be atrocious at 10G speeds. Myricom doesn't >>> timestamp packets AFAIK. If you want reliable timestamps you need to >>> look at companies like Endace, Napatech, etc. >> I see your old help ticket in our system. Yes, our timestamping >> is not as good as a dedicated capture card with a GPS reference, >> but it is good enough for most people. > > I was told btw that it doesn't timestamp at ALL. I am assuming NOW > that is incorrect. I think you might have misunderstood how we do timestamping. I definately don't understand it, and I work there ;) I do know that there is NIC component of it (eg, it is not 100% done in the host). I also realize that it is not is good as something that is 1PPS GPS based. > Define *most* people. I may have a skewed view of the market, but it seems like some people care deeply about accurate timestamps, and others (mostly doing deep packet inspection) care only within a few milliseconds, or even seconds. > I am not knocking the Myricom card. In fact I so wish you guys would > just add the ability to latch to a 1PPS for timestamping and it would > be perfect. > > We use I think an older version of the card internally for replay. > Its a great multi-purpose card. > > However with IPG at 10G in the nanoseconds, anyone trying to do OWDs > or RTT will find it difficult compared to an Endace or Napatech card. > > Btw, I was referring to bpf(4) specifically, so please don't take my > comments as a knock against it. > >>> PS I am not sure but Intel also supports writing packets directly in >>> cache (yet I thought the 82599 driver actually does a prefetch anyway >>> which had me confused on why that helps) >> You're talking about DCA. We support DCA as well (and I suspect some >> other 10G NICs do to). There are a few barriers to using DCA on >> FreeBSD, not least of which is that FreeBSD doesn't currently have the >> infrastructure to support it (no IOATDMA or DCA drivers). > > Right. > >> DCA is also problematic because support from system/motherboard >> vendors is very spotty. The vendor must provide the correct tag table >> in BIOS such that the tags match the CPU/core numbering in the system. >> Many motherboard vendors don't bother with this, and you cannot enable >> DCA on a lot of systems, even though the underlying chipset supports >> DCA. I've done hacks to force-enable it in the past, with mixed >> results. The problem is that DCA depends on having the correct tag >> table, so that packets can be prefetched into the correct CPU's cache. >> If the tag table is incorrect, DCA is a big pessimization, because it >> blows the cache in other CPUs. > > Right. > >> That said, I would *love* it if FreeBSD grew ioatdma/dca support. >> Jack, does Intel have any interest in porting DCA support to FreeBSD? > > Question for Jack or Drew, what DOES FreeBSD have to do to support > DCA? I thought DCA was something you just enable on the NIC chipset > and if the system is IOATDMA aware, it just works. Is that not right > (assuming cache tags are correct and accessible)? i.e. 
> I thought this was hardware black magic than anything specific the OS has to do.

IOATDMA and DCA are sort of unfairly joined for two reasons: The DCA control stuff is implemented as part of the IOATDMA PCIe device, and IOATDMA is a great usage model for DCA, since you'd want the DMAs that it does to be prefetched.

To use DCA you need:

- A DCA driver to talk to the IOATDMA/DCA pcie device, and obtain the tag table
- An interface that a client device (eg, NIC driver) can use to obtain either the tag table, or at least the correct tag for the CPU that the interrupt handler is bound to.

The basic support in a NIC driver boils down to something like:

nic_interrupt_handler()
{
        if (sc->dca.enabled && (curcpu != sc->dca.last_cpu)) {
                sc->dca.last_cpu = curcpu;
                tag = dca_get_tag(curcpu);
                WRITE_REG(sc, DCA_TAG, tag);
        }
}

Drew
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
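To make the shape of that a little more concrete, a slightly expanded, hypothetical version of the same per-queue handler follows. Every identifier in it (the softc/queue fields, dca_get_tag(), nic_write_dca_rxctrl()) is invented for illustration: FreeBSD has no dca/ioatdma driver today, so this only sketches the interface such a driver would need to export.

/* All identifiers here are hypothetical; see the note above. */
#include <sys/param.h>
#include <sys/pcpu.h>                   /* curcpu */

uint8_t dca_get_tag(int cpu);           /* would come from a future dca driver */

struct nic_softc;                       /* device-private state, details omitted */

struct nic_queue {
        struct nic_softc *q_sc;
        int               q_idx;        /* queue number */
        int               q_dca_cpu;    /* CPU the DCA tag was last written for */
};

/* Placeholder for the driver's per-queue DCA control register write. */
void    nic_write_dca_rxctrl(struct nic_softc *, int qidx, uint8_t tag);

static void
nic_msix_rxq_intr(void *arg)
{
        struct nic_queue *q = arg;      /* one RX queue per MSI-X vector */
        uint8_t tag;

        /*
         * Re-program this queue's DCA tag only when the interrupt has
         * migrated to a different CPU; the tag tells the chipset which
         * CPU cache to push the queue's descriptor/packet DMA into.
         */
        if (curcpu != q->q_dca_cpu) {
                q->q_dca_cpu = curcpu;
                tag = dca_get_tag(curcpu);
                nic_write_dca_rxctrl(q->q_sc, q->q_idx, tag);
        }

        /* ... normal RX/TX processing for this queue follows ... */
}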
Re: Intel 10Gb
On Fri, May 14, 2010 at 11:41 AM, Andrew Gallatin wrote: > Alexander Sack wrote: >> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin >> wrote: >>> Alexander Sack wrote: >>> <...> > Using this driver/firmware combo, we can receive minimal packets at > line rate (14.8Mpps) to userspace. You can even access this using a > libpcap interface. The trick is that the fast paths are OS-bypass, > and don't suffer from OS overheads, like lock contention. See > http://www.myri.com/scs/SNF/doc/index.html for details. But your timestamps will be atrocious at 10G speeds. Myricom doesn't timestamp packets AFAIK. If you want reliable timestamps you need to look at companies like Endace, Napatech, etc. >>> I see your old help ticket in our system. Yes, our timestamping >>> is not as good as a dedicated capture card with a GPS reference, >>> but it is good enough for most people. >> >> I was told btw that it doesn't timestamp at ALL. I am assuming NOW >> that is incorrect. > > I think you might have misunderstood how we do timestamping. > I definately don't understand it, and I work there ;) No problem. :) > I do know that there is NIC component of it (eg, it is not 100% > done in the host). I also realize that it is not is good as > something that is 1PPS GPS based. I need to grab your docs and start reading it again. I would like to support data capture using the Myricom card. I somehow missed this. I had thought the timestamps were software generated only. > >> Define *most* people. > > I may have a skewed view of the market, but it seems like > some people care deeply about accurate timestamps, and > others (mostly doing deep packet inspection) care only > within a few milliseconds, or even seconds. In our case Andrew, the folks who are doing deep packet inspection REQUIRE reasonable time stamps to correlate events and do generate reasonable stats. But I hear you, if you are just looking to see the packet data, then timestamp accuracy isn't your top priority. >> Question for Jack or Drew, what DOES FreeBSD have to do to support >> DCA? I thought DCA was something you just enable on the NIC chipset >> and if the system is IOATDMA aware, it just works. Is that not right >> (assuming cache tags are correct and accessible)? i.e. I thought this >> was hardware black magic than anything specific the OS has to do. > > IOATDMA and DCA are sort of unfairly joined for two reasons: The DCA > control stuff is implemented as part of the IOATDMA PCIe device, and > IOATDMA is a great usage model for DCA, since you'd want the DMAs > that it does to be prefetched. > > To use DCA you need: > > - A DCA driver to talk to the IOATDMA/DCA pcie device, and obtain the tag > table > - An interface that a client device (eg, NIC driver) can use to obtain > either the tag table, or at least the correct tag for the CPU > that the interrupt handler is bound to. The basic support in > a NIC driver boils down to something like: > > nic_interrupt_handler() > { > if (sc->dca.enabled && (curcpu != sc->dca.last_cpu)) { > sc->dca.last_cpu = curcpu; > tag = dca_get_tag(curcpu); > WRITE_REG(sc, DCA_TAG, tag); > } > } Drew, at least in the Intel documentation, it seems the NIC uses the LAPIC id to tell the PCIe TLPs where to put inbound NIC I/O (in the TLP the DCA info is stored) to the appropriate core's cache. i.e. the heuristic you gave above is more granular than what I think Intel does. I could be wrong, maybe Jack can chime in and correct me. 
But it seems that with Intel chipsets it is a per-queue parameter which allows you to bind a core's cache to a queue via DCA. The added piece to this, for at least bpf(4) consumers, is to have bpf(4) subscribe to these queues AND to provide an interface for libpcap applications to know which queue is on which core and THEN bind to it. I think that is the general idea... I think! :)

-aps
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: Intel 10Gb
Alexander Sack wrote: To use DCA you need: - A DCA driver to talk to the IOATDMA/DCA pcie device, and obtain the tag table - An interface that a client device (eg, NIC driver) can use to obtain either the tag table, or at least the correct tag for the CPU that the interrupt handler is bound to. The basic support in a NIC driver boils down to something like: nic_interrupt_handler() { if (sc->dca.enabled && (curcpu != sc->dca.last_cpu)) { sc->dca.last_cpu = curcpu; tag = dca_get_tag(curcpu); WRITE_REG(sc, DCA_TAG, tag); } } Drew, at least in the Intel documentation, it seems the NIC uses the LAPIC id to tell the PCIe TLPs where to put inbound NIC I/O (in the TLP the DCA info is stored) to the appropriate core's cache. i.e. the heuristic you gave above is more granular than what I think Intel The pseudo-code above was intended to be the MSI-X interrupt handler for a single queue, not some dispatcher for multiple queues. Sorry that wasn't clear. So yes, the DCA tag value may be different per queue. does. I could be wrong, maybe Jack can chime in and correct me. But it seems with Intel chipsets it is a per queue parameter which allows you to bind a core cache's to a queue via DCA. The added piece to this for at least bpf(4) consumers is to have bpf(4) subscribe to these queues AND to allow an interface for libpcap applications to know where what queue is on what core and THEN bind to it. Yes, everything associated with a queue must be bound to the same core (or at least to cores which share a cache). Drew ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
RE: Intel 10Gb
Neterion/Exar x3100 is one of generic 10GbE NICs that supports timestamping in hardware, along with some other packet capturing/monitoring featiures; here is a relevant paragraph from programming manual: "Receive Frame Timestamp Feature The x3100 has the ability to label each incoming frame with a timestamp to allow a host entity to record the arrival time of incoming packets. The host uses the XMAC_TIMESTAMP register to control its operation. To enable the feature, the "EN" field must be set. Once the timestamp feature is enabled, the FCS value of each frame will be replaced with the value in a free-running 32-bit counter with a default period of 3.2 ns. The "USE_LINK_ID" determines if the full 32 bits of the of the FCS are used for the timestamp, or if the most significant 2 bits are used to identify which port the frame came in on, and 30 bits are used for the timestamp. The "INTERVAL" field can be used to programmably change the period between several values: 3.2 ns (the default), 6.4 ns, 12.8 ns, 25.6 ns, 51.2 ns, 102.4 ns, and 204.8 ns. NOTE: To take advantage of this feature, "XMAC_CFG_PORTn.STRIP_FCS" must be set to 0 to pass the FCS to the host." > -Original Message- > From: owner-freebsd-performa...@freebsd.org [mailto:owner-freebsd- > performa...@freebsd.org] On Behalf Of Andrew Gallatin > Sent: Friday, May 14, 2010 8:41 AM > To: Alexander Sack > Cc: Murat Balaban; freebsd-net@freebsd.org; freebsd- > performa...@freebsd.org > Subject: Re: Intel 10Gb > > Alexander Sack wrote: > > On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin > wrote: > >> Alexander Sack wrote: > >> <...> > Using this driver/firmware combo, we can receive minimal packets > at > line rate (14.8Mpps) to userspace. You can even access this > using a > libpcap interface. The trick is that the fast paths are OS- > bypass, > and don't suffer from OS overheads, like lock contention. See > http://www.myri.com/scs/SNF/doc/index.html for details. > >>> But your timestamps will be atrocious at 10G speeds. Myricom > doesn't > >>> timestamp packets AFAIK. If you want reliable timestamps you need > to > >>> look at companies like Endace, Napatech, etc. > >> I see your old help ticket in our system. Yes, our timestamping > >> is not as good as a dedicated capture card with a GPS reference, > >> but it is good enough for most people. > > > > I was told btw that it doesn't timestamp at ALL. I am assuming NOW > > that is incorrect. > > I think you might have misunderstood how we do timestamping. > I definately don't understand it, and I work there ;) > I do know that there is NIC component of it (eg, it is not 100% > done in the host). I also realize that it is not is good as > something that is 1PPS GPS based. > > > Define *most* people. > > I may have a skewed view of the market, but it seems like > some people care deeply about accurate timestamps, and > others (mostly doing deep packet inspection) care only > within a few milliseconds, or even seconds. > > > I am not knocking the Myricom card. In fact I so wish you guys > would > > just add the ability to latch to a 1PPS for timestamping and it > would > > be perfect. > > > > We use I think an older version of the card internally for replay. > > Its a great multi-purpose card. > > > > However with IPG at 10G in the nanoseconds, anyone trying to do OWDs > > or RTT will find it difficult compared to an Endace or Napatech > card. > > > > Btw, I was referring to bpf(4) specifically, so please don't take my > > comments as a knock against it. 
> > > >>> PS I am not sure but Intel also supports writing packets directly > in > >>> cache (yet I thought the 82599 driver actually does a prefetch > anyway > >>> which had me confused on why that helps) > >> You're talking about DCA. We support DCA as well (and I suspect > some > >> other 10G NICs do to). There are a few barriers to using DCA on > >> FreeBSD, not least of which is that FreeBSD doesn't currently have > the > >> infrastructure to support it (no IOATDMA or DCA drivers). > > > > Right. > > > >> DCA is also problematic because support from system/motherboard > >> vendors is very spotty. The vendor must provide the correct tag > table > >> in BIOS such that the tags match the CPU/core numbering in the > system. > >> Many motherboard vendors don't bother with this, and you cannot > enable > >> DCA on a lot of systems, even though the underlying chipset > supports > >> DCA. I've done hacks to force-enable it in the past, with mixed > >> results. The problem is that DCA depends on having the correct tag > >> table, so that packets can be prefetched into the correct CPU's > cache. > >> If the tag table is incorrect, DCA is a big pessimization, because > it > >> blows the cache in other CPUs. > > > > Right. > > > >> That said, I would *love* it if FreeBSD grew ioatdma/dca support. > >> Jack, does Intel have any interest in porting DCA suppo
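To put the x3100 receive-timestamp format described above into concrete terms, here is a small decoding helper. It follows directly from the quoted paragraph (32-bit free-running counter in the FCS field, 3.2 ns default tick, top 2 bits carrying the port id when USE_LINK_ID is set); the macro and function names are made up for illustration, and how the raw FCS word is pulled out of the receive path is driver-specific and not shown.

#include <stdint.h>

#define X3100_TS_TICK_PS        3200    /* default INTERVAL: 3.2 ns per tick */

/*
 * Split the timestamp-bearing FCS word into ingress port (when
 * USE_LINK_ID is set) and elapsed time in picoseconds.  Counter wrap
 * handling is left to the caller.
 */
static inline uint64_t
x3100_fcs_to_ps(uint32_t fcs, int use_link_id, unsigned int *port)
{
        uint32_t ticks;

        if (use_link_id) {
                *port = fcs >> 30;              /* top 2 bits: port id */
                ticks = fcs & 0x3fffffffU;      /* low 30 bits: timestamp */
        } else {
                *port = 0;
                ticks = fcs;                    /* full 32-bit timestamp */
        }
        return ((uint64_t)ticks * X3100_TS_TICK_PS);
}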
Re: Intel 10Gb
On Fri, May 14, 2010 at 8:18 AM, Alexander Sack wrote: > On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin > wrote: > > Alexander Sack wrote: > > <...> > >>> Using this driver/firmware combo, we can receive minimal packets at > >>> line rate (14.8Mpps) to userspace. You can even access this using a > >>> libpcap interface. The trick is that the fast paths are OS-bypass, > >>> and don't suffer from OS overheads, like lock contention. See > >>> http://www.myri.com/scs/SNF/doc/index.html for details. > >> > >> But your timestamps will be atrocious at 10G speeds. Myricom doesn't > >> timestamp packets AFAIK. If you want reliable timestamps you need to > >> look at companies like Endace, Napatech, etc. > > > > I see your old help ticket in our system. Yes, our timestamping > > is not as good as a dedicated capture card with a GPS reference, > > but it is good enough for most people. > > I was told btw that it doesn't timestamp at ALL. I am assuming NOW > that is incorrect. > > Define *most* people. > > I am not knocking the Myricom card. In fact I so wish you guys would > just add the ability to latch to a 1PPS for timestamping and it would > be perfect. > > We use I think an older version of the card internally for replay. > Its a great multi-purpose card. > > However with IPG at 10G in the nanoseconds, anyone trying to do OWDs > or RTT will find it difficult compared to an Endace or Napatech card. > > Btw, I was referring to bpf(4) specifically, so please don't take my > comments as a knock against it. > > >> PS I am not sure but Intel also supports writing packets directly in > >> cache (yet I thought the 82599 driver actually does a prefetch anyway > >> which had me confused on why that helps) > > > > You're talking about DCA. We support DCA as well (and I suspect some > > other 10G NICs do to). There are a few barriers to using DCA on > > FreeBSD, not least of which is that FreeBSD doesn't currently have the > > infrastructure to support it (no IOATDMA or DCA drivers). > > Right. > > > DCA is also problematic because support from system/motherboard > > vendors is very spotty. The vendor must provide the correct tag table > > in BIOS such that the tags match the CPU/core numbering in the system. > > Many motherboard vendors don't bother with this, and you cannot enable > > DCA on a lot of systems, even though the underlying chipset supports > > DCA. I've done hacks to force-enable it in the past, with mixed > > results. The problem is that DCA depends on having the correct tag > > table, so that packets can be prefetched into the correct CPU's cache. > > If the tag table is incorrect, DCA is a big pessimization, because it > > blows the cache in other CPUs. > > Right. > > > That said, I would *love* it if FreeBSD grew ioatdma/dca support. > > Jack, does Intel have any interest in porting DCA support to FreeBSD? > > Question for Jack or Drew, what DOES FreeBSD have to do to support > DCA? I thought DCA was something you just enable on the NIC chipset > and if the system is IOATDMA aware, it just works. Is that not right > (assuming cache tags are correct and accessible)? i.e. I thought this > was hardware black magic than anything specific the OS has to do. > > OK, let me see if I can clarify some of this. First, there IS an I/OAT driver that I did for FreeBSD like 3 or 4 years ago, in the timeframe that we put the feature out. 
However, at that time all it was good for was the DMA aspect of things, and Prafulla used it to accelerate the stack copies; interest did not seem that great, so I put the code aside. It's not badly dated, but it needs to be brought up to date due to there being a few different versions of the hardware now.

At one point maybe a year back I started to take the code apart thinking I would JUST do DCA; that got back-burnered due to other higher-priority issues, but it's still an item in my queue.

I also had a nibble of interest in using the DMA engine, so perhaps I should not go down the road of just doing the DCA support in the I/OAT part of the driver. The question is how to make the infrastructure work.

To answer Alexander's question, DCA support is NOT in the NIC, it's in the chipset; that's why the I/OAT driver was done as a separate driver, but the NIC was the user of the info. It's been a while since I was into the code, but if memory serves the I/OAT driver just enables the support in the chipset, and then the NIC driver configures its engine to use it.

DCA and DMA were supported in Linux in the same driver, perhaps because the chipset features were easily handled together; I'm not sure :)

Fabien's data earlier in this thread suggested that a strategically placed prefetch did you more good than DCA did, if I recall; what do you all think of that?

As far as I'm concerned, right now I am willing to resurrect the driver, clean it up and make the features available, and we can see how valuable they are after that. How does that sound??

Cheers,

Jack
___
Re: Intel 10Gb
On Fri, May 14, 2010 at 1:01 PM, Jack Vogel wrote: > > > On Fri, May 14, 2010 at 8:18 AM, Alexander Sack wrote: >> >> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin >> wrote: >> > Alexander Sack wrote: >> > <...> >> >>> Using this driver/firmware combo, we can receive minimal packets at >> >>> line rate (14.8Mpps) to userspace. You can even access this using a >> >>> libpcap interface. The trick is that the fast paths are OS-bypass, >> >>> and don't suffer from OS overheads, like lock contention. See >> >>> http://www.myri.com/scs/SNF/doc/index.html for details. >> >> >> >> But your timestamps will be atrocious at 10G speeds. Myricom doesn't >> >> timestamp packets AFAIK. If you want reliable timestamps you need to >> >> look at companies like Endace, Napatech, etc. >> > >> > I see your old help ticket in our system. Yes, our timestamping >> > is not as good as a dedicated capture card with a GPS reference, >> > but it is good enough for most people. >> >> I was told btw that it doesn't timestamp at ALL. I am assuming NOW >> that is incorrect. >> >> Define *most* people. >> >> I am not knocking the Myricom card. In fact I so wish you guys would >> just add the ability to latch to a 1PPS for timestamping and it would >> be perfect. >> >> We use I think an older version of the card internally for replay. >> Its a great multi-purpose card. >> >> However with IPG at 10G in the nanoseconds, anyone trying to do OWDs >> or RTT will find it difficult compared to an Endace or Napatech card. >> >> Btw, I was referring to bpf(4) specifically, so please don't take my >> comments as a knock against it. >> >> >> PS I am not sure but Intel also supports writing packets directly in >> >> cache (yet I thought the 82599 driver actually does a prefetch anyway >> >> which had me confused on why that helps) >> > >> > You're talking about DCA. We support DCA as well (and I suspect some >> > other 10G NICs do to). There are a few barriers to using DCA on >> > FreeBSD, not least of which is that FreeBSD doesn't currently have the >> > infrastructure to support it (no IOATDMA or DCA drivers). >> >> Right. >> >> > DCA is also problematic because support from system/motherboard >> > vendors is very spotty. The vendor must provide the correct tag table >> > in BIOS such that the tags match the CPU/core numbering in the system. >> > Many motherboard vendors don't bother with this, and you cannot enable >> > DCA on a lot of systems, even though the underlying chipset supports >> > DCA. I've done hacks to force-enable it in the past, with mixed >> > results. The problem is that DCA depends on having the correct tag >> > table, so that packets can be prefetched into the correct CPU's cache. >> > If the tag table is incorrect, DCA is a big pessimization, because it >> > blows the cache in other CPUs. >> >> Right. >> >> > That said, I would *love* it if FreeBSD grew ioatdma/dca support. >> > Jack, does Intel have any interest in porting DCA support to FreeBSD? >> >> Question for Jack or Drew, what DOES FreeBSD have to do to support >> DCA? I thought DCA was something you just enable on the NIC chipset >> and if the system is IOATDMA aware, it just works. Is that not right >> (assuming cache tags are correct and accessible)? i.e. I thought this >> was hardware black magic than anything specific the OS has to do. >> > > OK, let me see if I can clarify some of this. First, there IS an I/OAT > driver > that I did for FreeBSD like 3 or 4 years ago, in the timeframe that we put > the feature out. 
However, at that time all it was good for was the DMA > aspect > of things, and Prafulla used it to accelerate the stack copies; interest did > not seem that great so I put the code aside, its not badly dated and needs > to be brought up to date due to there being a few different versions of the > hardware now. > > At one point maybe a year back I started to take the code apart thinking > I would JUST do DCA, that got back-burnered due to other higher priority > issues, but its still an item in my queue. > > I also had a nibble of an interest in using the DMA engine so perhaps I > should not go down the road of just doing the DCA support in the I/OAT > part of the driver. The question is how to make the infrastructure work. > > To answer Alexander's question, DCA support is NOT in the NIC, its in > the chipset, that's why the I/OAT driver was done as a seperate driver, > but the NIC was the user of the info, its been a while since I was into > the code but if memory serves the I/OAT driver just enables the support > in the chipset, and then the NIC driver configures its engine to use it. Thank you very much Jack! :) It was not clear from the docs what was where to me. I just assumed this was Intel NIC knew Intel chipset black magic! LOL. > DCA and DMA were supported in Linux in the same driver because > the chipset features were easily handled together perhaps, I'm not > sure :) Ok! (it was my other reference) > Fabien's da
Re: Re: convert Windows NDIS drivers for use with FreeBSD
On 5/14/10, jiani1012 wrote:
> Yes, I use ndisgen(8) instead. Input the netathw.inf and athw.sys file,
> appears:
> segmentation fault (core dumped)
> CONVERSION FAILED

The .inf file is missing an end-of-line at the end of the file; open it in a text editor, add an empty line at the end, and try again.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Packet Loss on FW1 but not FW2 (CARP + PF on FBSD8)
Hello, I recently just purchased 2 Soekris5501 with identical 120gb 2.5" WD Scorpio HDDs. I'm using them for network failover, using CARP, PF and pfSync on FreeBSD 8-STABLE. The short version of my problem: I setup FW2 first, imaged its hard drive to FW1. I changed the necessary configs to update the IPs and ensure FW1 was carp MASTER. Using a known working port on the switch, I continue to get 70% packet loss on FW1 on vr0 (vr0 - extif, vr1 - intif, vr2 - pfsync). If I flip FW1 and FW2, the packet loss follows FW1. I took FW1 home, plugged it into my home network on vr0 and it works fine with 0% packet loss so the interface seems fine. I also took the IP bound to vr0 on FW1 and bound it to vr0 on FW2 and the ISP isn't the problem. The long version: Both Soekris5501's use vr0 (ext), vr1 (int) and vr2 (pfsync). I was given 98.xxx.xxx.58 - .62 with .57 being the gateway IP. FW1 was assigned .59. FW2 was assigned .60 and I was going to use .58 to NAT the office traffic over CARP. If I take carp0 and carp1 down off FW1, it moves all traffic to FW2 appropriately. If I bring carp0 and carp1 back up on FW1, it assumes MASTER again as it should. FW1 /etc/rc.conf: - cloned_interfaces="carp0 carp1" ifconfig_vr0="inet 98.xxx.xxx.59 netmask 255.255.255.248" ifconfig_vr1="inet 192.168.1.10 netmask 255.255.255.0" ifconfig_vr2="inet 10.0.10.12 netmask 255.255.255.0" ifconfig_carp0="inet 98.xxx.xxx.58 netmask 255.255.255.248 pass pabsoekris1959 vhid 1" ifconfig_carp0_alias0="inet 98.xxx.xxx.61 netmask 255.255.255.248" ifconfig_carp0_alias1="inet 98.xxx.xxx.62 netmask 255.255.255.248" ifconfig_carp1="inet 192.168.1.1 netmask 255.255.255.0 pass pabsoekris1959 vhid 2" ifconfig_pfsync0="syncpeer 10.0.10.13 syncdev vr2" defaultrouter="98.xxx.xxx.57" gateway_enable="YES" FW2 /etc/rc.conf: - cloned_interfaces="carp0 carp1" ifconfig_vr0="inet 98.xxx.xxx.60 netmask 255.255.255.248" ifconfig_vr1="inet 192.168.1.11 netmask 255.255.255.0" ifconfig_vr2="inet 10.0.10.13 netmask 255.255.255.0" ifconfig_carp0="inet 98.xxx.xxx.58 netmask 255.255.255.248 pass pabsoekris1959 advskew 100 vhid 1" ifconfig_carp0_alias0="inet 98.xxx.xxx.61 netmask 255.255.255.248" ifconfig_carp0_alias1="inet 98.xxx.xxx.62 netmask 255.255.255.248" ifconfig_carp1="inet 192.168.1.1 netmask 255.255.255.0 pass pabsoekris1959 vhid 2" ifconfig_pfsync0="syncpeer 10.0.10.12 syncdev vr2" defaultrouter="98.xxx.xxx.57" gateway_enable="YES" FW1 /etc/pf.conf: ext_if = vr0# External WAN interface int_if = vr1# Internal LAN interface pfs_if = vr2# Pfsync interface carp_extif = carp0 # External CARP interface carp_intif = carp1 ### hosts office = "192.168.1.0/24" office_ext = "98.xxx.xxx.58" soekris1 = "98.xxx.xxx.59" soekris2 = "98.xxx.xxx.60" pab = "192.168.1.2" ### icmp icmp_types = "{ echoreq, unreach }" ### tables table persist table persist file "/etc/badguys" table { $office } set block-policy drop set loginterface $ext_if set skip on lo scrub on $ext_if reassemble tcp no-df random-id ### NAT outgoing connections nat on $ext_if inet from $int_if:network to any -> $office_ext ### port forwards rdr on $ext_if proto tcp from any to $office_ext port X -> $pab port 22 rdr on $ext_if proto tcp from any to $office_ext port X -> $pab port 3389 ### ruleset block in log all# default deny block in log quick from urpf-failed # spoofed address protection block in log quick from { , } pass log from { lo0, $int_if:network, $ext_if, $carp_extif, $carp_intif } to any keep state pass in quick from keep state pass log inet proto icmp all icmp-type $icmp_types pass quick on 
$pfs_if proto pfsync keep state (no-sync) # enable pfsync pass on { $int_if, $ext_if } proto carp keep state (no-sync)# enable CARP FW2 /etc/pf.conf: - ext_if = vr0# External WAN interface int_if = vr1# Internal LAN interface pfs_if = vr2# Pfsync interface carp_extif = carp0 # External CARP interface carp_intif = carp1 ### hosts office = "192.168.1.0/24" office_ext = "98.xxx.xxx.58" soekris1 = "98.xxx.xxx.59" soekris2 = "98.xxx.xxx.60" pab = "192.168.1.2" ### icmp icmp_types = "{ echoreq, unreach }" ### tables table persist table persist file "/etc/badguys" table { $office } set block-policy drop set loginterface $ext_if set skip on lo scrub on $ext_if reassemble tcp no-df random-id ### NAT outgoing connections nat on $ext_if inet from $int_if:network to any -> $office_ext ### port forwards rdr on $ext_if proto tcp from any to $office_ext port X -> $pab port 22 rdr on $ext_if proto tcp from any to $office_ext port X -> $pab port 3389 ### ruleset block in log all# default deny block in log quick from urpf-failed # spoofed address
Re: Packet Loss on FW1 but not FW2 (CARP + PF on FBSD8)
On Fri, May 14, 2010 at 02:56:12PM -0400, l...@cykotix.com wrote: > Hello, > > I recently just purchased 2 Soekris5501 with identical 120gb 2.5" WD > Scorpio HDDs. I'm using them for network failover, using CARP, PF and > pfSync on FreeBSD 8-STABLE. > > The short version of my problem: > > I setup FW2 first, imaged its hard drive to FW1. I changed the > necessary configs to update the IPs and ensure FW1 was carp MASTER. > Using a known working port on the switch, I continue to get 70% packet > loss on FW1 on vr0 (vr0 - extif, vr1 - intif, vr2 - pfsync). If I > flip FW1 and FW2, the packet loss follows FW1. I took FW1 home, > plugged it into my home network on vr0 and it works fine with 0% > packet loss so the interface seems fine. I also took the IP bound to > vr0 on FW1 and bound it to vr0 on FW2 and the ISP isn't the problem. > Show me the output of "sysctl dev.vr.0.stats=1" and "netstat -ndI vr0". > The long version: > > Both Soekris5501's use vr0 (ext), vr1 (int) and vr2 (pfsync). I was > given 98.xxx.xxx.58 - .62 with .57 being the gateway IP. FW1 was > assigned .59. FW2 was assigned .60 and I was going to use .58 to NAT > the office traffic over CARP. If I take carp0 and carp1 down off FW1, > it moves all traffic to FW2 appropriately. If I bring carp0 and carp1 > back up on FW1, it assumes MASTER again as it should. > > FW1 /etc/rc.conf: > - > cloned_interfaces="carp0 carp1" > ifconfig_vr0="inet 98.xxx.xxx.59 netmask 255.255.255.248" > ifconfig_vr1="inet 192.168.1.10 netmask 255.255.255.0" > ifconfig_vr2="inet 10.0.10.12 netmask 255.255.255.0" > ifconfig_carp0="inet 98.xxx.xxx.58 netmask 255.255.255.248 pass > pabsoekris1959 vhid 1" > ifconfig_carp0_alias0="inet 98.xxx.xxx.61 netmask 255.255.255.248" > ifconfig_carp0_alias1="inet 98.xxx.xxx.62 netmask 255.255.255.248" > ifconfig_carp1="inet 192.168.1.1 netmask 255.255.255.0 pass > pabsoekris1959 vhid 2" > ifconfig_pfsync0="syncpeer 10.0.10.13 syncdev vr2" > defaultrouter="98.xxx.xxx.57" > gateway_enable="YES" > > FW2 /etc/rc.conf: > - > cloned_interfaces="carp0 carp1" > ifconfig_vr0="inet 98.xxx.xxx.60 netmask 255.255.255.248" > ifconfig_vr1="inet 192.168.1.11 netmask 255.255.255.0" > ifconfig_vr2="inet 10.0.10.13 netmask 255.255.255.0" > ifconfig_carp0="inet 98.xxx.xxx.58 netmask 255.255.255.248 pass > pabsoekris1959 advskew 100 vhid 1" > ifconfig_carp0_alias0="inet 98.xxx.xxx.61 netmask 255.255.255.248" > ifconfig_carp0_alias1="inet 98.xxx.xxx.62 netmask 255.255.255.248" > ifconfig_carp1="inet 192.168.1.1 netmask 255.255.255.0 pass > pabsoekris1959 vhid 2" > ifconfig_pfsync0="syncpeer 10.0.10.12 syncdev vr2" > defaultrouter="98.xxx.xxx.57" > gateway_enable="YES" > > FW1 /etc/pf.conf: > > ext_if = vr0# External WAN interface > int_if = vr1# Internal LAN interface > pfs_if = vr2# Pfsync interface > carp_extif = carp0 # External CARP interface > carp_intif = carp1 > > ### hosts > office = "192.168.1.0/24" > office_ext = "98.xxx.xxx.58" > soekris1 = "98.xxx.xxx.59" > soekris2 = "98.xxx.xxx.60" > pab = "192.168.1.2" > > ### icmp > icmp_types = "{ echoreq, unreach }" > > ### tables > table persist > table persist file "/etc/badguys" > table { $office } > > set block-policy drop > set loginterface $ext_if > set skip on lo > > scrub on $ext_if reassemble tcp no-df random-id > > ### NAT outgoing connections > nat on $ext_if inet from $int_if:network to any -> $office_ext > > > ### port forwards > rdr on $ext_if proto tcp from any to $office_ext port X -> $pab port 22 > rdr on $ext_if proto tcp from any to $office_ext port X -> $pab port > 
3389 > > ### ruleset > block in log all# default deny > block in log quick from urpf-failed # spoofed address protection > block in log quick from { , } > > pass log from { lo0, $int_if:network, $ext_if, $carp_extif, > $carp_intif } to any keep state > pass in quick from keep state > pass log inet proto icmp all icmp-type $icmp_types > pass quick on $pfs_if proto pfsync keep state (no-sync) # > enable pfsync > pass on { $int_if, $ext_if } proto carp keep state (no-sync)# enable > CARP > > > FW2 /etc/pf.conf: > - > ext_if = vr0# External WAN interface > int_if = vr1# Internal LAN interface > pfs_if = vr2# Pfsync interface > carp_extif = carp0 # External CARP interface > carp_intif = carp1 > > ### hosts > office = "192.168.1.0/24" > office_ext = "98.xxx.xxx.58" > soekris1 = "98.xxx.xxx.59" > soekris2 = "98.xxx.xxx.60" > pab = "192.168.1.2" > > ### icmp > icmp_types = "{ echoreq, unreach }" > > > ### tables > table persist > table persist file "/etc/badguys" > table { $office } > > > set block-policy drop > set loginterface $ext_if > set skip on lo > > scrub on $ext_if reassemble tcp no-df rando
Re: Packet Loss on FW1 but not FW2 (CARP + PF on FBSD8)
Quoting Pyun YongHyeon :

> On Fri, May 14, 2010 at 02:56:12PM -0400, l...@cykotix.com wrote:
>> Hello,
>>
>> I recently just purchased 2 Soekris5501 with identical 120gb 2.5" WD
>> Scorpio HDDs. I'm using them for network failover, using CARP, PF and
>> pfSync on FreeBSD 8-STABLE.
>>
>> The short version of my problem:
>>
>> I setup FW2 first, imaged its hard drive to FW1. I changed the
>> necessary configs to update the IPs and ensure FW1 was carp MASTER.
>> Using a known working port on the switch, I continue to get 70% packet
>> loss on FW1 on vr0 (vr0 - extif, vr1 - intif, vr2 - pfsync). If I
>> flip FW1 and FW2, the packet loss follows FW1. I took FW1 home,
>> plugged it into my home network on vr0 and it works fine with 0%
>> packet loss so the interface seems fine. I also took the IP bound to
>> vr0 on FW1 and bound it to vr0 on FW2 and the ISP isn't the problem.
>>
> Show me the output of "sysctl dev.vr.0.stats=1" and "netstat -ndI vr0".

soekris1# sysctl dev.vr.0.stats=1
dev.vr.0.stats: -1 -> -1

soekris1# netstat -ndI vr0
Name  Mtu  Network        Address              Ipkts Ierrs   Opkts Oerrs  Coll Drop
vr0   1500                00:00:24:cc:cb:94    17491     0   14993     0     0    0
vr0   1500 98.xxx.xxx.56  98.xxx.xxx.59          992     -    9374     -     -    -

soekris2# sysctl dev.vr.0.stats=1
dev.vr.0.stats: -1 -> -1

soekris2# netstat -ndI vr0
Name  Mtu  Network        Address              Ipkts Ierrs   Opkts Oerrs  Coll Drop
vr0   1500                00:00:24:ca:40:60   575909     0  588703     0     0    0
vr0   1500 98.xxx.xxx.56  98.xxx.xxx.60        10029     -   53106     -     -    -

Let me know if you need any other information! Thanks!

Patrick

This message was sent using IMP, the Internet Messaging Program.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: Packet Loss on FW1 but not FW2 (CARP + PF on FBSD8)
On Fri, May 14, 2010 at 03:56:38PM -0400, l...@cykotix.com wrote: > Quoting Pyun YongHyeon : > > >On Fri, May 14, 2010 at 02:56:12PM -0400, l...@cykotix.com wrote: > >>Hello, > >> > >>I recently just purchased 2 Soekris5501 with identical 120gb 2.5" WD > >>Scorpio HDDs. I'm using them for network failover, using CARP, PF and > >>pfSync on FreeBSD 8-STABLE. > >> > >>The short version of my problem: > >> > >>I setup FW2 first, imaged its hard drive to FW1. I changed the > >>necessary configs to update the IPs and ensure FW1 was carp MASTER. > >>Using a known working port on the switch, I continue to get 70% packet > >>loss on FW1 on vr0 (vr0 - extif, vr1 - intif, vr2 - pfsync). If I > >>flip FW1 and FW2, the packet loss follows FW1. I took FW1 home, > >>plugged it into my home network on vr0 and it works fine with 0% > >>packet loss so the interface seems fine. I also took the IP bound to > >>vr0 on FW1 and bound it to vr0 on FW2 and the ISP isn't the problem. > >> > > > >Show me the output of "sysctl dev.vr.0.stats=1" and "netstat -ndI vr0". > > soekris1# sysctl dev.vr.0.stats=1 > dev.vr.0.stats: -1 -> -1 > Please check the output of console. It would have printed some MAC counters maintained in driver. > soekris1# netstat -ndI vr0 > NameMtu Network Address Ipkts IerrsOpkts > Oerrs Coll Drop > vr01500 00:00:24:cc:cb:9417491 014993 > 0 00 > vr01500 98.xxx.xxx.56 98.xxx.xxx.59 992 - 9374 > - -- > No Ierrs, so MAC counters would be more helpful here. > > soekris2# sysctl dev.vr.0.stats=1 > dev.vr.0.stats: -1 -> -1 > > soekris2# netstat -ndI vr0 > NameMtu Network Address Ipkts IerrsOpkts > Oerrs Coll Drop > vr01500 00:00:24:ca:40:60 575909 0 588703 > 0 00 > vr01500 98.xxx.xxx.56 98.xxx.xxx.6010029 -53106 > - -- > > > Let me know if you need any other information! Thanks! > > Patrick ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: Packet Loss on FW1 but not FW2 (CARP + PF on FBSD8)
Quoting Pyun YongHyeon : >Show me the output of "sysctl dev.vr.0.stats=1" and "netstat -ndI vr0". soekris1# sysctl dev.vr.0.stats=1 dev.vr.0.stats: -1 -> -1 Please check the output of console. It would have printed some MAC counters maintained in driver. soekris1# netstat -ndI vr0 NameMtu Network Address Ipkts IerrsOpkts Oerrs Coll Drop vr01500 00:00:24:cc:cb:9417491 014993 0 00 vr01500 98.xxx.xxx.56 98.xxx.xxx.59 992 - 9374 - -- FW1: vr0 statistics: Outbound good frames : 14992 Inbound good frames : 17486 Outbound errors : 0 Inbound errors : 0 Inbound no buffers : 0 Inbound no mbuf clusters: 0 Inbound FIFO overflows : 0 Inbound CRC errors : 0 Inbound frame alignment errors : 0 Inbound giant frames : 0 Inbound runt frames : 0 Outbound aborted with excessive collisions : 0 Outbound collisions : 0 Outbound late collisions : 0 Outbound underrun : 0 PCI bus errors : 0 driver restarted due to Rx/Tx shutdown failure : 0 No Ierrs, so MAC counters would be more helpful here. soekris2# sysctl dev.vr.0.stats=1 dev.vr.0.stats: -1 -> -1 soekris2# netstat -ndI vr0 NameMtu Network Address Ipkts IerrsOpkts Oerrs Coll Drop vr01500 00:00:24:ca:40:60 575909 0 588703 0 00 vr01500 98.xxx.xxx.56 98.xxx.xxx.6010029 -53106 - -- FW2: vr0 statistics: Outbound good frames : 588054 Inbound good frames : 575353 Outbound errors : 0 Inbound errors : 0 Inbound no buffers : 0 Inbound no mbuf clusters: 0 Inbound FIFO overflows : 0 Inbound CRC errors : 0 Inbound frame alignment errors : 0 Inbound giant frames : 0 Inbound runt frames : 0 Outbound aborted with excessive collisions : 0 Outbound collisions : 0 Outbound late collisions : 0 Outbound underrun : 0 PCI bus errors : 0 driver restarted due to Rx/Tx shutdown failure : 0 Patrick This message was sent using IMP, the Internet Messaging Program. ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"