Re: if_ral regression

2008-01-12 Thread Dag-Erling Smørgrav
Sam Leffler <[EMAIL PROTECTED]> writes:
> FWIW I took ownership of a ral bug where AP mode tx just stopped for
> no apparent reason (I think it was probe response frames but can't
> recall).  This sounds like the same thing; can you check kern/117655?

Looks similar, except for the part about the PCI version - my AP is a
soekris net4801 which I suspect supports only good old 1.1.

DES
-- 
Dag-Erling Smørgrav - [EMAIL PROTECTED]


Unexpected multicast IPv4 socket behavior

2008-01-12 Thread Fredrik Lindberg

Hi

I find the following socket behavior a bit unexpected. Multicast from
an IPv4 socket (with IP_MULTICAST_IF set) with its source address bound
to INADDR_ANY only works if there is a default route defined; otherwise
send() returns ENETUNREACH.

Default route set, src INADDR_ANY : Works
Default route set, src bind() to interface address : Works
No default route, src INADDR_ANY : Returns ENETUNREACH
No default route, src bind() to interface address : Works

In all cases IP_MULTICAST_IF was set to the outgoing interface and
IP_ADD_MEMBERSHIP was properly called. IGMP membership reports
were seen on the link in all cases.
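
(Since attachments sometimes get stripped from the list, here is a
minimal sketch of the kind of test case used -- the interface address,
group, and port below are placeholders, not the exact values from the
real test program.)

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <stdio.h>

int
main(void)
{
	struct sockaddr_in dst;
	struct ip_mreq mreq;
	struct in_addr ifaddr;
	char buf[] = "test";
	int s;

	s = socket(AF_INET, SOCK_DGRAM, 0);

	/* Placeholder interface address; substitute a local one. */
	inet_pton(AF_INET, "192.0.2.1", &ifaddr);

	/* Explicitly select the outgoing multicast interface. */
	if (setsockopt(s, IPPROTO_IP, IP_MULTICAST_IF, &ifaddr,
	    sizeof(ifaddr)) == -1)
		perror("IP_MULTICAST_IF");

	/* Join the group on the same interface. */
	memset(&mreq, 0, sizeof(mreq));
	inet_pton(AF_INET, "224.0.0.251", &mreq.imr_multiaddr);
	mreq.imr_interface = ifaddr;
	if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq,
	    sizeof(mreq)) == -1)
		perror("IP_ADD_MEMBERSHIP");

	/* No bind(): the source stays INADDR_ANY, as in cases 1 and 3. */
	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_len = sizeof(dst);
	dst.sin_port = htons(12345);
	inet_pton(AF_INET, "224.0.0.251", &dst.sin_addr);

	/* Fails with ENETUNREACH when no default route is present. */
	if (sendto(s, buf, sizeof(buf), 0, (struct sockaddr *)&dst,
	    sizeof(dst)) == -1)
		perror("sendto");
	return (0);
}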

I believe the cause of this (unless this is the expected behavior?)
is in in_pcbconnect_setup() (netinet/in_pcb.c) [1].
The check for a multicast destination address is run after the attempt
to get the source address by finding a directly connected interface;
that check also returns ENETUNREACH when it fails (which it does for the
destination 224.0.0.0/24 if no default route is set).

Moving the multicast check before the directly connected check solves
this (as would any other rearrangement that makes sure the
IN_MULTICAST() check is executed).

I've attached a test case and a patch to illustrate it. Comments?


Fredrik Lindberg



[1] http://fxr.watson.org/fxr/source/netinet/in_pcb.c#L610


Index: netinet/in_pcb.c
===================================================================
RCS file: /home/ncvs/src/sys/netinet/in_pcb.c,v
retrieving revision 1.198
diff -d -u -r1.198 in_pcb.c
--- netinet/in_pcb.c	22 Dec 2007 10:06:11 -	1.198
+++ netinet/in_pcb.c	12 Jan 2008 10:48:08 -
@@ -618,26 +618,6 @@
 		if ((inp->inp_socket->so_options & SO_DONTROUTE) == 0)
 			ia = ip_rtaddr(faddr);
 		/*
-		 * If we found a route, use the address corresponding to
-		 * the outgoing interface.
-		 * 
-		 * Otherwise assume faddr is reachable on a directly connected
-		 * network and try to find a corresponding interface to take
-		 * the source address from.
-		 */
-		if (ia == 0) {
-			bzero(&sa, sizeof(sa));
-			sa.sin_addr = faddr;
-			sa.sin_len = sizeof(sa);
-			sa.sin_family = AF_INET;
-
-			ia = ifatoia(ifa_ifwithdstaddr(sintosa(&sa)));
-			if (ia == 0)
-				ia = ifatoia(ifa_ifwithnet(sintosa(&sa)));
-			if (ia == 0)
-				return (ENETUNREACH);
-		}
-		/*
 		 * If the destination address is multicast and an outgoing
 		 * interface has been set as a multicast option, use the
 		 * address of that interface as our source address.
@@ -657,6 +637,26 @@
 					return (EADDRNOTAVAIL);
 			}
 		}
+		/*
+		 * If we found a route, use the address corresponding to
+		 * the outgoing interface.
+		 * 
+		 * Otherwise assume faddr is reachable on a directly connected
+		 * network and try to find a corresponding interface to take
+		 * the source address from.
+		 */
+		if (ia == 0) {
+			bzero(&sa, sizeof(sa));
+			sa.sin_addr = faddr;
+			sa.sin_len = sizeof(sa);
+			sa.sin_family = AF_INET;
+
+			ia = ifatoia(ifa_ifwithdstaddr(sintosa(&sa)));
+			if (ia == 0)
+				ia = ifatoia(ifa_ifwithnet(sintosa(&sa)));
+			if (ia == 0)
+				return (ENETUNREACH);
+		}
 		laddr = ia->ia_addr.sin_addr;
 	}
 

Re: Unexpected multicast IPv4 socket behavior

2008-01-12 Thread Bruce M. Simpson

Hi,

This is ironic, because I've been up against a similar problem with
255.255.255.255 on my current project, which also requires a 'bump in
the stack'; and just yesterday I found myself reading the same code
you've posted the patch for, to answer another chap's query.


Fredrik Lindberg wrote:

> Hi
>
> I find the following socket behavior a bit unexpected. Multicast from
> an IPv4 socket (with IP_MULTICAST_IF set) with its source address bound
> to INADDR_ANY only works if there is a default route defined; otherwise
> send() returns ENETUNREACH.
>
> Default route set, src INADDR_ANY : Works
> Default route set, src bind() to interface address : Works
> No default route, src INADDR_ANY : Returns ENETUNREACH
> No default route, src bind() to interface address : Works


Totally expected behaviour. There's no way for the stack to know which
interface to originate the traffic from when there is no default route
and no IP-layer source information elsewhere in the stack.


It could be argued that case 3 is in fact an abuse of the APIs. In IPv6, 
the use of multicast requires that you create a socket and bind to the 
interface where you wish to send and receive the channel. This is 
reasonable because both IGMP and MLD require that their group state 
traffic is bound to a specific address. Hence the glaring holes in IGMP, 
due to the lack of IPv4 link-local addressing.


The newer multicast APIs in fact require you to do this, precisely to 
avoid this ambiguity. As such IP_MULTICAST_IF should be considered 
legacy -- however -- as we've seen, there's a lack of knowledge out 
there about exactly how this stuff is supposed to work.
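
(For illustration, a rough sketch of such an interface-explicit join
using the RFC 3678 API -- assuming your stack provides MCAST_JOIN_GROUP;
error handling elided:)

#include <sys/socket.h>
#include <netinet/in.h>
#include <net/if.h>
#include <string.h>

/*
 * Join a multicast group on an explicitly named interface.  The
 * interface index removes any ambiguity about where the join goes.
 */
int
join_group(int s, const char *ifname, const struct sockaddr_in *grp)
{
	struct group_req gr;

	memset(&gr, 0, sizeof(gr));
	gr.gr_interface = if_nametoindex(ifname);
	memcpy(&gr.gr_group, grp, sizeof(*grp));
	return (setsockopt(s, IPPROTO_IP, MCAST_JOIN_GROUP,
	    &gr, sizeof(gr)));
}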




> In all cases IP_MULTICAST_IF was set to the outgoing interface and
> IP_ADD_MEMBERSHIP was properly called. IGMP membership reports
> were seen on the link in all cases.


Now, if you are explicitly telling the stack which interface to use with 
IP_MULTICAST_IF, and you are seeing the regression in case 3 above, THAT 
looks like a bug.




> I believe the cause of this (unless this is the expected behavior?)
> is in in_pcbconnect_setup() (netinet/in_pcb.c) [1].
> The check for a multicast destination address is run after the attempt
> to get the source address by finding a directly connected interface;
> that check also returns ENETUNREACH when it fails (which it does for the
> destination 224.0.0.0/24 if no default route is set).


But but but. Sends to 224.0.0.0/*24* should never fail as it is strictly 
scoped to a link, and does not require any IPv4 route information. This 
is the lonesome kicker -- IP needs to know where to source the send 
from; however, you've already told it that with IP_MULTICAST_IF, so there 
is definitely a bug.


See the IN_LOCAL_GROUP() macro in -CURRENT's netinet/in.h for how to 
check for 224.0.0.0/24 in code.
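
(From memory it boils down to something like the following -- check
in.h for the authoritative definition:)

/* Paraphrased: true for the link-local block 224.0.0.0/24; the
 * address is taken in host byte order. */
#define IN_LOCAL_GROUP(i)	(((u_int32_t)(i) & 0xffffff00) == 0xe0000000)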


In fact we should probably disallow multicast sends to this address when 
the socket HAS NOT been bound, except of course for the case where the 
interface is unnumbered -- but we still need a means of telling the 
stack about this case. The answer might be something called IP_SENDIF... 
Linux uses SO_BINDTODEVICE for this. It's a case of sitting down and 
doing it.
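
(For reference, the Linux knob looks roughly like this -- a Linux-only
sketch; there is no equivalent in our stack yet:)

#include <sys/socket.h>
#include <string.h>

/* Pin all of a socket's output to one device (Linux only). */
int
bind_to_device(int s, const char *ifname)
{
	return (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE,
	    ifname, strlen(ifname) + 1));
}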


It's reasonable to assume that multicast applications should know that 
they need to walk the system's interface tree and be aware of interfaces 
and their addresses. Apps which don't do this are legacy and need to be 
updated to reflect how IP stacks actually behave now.
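
(Walking the tree is only a few lines with getifaddrs(3); a minimal
sketch that lists multicast-capable IPv4 interfaces:)

#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <stdio.h>

int
main(void)
{
	struct ifaddrs *ifap, *ifa;

	if (getifaddrs(&ifap) == -1)
		return (1);
	for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
		/* Keep only IPv4 addresses on multicast-capable links. */
		if (ifa->ifa_addr == NULL ||
		    ifa->ifa_addr->sa_family != AF_INET)
			continue;
		if ((ifa->ifa_flags & IFF_MULTICAST) == 0)
			continue;
		printf("candidate: %s\n", ifa->ifa_name);
	}
	freeifaddrs(ifap);
	return (0);
}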




> Moving the multicast check before the directly connected check solves
> this (as would any other rearrangement that makes sure the
> IN_MULTICAST() check is executed).


You are quite right that the imo_multicast_ifp check needs to happen 
further up.


This is probably OK as a workaround -- but -- bigger changes need to 
happen in that code, as source selection is currently based mostly on 
the destination. That assumption doesn't always hold, and for multicast 
it certainly DOESN'T hold, as you have seen.


SO_DONTROUTE is something of a misnomer anyway. Routes still need to be 
present in the forwarding table for certain lookups, and source 
interface selection is almost wholly based on the destination faddr in 
the inpcb, both for connect() and for the temporary connect performed 
during a sendto().


Your patch should be OK to go in. Regardless of whether there are routes 
for the multicast channel you're using, IP_MULTICAST_IF is a 
sledgehammer which says 'I use THIS interface for multicast', and until 
our IPv4 stack has link-scope addresses, it will be needed.


Thanks again...
BMS


Re: Unexpected multicast IPv4 socket behavior

2008-01-12 Thread Fredrik Lindberg

Bruce M. Simpson wrote:


>> I find the following socket behavior a bit unexpected. Multicast from
>> an IPv4 socket (with IP_MULTICAST_IF set) with its source address bound
>> to INADDR_ANY only works if there is a default route defined; otherwise
>> send() returns ENETUNREACH.
>>
>> Default route set, src INADDR_ANY : Works
>> Default route set, src bind() to interface address : Works
>> No default route, src INADDR_ANY : Returns ENETUNREACH
>> No default route, src bind() to interface address : Works


> Totally expected behaviour. There's no way for the stack to know which
> interface to originate the traffic from when there is no default route
> and no IP-layer source information elsewhere in the stack.




I would expect this _without_ IP_MULTICAST_IF set; however, as I said,
the interface had been explicitly set with IP_MULTICAST_IF in all 4
cases, so there is indeed enough information in the stack to send
the packet.

It seems that my test case got stripped from the mail; it was supposed
to be attached to the original post. So, just for the record:
http://manticore.h3q.net/~fli/multicast.c

> It could be argued that case 3 is in fact an abuse of the APIs. In IPv6,
> the use of multicast requires that you create a socket and bind to the
> interface where you wish to send and receive the channel. This is
> reasonable because both IGMP and MLD require that their group state
> traffic is bound to a specific address. Hence the glaring holes in IGMP,
> due to the lack of IPv4 link-local addressing.
>
> The newer multicast APIs in fact require you to do this, precisely to
> avoid this ambiguity. As such IP_MULTICAST_IF should be considered
> legacy -- however -- as we've seen, there's a lack of knowledge out
> there about exactly how this stuff is supposed to work.




If IP_MULTICAST_IF should be considered legacy, I'll move away from it.
But, as you said, there is probably a lack of knowledge on how the
APIs should be used, and I have never seen anyone or any document
(maybe I haven't looked hard enough) suggest that this usage is
deprecated.



>> In all cases IP_MULTICAST_IF was set to the outgoing interface and
>> IP_ADD_MEMBERSHIP was properly called. IGMP membership reports
>> were seen on the link in all cases.
>
> Now, if you are explicitly telling the stack which interface to use with
> IP_MULTICAST_IF, and you are seeing the regression in case 3 above, THAT
> looks like a bug.




Yes, that was the whole point.



>> I believe the cause of this (unless this is the expected behavior?)
>> is in in_pcbconnect_setup() (netinet/in_pcb.c) [1].
>> The check for a multicast destination address is run after the attempt
>> to get the source address by finding a directly connected interface;
>> that check also returns ENETUNREACH when it fails (which it does for the
>> destination 224.0.0.0/24 if no default route is set).
>
> But but but. Sends to 224.0.0.0/*24* should never fail as it is strictly
> scoped to a link, and does not require any IPv4 route information. This
> is the lonesome kicker -- IP needs to know where to source the send
> from; however, you've already told it that with IP_MULTICAST_IF, so there
> is definitely a bug.


I know that 224.0.0.0/24 is link-local; I just happened to use it as
a test case. But I wouldn't expect anything in 224.0.0.0/4 to fail
_with_ IP_MULTICAST_IF set.



> See the IN_LOCAL_GROUP() macro in -CURRENT's netinet/in.h for how to
> check for 224.0.0.0/24 in code.
>
> In fact we should probably disallow multicast sends to this address when
> the socket HAS NOT been bound, except of course for the case where the
> interface is unnumbered -- but we still need a means of telling the
> stack about this case. The answer might be something called IP_SENDIF...
> Linux uses SO_BINDTODEVICE for this. It's a case of sitting down and
> doing it.


For the purpose of preventing sends to 224.0.0.0/24 from going via
the default route?

IP_SENDIF/SO_BINDTODEVICE seems to show up from time to time. Is
the only reason it hasn't been implemented simply that nobody
has done it?

Fredrik


Re: Unexpected multicast IPv4 socket behavior

2008-01-12 Thread Bruce M. Simpson

Fredrik Lindberg wrote:


> I would expect this _without_ IP_MULTICAST_IF set; however, as I said,
> the interface had been explicitly set with IP_MULTICAST_IF in all 4
> cases, so there is indeed enough information in the stack to send
> the packet.


Correct. You found a bug. Well done.



> If IP_MULTICAST_IF should be considered legacy, I'll move away from it.
> But, as you said, there is probably a lack of knowledge on how the
> APIs should be used, and I have never seen anyone or any document
> (maybe I haven't looked hard enough) suggest that this usage is
> deprecated.


The fact that IPv4 multicast sends appear to work using the default 
route is a historical quirk. It is not multicast forwarding.


For a host/end-station, the mere fact that the group was joined on a 
given socket, on a given interface, should be enough IP-layer 
reachability information for the inpcb layer to figure out where to send 
the packets. From that point on, it's the problem of the multicast 
routers on the path between the end-station and the other members of the 
channel, which normally speak PIM-SM.


If one follows how IGMP works, then the problem with multicast joins 
which are not scoped to an interface is readily obvious. IGMP/MLD is 
necessary to inform upstream routers that the channel is being opened -- 
otherwise, you will not receive traffic for the group, as the state 
about the end-station's participation in the channel is never 
communicated to routers.


The endpoint address used by the local end of the path in MLD is the 
link-scope IPv6 address. In IGMP, it's the first IPv4 address configured 
on the interface. Both IGMP and MLD are always scoped to the local link 
-- they deal with multicast forwarding and membership state ONLY in the 
domain of the link they are used on.


IPv4 has historically not had link-scope addresses, which are one 
possible answer to the problem. Ergo there is a problem if the interface 
is unnumbered -- or if the inpcb laddr is 0.0.0.0 -- which you have 
seen. It should be possible to use IP_MULTICAST_IF as a workaround for 
this; however, you found that path is buggy...


I guess the textbooks out there haven't caught up with reality.



> I wouldn't expect anything in 224.0.0.0/4 to fail
> _with_ IP_MULTICAST_IF set.


Correct. This makes the bug even more damaging. It is reasonable for a 
system to be using multicast during early boot when all interfaces are 
unnumbered.


In fact the IGMPv3 RFC suggests no IGMP traffic should be sent for 
groups in 224.0.0.0/24, because upstream IGMP routers should never be 
forwarding these groups between links.


Unfortunately, in practice, this can break layer 2 multicasts for these 
groups which traverse IGMP snooping switches.




> IP_SENDIF/SO_BINDTODEVICE seems to show up from time to time. Is
> the only reason it hasn't been implemented simply that nobody
> has done it?


Yup. Everyone seems to be too worried about unicast traffic and bulk I/O 
performance to bother much with other applications of IP, so this sort 
of issue gets more airtime elsewhere.


later
BMS


bgp router preferences

2008-01-12 Thread Eric W. Bates

I think I have finally given up on cisco.

What are folks' recommendations for a machine carrying full BGP routes?

I think I need to get a Sangoma card; but what is the current favorite 
BGP routing software, and how much RAM do folks think I can get away with?


Thanks for your time.

--
Eric W. Bates
[EMAIL PROTECTED]


Mpd-5.0 released

2008-01-12 Thread Alexander Motin

Hello everybody!

Mpd5 now has all of its planned functionality and is successfully working 
in many production environments. With that, I am glad to present the new 
mpd-5.0 release, the first release of the new era!


Compared to 4.x, mpd5 introduces a significantly changed configuration 
and operation model, based on dynamic link/bundle creation. This gives 
many benefits, such as simplified configuration, better multilink 
operation, call forwarding (LAC/PAC/TSA) similar to Cisco VPDN, better 
scalability, and many others.


Mpd5 supports all FreeBSD versions from 5.x to HEAD, though a newer 
system is preferred to get full functionality.


Port:  net/mpd5
Manual:http://mpd.sourceforge.net/doc5/mpd.html
Forums:http://sourceforge.net/forum/?group_id=14145

--
Alexander Motin


Re: kern/119617: [nfs] nfs error on wpa network when reseting/shutdown

2008-01-12 Thread linimon
Old Synopsis: nfs error on wpa network when reseting/shutdown
New Synopsis: [nfs] nfs error on wpa network when reseting/shutdown

Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Sun Jan 13 03:35:29 UTC 2008
Responsible-Changed-Why: 
Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=119617


Re: bgp router preferences

2008-01-12 Thread Niki Denev
On Jan 13, 2008 1:13 AM, Eric W. Bates <[EMAIL PROTECTED]> wrote:
> I think I have finally given up on cisco.
>
> What are folks' recommendations for a machine carrying full BGP routes?
>
> I think I need to get a Sangoma card; but what is the current favorite
> BGP routing software, and how much RAM do folks think I can get away with?
>
> Thanks for your time.
>
> --
> Eric W. Bates
> [EMAIL PROTECTED]

I'm using openbgpd and I'm quite happy with it. It's a very basic
setup, but it works without any problems, and the memory usage is very
low. bgpd uses under 60 MB of memory with one full routing table from
the ISP, another received via iBGP, and another session with about 250K
addresses (local peering).

Regards,
Niki