Re: running out of mbufs?

2005-08-09 Thread Marko Zec
On Monday 08 August 2005 18:47, Andre Oppermann wrote:
> Marko Zec wrote:
> > On Monday 08 August 2005 12:32, Andre Oppermann wrote:
> > > Dave+Seddon wrote:
> > > > BTW, I'd be interested to know people's thoughts on multiple IP
> > > > stacks on FreeBSD.  It would be really cool to be able to give
> > > > a jail it's own IP stack bound to a VLAN interface.  It could
> > > > then be like a VRF on Cisco.
> > >
> > > There is a patch doing that for FreeBSD 4.x.  However while
> > > interesting it is not the way to go.  You don't want to have
> > > multiple parallel stacks but just multiple routing tables and
> > > interface groups one per jail. This gives you the same
> > > functionality as Cisco VRF but is far less intrusive to the
> > > kernel.
> >
> > Andre,
> >
> > the stack virtualization framework for 4.x is based precisely on
> > introducing multiple routing tables and interface groups.  In order
> > to cleanly implement support for multiple independent interface
> > groups, one has to touch both the link and network layers, not
> > forgetting the ARP stuff... and in no time you have ended up with a
> > huge and intrusive diff against the original network stack code.
>
> While your stack indexing approach is interesting I don't think it is
> the way we should take for the generic FreeBSD.  There are better
> ways to implement a less limiting superset of the desired
> functionality.

Andre,

there's no doubt that almost any idea, and any piece of software in 
particular, can be improved.  Could you provide a more elaborate 
argument for your claim that the network stack cloning concept is so 
severely limited that it has no place in the future of FreeBSD?  And 
what exactly did you mean by a "stack indexing approach"?

> And the ARP is almost done, I have to review the code 
> and then it goes into -current.

While having per-interface ARP logic is certainly a nice feature, it 
alone will not solve much with regard to introducing multiple 
independent interface groups.  You will still most probably have to 
revisit the ARP code once you start introducing non-global interface 
lists in the kernel.

> > So I see no point in pretending we could get such a functionality
> > for free, i.e. with only a negligible intrusiveness to the kernel
> > code.  A more appropriate question would be whether the potential
> > benefits of having multiple stack state instances could outweight
> > the trouble and damage associated with the scope of required
> > modifications to the kernel code tree.  Only if we could get an
> > affirmative answer to that question it would make sense to start
> > thinking / debating on the most appropriate methodology to
> > (re)implement the multiple stacks framework.
>
> Having multiple stacks duplicates a lot of structures for each stack
> which don't have to be duplicated.  With your approach you need a new
> jail for every new stack.  In each jail you have to run a new
> instance of a routing daemon (if you do routing).  And it precludes
> having one routing daemon managing multiple routing tables.  While
> removing one limitation you create some new ones in addition to the
> complexity.

Bemusingly, none of the above claims are true.  

A new jail for each network stack instance is NOT required.  Inside the 
kernel what could be considered "per-jail" and per-network stack 
structures are cleanly separated and independent.  In fact, one can run 
multiple jails bound to a single network stack instance, if desired.

Furthermore, a single process can simultaneously attach to multiple 
network stacks, thus potentially allowing a single routing daemon to 
manage multiple separated routing tables and interface groups.  The 
entity that gets permanently bound to a network stack instance is a 
socket and not a process. This translates to the capability of a single 
process to open multiple sockets in multiple independent stacks.  IMO, 
one particular strength of such an approach is that it requires 
absolutely no extensions or modifications to the existing routing 
socket API.

And finally, I'm wondering exactly what structures you are referring to 
when you say that this approach "duplicates a lot of structures for 
each stack which don't have to be duplicated"?  I absolutely agree that 
the virtualization of the network stack should be done as simply and 
non-intrusively as possible, but my point is that it just cannot be 
done cleanly / properly without making some sacrifices in terms of the 
scope of the minimum required modifications.

Cheers,

Marko

> What I have in mind is a redesign of some parts of the stack as such.
> I have got funding to work on this through my fundraise of two weeks
> ago.  Work is about to start later this week.  I will provide a more
> detailed design document for discussion and review by the FreeBSD
> community in a few days.  It will include features such as multiple
> hierarchical routing tables (bound to jails or not), interface
> groups, virtual interfaces belonging

Stack virtualization (was Re: running out of mbufs?)

2005-08-09 Thread Milan Obuch
On Tuesday 09 August 2005 11:04, Marko Zec wrote:
> On Monday 08 August 2005 18:47, Andre Oppermann wrote:

...

> Andre,
>
> there's no doubt almost any idea or particularly software can be
> improved.  Could you provide a more elaborate argumentation to your
> claim the network stack cloning concept is so severely limited that it
> has no place to search for in the future of FreeBSD?
...
>

I am coming to this discussion having used Marko's patch once, and planning 
to use it again for a similar task if nothing better is available (which 
will almost surely not be the case in the not-too-distant future).
The only problem with it is that it is not based on -current or the 5.x/6.x 
releases, so the whole thing needs to be reimplemented in the new kernel 
environment.

...

> > Having multiple stacks duplicates a lot of structures for each stack
> > which don't have to be duplicated.  With your approach you need a new
> > jail for every new stack.  In each jail you have to run a new
> > instance of a routing daemon (if you do routing).  And it precludes
> > having one routing daemon managing multiple routing tables.  While
> > removing one limitation you create some new ones in addition to the
> > complexity.
>
> Bemusingly, none of the above claims are true.
>
> A new jail for each network stack instance is NOT required.  Inside the
> kernel what could be considered "per-jail" and per-network stack
> structures are cleanly separated and independent.  In fact, one can run
> multiple jails bound to a single network stack instance, if desired.
>
> Furthermore, a single process can simultaneously attach to multiple
> network stacks, thus potentially allowing a single routing daemon to
> manage multiple separated routing tables and interface groups.  The
> entity that gets permanently bound to a network stack instance is a
> socket and not a process. This translates to the capability of a single
> process to open multiple sockets in multiple independent stacks.  IMO,
> one particular strength of such an approach is that it requires
> absolutely no extensions or modifications to the existing routing
> socket API.
>
> And finally, I'm wondering what structures exactly are you referring to
> when you say that this approach "duplicates a lot of structures for
> each stack which don't have to be duplicated"?  I absolutely agree that
> the virtualization of the network stack should be done as simple and
> non-intrusive as possible, but my point is that it just cannot be done
> cleanly / properly without taking some sacrifices in terms of the scope
> of minimum required modifications.
>

As I am no network code guru, I can only tell from reading the presentation 
papers and some other material on this issue that virtualisation as done by 
Marko already meets most of the above-mentioned criteria.  I would like to 
stress its ease of use: no application modification is necessary, but one 
could surely make a new application (or modify an old one, for that matter) 
aware of stack virtualization -- a routing daemon could benefit here.

The only argument against Marko's work could be that it is monolithic, which 
does interfere with the current movement towards a modular network 
infrastructure in which a new protocol can be kldload'ed.

To me this would be a great start -- it works well, it is already usable in 
many different scenarios, and it *is* well structured, even easy to 
understand (at least at the conceptual level; the code itself is somewhat 
more complicated).

Milan
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: running out of mbufs?

2005-08-09 Thread Andre Oppermann
Dave+Seddon wrote:
> 
> Greetings,
> 
> It’s very cool to hear you guys are interested in separate routing.
> 
> > Having multiple stacks duplicates a lot of structures for each stack
> > which don't have to be duplicated.  With your approach you need a new
> > jail for every new stack.  In each jail you have to run a new instance
> > of a routing daemon (if you do routing).  And it precludes having one
> > routing daemon managing multiple routing tables.  While removing one
> > limitation you create some new ones in addition to the complexity.
> 
> Running multiple routing daemons isn’t too much of a problem though.  The
> memory size isn’t usually very high, and it is more likely to be secure if

It depends on your goals.  If you have full BGP feeds then running multiple
routing daemons is a big problem.  Especially with Quagga's RIB+protocolRIB
design.  Five times 130MB of RAM ain't nice.

> the daemons are separate.  If somebody was going to run a large instance of
> routing they should probably use a router, not a unix box.

Bzzt, wrong answer.  There is no difference between a FreeBSD box and
a "router" per your definition; see Juniper.  The only thing they've got
extra is a hardware FIB and forwarding plane.  I don't want to run Cisco
et al. because I can't change anything other than what the IOS CLI gives
me.  I'm not satisfied.  I can't run my own experimental routing protocols
on them.  I can't fix any of their (plenty of) bugs.

No, you want to use FreeBSD as a router instead of Cisco, Juniper or all
the others.

-- 
Andre


Re: running out of mbufs?

2005-08-09 Thread Andre Oppermann
Marko Zec wrote:
> 
> On Monday 08 August 2005 18:47, Andre Oppermann wrote:
> > Marko Zec wrote:
> > > On Monday 08 August 2005 12:32, Andre Oppermann wrote:
> > > > Dave+Seddon wrote:
> > > > > BTW, I'd be interested to know people's thoughts on multiple IP
> > > > > stacks on FreeBSD.  It would be really cool to be able to give
> > > > > a jail it's own IP stack bound to a VLAN interface.  It could
> > > > > then be like a VRF on Cisco.
> > > >
> > > > There is a patch doing that for FreeBSD 4.x.  However while
> > > > interesting it is not the way to go.  You don't want to have
> > > > multiple parallel stacks but just multiple routing tables and
> > > > interface groups one per jail. This gives you the same
> > > > functionality as Cisco VRF but is far less intrusive to the
> > > > kernel.
> > >
> > > Andre,
> > >
> > > the stack virtualization framework for 4.x is based precisely on
> > > introducing multiple routing tables and interface groups.  In order
> > > to cleanly implement support for multiple independent interface
> > > groups, one has to touch both the link and network layers, not
> > > forgetting the ARP stuff... and in no time you have ended up with a
> > > huge and intrusive diff against the original network stack code.
> >
> > While your stack indexing approach is interesting I don't think it is
> > the way we should take for the generic FreeBSD.  There are better
> > ways to implement a less limiting superset of the desired
> > functionality.
> 
> Andre,
> 
> there's no doubt almost any idea or particularly software can be
> improved.  Could you provide a more elaborate argumentation to your
> claim the network stack cloning concept is so severely limited that it
> has no place to search for in the future of FreeBSD?  And what exactly
> did you mean by a "stack indexing approach"?

I'm not saying your concept is wrong or doesn't have its place.  However
there are other approaches doing 98% of what people want to do with less
intrusive code changes.

> > And the ARP is almost done, I have to review the code
> > and then it goes into -current.
> 
> While having a per-interface ARP logic is certainly a nice feature, this
> alone will not solve much with regards to introducing multiple
> independent interface groups.  You will still most probably have to
> revisit the ARP code once you start introducing non-global interface
> lists in the kernel.

I don't want to have non-global interface lists in the kernel.  What
I want to provide is not exactly what your stack virtualization does.
In fact my work does not preclude virtualization like yours on top of
it.  It's solving a somewhat different problem set in a different,
architecturally clean way.  That's why I said we should wait for my
paper before going too deep into discussions yet. ;)

> > > So I see no point in pretending we could get such a functionality
> > > for free, i.e. with only a negligible intrusiveness to the kernel
> > > code.  A more appropriate question would be whether the potential
> > > benefits of having multiple stack state instances could outweight
> > > the trouble and damage associated with the scope of required
> > > modifications to the kernel code tree.  Only if we could get an
> > > affirmative answer to that question it would make sense to start
> > > thinking / debating on the most appropriate methodology to
> > > (re)implement the multiple stacks framework.
> >
> > Having multiple stacks duplicates a lot of structures for each stack
> > which don't have to be duplicated.  With your approach you need a new
> > jail for every new stack.  In each jail you have to run a new
> > instance of a routing daemon (if you do routing).  And it precludes
> > having one routing daemon managing multiple routing tables.  While
> > removing one limitation you create some new ones in addition to the
> > complexity.
> 
> Bemusingly, none of the above claims are true.
> 
> A new jail for each network stack instance is NOT required.  Inside the
> kernel what could be considered "per-jail" and per-network stack
> structures are cleanly separated and independent.  In fact, one can run
> multiple jails bound to a single network stack instance, if desired.

Ok.

> Furthermore, a single process can simultaneously attach to multiple
> network stacks, thus potentially allowing a single routing daemon to
> manage multiple separated routing tables and interface groups.  The
> entity that gets permanently bound to a network stack instance is a
> socket and not a process. This translates to the capability of a single
> process to open multiple sockets in multiple independent stacks.  IMO,
> one particular strength of such an approach is that it requires
> absolutely no extensions or modifications to the existing routing
> socket API.

The existing API should be modified; it is pretty out of date.

> And finally, I'm wondering what structures exactly are you referring to
> when you say that this approach "duplicates a lot of structures for

Drivers that modify ifp->if_flags's IFF_ALLMULTI field

2005-08-09 Thread Robert Watson


(maintainers or effective maintainers of the affected device drivers CC'd 
-- see below for the details, sorry about dups)


I've recently been reviewing the use of if_flags with respect to network 
stack and driver locking.  Part of that work has been to break the field 
out into two separate fields: one maintained and locked by the device 
driver (if_drv_flags), and the other maintained and locked by the network 
stack (if_flags).  So far, I've moved IFF_OACTIVE and IFF_RUNNING from 
if_flags to if_drv_flags.  This change was recently committed to 7.x, and 
will be merged to 6.x prior to 6.0.


I'm now reviewing the remainder of the flags, and hit upon IFF_ALLMULTI, 
which seems generally to be an administrative flag specifying that all 
multicast packets should be accepted at the device driver layer.  This 
flag is generally set in one of three ways:


(1) if_allmulti() is invoked by network protocols wanting to specify that
they want to see all multicast packets, such as for multicast routing.

(2) IFF_ALLMULTI is sometimes set directly by cross-driver common link
layer code, specifically if IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)
is matched in if_resolvemulti().

(3) Some device drivers set IFF_ALLMULTI when their multicast address
filters overflow to indicate that all multicast traffic should and is
being accepted, to be handled at the link or network layer.

IFF_ALLMULTI is then generally read by network device drivers in order to 
special case their multicast handling if all multicast is desired.


My feeling is that (2) and (3) are in fact bugs in device drivers and the 
IPv6 code.  Specifically:


- IFF_ALLMULTI should be set using if_allmulti(), which maintains a
  counter of consumers, which (2) bypasses.  Also, once (2) has happened,
  IFF_ALLMULTI is never disabled by that consumer.  And, it may be
  disabled by another consumer, breaking the consumer that wanted it on.
  This should be corrected to use if_allmulti(), and ideally a symmetric
  "now turn it off" point should be identified.

- (3) is also a bug, as it reflects internal driver state, and will
  interfere with the administrative setting of IFF_ALLMULTI by turning it
  off even though there are consumers that want it on.  Drivers should
  maintain their forcing of the flag on or off internally.  If it is
  necessary to also expose IFF_ALLMULTI as a status flag for the device
  driver, a new flag should be introduced that is distinguished from the
  administrative state.

(3) is fairly uncommon -- most device drivers already maintain the forcing 
of the all multicast state internally in a separate flag.  The following 
device drivers do not, however:


  src/sys/dev/awi/awi.c
  src/sys/dev/gem/if_gem.c
  src/sys/dev/hme/if_hme.c
  src/sys/dev/ie/if_ie.c
  src/sys/dev/lnc/if_lnc.c
  src/sys/dev/pdq/pdq_ifsubr.c
  src/sys/dev/ray/if_ray.c
  src/sys/dev/snc/dp83932.c
  src/sys/dev/usb/if_udav.c
  src/sys/pci/if_de.c

The fix is generally pretty straightforward and, depending on the device 
driver, may or may not require adding new state to the softc.


Robert N M Watson


Re: Multicast locking LOR

2005-08-09 Thread Ed Maste
On Mon, Aug 08, 2005 at 10:34:53PM +0100, Robert Watson wrote:

> Could you add a hard-coded entry to WITNESS to place the udpinp lock
> before the in_multi_mtx in the lock order, and let me know which path
> resulted in the opposite order from this one?

I hard-coded the correct order, but am now unable to reproduce the
problem.  I guess Murphy's Law at work.  I suspect you're right about
IGMP's packet getting back into ip_input.

I'll post another message if I can get it to happen again.

-ed


Stack virtualization (was: running out of mbufs?)

2005-08-09 Thread Marko Zec
On Tuesday 09 August 2005 14:41, Andre Oppermann wrote:
> Marko Zec wrote:
> > On Monday 08 August 2005 18:47, Andre Oppermann wrote:
> > > Marko Zec wrote:
> > > > On Monday 08 August 2005 12:32, Andre Oppermann wrote:
...
> > > > > There is a patch doing that for FreeBSD 4.x.  However while
> > > > > interesting it is not the way to go.  You don't want to have
> > > > > multiple parallel stacks but just multiple routing tables and
> > > > > interface groups one per jail. This gives you the same
  
> > > > > functionality as Cisco VRF but is far less intrusive to the
> > > > > kernel.
...
> I don't want to have non-global interface lists in the kernel.

But sooner or later you _will_ end up with some sort of non-global 
interface lists after all, just as you stated yourself at the beginning 
of this thread.  Of course, one can still keep all interfaces linked 
in one list and introduce another set of separate per-stack lists used 
to logically group interfaces into smaller sets, but that's really just 
a question of coding / design style.

...
> > > Having multiple stacks duplicates a lot of structures for each
> > > stack which don't have to be duplicated.  With your approach you
> > > need a new jail for every new stack.  In each jail you have to
> > > run a new instance of a routing daemon (if you do routing).  And
> > > it precludes having one routing daemon managing multiple routing
> > > tables.  While removing one limitation you create some new ones
> > > in addition to the complexity.
> >
> > Bemusingly, none of the above claims are true.
> >
> > A new jail for each network stack instance is NOT required.  Inside
> > the kernel what could be considered "per-jail" and per-network
> > stack structures are cleanly separated and independent.  In fact,
> > one can run multiple jails bound to a single network stack
> > instance, if desired.
>
> Ok.
>
> > Furthermore, a single process can simultaneously attach to multiple
> > network stacks, thus potentially allowing a single routing daemon
> > to manage multiple separated routing tables and interface groups. 
> > The entity that gets permanently bound to a network stack instance
> > is a socket and not a process. This translates to the capability of
> > a single process to open multiple sockets in multiple independent
> > stacks.  IMO, one particular strength of such an approach is that
> > it requires absolutely no extensions or modifications to the
> > existing routing socket API.
>
> The existing API should be modified, it is pretty out of date.
>
> > And finally, I'm wondering what structures exactly are you
> > referring to when you say that this approach "duplicates a lot of
> > structures for each stack which don't have to be duplicated"?  I
> > absolutely agree that the virtualization of the network stack
> > should be done as simple and non-intrusive as possible, but my
> > point is that it just cannot be done cleanly / properly without
> > taking some sacrifices in terms of the scope of minimum required
> > modifications.
>
> Multiple interface lists, vm zones, etc. as your FAQ spells out.

Multiple interface lists are a must, whether as a replacement for (the 
way I did it) or as a supplement to a global interface list.  They cost 
nothing in terms of memory use, and they greatly simplify the code and 
prevent potential performance and cross-stack-boundary-leaking 
pitfalls.

For a long time my framework has _not_ used separate VM zones per 
network stack instance for storing PCBs.  True, the FAQ should probably 
be updated, but it already clearly stated my doubts about whether 
separate VM zones were really needed, and the later experiments and 
working code proved they indeed weren't.  What still uses multiple VM 
zones is the TCP syncache code, and I agree it could most likely be 
reworked to use only a single global zone.

Any other offending structures? :-)

It looks like we might be converging in terms of what it takes to 
virtualize a network stack :-)

> Again, I think we are talking past each other right now and we have
> different solutions to different problem sets in mind (or already
> coded).  When I have my paper finished my vision and intentions
> should be more clear and then we can have the discussion on the
> merits of each approach and whether parts of each are complementary
> or converse.

OK, looking forward to it...

Cheers,

Marko


panic: if_attach called without if_alloc'd input()

2005-08-09 Thread Darcy Buskermolen
I'm getting the following panic on my RELENG_6 test box:

xl1f0: BUG: if_attach called without if_alloc'd input()

Where should I be looking to track this down?  I suspect it has to do with a 
custom kernel; it wasn't doing it when I was running GENERIC.

-- 
Darcy Buskermolen
Wavefire Technologies Corp.

http://www.wavefire.com
ph: 250.717.0200
fx: 250.763.1759


Re: Stack virtualization (was: running out of mbufs?)

2005-08-09 Thread Andre Oppermann
Marko Zec wrote:
> 
> On Tuesday 09 August 2005 14:41, Andre Oppermann wrote:
> ...
> > I don't want to have non-global interface lists in the kernel.
> 
> But sooner or later you _will_ end up with some sort of non-global
> interface lists after all, just as you stated yourself at the beginning
> of this tread.  Of course one can still maintain all interfaces linked
> in one list and introduce another set of separated lists on per-stack
> basis which will be used to logically group interfaces into smaller
> sets, but that's really just a question of coding / design style.

I'm thinking more along the lines of OpenBSD's interface groups.  There
you just add another attribute, called group, to an interface.  Claudio
(@openbsd.org, working at the next desk to me) explained it quickly to me
after it was raised here on the list.  The group name is a string, but
in the ifnet structure only an int is stored.  This group name is then
used primarily by the pf firewall to create rules for interface groups.
It handles newly arriving interfaces too.

I haven't fully explored all applications and possible tie-ins with
jails, virtual stacks etc. but it looks very interesting.
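
As a concrete illustration (OpenBSD syntax from ifconfig(8) and pf.conf(5); the interface and group names are just examples):

```
# put two interfaces into a group named "dmz"
ifconfig em0 group dmz
ifconfig em1 group dmz

# a single pf rule then covers every member, present or future
pass in on dmz inet proto tcp to port 80
```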

For example, I want to have multiple routing tables within the same
stack.  These routing tables can be opaque or fall-through, and match
on the source or destination address (though not both at the same time).
This way we get ultimate routing flexibility when using FreeBSD as a
router.  An incoming packet on interface em0 with group "priority"
would first be matched against routing table X and, if there is no
match, fall through to the default routing table.  Or you could create
a source-matching routing table Y sending matching packets on to table
Z for low-priority routing.

It's hard to describe this textually to its full extent.  That's why
my upcoming paper will have mostly graphics depicting the packet flow
and the processing options.

-- 
Andre


Re: panic: if_attach called without if_alloc'd input()

2005-08-09 Thread Brooks Davis
On Tue, Aug 09, 2005 at 08:54:21AM -0700, Darcy Buskermolen wrote:
> I'm getting the following panic on my RELENG_6 test box:
> 
> xl1f0: BUG: if_attach called without if_alloc'd input()
> 
> Where should I be looking to track this down? I suspect it has to do with a 
> custom kernel, it wasn't doing it when i was running GENERIC

The ef(4) device is currently broken.  I haven't had time to look at it
much, though it seems mostly correct by inspection, so I'm not sure what's
going on.

If you could compile with debugging and get me a stack trace from the
panic, that might help me track this down.  I'm assuming there's a path
through the code that I'm missing that results in using a bogus cast to
get an ifnet pointer and thus causes a panic.  You might also try the
following patch, which fixes a couple of bugs I think are unrelated, but
may not be.

-- Brooks

Index: if_ef.c
===================================================================
RCS file: /home/ncvs/src/sys/net/if_ef.c,v
retrieving revision 1.34
diff -u -p -r1.34 if_ef.c
--- if_ef.c	10 Jun 2005 16:49:18 -0000	1.34
+++ if_ef.c	26 Jul 2005 23:56:39 -0000
@@ -477,7 +477,7 @@ ef_clone(struct ef_link *efl, int ft)
 	efp->ef_pifp = ifp;
 	efp->ef_frametype = ft;
 	eifp = efp->ef_ifp = if_alloc(IFT_ETHER);
-	if (ifp == NULL)
+	if (eifp == NULL)
 		return (ENOSPC);
 	snprintf(eifp->if_xname, IFNAMSIZ,
 	    "%sf%d", ifp->if_xname, efp->ef_frametype);
@@ -536,8 +536,8 @@ ef_load(void)
 	SLIST_FOREACH(efl, &efdev, el_next) {
 		for (d = 0; d < EF_NFT; d++)
 			if (efl->el_units[d]) {
-				if (efl->el_units[d]->ef_pifp != NULL)
-					if_free(efl->el_units[d]->ef_pifp);
+				if (efl->el_units[d]->ef_ifp != NULL)
+					if_free(efl->el_units[d]->ef_ifp);
 				free(efl->el_units[d], M_IFADDR);
 			}
 		free(efl, M_IFADDR);

-- 
Any statement of the form "X is the one, true Y" is FALSE.
PGP fingerprint 655D 519C 26A7 82E7 2529  9BF0 5D8E 8BE9 F238 1AD4




very busy ftpd

2005-08-09 Thread Mikhail Teterin
Hi!

I just noticed that uploading a file over a LAN (at around 5.7 MB/s) 
resulted in around 25% CPU consumption by ftpd.

I think that's unusual for a Pentium 4 -- what is the process doing?

The machine is running 5.2.1-RELEASE and has TrustedBSD extensions.

-mi


Re: panic: if_attach called without if_alloc'd input()

2005-08-09 Thread Darcy Buskermolen
On Tuesday 09 August 2005 11:16, Brooks Davis wrote:
> On Tue, Aug 09, 2005 at 08:54:21AM -0700, Darcy Buskermolen wrote:
> > I'm getting the following panic on my RELENG_6 test box:
> >
> > xl1f0: BUG: if_attach called without if_alloc'd input()
> >
> > Where should I be looking to track this down? I suspect it has to do with
> > a custom kernel, it wasn't doing it when i was running GENERIC
>
> The ef(4) device is currently broken.  I haven't had time to look at it
> much though it seems mostly correct by inspection so I'm not sure what's
> going on.
>
> If you could compile with debugging and get be a stack trace from the
> panic, that might help be track this down.  

Here is a trace without your patch

KDB: enter: panic
[thread pid 0 tid 0 ]
Stoped at   kdb_enter+0x2b: nop
db> trace
Tracing pid 0 tid 0 td 0xc0824dc0
kdb_enter(c07b7c59) at kdb_enter+0xb2
panic(c07bd294,c13b5c10,c0c20cfc,c059605d,c13b8c10) at panic+0xbb
if_attach(c13b5c00,c13b5c00,c1380a00,c0c20d28,c05e93ed) at if_attach+0x33
ether_ifattach(c13b5c00,c12ba2ab,0,c0c20d40,c05e9c86) at ether_ifattach+0x19
ef_attach(c13b04d0) at ef_attach+0x5d
ef_load(c0c20d74,c0572383,c12b6600,0,0) at ef_load+0x1ae
if_ef_modevent(c16b6600,0,0,c08259c0,0) at if_ef_modeevent+0x19
module_redgister)init(c07fe220,c1ec00,c1e000,0,c0444065) at 
module_register_init+0x4b
mi_startup() at mi_startup+0x96
begin() at begin+0x2c
db>

This was transcribed from the monitor behind me, so I may have typoed an 
address along the way, but hopefully it's enough to point you down the 
right path; if you need more detail just let me know.

I'm compiling with your patch now, but it may take a bit before I get back 
to you with answers on how it does -- darn slow hardware.

> I'm assuming there's a path 
> through the code that I'm missing that results in using a bogus cast to
> get an ifnet pointer and thus causes a panic.  You might also try the
> following patch which fixes a couple bugs I think are unrelated, but may
> not be.
>
> -- Brooks
>
> Index: if_ef.c
> ===================================================================
> RCS file: /home/ncvs/src/sys/net/if_ef.c,v
> retrieving revision 1.34
> diff -u -p -r1.34 if_ef.c
> --- if_ef.c	10 Jun 2005 16:49:18 -0000	1.34
> +++ if_ef.c	26 Jul 2005 23:56:39 -0000
> @@ -477,7 +477,7 @@ ef_clone(struct ef_link *efl, int ft)
>  	efp->ef_pifp = ifp;
>  	efp->ef_frametype = ft;
>  	eifp = efp->ef_ifp = if_alloc(IFT_ETHER);
> -	if (ifp == NULL)
> +	if (eifp == NULL)
>  		return (ENOSPC);
>  	snprintf(eifp->if_xname, IFNAMSIZ,
>  	    "%sf%d", ifp->if_xname, efp->ef_frametype);
> @@ -536,8 +536,8 @@ ef_load(void)
>  	SLIST_FOREACH(efl, &efdev, el_next) {
>  		for (d = 0; d < EF_NFT; d++)
>  			if (efl->el_units[d]) {
> -				if (efl->el_units[d]->ef_pifp != NULL)
> -					if_free(efl->el_units[d]->ef_pifp);
> +				if (efl->el_units[d]->ef_ifp != NULL)
> +					if_free(efl->el_units[d]->ef_ifp);
>  				free(efl->el_units[d], M_IFADDR);
>  			}
>  		free(efl, M_IFADDR);

-- 
Darcy Buskermolen
Wavefire Technologies Corp.

http://www.wavefire.com
ph: 250.717.0200
fx: 250.763.1759
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Stack virtualization (was: running out of mbufs?)

2005-08-09 Thread Jeremie Le Hen
> I haven't fully explored all applications and possible tie-ins with
> jails, virtual stacks etc. but it looks very interesting.
>
> For example I want to have multiple routing tables within the same
> stack.  These routing tables can be opaque or fall-through and match
> on the source and destination address (not at the same time though).
> This way we get ultimate routing flexibility in using FreeBSD as
> router.  An incoming packet on interface em0 with group priority
> would first match into routing table X, and if no match fall-through
> to the default routing table.  Or you could create a source matching
> routing table Y sending matching packets further to table Z for
> low priority routing.

What you are saying clearly reminds me of the way Linux does it.
Basically they have about 256 routing tables available, one of them
being the default one (254 IIRC).  Once you have filled the ones you
want to use, you can assign a routing table to each packet with what
they simply call "rules".  The routing criteria are classical, such as
"from", "to", "tos", "iif" (incoming interface)...
(See the manpage [1] for more information; the IPRoute2 framework is
quite powerful.)

One of the most powerful criteria it provides is "fwmark", which allows
matching against a mark stamped on the skbuff (their mbuf) by the
firewall.  This leads to the ability to route packets based on the
whole capabilities of the firewall framework (NetFilter in this case):
TCP/UDP ports, ICMP types, and so on...

This might appear a little bit hackish to networking guys, especially
those working on backbone routers, but this flexibility costs almost
nothing to add (pf already has the ability to tag packets, IIRC) and it
doesn't constrain the design at all, IMHO.  FYI, this has already been
discussed in this subthread [2].
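
As a concrete illustration, the fwmark-based policy routing described above
looks roughly like this with iproute2 (the addresses, table number, port,
and mark value are made-up examples, and the commands need root):

```sh
# Table 100 holds an alternate default route for marked traffic.
ip route add default via 192.0.2.1 table 100

# The firewall stamps low-priority traffic (here, rsync) with fwmark 7...
iptables -t mangle -A PREROUTING -p tcp --dport 873 -j MARK --set-mark 7

# ...and a rule diverts fwmark-7 packets through table 100; everything
# else falls through to the main table as usual.
ip rule add fwmark 7 table 100
```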

I have to say that I was quite impressed by Linux networking
capabilities (this was in the 2.4 days), and that's why I would really
like to see FreeBSD able to do this.

> It's hard to describe this textually to its full extent.  That's why
> my upcoming paper will have mostly graphics depicting the packet flow
> and the processing options.

I'm eager to read your paper.

[1] http://www.manpage.org/cgi-bin/man/man2html?8+ip
[2] http://lists.freebsd.org/pipermail/freebsd-net/2005-June/007743.html

Regards,
-- 
Jeremie Le Hen
< jeremie at le-hen dot org >< ttz at chchile dot org >


Re: panic: if_attach called without if_alloc'd input()

2005-08-09 Thread Darcy Buskermolen
On Tuesday 09 August 2005 13:48, Darcy Buskermolen wrote:
> On Tuesday 09 August 2005 11:16, Brooks Davis wrote:
> > On Tue, Aug 09, 2005 at 08:54:21AM -0700, Darcy Buskermolen wrote:
> > > I'm getting the following panic on my RELENG_6 test box:
> > >
> > > xl1f0: BUG: if_attach called without if_alloc'd input()
> > >
> > > Where should I be looking to track this down? I suspect it has to do
> > > with a custom kernel; it wasn't doing it when I was running GENERIC
> >
> > The ef(4) device is currently broken.  I haven't had time to look at it
> > much, though it seems mostly correct by inspection, so I'm not sure
> > what's going on.
> >
> > If you could compile with debugging and get me a stack trace from the
> > panic, that might help me track this down.
>
> Here is a trace without your patch
>
> KDB: enter: panic
> [thread pid 0 tid 0 ]
> Stopped at   kdb_enter+0x2b: nop
> db> trace
> Tracing pid 0 tid 0 td 0xc0824dc0
> kdb_enter(c07b7c59) at kdb_enter+0xb2
> panic(c07bd294,c13b5c10,c0c20cfc,c059605d,c13b8c10) at panic+0xbb
> if_attach(c13b5c00,c13b5c00,c1380a00,c0c20d28,c05e93ed) at if_attach+0x33
> ether_ifattach(c13b5c00,c12ba2ab,0,c0c20d40,c05e9c86) at
> ether_ifattach+0x19 ef_attach(c13b04d0) at ef_attach+0x5d
> ef_load(c0c20d74,c0572383,c12b6600,0,0) at ef_load+0x1ae
> if_ef_modevent(c16b6600,0,0,c08259c0,0) at if_ef_modevent+0x19
> module_register_init(c07fe220,c1ec00,c1e000,0,c0444065) at
> module_register_init+0x4b
> mi_startup() at mi_startup+0x96
> begin() at begin+0x2c
> db>
>
> This was transcribed from the monitor behind me, so I may have typoed an
> address along the way, but hopefully it's enough to point you down the
> right path.  If you need more detail, just let me know.
>
> I'm compiling with your patch now, but it may take a bit before I get back
> to you with answers on how it does; darn slow hardware.
>
And with the patch there's no difference; the trace looks the same.

> > I'm assuming there's a path
> > through the code that I'm missing that results in using a bogus cast to
> > get an ifnet pointer and thus causes a panic.  You might also try the
> > following patch which fixes a couple bugs I think are unrelated, but may
> > not be.
> >
> > -- Brooks
> >
> > Index: if_ef.c
> > ===
> > RCS file: /home/ncvs/src/sys/net/if_ef.c,v
> > retrieving revision 1.34
> > diff -u -p -r1.34 if_ef.c
> > --- if_ef.c 10 Jun 2005 16:49:18 -  1.34
> > +++ if_ef.c 26 Jul 2005 23:56:39 -
> > @@ -477,7 +477,7 @@ ef_clone(struct ef_link *efl, int ft)
> > efp->ef_pifp = ifp;
> > efp->ef_frametype = ft;
> > eifp = efp->ef_ifp = if_alloc(IFT_ETHER);
> > -   if (ifp == NULL)
> > +   if (eifp == NULL)
> > return (ENOSPC);
> > snprintf(eifp->if_xname, IFNAMSIZ,
> > "%sf%d", ifp->if_xname, efp->ef_frametype);
> > @@ -536,8 +536,8 @@ ef_load(void)
> > SLIST_FOREACH(efl, &efdev, el_next) {
> > for (d = 0; d < EF_NFT; d++)
> > if (efl->el_units[d]) {
> > -   if (efl->el_units[d]->ef_pifp != NULL)
> > -   
> > if_free(efl->el_units[d]->ef_pifp);
> > +   if (efl->el_units[d]->ef_ifp != NULL)
> > +   
> > if_free(efl->el_units[d]->ef_ifp);
> > free(efl->el_units[d], M_IFADDR);
> > }
> > free(efl, M_IFADDR);

-- 
Darcy Buskermolen
Wavefire Technologies Corp.

http://www.wavefire.com
ph: 250.717.0200
fx: 250.763.1759


Atheros 5212

2005-08-09 Thread Kevin Downey
I am running a generic kernel with all the debugging knobs.
If I use BitTorrent or Gnutella in X, the computer reboots after a few minutes.
From the console it drops into the debugger, but will not give me
a crash dump.

Under the assumption that something was wrong with the ath driver I
tried to get the atheros card working with project evil and if_ndis.ko
built fine, but after I loaded it nothing happened (ndis0 did not show
up in ifconfig).

Rebuilding world from old sources (7/20/05) to try and go back to
before this showed up. This did not help.

Here are some juicy bits from dmesg, the whole dmesg is attached.

FreeBSD zifnab 6.0-BETA1 FreeBSD 6.0-BETA1 #4: Tue Aug  9 15:16:43 PDT
2005 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENERIC  i386


Aug  9 03:32:16 zifnab kernel: ath0:  mem
0xec41-0xec41 irq 11 at device 10.0 on pci0
Aug  9 03:32:16 zifnab kernel: ath0: Ethernet address: 00:0f:3d:ae:ad:b8
Aug  9 03:32:16 zifnab kernel: ath0: mac 5.9 phy 4.3 radio 4.6

.


Aug  9 03:32:16 zifnab kernel: malloc(M_WAITOK) of "16", forcing
M_NOWAIT with the following non-sleepable locks held:
Aug  9 03:32:16 zifnab kernel: exclusive sleep mutex ath0 (network
driver) r = 0 (0xc1a73d0c) locked @
/usr/src/sys/modules/ath/../../dev/ath/if_ath
.c:4677
Aug  9 03:32:16 zifnab kernel: KDB: stack backtrace:
Aug  9 03:32:16 zifnab kernel:
kdb_backtrace(c0a2b0dc,cec5aabc,1,c1a9cda0,c1461b40) at
kdb_backtrace+0x2e
Aug  9 03:32:16 zifnab kernel:
witness_warn(5,0,c092e935,c08f67d4,c1a73bcc) at witness_warn+0x1a3
Aug  9 03:32:16 zifnab kernel:
uma_zalloc_arg(c1461b40,0,2,c146c960,c1a9cda0) at uma_zalloc_arg+0x5b
Aug  9 03:32:16 zifnab kernel: malloc(0,c09772c0,2,c1a9cda0,c1a4008c)
at malloc+0xc9
Aug  9 03:32:16 zifnab kernel:
ieee80211_ioctl_setoptie(c1a731ac,c1a9cda0,cec5ab40,c068d33d,c09e4a40)
at ieee80211_ioctl_setoptie+0x50
Aug  9 03:32:16 zifnab kernel:
ieee80211_ioctl_set80211(c1a731ac,801c69ea,c1a9cda0,1245,0) at
ieee80211_ioctl_set80211+0x71c
Aug  9 03:32:16 zifnab kernel:
ieee80211_ioctl(c1a731ac,801c69ea,c1a9cda0,1245,c0917cc2) at
ieee80211_ioctl+0x137
Aug  9 03:32:16 zifnab kernel:
ath_ioctl(c1a66000,801c69ea,c1a9cda0,c0a2be20,1) at ath_ioctl+0xdc
Aug  9 03:32:16 zifnab kernel:
in_control(c1be8de8,801c69ea,c1a9cda0,c1a66000,c1a8b600) at
in_control+0xcbe
Aug  9 03:32:16 zifnab kernel:
ifioctl(c1be8de8,801c69ea,c1a9cda0,c1a8b600,1) at ifioctl+0x1e9
Aug  9 03:32:16 zifnab kernel:
soo_ioctl(c1b450d8,801c69ea,c1a9cda0,c1941a80,c1a8b600) at
soo_ioctl+0x3bf
Aug  9 03:32:16 zifnab kernel: ioctl(c1a8b600,cec5ad04,c,422,3) at ioctl+0x45d
Aug  9 03:32:16 zifnab kernel: syscall(3b,3b,3b,bfbfe13c,80672c0) at
syscall+0x2a2
Aug  9 03:32:16 zifnab kernel: Xint0x80_syscall() at Xint0x80_syscall+0x1f
Aug  9 03:32:16 zifnab kernel: --- syscall (54, FreeBSD ELF32, ioctl),
eip = 0x280fd92f, esp = 0xbfbfe0ec, ebp = 0xbfbfe158 ---

-- 
The best prophet of the future is the past.


dmesg.today
Description: Binary data

Re: very busy ftpd

2005-08-09 Thread Mikhail Teterin
> > I just noticed that uploading a file over a LAN (at around
> > 5.7Mb/s) resulted in around 25% CPU consumption by the ftpd.
> >
> > I think that's unusual for a Pentium4 -- what is the process doing?
> 
> Check the client does not use ascii mode when uploading (getc() vs
> read()).

That's quite possible, indeed.  I wouldn't put it past some users --
some still use ancient FTP clients, which default to text-mode
transfers.

Is there any way to disable this mode on the server, perhaps? Even
if it violates the protocol :-/

Thanks!

-mi


Re: Atheros 5212

2005-08-09 Thread Sam Leffler

Kevin Downey wrote:

I am running a generic kernel with all the debugging knobs.
If I use BitTorrent or Gnutella in X, the computer reboots after a few minutes.
From the console it drops into the debugger, but will not give me
a crash dump.


The stack trace below isn't a crash, it's just witness warning about a 
malloc w/ WAITOK while holding a lock.
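
In kernel terms, the complaint is about the following pattern (schematic
pseudocode of the locking rule, not the actual ath(4) source; sc_mtx and
the sizes are placeholders):

```c
/* A regular FreeBSD mutex is non-sleepable, and malloc(M_WAITOK) may
 * sleep waiting for memory, so WITNESS flags the combination: */
mtx_lock(&sc->sc_mtx);                  /* the "ath0 (network driver)" mutex */
p = malloc(size, M_DEVBUF, M_WAITOK);   /* BAD: may sleep under the lock */

/* Safe variants: allocate before taking the lock, or request
 * M_NOWAIT (which the kernel forces here anyway) and handle failure: */
p = malloc(size, M_DEVBUF, M_NOWAIT);
if (p == NULL)
        return (ENOMEM);
```

As the log shows, the kernel downgrades the request to M_NOWAIT itself, so
this particular warning is noisy rather than fatal.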


Try updating to BETA2 and getting a trace from the crash.  Then also 
provide basic info like what you're trying to do at the time.  It 
appears you're trying to use wpa_supplicant w/ WPA based on the stack trace.




Under the assumption that something was wrong with the ath driver I
tried to get the atheros card working with project evil and if_ndis.ko
built fine, but after I loaded it nothing happened (ndis0 did not show
up in ifconfig).

Rebuilding world from old sources (7/20/05) to try and go back to
before this showed up. This did not help.

Here are some juicy bits from dmesg, the whole dmesg is attached.

FreeBSD zifnab 6.0-BETA1 FreeBSD 6.0-BETA1 #4: Tue Aug  9 15:16:43 PDT
2005 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENERIC  i386


Aug  9 03:32:16 zifnab kernel: ath0:  mem
0xec41-0xec41 irq 11 at device 10.0 on pci0
Aug  9 03:32:16 zifnab kernel: ath0: Ethernet address: 00:0f:3d:ae:ad:b8
Aug  9 03:32:16 zifnab kernel: ath0: mac 5.9 phy 4.3 radio 4.6

.


Aug  9 03:32:16 zifnab kernel: malloc(M_WAITOK) of "16", forcing
M_NOWAIT with the following non-sleepable locks held:
Aug  9 03:32:16 zifnab kernel: exclusive sleep mutex ath0 (network
driver) r = 0 (0xc1a73d0c) locked @
/usr/src/sys/modules/ath/../../dev/ath/if_ath
.c:4677
Aug  9 03:32:16 zifnab kernel: KDB: stack backtrace:
Aug  9 03:32:16 zifnab kernel:
kdb_backtrace(c0a2b0dc,cec5aabc,1,c1a9cda0,c1461b40) at
kdb_backtrace+0x2e
Aug  9 03:32:16 zifnab kernel:
witness_warn(5,0,c092e935,c08f67d4,c1a73bcc) at witness_warn+0x1a3
Aug  9 03:32:16 zifnab kernel:
uma_zalloc_arg(c1461b40,0,2,c146c960,c1a9cda0) at uma_zalloc_arg+0x5b
Aug  9 03:32:16 zifnab kernel: malloc(0,c09772c0,2,c1a9cda0,c1a4008c)
at malloc+0xc9
Aug  9 03:32:16 zifnab kernel:
ieee80211_ioctl_setoptie(c1a731ac,c1a9cda0,cec5ab40,c068d33d,c09e4a40)
at ieee80211_ioctl_setoptie+0x50
Aug  9 03:32:16 zifnab kernel:
ieee80211_ioctl_set80211(c1a731ac,801c69ea,c1a9cda0,1245,0) at
ieee80211_ioctl_set80211+0x71c
Aug  9 03:32:16 zifnab kernel:
ieee80211_ioctl(c1a731ac,801c69ea,c1a9cda0,1245,c0917cc2) at
ieee80211_ioctl+0x137
Aug  9 03:32:16 zifnab kernel:
ath_ioctl(c1a66000,801c69ea,c1a9cda0,c0a2be20,1) at ath_ioctl+0xdc
Aug  9 03:32:16 zifnab kernel:
in_control(c1be8de8,801c69ea,c1a9cda0,c1a66000,c1a8b600) at
in_control+0xcbe
Aug  9 03:32:16 zifnab kernel:
ifioctl(c1be8de8,801c69ea,c1a9cda0,c1a8b600,1) at ifioctl+0x1e9
Aug  9 03:32:16 zifnab kernel:
soo_ioctl(c1b450d8,801c69ea,c1a9cda0,c1941a80,c1a8b600) at
soo_ioctl+0x3bf
Aug  9 03:32:16 zifnab kernel: ioctl(c1a8b600,cec5ad04,c,422,3) at ioctl+0x45d
Aug  9 03:32:16 zifnab kernel: syscall(3b,3b,3b,bfbfe13c,80672c0) at
syscall+0x2a2
Aug  9 03:32:16 zifnab kernel: Xint0x80_syscall() at Xint0x80_syscall+0x1f
Aug  9 03:32:16 zifnab kernel: --- syscall (54, FreeBSD ELF32, ioctl),
eip = 0x280fd92f, esp = 0xbfbfe0ec, ebp = 0xbfbfe158 ---







Re: very busy ftpd

2005-08-09 Thread Maxim Konovalov
On Tue, 9 Aug 2005, 15:49-0400, Mikhail Teterin wrote:

> Hi!
>
> I just noticed that uploading a file over a LAN (at around
> 5.7Mb/s) resulted in around 25% CPU consumption by the ftpd.
>
> I think that's unusual for a Pentium4 -- what is the process doing?

Check the client does not use ascii mode when uploading (getc() vs
read()).

-- 
Maxim Konovalov