On 2002-10-15 00:12, Nicolas Christin <[EMAIL PROTECTED]> wrote:
> On Mon, 14 Oct 2002, Andrew Gallatin wrote:
> > > Would people be open to renaming the 'MSIZE' kernel option to something
> > > more specific such as 'MBUF_SIZE' or 'MBUFSIZE'? Using 'MSIZE' can
> >
> > No.
> >
> > MSIZE is a tr
On Tue, 15 Oct 2002, Bruce M Simpson wrote:
BMS>On Mon, Oct 14, 2002 at 11:13:05PM -0700, Guy Harris wrote:
BMS>> The current CVS versions of libpcap and tcpdump, and the current
BMS>> released version of Ethereal, support a DLT_SUNATM DLT_ type. SunATM's
BMS>> DLPI interface supplies packets wi
Hello!
I'm looking for a good L2TP server for FreeBSD; does anyone know of one?
If I'm right, MPD does not (yet?) support L2TP.
Thanks in advance!
--
bye!
Ale
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-net" in the body of the message
Hello,
I have made a patch set for ping(8). I'll appreciate your comments.
I did not include patches #3 and #4, they are stylistic mostly (based
on BDE's style patch).
A cumulative patch is there:
http://people.freebsd.org/~maxim/p.cumulative
#1, Print strict source routing option. Requested
Alessandro de Manzano wrote:
> Hello!
>
> I'm looking for a good L2TP server for FreeBSD; does anyone know of one?
>
> If I'm right, MPD does not (yet?) support L2TP.
>
>
> Thanks in advance!
>
>
man ng_l2tp
DESCRIPTION
The ng_l2tp node type implements the encapsulation layer of the L2TP pr
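A minimal sketch of plumbing an ng_l2tp node by hand, assuming a FreeBSD kernel with netgraph available; the node name is hypothetical and the hook and message names are illustrative only (check ng_l2tp(4) and ng_ksocket(4)), and a userland daemon is still needed to drive the RFC 2661 control protocol:

```shell
# create an l2tp node and attach a UDP ksocket as its "lower" hook,
# then bind the socket to the standard L2TP port (1701)
ngctl mkpeer l2tp dummy ctrl
ngctl name .:dummy l2tp0                      # hypothetical node name
ngctl mkpeer l2tp0: ksocket lower inet/dgram/udp
ngctl msg l2tp0:lower bind inet/0.0.0.0:1701
```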
On Tue, Oct 15, 2002 at 07:10:29AM -0700, Michael Sierchio wrote:
> man ng_l2tp
>
> DESCRIPTION
> The ng_l2tp node type implements the encapsulation layer of the L2TP pro-
> tocol as described in RFC 2661. This includes adding the L2TP packet
thanks, but I'm looking for something a
In arved.freebsd.net, you wrote:
> On Tue, Oct 15, 2002 at 07:10:29AM -0700, Michael Sierchio wrote:
>
>> man ng_l2tp
>>
>> DESCRIPTION
>> The ng_l2tp node type implements the encapsulation layer of the L2TP pro-
>> tocol as described in RFC 2661. This includes adding the L2TP packe
On Tue, Oct 15, 2002 at 11:54:52AM +0100, Bruce M Simpson wrote:
> This sounds very similar to the promiscuous cell receive option on ENI's
> SpeedStream 5861 router. I found the raw hex cell output was essentially
> a 4 byte ATM UNI header omitting the CRC byte, and the 48 bytes of the raw
> AAL5
On Tue, Oct 15, 2002 at 01:01:05PM +0200, Harti Brandt wrote:
> Does Sun still make ATM cards? As far as I remember I saw the last SBUS
> cards a couple of years ago.
They still have a Web page for SunATM:
http://www.sun.com/products-n-solutions/hw/networking/connectivity/sunatm/index.h
A year & a half ago, the l2tpd interface and code was still in its
infancy. If all you seek is to create tunnels/sessions, and don't care
about security or other more complex l2tp issues, it should work ok.
I developed my own L2TP stack for Linux with a much higher level of
functionality. It wou
On Mon, 14 Oct 2002, Steve Francis wrote:
> Kirill Ponomarew wrote:
> >
> > is it recommended to use net.inet.tcp.delayed_ack=0 on the machines with
> > heavy network traffic ?
> >
> If you want to increase your network traffic for no particular reason,
> and increase load on your server, then ye
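For reference, the knob under discussion is a boot-time tunable; a sketch of how it might appear in /etc/sysctl.conf, with the usual advice being to leave it at the default:

```shell
# /etc/sysctl.conf -- delayed ACKs are enabled by default (1).
# Set to 0 only if a packet trace shows they hurt your workload;
# disabling them increases ACK traffic and server load.
net.inet.tcp.delayed_ack=1
```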
There is a new L2TP project from Roaring Penguin. It supports both LAC and
LNS features:
http://sourceforge.net/projects/rp-l2tp
It requires pppd. It was written for Linux; however, it should support
FreeBSD easily.
Vincent
On Tuesday 15 October 2002 14:15, Alessandro de Manzano wrote:
>
I am trying to load the if_em, if_fxp, if_bge drivers
via /boot/loader.conf.
I've added
if_fxp_load="YES"
if_bge_load="YES"
if_em_load="YES"
The problem is that the bge driver doesn't load. It will
if I manually load it after startup with kldload. The issue
seems to be a dependency on miibus,
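One likely workaround, assuming the problem really is the miibus dependency not being resolved at loader time, is to load the PHY bus driver explicitly before the NIC drivers that need it:

```shell
# /boot/loader.conf -- load miibus first so if_bge can resolve it
miibus_load="YES"
if_fxp_load="YES"
if_bge_load="YES"
if_em_load="YES"
```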
Paul Herman wrote:
>
> Not true. Although some bugs have been fixed in 4.3, FreeBSD's
> delayed ACKs will still degrade your performance dramatically in
> some cases.
I'm sorry, but such statements without a packet trace that exhibits the
problem are just not useful.
Lars
--
Lars Eggert <[EM
My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
while the netstat -s counter under the "ip" heading is incrementing:
7565828 output packets dropped due to no bufs, etc.
but netstat -m shows:
> netstat -m
579/1440/131072 mbufs in use (current/peak/max):
578 mbufs all
Petri Helenius wrote:
> My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
> while netstat -s counter under the heading of "ip" is incrementing:
> 7565828 output packets dropped due to no bufs, etc.
What rate are you sending these packets at? A standard interface queue
lengt
almost 7 years ago, this commit introduced the _IP_VHL hack in our
IP-stack:
] revision 1.7
] date: 1995/12/21 21:20:27; author: wollman; state: Exp; lines: +5 -1
] If _IP_VHL is defined, declare a single ip_vhl member in struct ip rather
] than separate ip_v and ip_hl members. Should have n
On Wed, 16 Oct 2002, Petri Helenius wrote:
>
> My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
> while netstat -s counter under the heading of "ip" is incrementing:
> 7565828 output packets dropped due to no bufs, etc.
> but netstat -m shows:
my guess is that the inter
On Wed, Oct 16, 2002 at 12:17:13AM +0200, Poul-Henning Kamp wrote:
...
> I would therefore propose to eliminate the _IP_VHL hack from the kernel
yes, go for it.
cheers
luigi
< said:
> In the meantime absolutely no code has picked up on this idea,
It was copied in spirit from OSF/1.
> The side effect of having some source-files using the _IP_VHL hack and
> some not is that sizeof(struct ip) varies from file to file,
Not so. Any compiler which allocates different a
< said:
> My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
Probably means that your outgoing interface queue is filling up.
ENOBUFS is the only way the kernel has to tell you ``slow down!''.
-GAWollman
Lars Eggert wrote:
>Paul Herman wrote:
>
>
>>Not true. Although some bugs have been fixed in 4.3, FreeBSD's
>>delayed ACKs will still degrade your performance dramatically in
>>some cases.
>>
>>
>
>I'm sorry, but such statements without a packet trace that exhibits the
>problem are just n
Steve Francis wrote:
>>
> He's probably referring to poorly behaved Windows clients, on certain
> applications, if you leave net.inet.tcp.slowstart_flightsize at the default.
Ah. Well, that's a Windows problem :-)
> Incidentally, why are not the defaults on
> net.inet.tcp.slowstart_flightsize high
>
> What rate are you sending these packets at? A standard interface queue
> length is 50 packets, you get ENOBUFS when it's full.
>
This might explain the phenomenon. (Packets are going out in bursts, with the average
hovering at ~500Mbps-ish.) I recompiled the kernel with IFQ_MAXLEN of 5000
but there seems
>
> Probably means that your outgoing interface queue is filling up.
> ENOBUFS is the only way the kernel has to tell you ``slow down!''.
>
How much should I be able to send to two em interfaces on one
66MHz/64-bit PCI bus?
Pete
On Wed, Oct 16, 2002 at 02:04:11AM +0300, Petri Helenius wrote:
> >
> > What rate are you sending these packets at? A standard interface queue
> > length is 50 packets, you get ENOBUFS when it's full.
> >
> This might explain the phenomenon. (Packets are going out in bursts, with the average
> hovering a
Petri Helenius wrote:
>>Probably means that your outgoing interface queue is filling up.
>>ENOBUFS is the only way the kernel has to tell you ``slow down!''.
>>
>
> How much should I be able to send to two em interfaces on one
> 66/64 PCI ?
I've seen netperf UDP throughputs of ~950Mbps with a fi
On Wed, 16 Oct 2002, Poul-Henning Kamp wrote:
> almost 7 years ago, this commit introduced the _IP_VHL hack in our
> IP-stack:
>
> ] revision 1.7
> ] date: 1995/12/21 21:20:27; author: wollman; state: Exp; lines: +5 -1
> ] If _IP_VHL is defined, declare a single ip_vhl member in struct ip rath
On Tue, 15 Oct 2002, Lars Eggert wrote:
> Paul Herman wrote:
> >
> > Not true. Although some bugs have been fixed in 4.3, FreeBSD's
> > delayed ACKs will still degrade your performance dramatically in
> > some cases.
>
> I'm sorry, but such statements without a packet trace that exhibits the
> p
> The side effect of having some source-files using the _IP_VHL hack and
> some not is that sizeof(struct ip) varies from file to file, which at
> best is confusing and at worst the source of some really evil bugs.
> I would therefore propose to eliminate the _IP_VHL hack from the kernel
this smells a lot like a bad interaction between the default window
size and MTU -- loopback has a 16k MTU by default, maybe tar uses a
smallish window (32k is default now for net.inet.tcp.sendspace,
but used to be 16k at the time), which means only 1 or 2 packets in
flight at once, meaning that many times you g
On Tue, 15 Oct 2002, Luigi Rizzo wrote:
> this smells a lot like a bad interaction between the default window
> size and MTU -- loopback has a 16k MTU by default, maybe tar uses a
> smallish window (32k is default now for net.inet.tcp.sendspace,
> but used to be 16k at the time), which means only 1 or 2 packet
On Tue, Oct 15, 2002 at 08:52:49PM -0500, Mike Silbersack wrote:
...
> NetBSD introduced a "fix" for this recently, it seems sorta hackish, but
> maybe we need to do something similar.
this helps you if the other side has delayed acks, but halves the
throughput if you are being window limited and
On Tue, 15 Oct 2002, Luigi Rizzo wrote:
> On Tue, Oct 15, 2002 at 08:52:49PM -0500, Mike Silbersack wrote:
> ...
> > NetBSD introduced a "fix" for this recently, it seems sorta hackish, but
> > maybe we need to do something similar.
>
> this helps you if the other side has delayed acks, but halv
>
> how large are the packets and how fast is the box ?
Packets go out at an average size of 1024 bytes. The box is a dual
P4 Xeon 2400/400, so I think it should qualify as "fast"? I disabled
hyperthreading to figure out if it was causing problems. I seem to
be able to send packets at a rate in the
Petri Helenius wrote:
>>how large are the packets and how fast is the box ?
>
>
> Packets go out at an average size of 1024 bytes. The box is a dual
> P4 Xeon 2400/400, so I think it should qualify as "fast"? I disabled
> hyperthreading to figure out if it was causing problems. I seem to
> be able
> The 900Mbps are similar to what I see here on similar hardware.
What kind of receive performance do you observe? I haven't got that
far yet.
>
> For your two-interface setup, are the 600Mbps aggregate send rate on
> both interfaces, or do you see 600Mbps per interface? In the latter
600Mbps pe
Petri Helenius wrote:
>>The 900Mbps are similar to what I see here on similar hardware.
>
> What kind of receive performance do you observe? I haven't got that
> far yet.
Less :-) Let me tell you tomorrow, don't have the numbers here right now.
> 600Mbps per interface. I'm going to try this out
In message <[EMAIL PROTECTED]>, Garrett Wollman
writes:
>Much better to delete the bogus BYTE_ORDER kluge from ip.h. (Note
>that the definition of the bitfields in question has nothing
>whatsoever to do with the actual byte order in use; it simply relies
>on the historical behavior of compilers