Ruslan Ermilov wrote:
>
> On Tue, Jun 04, 2002 at 12:05:51AM +0200, Andre Oppermann wrote:
> > After reading this whole redirect stuff a couple of times I've come to
> > the conclusion that the function is right as it is there. There is no
> > such bug as I described it. The rtalloc1() in rtredire
On Tue, Jun 04, 2002 at 10:24:49AM +0200, Andre Oppermann wrote:
> Ruslan Ermilov wrote:
> >
> > On Tue, Jun 04, 2002 at 12:05:51AM +0200, Andre Oppermann wrote:
> > > After reading this whole redirect stuff a couple of times I've come to
> > > the conclusion that the function is right as it is th
Hello ppl
I need a rock-solid OSPF daemon that works on FreeBSD-4.5.
Zebra has not proved to be that one since tun(4) interfaces
are created on the fly - it crashes the box.
Please give me some advice on which port I should use.
With best wishes, Oles' Hnatkevych, http://gnut.kiev.ua, [EMA
Oles' Hnatkevych wrote:
>I need a rock-solid OSPF daemon that works on FreeBSD-4.5.
>Zebra has not proved to be that one since tun(4) interfaces
>are created on the fly - it crashes the box.
Which is not zebra's fault... better to fix the system than to work around it
by using another progra
Archie Cobbs writes:
> Re: the -stable patch. I agree we need a more general MFC/cleanup
> of some of the mbuf improvements from -current into -stable.
> If I find time perhaps I'll do that as well, but in a separate patch.
> For the present time, I'll commit this once 4.6-REL is done.
The b
Luigi Rizzo wrote:
> the signal that tell the WFQ algorithm when you can transmit the
> next packet comes from the pipe. The latter ticks either at a
> predefined rate (as configured with the 'bw NNN bit/s' parameter),
> or from the tx interrupt coming from a device (e.g. you can say
> something l
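The two clock sources Luigi describes map to two forms of the pipe configuration. A sketch using the dummynet syntax from ipfw(8); the rate and the device name below are examples:

```shell
# Pipe ticking at a fixed, configured rate:
ipfw pipe 1 config bw 300Kbit/s

# Pipe clocked by a device's transmit interrupts instead of a
# fixed rate (interface name is an example):
ipfw pipe 1 config bw tun0
```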
Lars Eggert wrote:
> I'm trying to merge this into the sis driver, which seems to batch
> transmissions together. For clarification, do you expect one if_tx_rdy()
> call per packet or one per batch? Per packet may result in a burst of
> these calls, does dummynet handle this?
Oh, I'm also usi
Most device drivers batch transmissions, but if you use the interface
as a clock for the pipe, dummynet will only send a single packet
at a time to the device, so you won't have to worry about the
batching.
The overhead is that if_tx_rdy() has to scan all pipes
to find the one that ne
Luigi Rizzo wrote:
> BTW if you use polling, you have to be careful in the place where you
> put the call to if_tx_rdy() to make sure that it catches the tx queue
> becoming empty only once and not at every polling cycle.
How about at the very end of sis_intr(), as a new "else" branch of the
que
On Tue, Jun 04, 2002 at 09:22:13AM -0700, Lars Eggert wrote:
> Luigi Rizzo wrote:
> > BTW if you use polling, you have to be careful in the place where you
> > put the call to if_tx_rdy() to make sure that it catches the tx queue
> > becoming empty only once and not at every polling cycle.
>
> Ho
Luigi Rizzo wrote:
>>> BTW if you use polling, you have to be careful in the place
>>> where you put the call to if_tx_rdy() to make sure that it
>>> catches the tx queue becoming empty only once and not at every
>>> polling cycle.
>>
>> How about at the very end of sis_intr(), as a new "els
Lars Eggert writes:
> So I ignore the error for now, and make the TCP tunnel as follows:
>
> Server:
> /usr/sbin/ngctl mkpeer iface dummy inet
> /sbin/ifconfig ng0 10.10.10.1 10.10.10.2
> /usr/sbin/ngctl mkpeer ng0: ksocket inet inet/stream/tcp
> /usr/sbin/ngctl msg ng0:in
Archie Cobbs wrote:
> I don't think you can have a point-to-point interface whose
> remote IP address is also local to your box. In other words,
> this may not work on the same machine but it might work if
> you use two different machines... can you try that?
The addresses of the point-to-point i
On Tue, Jun 04, 2002 at 09:47:22AM -0700, Lars Eggert wrote:
> Luigi Rizzo wrote:
> >>> BTW if you use polling, you have to be careful in the place
> >>> where you put the call to if_tx_rdy() to make sure that it
> >>> catches the tx queue becoming empty only once and not at every
> >>> pollin
Lars Eggert writes:
> > I don't think you can have a point-to-point interface whose
> > remote IP address is also local to your box. In other words,
> > this may not work on the same machine but it might work if
> > you use two different machines... can you try that?
>
> The addresses of the poin
On Mon, 3 Jun 2002, Mike Silbersack wrote:
> A few questions:
>
> 1. Is this 4.5-release, or 4.5-stable (aka 4.6-RC2)? 4.5-release had a
> few bugs in the syn cache which could cause crashes.
>
> 2. Are you using accept filters? Accept filters act oddly on
> 4.5-release, you'll have to upgr
Below is a patch that enables all outgoing sessions to always use the
same source IP address by default, no matter what outbound interface is
used. If on a multi-homed host the source IP address always originates
from an "always-up" internal virtual interface, then the established TCP
sessions wo
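The behavior the patch aims for (sessions pinned to one stable source address) has a userland analogue: bind(2) a socket to the desired source address before connect(2). A minimal Python sketch over loopback; the addresses are illustrative, and the patch itself makes this choice in the kernel for all sessions rather than per socket:

```python
import socket

# Listener standing in for the remote peer (loopback for demo).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# Client: bind the source address *before* connecting, so the
# session originates from the chosen address no matter which
# route the kernel would otherwise pick.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.bind(("127.0.0.1", 0))        # pin the source IP (port 0 = any)
cli.connect(("127.0.0.1", port))

src_addr = cli.getsockname()[0]   # source address of the session
print("source address:", src_addr)

cli.close()
srv.close()
```

On a multi-homed box, binding to the "always-up" internal address before connecting achieves per-application what the patch does system-wide.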
Can you dump the output of netstat -s -p tcp ?
Checking for listen queue overflows and syncache bucket overflows.
jayanth
Nguyen-Tuong Long Le ([EMAIL PROTECTED]) wrote:
> On Mon, 3 Jun 2002, Mike Silbersack wrote:
>
> > A few questions:
> >
> > 1. Is this 4.5-release, or 4.5-stable (aka 4.
On Tue, 4 Jun 2002, jayanth wrote:
> Can you dump the output of netstat -s -p tcp ?
> Checking for listen queue overflows and syncache bucket overflows.
>
> jayanth
And "netstat -La" too, please. I'm interested in whether you're accepting
sockets fast enough.
Mike "Silby" Silbersack
Hi,
On Tue, 4 Jun 2002, Mike Silbersack wrote:
>
> On Tue, 4 Jun 2002, jayanth wrote:
>
> > Can you dump the output of netstat -s -p tcp ?
> > Checking for listen queue overflows and syncache bucket overflows.
> >
> > jayanth
>
Here is the output of "netstat -s -p tcp".
tcp:
26413
On Tue, 4 Jun 2002, Nguyen-Tuong Long Le wrote:
> Here is the output of "netstat -La"
>
> Current listen queue sizes (qlen/incqlen/maxqlen)
> Proto Listen Local Address
> tcp4 3/0/8192 *.6789
>
>
> I wonder why the listen queue overflows when there are so few
> connections in the
> It appears that the primary reason a syncache abort would occur is that
> the system has run out of sockets. Is kern.ipc.numopensockets approaching
> kern.ipc.maxsockets?
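A quick way to check is to compare the two sysctl values. A sketch; the numbers below are example values standing in for the live sysctl(8) output:

```shell
# On a FreeBSD box these would come from:
#   open=$(sysctl -n kern.ipc.numopensockets)
#   max=$(sysctl -n kern.ipc.maxsockets)
open=12800   # example value
max=16384    # example value

pct=$((100 * open / max))
echo "sockets in use: ${open}/${max} (${pct}%)"
if [ "$pct" -ge 90 ]; then
    echo "consider raising kern.ipc.maxsockets"
fi
```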
Works like a charm. Thanks! I forgot to set this when I upgraded
my system from 4.3 to 4.5 release. My bad. Thanks again
On Tue, 4 Jun 2002 10:13:17 -0700 (PDT), Archie Cobbs <[EMAIL PROTECTED]> wrote:
[...]
> I don't think you can have a point-to-point interface whose
> remote IP address is also local to your box. In other words,
> this may not work on the same machine but it might work if
> you use two different