Mike Tancsa writes:
> At 08:44 PM 10/15/2001 -0700, Archie Cobbs wrote:
> >This makes sense.. and that is exactly what queues are for:
> >absorbing bursts. If you have big bursts then you'll need big
> >queues.. in general this is the only reason to have them.
>
> The only mystery I didn't solve in the end was what was generating the bursts.
At 08:44 PM 10/15/2001 -0700, Archie Cobbs wrote:
>This makes sense.. and that is exactly what queues are for:
>absorbing bursts. If you have big bursts then you'll need big
>queues.. in general this is the only reason to have them.
The only mystery I didn't solve in the end was what was generating the bursts.
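[For illustration, a back-of-the-envelope sketch of the "big bursts need
big queues" point; all numbers below are made up:]

    #!/bin/sh
    # Rough queue-sizing sketch: while a burst arrives faster than it
    # drains, occupancy grows by (1 - drain/arrive) per arriving packet.
    awk 'BEGIN {
        burst  = 100      # packets in the burst (assumed)
        arrive = 20000    # arrival rate, packets/sec (assumed)
        drain  = 12000    # drain/forwarding rate, packets/sec (assumed)
        printf "queue must hold >= %.0f packets to absorb the burst\n",
            burst * (1 - drain / arrive)
    }'   # prints 40 for these numbers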
Mike Tancsa writes:
> >> Is it better for the networking layer to deal with this (potentially
> >> introducing some latency) as opposed to letting the application?
> >
> >But no, the network should just do "best effort".. that is, unless
>you are a telco type, in which case go back to your X.25 network.
[Quoting Archie Cobbs, I think:]
>> There is probably a good paper somewhere outlining the "best effort"
>> philosophy but I don't know what it is.
That would be ``End-to-End Arguments in System Design'' by Jerry
Saltzer, Dave Reed, and Dave Clark, one of the most influential papers
ever written.
On Mon, 15 Oct 2001 23:00:27 +0000 (UTC), in sentex.lists.freebsd.net you
wrote:
>Mike Tancsa writes:
>> >If the forwarding path is maxed out, then it is the application layer's
>> >responsibility to back off (think TCP).
>>
>> Is it better for the networking layer to deal with this (potentially
>> introducing some latency) as opposed to letting the application?
Mike Tancsa writes:
> >If the forwarding path is maxed out, then it is the application layer's
> >responsibility to back off (think TCP).
>
> Is it better for the networking layer to deal with this (potentially
> introducing some latency) as opposed to letting the application?
Oops, can substitute ...
On Fri, Oct 12, 2001 at 03:31:42PM -0700, Crist J. Clark wrote:
> On Fri, Oct 12, 2001 at 12:13:59PM -0400, Mike Tancsa wrote:
> > At 06:16 PM 10/11/01 -0700, Archie Cobbs wrote:
> >
> > >If the forwarding path is maxed out, then it is the application layer's
> > >responsibility to back off (think TCP).
On Fri, Oct 12, 2001 at 12:13:59PM -0400, Mike Tancsa wrote:
> At 06:16 PM 10/11/01 -0700, Archie Cobbs wrote:
>
> >If the forwarding path is maxed out, then it is the application layer's
> >responsibility to back off (think TCP).
>
> Is it better for the networking layer to deal with this (potentially
> introducing some latency) as opposed to letting the application?
At 11:42 AM 10/12/01 -0700, Luigi Rizzo wrote:
> > If you find yourself hitting the queue limit I'd suggest using RED or
> > GRED with ipfw to drop packets more intelligently before the hard limit
>
>we are talking about a different queue here, which is not managed by
>ipfw (now you are actually ...
> On Fri, Oct 12, 2001 at 11:56:32AM -0400, Mike Tancsa wrote:
> > Is it better to drop
> > packets when the queue is full and let the various applications behind me
> > figure it out, or is it better to add some latency at the network layer so
> > the apps don't have to deal with it.
>
> If you find yourself hitting the queue limit I'd suggest using RED or
> GRED with ipfw to drop packets more intelligently before the hard limit.
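[Roughly what that suggestion looks like with dummynet; the bandwidth,
queue length, RED parameters, and the fxp0 interface below are all
placeholders to adapt, and per Luigi's reply above this shapes dummynet's
own queue, not net.inet.ip.intr_queue:]

    #!/bin/sh
    # Sketch of RED on a dummynet pipe (kernel needs "options DUMMYNET").
    # The red parameters are w_q/min_th/max_th/max_p.
    ipfw pipe 1 config bw 100Mbit/s queue 50 red 0.002/5/40/0.1
    # Send outbound traffic on fxp0 (interface name assumed) through it.
    ipfw add 1000 pipe 1 ip from any to any out via fxp0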
At 06:16 PM 10/11/01 -0700, Archie Cobbs wrote:
>If the forwarding path is maxed out, then it is the application layer's
>responsibility to back off (think TCP).
Is it better for the networking layer to deal with this (potentially
introducing some latency) as opposed to letting the application?
At 06:30 PM 10/11/01 -0700, Luigi Rizzo wrote:
> > > from pinging the other side of the OC-3 or ethernet connection and
> > > measuring the response time, how can I see how much latency is added by
> > > increasing these buffers?
>
>of course the latency increase depends on how full the buffers are,
>and the worst case is easier to determine by back-of-the-envelope calculation.
> > from pinging the other side of the OC-3 or ethernet connection and
> > measuring the response time, how can I see how much latency is added by
> > increasing these buffers?
of course the latency increase depends on how full the buffers are,
and the worst case is easier to determine by back-of-the-envelope calculation.
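[The worst case is just the time to drain a full queue: Q packets of S
bytes at link rate R add Q*S*8/R seconds. A quick sketch with assumed
numbers:]

    #!/bin/sh
    # Worst-case added latency = time to drain a full queue at line rate.
    awk 'BEGIN {
        Q = 100         # queue length, packets (assumed)
        S = 1500        # packet size, bytes (assumed ethernet MTU)
        R = 155e6       # link rate, bits/sec (OC-3)
        printf "worst-case added latency: %.2f ms\n", Q * S * 8 / R * 1000
    }'   # prints about 7.74 ms for these numbers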
Mike Tancsa writes:
> > > net.inet.ip.intr_queue_maxlen from 50 to 100, and there didn't seem to be
> > > any positive results in terms of lessening the rate of
> > > net.inet.ip.intr_queue_drops.
> >
> >This is consistent with the situation where packets are received
>at a rate faster than they can be processed.
At 01:38 PM 10/11/01 -0700, Archie Cobbs wrote:
>[ jumping into the middle of this discussion... ]
>
>Mike Tancsa writes:
> > net.inet.ip.intr_queue_maxlen from 50 to 100, and there didn't seem to be
> > any positive results in terms of lessening the rate of
> > net.inet.ip.intr_queue_drops.
>
>This is consistent with the situation where packets are received
>at a rate faster than they can be processed.
[ jumping into the middle of this discussion... ]
Mike Tancsa writes:
> net.inet.ip.intr_queue_maxlen from 50 to 100, and there didn't seem to be
> any positive results in terms of lessening the rate of
> net.inet.ip.intr_queue_drops.
This is consistent with the situation where packets are received at a rate
faster than they can be processed.
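[The experiment described here can be scripted; the new limit below is
just the doubling Mike tried:]

    #!/bin/sh
    # Bump the IP input queue limit and watch the (monotonically
    # increasing) drop counter: if drops keep climbing at the same rate,
    # the load is sustained rather than bursty, and a bigger queue won't help.
    sysctl net.inet.ip.intr_queue_maxlen          # default is 50
    sysctl -w net.inet.ip.intr_queue_maxlen=100
    sysctl -n net.inet.ip.intr_queue_drops        # note it, re-check later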
Queue drops generally corresponded to bandwidth. Charting the bandwidth
going through the box against the rate at which queue drops increased
certainly seemed to confirm it. I didn't run any statistical analysis, as
the visual correlation was very evident... But here is a strange result I
don't quite understand.
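[A minimal way to get the data behind such a chart: log the drop counter
on the same interval as the bandwidth graph and plot the deltas side by
side:]

    #!/bin/sh
    # Log the drop counter once a minute; chart the per-interval delta
    # next to an MRTG-style bandwidth graph to eyeball the correlation.
    while :; do
        echo "$(date +%s) $(sysctl -n net.inet.ip.intr_queue_drops)"
        sleep 60
    done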