Hey Brett,
> Here's a paper that shows you don't need buffers equal to
> bandwidth*delay to get near capacity:
> http://www.cs.bu.edu/~matta/Papers/hstcp-globecom04.pdf
> (I'm not endorsing it. Just pointing it out as a datapoint.)
A quick glance makes me believe the S and D nodes are of equal bandwidth.
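
To put a number on the bandwidth*delay rule of thumb the paper argues
against, here is a minimal sketch; the 10 Gbit/s bottleneck and 100 ms RTT
below are illustrative assumptions, not figures from the paper or the thread.

    # Classic buffer-sizing rule of thumb: buffer >= bottleneck bandwidth * RTT
    # (the bandwidth-delay product). Link rate and RTT are assumed values.

    def bdp_bytes(link_bps: float, rtt_s: float) -> float:
        """Bandwidth-delay product in bytes."""
        return link_bps * rtt_s / 8  # bits -> bytes

    if __name__ == "__main__":
        link_bps = 10e9  # assumed 10 Gbit/s bottleneck
        rtt_s = 0.100    # assumed 100 ms round-trip time
        print(f"BDP: {bdp_bytes(link_bps, rtt_s) / 1e6:.0f} MB")  # -> 125 MB

The rule of thumb sizes the bottleneck queue so the link stays busy while a
single TCP flow recovers from a loss; the paper above is cited as showing you
can get near capacity with less than that.
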
On 3 September 2015 at 15:04, Nick Hilliard wrote:
> optimally, but tcp slow start will generally stop this from happening on
> well behaved sending-side stacks, so you end up ramping up quickly to path
> rate rather than egress line rate from the sender side.
This assumes network is congested a
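
As a rough sketch of the "ramping up quickly to path rate" point, here is a
toy calculation of how many round trips classic slow start needs before the
congestion window covers an assumed path rate; the 10 Gbit/s path, 100 ms
RTT, 1460-byte MSS and initial window of 10 segments are assumptions, and
real stacks (pacing, CUBIC, HyStart) behave differently.

    # Toy model: classic slow start doubles cwnd every RTT until the window
    # covers the path's bandwidth-delay product. All parameters are assumed.

    def rtts_to_path_rate(path_bps: float, rtt_s: float,
                          mss_bytes: int = 1460, init_cwnd: int = 10) -> int:
        """Round trips until cwnd * mss / rtt first reaches path_bps."""
        target_segments = path_bps * rtt_s / 8 / mss_bytes  # BDP in segments
        cwnd, rtts = init_cwnd, 0
        while cwnd < target_segments:
            cwnd *= 2  # window doubles each round trip in slow start
            rtts += 1
        return rtts

    if __name__ == "__main__":
        print(rtts_to_path_rate(10e9, 0.100))  # 14 RTTs, i.e. ~1.4 s at 100 ms

In the toy model the ramp stops at the path's BDP by construction; a real
sender only stops growing the window when it sees loss or other congestion
feedback, which is the caveat raised above. Note also that each window's
worth of segments can leave the sender's NIC back-to-back at its local line
rate unless the stack paces them, which is the burstiness concern raised
earlier in the thread.
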
On 03/09/2015 11:56, Saku Ytti wrote:
> 40GE server will flood the window as fast as it can, instead of
> limiting itself to 10Gbps; optimally it'll send at linerate.
optimally, but tcp slow start will generally stop this from happening on
well behaved sending-side stacks, so you end up ramping up quickly to path
rate rather than egress line rate from the sender side.

> only serialise them 10GE out, causing the majority of that 375MB to end up
> in the sender side switch/router buffers.
s/sender/receiver/
--
++ytti
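
On the 375MB figure: it matches the bandwidth*delay product of a 10 Gbit/s
path with a 300 ms RTT (the 300 ms is my inference from the number and is not
visible in the excerpt). A quick sketch of how much of such a window piles up
in the receiver-side device when it arrives at 40 Gbit/s but can only be
serialised out at 10 Gbit/s:

    # A burst of window_bytes arrives at a device at in_bps but drains at
    # out_bps; the peak queue is what accumulates while the burst arrives.
    # 375 MB matches 10 Gbit/s * 300 ms; the 40GE ingress is from the thread.

    def peak_queue_bytes(window_bytes: float, in_bps: float, out_bps: float) -> float:
        """Peak standing queue if the whole window arrives back-to-back."""
        arrival_s = window_bytes * 8 / in_bps  # time for the burst to arrive
        drained = out_bps * arrival_s / 8      # bytes serialised out meanwhile
        return max(0.0, window_bytes - drained)

    if __name__ == "__main__":
        q = peak_queue_bytes(375e6, in_bps=40e9, out_bps=10e9)
        print(f"peak queue ~ {q / 1e6:.0f} MB")  # ~281 MB of the 375 MB

That is roughly three quarters of the window sitting in the 10GE egress
queue, which lines up with the "majority of that 375MB" wording once the
s/sender/receiver/ correction is applied.
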
Hey,
In the past few years there's been a lot of talk about reducing buffer
depths, and many seem to think vendors are throwing memory on the chips
for the fun of it.
Let's look at a particularly pathological case: assume the sender is a CDN
network with a 40GE connected server and the receiver is 10GE connected.