Benny Lyne Amorsen wrote:
TCP looks quite different in 2023 than it did in 1998. It should handle
packet reordering quite gracefully;
Maybe, and even if it isn't, TCP may be modified. But that
is not my primary point.
ECMP, in general, means paths consist of multiple routers
and links. The l
Per-packet LB is one of those ideas that are great at a conceptual level,
but in practice are obviously out of touch with reality. Kind
of like the EIGRP protocol from Cisco and its use of the load, reliability,
and MTU metrics.
On Wed, Sep 6, 2023 at 1:13 PM Mark Tinka wrote:
On Wed, 6 Sept 2023 at 19:28, Mark Tinka wrote:
> Yes, this has been my understanding of, specifically, Juniper's
> forwarding complex.
Correct, the packet is sprayed to some PPE, and PPEs do not run in
deterministic time; after the PPEs there is a reorder block that restores
flow order, if it has to.
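The mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Juniper's actual implementation: packets are sprayed across parallel packet-processing engines (PPEs) whose processing time varies, and a sequence-numbered reorder buffer restores the original order on the way out. All names (`spray_and_process`, `ReorderBlock`) are invented for the sketch.

```python
import heapq
import random

def spray_and_process(packets, num_ppes, seed=0):
    """Spray packets round-robin across PPEs. Each PPE takes a
    variable amount of time, so completion order differs from
    arrival order."""
    rng = random.Random(seed)
    completed = []
    for seq, pkt in enumerate(packets):
        ppe = seq % num_ppes       # round-robin spray to a PPE
        finish = rng.random()      # non-deterministic processing time
        completed.append((finish, seq, ppe, pkt))
    completed.sort()               # completion order, not arrival order
    return [(seq, pkt) for _, seq, _, pkt in completed]

class ReorderBlock:
    """Buffers out-of-order completions and releases packets
    strictly in their original sequence order."""
    def __init__(self):
        self.next_seq = 0
        self.pending = []

    def push(self, seq, pkt):
        heapq.heappush(self.pending, (seq, pkt))
        released = []
        while self.pending and self.pending[0][0] == self.next_seq:
            released.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return released
```

Feeding every completion through the reorder block yields the packets in their original order, regardless of the order in which the PPEs finished.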
EZchip is
On 9/6/23 18:52, Tom Beecher wrote:
Well, not exactly the same thing. (But it's my mistake, I was
referring to L3 balancing, not L2 interface stuff.)
Fair enough.
load-balance per-packet will cause massive reordering, because it's
random spray, caring about nothing except equal loading
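The distinction can be sketched in a few lines of hypothetical Python (not vendor code): per-flow hashes the 5-tuple so every packet of a flow sticks to one link, while true per-packet sprays at random for near-equal loading at the cost of intra-flow ordering. `NUM_LINKS` and the function names are illustrative assumptions.

```python
import random
import zlib

NUM_LINKS = 4  # members of the ECMP group / LAG

def per_flow(src, dst, sport, dport, proto=6):
    """Per-flow: hash the 5-tuple, so every packet of a flow takes
    the same link and ordering within the flow is preserved."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return zlib.crc32(key) % NUM_LINKS

def per_packet(rng):
    """Per-packet: random spray. Loading is near-perfectly equal,
    but consecutive packets of one flow land on different links."""
    return rng.randrange(NUM_LINKS)
```

With the hash, a single elephant flow can never use more than one member link; with the spray, it can, which is exactly the trade-off the thread is arguing about.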
On 9/6/23 12:01, Saku Ytti wrote:
Fun fact about the real world: devices do not internally guarantee
order. That is, even if you have identical-latency links and zero
congestion, order is not guaranteed between packet1 coming from
interface I1 and packet2 coming from interface I2, which packet first
> If your applications can tolerate reordering, per-packet is fine. In the public
> Internet space, it seems we aren't there yet.
Yeah this
During lockdown here in Italy, one day we started getting calls about
performance issues: degradation, VPNs dropping or becoming unusable,
and gen
On 9/6/23 11:20, Benny Lyne Amorsen wrote:
TCP looks quite different in 2023 than it did in 1998. It should handle
packet reordering quite gracefully; in the best case the NIC will
reassemble the out-of-order TCP packets into a 64k packet and the OS
will never even know they were reordered. U
On 9/6/23 17:27, Tom Beecher wrote:
At least on MX, what Juniper calls 'per-packet' is really 'per-flow'.
Unless you specifically configure true "per-packet" on your LAG:
set interfaces ae2 aggregated-ether-options load-balance per-packet
I ran per-packet on a Juniper LAG 10 years ag
On 9/6/23 16:14, Saku Ytti wrote:
For example Juniper offers true per-packet, I think mostly used in
high performance computing.
Cisco did it too with CEF supporting "ip load-sharing per-packet" at the
interface level.
I am not sure this is still supported on modern code/boxes.
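From memory, the classic IOS form of this looked roughly as follows; this is a hedged sketch, since the exact syntax and its availability vary by platform and release, and the interface name is just an example:

```
interface GigabitEthernet0/1
 ip load-sharing per-packet
```

The default was `ip load-sharing per-destination`, i.e. per-flow behavior.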
Mark.
>
> For example Juniper offers true per-packet, I think mostly used in
> high performance computing.
>
At least on MX, what Juniper calls 'per-packet' is really 'per-flow'.
On Wed, Sep 6, 2023 at 10:17 AM Saku Ytti wrote:
On Wed, 6 Sept 2023 at 17:10, Benny Lyne Amorsen
wrote:
> TCP looks quite different in 2023 than it did in 1998. It should handle
> packet reordering quite gracefully; in the best case the NIC will
I think the opposite is true; TCP was designed to be order-agnostic.
But everyone uses CUBIC, and
Mark Tinka writes:
> And just because I said per-flow load balancing has been the gold
> standard for the last 25 years, does not mean it is the best
> solution. It just means it is the gold standard.
TCP looks quite different in 2023 than it did in 1998. It should handle
packet reordering quite
William Herrin wrote:
>> I recognize what happens in the real world, not in the lab or text books.
> What's the difference between theory and practice?
W.r.t. the fact that there are so many wrong theories
and wrong practices, there is no difference.
> In theory, there is no difference.
Especia
On Wed, Sep 6, 2023 at 12:23 AM Mark Tinka wrote:
> I recognize what happens in the real world, not in the lab or text books.
What's the difference between theory and practice? In theory, there is
no difference.
--
William Herrin
b...@herrin.us
https://bill.herrin.us/
Saku Ytti wrote:
Fun fact about the real world: devices do not internally guarantee
order. That is, even if you have identical-latency links and zero
congestion, order is not guaranteed between packet1 coming from
interface I1 and packet2 coming from interface I2, which packet first
goes to interface E1
On Wed, 6 Sept 2023 at 10:27, Mark Tinka wrote:
> I recognize what happens in the real world, not in the lab or text books.
Fun fact about the real world: devices do not internally guarantee
order. That is, even if you have identical-latency links and zero
congestion, order is not guaranteed between p
On 9/6/23 09:12, Masataka Ohta wrote:
> you now recognize that per-flow load balancing is not a very
> good idea.
You keep moving the goalposts. Stay on-topic.
I was asking you to clarify your post as to whether you were speaking of
per-flow or per-packet load balancing. You did not do that
Mark Tinka wrote:
Are you saying you thought a 100G Ethernet link actually consisting
of 4 parallel 25G links, which is an example of "equal speed multi
parallel point to point links", was relying on hashing?
No...
So, though you wrote:
>> If you have multiple parallel links over which man