On 21/Mar/20 18:25, Saku Ytti wrote:
> Yeah we run it in a multivendor network (JNPR, CSCO, NOK), works.
>
> I would also recommend people exclusively use CW+FAT and disable
> LSR payload heuristics (the heuristics are the JNPR default, but by
> default they are not applied when CW is present; they can be enabled
> with CW too).
We weren't as successful (MX480 ingress/egress devices transiting a CRS
core).
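A toy sketch of the LSR payload heuristic Saku recommends disabling (illustrative only, not any vendor's implementation): an LSR with no other entropy source peeks below the label stack and guesses the payload type from its first nibble, which misfires on an Ethernet pseudowire whose destination MAC happens to start with 0x4 or 0x6. A control word starts with a zero nibble, so the guess fails safely.

```python
def lsr_payload_guess(payload: bytes) -> str:
    """Illustrative first-nibble heuristic an LSR might use to find
    hashable fields below the label stack (not any vendor's code)."""
    nibble = payload[0] >> 4
    if nibble == 4:
        return "ipv4"
    if nibble == 6:
        return "ipv6"
    return "opaque"  # nothing recognised; hash on labels only

# An Ethernet PW frame whose destination MAC starts with 0x4a looks
# like IPv4 to the heuristic -> bogus hash fields, and packets of one
# flow can be sprayed across paths (reordering).
pw_frame = bytes.fromhex("4a00112233445566")
print(lsr_payload_guess(pw_frame))  # "ipv4" (wrong - it's a MAC address)

# With a control word (first nibble 0) the guess fails safely.
cw_frame = bytes.fromhex("00000000") + pw_frame
print(lsr_payload_guess(cw_frame))  # "opaque"
```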
On 20/03/2020 21:33, Nimrod Levy wrote:
I was contacted by my NOC to investigate a LAG that was not distributing
traffic evenly among the members to the point where one member was
congested while the utilization on the LAG was reasonably low.
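The symptom Nimrod describes can be sketched numerically: static per-flow hashing pins each 5-tuple to exactly one LAG member, so a single elephant flow can congest one link while the LAG average stays low. All numbers below are invented for illustration.

```python
import zlib
from collections import Counter

# Toy model: four flows hashed onto a 4-member LAG, one of them an
# elephant. Static hashing cannot split a flow, so whichever member
# the elephant lands on runs hot regardless of aggregate utilisation.
members = 4
flows = {
    ("10.0.0.1", "10.9.0.1", 6, 40001, 443): 9.0,  # Gb/s - elephant
    ("10.0.0.2", "10.9.0.2", 6, 40002, 443): 0.5,
    ("10.0.0.3", "10.9.0.3", 6, 40003, 443): 0.5,
    ("10.0.0.4", "10.9.0.4", 6, 40004, 443): 0.5,
}

load = Counter()
for five_tuple, rate in flows.items():
    member = zlib.crc32(repr(five_tuple).encode()) % members
    load[member] += rate

total = sum(load.values())
print(f"LAG average per member: {total / members:.2f} Gb/s")
for m in range(members):
    print(f"member {m}: {load[m]:.1f} Gb/s")
```

The average per member is well under line rate, yet the elephant's member carries at least 9 Gb/s on its own.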
I don't know how well-known this is, and it may not be something many
people would want to do, but Enterasys switches, now part of Extreme's
portfolio, allow "round-robin" as a load-sharing algorithm on LAGs.
On 22/Mar/20 10:08, Adam Atkinson wrote:
>
> I don't know how well-known this is, and it may not be something many
> people would want to do, but Enterasys switches, now part of Extreme's
> portfolio, allow "round-robin" as a load-sharing algorithm on LAGs.
>
> see e.g.
>
> https://gtacknowledg
On Sat, Mar 21, 2020 at 12:53 AM Saku Ytti wrote:
> Hey Matthew,
>
> > There are *several* caveats to doing dynamic monitoring and remapping of
> > flows; one of the biggest challenges is that it puts extra demands on the
> > line cards tracking the flows, especially as the number of flows rises
Hey Tassos,
On Sat, 21 Mar 2020 at 22:51, Tassos Chatzithomaoglou
wrote:
> Yep, the RFC gives this option.
> Does Juniper MX/ACX series support it?
> I know for sure Cisco doesn't.
I only run bidir, which Cisco do you mean? ASR9k allows you to configure it.
both Insert/Discard Flow label
On Sun, 22 Mar 2020 at 09:41, Mark Tinka wrote:
> We weren't as successful (MX480 ingress/egress devices transiting a CRS
> core).
So you're not even talking about multivendor, as both ends are JNPR?
Or are you confusing entropy label with FAT?
Transit doesn't know anything about FAT, FAT is PW
On 22/Mar/20 11:52, Saku Ytti wrote:
> So you're not even talking about multivendor, as both ends are JNPR?
> Or are you confusing entropy label with FAT?
Some cases were MX480 to ASR920, but most were MX480 to MX480, either
transiting CRS.
>
> Transit doesn't know anything about FAT, FAT is PW
On Sun, 22 Mar 2020 at 16:25, Mark Tinka wrote:
> So the latter. We used both FAT + entropy to provide even load balancing
> of l2vpn payloads in the edge and core, with little success.
You don't need both. My rule of thumb, green field, go with entropy
and get all the services in one go. Brown field, go FAT, and target
just PW, ensure you also have CW, then let transit LSR balance
MPLS-IP.
On 22/Mar/20 19:17, Saku Ytti wrote:
> You don't need both. My rule of thumb, green field, go with entropy
> and get all the services in one go. Brown field, go FAT, and target
> just PW, ensure you also have CW, then let transit LSR balance
> MPLS-IP. With entropy label you can entirely disable
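The two approaches Saku contrasts differ mainly in where the extra entropy sits in the label stack (label values below are invented). FAT (RFC 6391) puts a flow label at the bottom of the PW stack, so it only helps pseudowires; entropy label (RFC 6790) inserts ELI (reserved label 7) plus EL below the transport label and covers all services in one go.

```python
import zlib

def flow_entropy(five_tuple) -> int:
    """Hash a 5-tuple into a 20-bit label value (illustrative)."""
    return zlib.crc32(repr(five_tuple).encode()) & 0xFFFFF

ft = ("192.0.2.1", "198.51.100.7", 6, 40000, 443)

# FAT: flow label at the bottom of stack, below the PW label.
fat_stack = [("transport", 16001), ("PW", 17001), ("flow", flow_entropy(ft))]

# Entropy label: ELI/EL below the transport label, above the service label.
entropy_stack = [("transport", 16001), ("ELI", 7), ("EL", flow_entropy(ft)),
                 ("PW", 17001)]

print("FAT:    ", fat_stack)
print("entropy:", entropy_stack)
```

A transit LSR hashing over the top of the stack sees the EL directly, which is why entropy works for every service, while FAT relies on the egress PE discarding the flow label it never signals to transit.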
Fellow NANOGers,
Not a big deal by any means, but for those of you who have traffic data, I’m
curious what Sunday morning looked like as compared to other Sundays. Sure,
Netflix and similar companies have no doubt seen traffic increase, but I’m
wondering if an influx of church service streaming
Maybe it’s time to revisit inter-domain multicast?
Owen
> On Mar 22, 2020, at 11:57 , Andy Ringsmuth wrote:
>
> Fellow NANOGers,
>
> Not a big deal by any means, but for those of you who have traffic data, I’m
> curious what Sunday morning looked like as compared to other Sundays. Sure,
> N
On Sun, 22 Mar 2020 19:08:24 +
Owen DeLong wrote:
> Maybe it’s time to revisit inter-domain multicast?
Uhmm... no thank you. :-)
John
We are still too far from the apocalypse to realistically be thinking
about inter-domain multicast.
And even if we were ..
On 3/22/20 8:08 PM, Owen DeLong wrote:
> Maybe it’s time to revisit inter-domain multicast?
>
> Owen
>
>
>> On Mar 22, 2020, at 11:57 , Andy Ringsmuth wrote:
>>
>> Fellow NANOGers
On 3/22/20 1:11 PM, John Kristoff wrote:
Owen DeLong wrote:
Maybe it’s time to revisit inter-domain multicast?
Uhmm... no thank you. :-)
As someone who 1) wasn't around during the last Internet-scale foray
into multicast and 2) is working with multicast in a closed environment,
I'm curious:
On Sun, 22 Mar 2020 at 21:20, Grant Taylor via NANOG wrote:
> What was wrong with Internet scale multicast? Why did it get abandoned?
It is flow-based routing; we do not have a solution for storing and
looking up large amounts of flows.
--
++ytti
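Saku's "flow based routing" objection can be made concrete with back-of-the-envelope arithmetic: unicast FIB entries aggregate by prefix, but every router on every (S,G) distribution tree must hold per-flow state. All numbers below are invented purely for scale.

```python
# Hypothetical inter-domain multicast deployment, numbers made up:
concurrent_streams = 50_000   # active inter-domain (S,G) pairs
routers_per_tree   = 200      # routers touched by a typical tree

# Unlike unicast prefixes, (S,G) state cannot be aggregated, so the
# network-wide entry count is the product of the two.
sg_entries = concurrent_streams * routers_per_tree
print(f"{sg_entries:,} (S,G) entries network-wide")  # 10,000,000
```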
We didn't really see a noticeable inbound or outbound traffic change.
But we also streamed and had 80+ people watching online, so there was
absolutely a traffic shift.
Still, Sunday Mornings are low traffic periods normally anyway, so the
overall traffic "dent" was minimal.
On 22/Mar/20 20:57, Andy Ringsmuth wrote:
> Fellow NANOGers,
>
> Not a big deal by any means, but for those of you who have traffic data, I’m
> curious what Sunday morning looked like as compared to other Sundays. Sure,
> Netflix and similar companies have no doubt seen traffic increase, but I
On Sun, 22 Mar 2020 19:17:59 +
Grant Taylor via NANOG wrote:
> What was wrong with Internet scale multicast? Why did it get abandoned?
There are about 20 years of archives to weed through, and some of our
friends are still trying to make this happen. I expect someone (Hi
Lenny) to appear a
Grant Taylor via NANOG wrote on 22/03/2020 19:17:
What was wrong with Internet scale multicast? Why did it get abandoned?
there wasn't any problem with inter-domain multicast that couldn't be
resolved by handing over to level 3 engineering and the vendor's support
escalation team.
But then
Le 22/03/2020 à 21:31, Nick Hilliard a écrit :
Grant Taylor via NANOG wrote on 22/03/2020 19:17:
What was wrong with Internet scale multicast? Why did it get abandoned?
there wasn't any problem with inter-domain multicast that couldn't be
resolved by handing over to level 3 engineering and
On Sun, 22 Mar 2020 at 22:43, Alexandre Petrescu
wrote:
> On another hand, link-local multicast does seem to work ok, at least
> with IPv6. The problem it solves there is not related to the width of
> the pipe, but more to resistance against 'storms' that were witnessed
> during ARP storms. I c
On Sun, 22 Mar 2020, John Kristoff wrote:
> On Sun, 22 Mar 2020 19:17:59 +
> Grant Taylor via NANOG wrote:
>
> > What was wrong with Internet scale multicast? Why did it get abandoned?
>
> There are about 20 years of archives to weed through,
>
most of the challenges, in particular in
On Sun, 22 Mar 2020 13:17:59 -0600, Grant Taylor via NANOG said:
> As someone who 1) wasn't around during the last Internet-scale foray
> into multicast and 2) is working with multicast in a closed environment,
> I'm curious:
>
> What was wrong with Internet scale multicast? Why did it get abandoned?
It failed to scale for some of the exact same reasons QoS failed to
scale - what works inside one administrative domain doesn't work once
it crosses domain boundaries.

Plus, there's a lot more state to keep - if you think spanning tree
gets ugly if the tree gets too big, think about wh
On 22/Mar/20 23:36, Valdis Klētnieks wrote:
> It failed to scale for some of the exact same reasons QoS failed to scale -
> what works inside one administrative domain doesn't work once it crosses
> domain
> boundaries.
This, for me, is one of the biggest reasons I feel inter-AS Multicast
do
On 23/Mar/20 00:19, Randy Bush wrote:
>
> add to that it is the TV model in a VOD world. works for sports, maybe,
> not for netflix
Agreed - on-demand is the new economy, and sport is the single thing
still propping up the old economy.
When sport eventually makes it into the new world, linear T
I think that's the thing:
Drop cache boxes inside eyeball networks; fill the caches during off-peak;
unicast from the cache boxes inside the eyeball provider's network to
subscribers. Do a single stream from source to each "replication point"
(cache box) rather than a stream per ultimate receiver
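The economics behind the cache-box model Hugo describes can be sketched with rough arithmetic: the per-viewer unicast fan-out stays inside the eyeball network, and only the fill stream (ideally off-peak) crosses the transit edge. Numbers below are invented for illustration.

```python
# Hypothetical live event, numbers made up:
viewers     = 100_000
stream_mbps = 5

# Every viewer pulling from the origin vs. one fill stream to an
# in-network cache that fans out locally.
transit_without_cache = viewers * stream_mbps   # Mb/s across transit
transit_with_cache    = 1 * stream_mbps         # single cache-fill stream

print(f"without cache: {transit_without_cache / 1000:.0f} Gb/s of transit")
print(f"with cache:    {transit_with_cache} Mb/s of transit")
```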
Does anyone have a contact for Frontier Central PA OSP contact?
There is a line that has been down for over 8 months that I have been unable to
get them to hang.
It is across a driveway and roadway.
Hugo,
> On 23 Mar 2020, at 01:32, Hugo Slabbert wrote:
>
> I think that's the thing:
> Drop cache boxes inside eyeball networks; fill the caches during off-peak;
> unicast from the cache boxes inside the eyeball provider's network to
> subscribers. Do a single stream from source to each "repl
I can see it now... business driver that moved the world towards
multicast: the 2020 Coronavirus.
Also, I wonder how much money would be lost by big pipe providers with
multicast working everywhere
-Aaron
-----Original Message-----
From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Alex
I know Facebook Live had some congestion/capacity issues in some geographical
regions this AM.
Sent from my iPhone
> On Mar 22, 2020, at 2:59 PM, Andy Ringsmuth wrote:
>
> Fellow NANOGers,
>
> Not a big deal by any means, but for those of you who have traffic data, I’m
> curious what Sunda
> On Mar 22, 2020, at 13:41 , Alexandre Petrescu
> wrote:
>
>
> Le 22/03/2020 à 21:31, Nick Hilliard a écrit :
>> Grant Taylor via NANOG wrote on 22/03/2020 19:17:
>>> What was wrong with Internet scale multicast? Why did it get abandoned?
>>
>> there wasn't any problem with inter-domain m
> On Mar 22, 2020, at 15:49 , Mark Tinka wrote:
>
>
>
> On 23/Mar/20 00:19, Randy Bush wrote:
>
>>
>> add to that it is the TV model in a VOD world. works for sports, maybe,
>> not for netflix
>
> Agreed - on-demand is the new economy, and sport is the single thing
> still propping up th
>
> But that’s already happening. All big content providers are doing just
> that. They even sponsor the appliance(s) for you, to make more money
> and save on transit costs ;)
Noted; this was a comment on what's already the case, not a proposal for
how to address it instead. Apologies as I used poor wording.
On 23/Mar/20 05:05, Aaron Gould wrote:
> I can see it now... business driver that moved the world towards
> multicast: the 2020 Coronavirus.
Hehe, the Coronavirus has only accelerated and amplified what was
already coming - the new economy.
You're constantly hearing about "changing business models"