Re: Tools and procedure for Network testing
Another interesting candidate is https://trex-tgn.cisco.com/

On Thu, Aug 26, 2021 at 10:58 AM Humberto Galiza wrote:

> I've used Ixia for similar purposes (nothing related to VoIP stuff,
> though), but as others already said, equipment cost is a factor here.
> If the budget is short, or if you're willing to go with an open-source
> suite for testing, you might want to have a look at Pktgen-DPDK too:
> https://github.com/pktgen/Pktgen-DPDK
> There are tons of tutorials out there explaining how to use Linux +
> pktgen-dpdk to generate traffic. I hope it helps.
>
> Cheers!
>
> On Thu, Aug 26, 2021 at 2:07 PM Joe Yabuki wrote:
> >
> > Hi all,
> >
> > I just wanted to know how you do your network testing when validating a
> > new design/technology in your network, especially to ensure that it will
> > meet your SLA requirements, for example that a voice call will not be
> > dropped in case of a network element failure?
> >
> > Do you test with Ixia, multiping, launch some VMs using ping with the -i
> > option, Windows ping by setting the timeout interval, or maybe directly
> > from the network device (routers...)?
> >
> > Many thanks,
> > Joe
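Whatever generator is used, the SLA question above boils down to counting sequence-number gaps in a timestamped probe stream. A toy sketch of that analysis step (illustrative only; the function and numbers are made up, not from Ixia, TRex, or Pktgen-DPDK):

```python
# Toy loss/outage report for a sequenced probe stream -- a rough
# stand-in for what commercial SLA test gear reports during a failover.

def loss_report(received_seqs, sent_count, interval_ms):
    """Return (lost_probes, longest_gap_ms) for probes sent every interval_ms."""
    got = set(received_seqs)
    lost = sent_count - len(got)
    longest = run = 0
    for seq in range(sent_count):
        if seq in got:
            run = 0
        else:
            run += 1
            longest = max(longest, run)
    return lost, longest * interval_ms

# 1000 probes at 10 ms spacing; seq 42-61 are dropped during a failover,
# so the longest outage a voice call would experience is 200 ms.
received = [s for s in range(1000) if not 42 <= s <= 61]
print(loss_report(received, 1000, 10))  # (20, 200)
```

The "longest gap" figure is the one that matters for the dropped-call question: total loss can be tiny while a single contiguous gap still exceeds the voice timeout.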
Re: rsvp-te admission control - i don't see it
What's the signalled bandwidth being reserved by the headend "R20" in your
example? It's a hunch that you may not have that defined, and it becomes a
zero-bandwidth LSP.

On Fri, Sep 4, 2020 at 9:09 AM wrote:

> Thanks Mark, I have a tunnel traversing those interfaces. Customer
> routers (r10, r30) can ping end to end via the tunnel.
>
> Not sure if I'm missing something here. I wonder if I'm not signaling
> the RSVP bandwidth correctly. I just don't see any allocated bandwidth on
> the RSVP interfaces anywhere.
>
> Here's one of the transit routers... r24... Should I see "Allocated (bps)"
> here?
>
> RP/0/0/CPU0:r24#sh rsvp int
> Fri Sep  4 10:54:16.451 CST
>
> *: RDM: Default I/F B/W % : 75% [default] (max resv/bc0), 0% [default] (bc1)
>
> Interface               MaxBW (bps)  MaxFlow (bps)  Allocated (bps)  MaxSub (bps)
> ----------------------  -----------  -------------  ---------------  ------------
> GigabitEthernet0/0/0/0        750M*           750M          0 ( 0%)            0*
> GigabitEthernet0/0/0/1        750M*           750M          0 ( 0%)            0*
>
> Details...
>
> The LSP/TE tunnel has a dynamic path option, but I disallow it from
> flowing via r21, so the tunnel takes the southbound path via
> r20-r24-r25-r23-r22.
>
> (2) unidirectional te-tunnels
>
> r20 is headend and r22 is tailend (r20>r22)
> r22 is headend and r20 is tailend (r22>r20)
>
> R10       R30
>  |         |
> r20-r21-r22
>  |         |
> r24-r25-r23
>
> r20's tunnel...
>
> RP/0/0/CPU0:r20#sh mpls traffic-eng tun br
> Fri Sep  4 10:59:51.509 CST
>
> TUNNEL NAME     DESTINATION   STATUS  STATE
> tunnel-te1      10.20.0.22    up      up
> r22--->r20      10.20.0.20    up      up
> Displayed 1 (of 1) heads, 0 (of 0) midpoints, 1 (of 1) tails
> Displayed 1 up, 0 down, 0 recovering, 0 recovered heads
>
> RP/0/0/CPU0:r20#sh mpls traffic-eng tun name tunnel-te1 | be count
> Fri Sep  4 10:59:54.309 CST
> Node hop count: 4
>  Hop0: 10.20.1.21
>  Hop1: 10.20.1.18
>  Hop2: 10.20.1.17
>  Hop3: 10.20.1.14
>  Hop4: 10.20.1.13
>  Hop5: 10.20.1.10
>  Hop6: 10.20.1.9
>  Hop7: 10.20.0.22
> Displayed 1 (of 1) heads, 0 (of 0) midpoints, 0 (of 1) tails
> Displayed 1 up, 0 down, 0 recovering, 0 recovered heads
>
> r22's tunnel...
>
> RP/0/0/CPU0:r22#sh mpl tr tun br
> Fri Sep  4 10:25:32.668 CST
>
> TUNNEL NAME     DESTINATION   STATUS  STATE
> tunnel-te1      10.20.0.20    up      up
> r20--->r22      10.20.0.22    up      up
> Displayed 1 (of 1) heads, 0 (of 0) midpoints, 1 (of 1) tails
> Displayed 1 up, 0 down, 0 recovering, 0 recovered heads
>
> RP/0/0/CPU0:r22#sh mpl tr tun name tunnel-te1 | be count
> Fri Sep  4 10:25:35.858 CST
> Node hop count: 4
>  Hop0: 10.20.1.10
>  Hop1: 10.20.1.13
>  Hop2: 10.20.1.14
>  Hop3: 10.20.1.17
>  Hop4: 10.20.1.18
>  Hop5: 10.20.1.21
>  Hop6: 10.20.1.22
>  Hop7: 10.20.0.20
> Displayed 1 (of 1) heads, 0 (of 0) midpoints, 0 (of 1) tails
> Displayed 1 up, 0 down, 0 recovering, 0 recovered heads
>
> X = router number
> 10.20.0.0/16
> 10.20.0.X/24  - loopbacks
> 10.20.1.0/24  - /30's between routers
>   (numbered clockwise, lowest to highest, starting at r20)
>   (r20 is .1, r21 is .2, r21 is .5, etc.)
> 10.20.1.0/30  - r20---r21
> 10.20.1.4/30  - r21---r22
> 10.20.1.8/30  - r22---r23
> 10.20.1.12/30 - r23---r25
> 10.20.1.16/30 - r25---r24
> 10.20.1.20/30 - r24---r20
>
> r10#sh ip int br | in up
> GigabitEthernet3        1.0.0.2   YES manual up    up
>
> RP/0/0/CPU0:r30#sh ip int br | in Up
> GigabitEthernet0/0/0/2  1.1.1.2   Up   Up   default
>
> r10#trace 1.1.1.2
> Type escape sequence to abort.
> Tracing the route to 1.1.1.2
> VRF info: (vrf in name/id, vrf out name/id)
>   1 1.0.0.1 23 msec 5 msec 7 msec
>   2 10.20.1.21 [MPLS: Labels 24000/24010 Exp 0] 43 msec 50 msec 40 msec
>   3 10.20.1.17 [MPLS: Labels 19/24010 Exp 0] 49 msec 42 msec 41 msec
>   4 10.20.1.13 [MPLS: Labels 24001/24010 Exp 0] 42 msec 46 msec 46 msec
>   5 10.20.1.9 42 msec 38 msec 34 msec
>   6 1.1.1.2 55 msec * 44 msec
>
> RP/0/0/CPU0:r30#traceroute 1.0.0.2
> Fri Sep  4 15:25:10.129 UTC
>
> Type escape sequence to abort.
> Tracing the route to 1.0.0.2
>
>  1 1.1.1.1 29 msec 0 msec 0 msec
>  2 10.20.1.10 [MPLS: Labels 24000/24009 Exp 0] 49 msec 49 msec 49 msec
>  3 10.20.1.14 [MPLS: Labels 20/24009 Exp 0] 39 msec 49 msec 39 msec
>  4 10.20.1.18 [MPLS: Labels 24001/24009 Exp 0] 49 msec 39 msec 49
Re: rsvp-te admission control - i don't see it
Can you try this?
https://www.cisco.com/c/en/us/td/docs/ios_xr_sw/iosxr_r3-7/mpls/command/reference/gr37mpte.html#wp2134470

On Fri, Sep 4, 2020 at 10:26 AM wrote:

> Thanks dip, let me know what you think.
>
> r20 is headend and r22 is tailend (r20>r22)
> r22 is headend and r20 is tailend (r22>r20)
>
> RP/0/0/CPU0:r20#sh run int tt1
> Fri Sep  4 12:25:09.198 CST
> interface tunnel-te1
>  bandwidth 20
>  ipv4 unnumbered Loopback0
>  signalled-name r20--->r22
>  autoroute announce
>  !
>  destination 10.20.0.22
>  path-option 10 dynamic
>
> RP/0/0/CPU0:r22#sh run int tt1
> Fri Sep  4 11:50:01.581 CST
> interface tunnel-te1
>  bandwidth 20
>  ipv4 unnumbered Loopback0
>  signalled-name r22--->r20
>  autoroute announce
>  !
>  destination 10.20.0.20
>  path-option 10 dynamic
>
> *From:* dip
> *Sent:* Friday, September 4, 2020 11:15 AM
> *To:* Aaron
> *Cc:* Mark Tinka; NANOG
> *Subject:* Re: rsvp-te admission control - i don't see it
>
> What's the signalled bandwidth being reserved by the headend "R20" in
> your example? It's a hunch that you may not have that defined, and it
> becomes a zero-bandwidth LSP.
>
> [snip]
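The command reference linked above documents `signalled-bandwidth`, which (unlike the `bandwidth` statement shown in the configs) is what RSVP admission control actually reserves on IOS XR. A sketch of the likely fix on the headend follows; this is a hedged illustration, so verify the exact syntax and units (kbps) against your release's command reference:

```
RP/0/0/CPU0:r20(config)# interface tunnel-te1
RP/0/0/CPU0:r20(config-if)# signalled-bandwidth 20
RP/0/0/CPU0:r20(config-if)# commit
```

If admission control is working after this, the transit routers' `sh rsvp int` output should show a non-zero "Allocated (bps)" on the interfaces the LSP traverses.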
Re: Cisco WAE
See if this helps:
https://www.cisco.com/c/en/us/td/docs/net_mgmt/wae/6-4/platform/configuration/guide/WAE_Platform_Configuration_Guide/wp_col_overview.html#pgfId-1072022

On Thu, Sep 30, 2021 at 7:41 AM Mark Davis wrote:

> Is anyone on the list familiar with configuring WAE to exclude specific
> devices from collection?
>
> Thanks
>
> --
> Mark William Davis
> mda...@gmail.com
> -- Sent from iPhone
Re: 400G forwarding - how does it work?
Mandatory slide of the laundry analogy for pipelining:
https://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/pipelining/index.html

On Tue, 26 Jul 2022 at 12:41, Lawrence Wobker wrote:

> >> "Pipeline" in the context of networking chips is not a terribly
> >> well-defined term. In some chips, you'll have a pipeline that is built
> >> from very rigid hardware logic blocks -- the first block does exactly
> >> one part of the packet forwarding, then hands the packet (or just the
> >> header and metadata) to the second block, which does another portion
> >> of the forwarding. You build the pipeline out of as many blocks as
> >> you need to solve your particular networking problem, and voila!
>
> "Pipeline", in the context of networking chips, is not a terribly
> well-defined term! In some chips, you'll have an almost-literal pipeline
> that is built from very rigid hardware logic blocks. The first block does
> exactly one part of the packet forwarding, then hands the packet (or just
> the header and metadata) to the second block, which does another portion
> of the forwarding. You build the pipeline out of as many blocks as you
> need to solve your particular networking problem, and voila!
>
> The advantage here is that you can make things very fast and power
> efficient, but they aren't all that flexible, and deity help you if you
> ever need to do something in a different order than your pipeline!
>
> You can also build a "pipeline" out of software functions: write up some
> Python code (because everyone loves Python, right?) where function A calls
> function B and so on. At some level, you've just built a pipeline out of
> different software functions. This is going to be a lot slower (C code
> will be faster, but nowhere near as fast as dedicated hardware), but it's
> WAY more flexible. You can more or less dynamically build your "pipeline"
> on a packet-by-packet basis, depending on what features and packet data
> you're dealing with.
>
> "Microcode" is really just a term we use for something like "really
> optimized and limited instruction sets for packet forwarding". Just like
> an x86 or an ARM has some finite set of instructions that it can execute,
> so do current networking chips. The larger that instruction space is, and
> the more combinations of those instructions you can store, the more
> flexible your code is. Of course, you can't make that part of the chip
> bigger without making something else smaller, so there's another tradeoff.
>
> MOST current chips are really a hybrid/combination of these two extremes.
> You have some set of fixed logic blocks that do exactly One Set Of Things,
> and you have some other logic blocks that can be reconfigured to do A Few
> Different Things. The degree to which the programmable stuff is
> programmable is a major input to how many different features you can do on
> the chip, and at what speeds. Sometimes you can use the same hardware
> block to do multiple things on a packet if you're willing to sacrifice
> some packet rate and/or bandwidth. The constant "law of physics" is that
> you can always do a given function in less power/space/cost if you're
> willing to optimize for that specific thing -- but you're sacrificing
> flexibility to do it. The more flexibility ("programmability") you want
> to add to a chip, the more logic and memory you need to add.
>
> From a performance standpoint, on current "fast" chips, many (but
> certainly not all) of the "pipelines" are designed to forward one packet
> per clock cycle for "normal" use cases. (Of course we sneaky vendors get
> to decide what is normal and what's not, but that's a separate issue...)
> So if I have a chip that has one pipeline and it's clocked at 1.25 GHz,
> that means that it can forward 1.25 billion packets per second. Note that
> this does NOT mean that I can forward a packet in "a
> one-point-two-five-billionth of a second" -- but it does mean that every
> clock cycle I can start on a new packet and finish another one. The
> length of the pipeline impacts the latency of the chip, although this
> part of the latency is often a rounding error compared to the number of
> times I have to read and write the packet into different memories as it
> goes through the system.
>
> So if this pipeline can do 1.25 billion PPS and I want to be able to
> forward 10 BPPS, I can build a chip that has 8 of these pipelines and get
> my performance target that way. I could also build a "pipeline" that
> processes multiple packets per clock; if I have one that does 2
> packets/clock, then I only need 4 of said pipelines... and so on and so
> forth. The exact details of how the pipelines are constructed and how
> much parallelism I build INSIDE a pipeline, as opposed to replicating
> pipelines, is sort of Gooky Implementation Details, but it's a very, very
> important part of doing the chip-level architecture, as those sorts of
> decisions drive lots of Other Important Decisions in the silicon design...
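The pipeline-count arithmetic in the last paragraph can be sketched in a few lines (hedged: the clock rate and packet-rate targets are the illustrative figures from the message above, not any real chip):

```python
import math

def pipelines_needed(target_pps, clock_hz, pkts_per_clock=1):
    """Identical pipelines required to hit a target forwarding rate,
    assuming each pipeline retires pkts_per_clock packets every cycle."""
    return math.ceil(target_pps / (clock_hz * pkts_per_clock))

# One pipeline at 1.25 GHz, one packet per clock -> 1.25 Gpps each.
print(pipelines_needed(10e9, 1.25e9))     # 8 pipelines for 10 BPPS
# A pipeline that handles 2 packets per clock halves the count.
print(pipelines_needed(10e9, 1.25e9, 2))  # 4 pipelines
```

This is only the throughput side of the trade; as the message notes, pipeline depth affects latency, not the packets-per-second figure.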
Re: 400G forwarding - how does it work?
Disclaimer: I often use the M/M/1 queuing assumption for much of my work to
keep the maths simple, and I believe I am reasonably aware of the contexts
in which it's a right or a wrong application :). Also, I don't intend to
change the core topic of the thread, but since this has come up, I couldn't
resist.

>> With 99% load M/M/1, 500 packets (750kB for 1500B MTU) of
>> buffer is enough to make packet drop probability less than
>> 1%. With 98% load, the probability is 0.0041%.

To expand the above a bit so that there is no ambiguity: the above assumes
that the router behaves like an M/M/1 queue. The expected number of packets
in the system is rho / (1 - rho), where rho is the utilization. The
probability that at least B packets are in the system is rho^B, where B is
the number of packets in the system. For a link utilization of 0.98, the
packet drop probability is 0.98^500 = 0.0041%; for a link utilization of
99%, 0.99^500 = 0.66%.

>> When many TCPs are running, burst is averaged and traffic
>> is poisson.

M/M/1 queuing assumes that traffic is Poisson, and the Poisson assumption
is that 1) the number of sources is infinite, and 2) the traffic arrival
pattern is random. I think the second assumption is where I often question
whether the traffic arrival pattern is truly random; I have seen cases
where traffic behaves more like self-similar. Most Poisson models rely on
the central limit theorem, which loosely states that the sample
distribution will approach a normal distribution as we aggregate more from
various distributions; the mean will smooth towards a value. Do you have
any good pointers to research showing that today's internet traffic can be
modeled accurately by Poisson? For as many papers supporting Poisson, I
have seen as many papers saying it's not Poisson.

https://www.icir.org/vern/papers/poisson.TON.pdf
https://www.cs.wustl.edu/~jain/cse567-06/ftp/traffic_models2/#sec1.2

On Sun, 7 Aug 2022 at 04:18, Masataka Ohta wrote:

> Saku Ytti wrote:
>
> >> I'm afraid you imply too much buffer bloat only to cause
> >> unnecessary and unpleasant delay.
> >>
> >> With 99% load M/M/1, 500 packets (750kB for 1500B MTU) of
> >> buffer is enough to make packet drop probability less than
> >> 1%. With 98% load, the probability is 0.0041%.
>
> > I feel like I'll live to regret asking. Which congestion control
> > algorithm are you thinking of?
>
> I'm not assuming a LAN environment, for which paced TCP may
> be desirable (if the bandwidth requirement is tight, which is
> unlikely in a LAN).
>
> > But Cubic and Reno will burst tcp window growth at sender rate, which
> > may be much more than receiver rate, someone has to store that growth
> > and pace it out at receiver rate, otherwise window won't grow, and
> > receiver rate won't be achieved.
>
> When many TCPs are running, burst is averaged and traffic
> is poisson.
>
> > So in an ideal scenario, no we don't need a lot of buffer, in
> > practical situations today, yes we need quite a bit of buffer.
>
> That is an old theory known to be invalid (Ethernet switches with
> small buffers are enough for IXes) and theoretically denied by:
>
>     Sizing router buffers
>     https://dl.acm.org/doi/10.1145/1030194.1015499
>
> after which paced TCP was developed for unimportant exceptional
> cases of LAN.
>
> > Now add to this multiple logical interfaces, each having 4-8 queues,
> > it adds up.
>
> Having so many queues requires sorting of queues to properly
> prioritize them, which costs a lot of computation (and
> performance loss) for no benefit and is a bad idea.
>
> > Also the shallow ingress buffers discussed in the thread are not delay
> > buffers and the problem is complex because no device is marketable
> > that can accept wire rate of minimum packet size, so what trade-offs
> > do we carry, when we get bad traffic at wire rate at small packet
> > size? We can't empty the ingress buffers fast enough, do we have
> > physical memory for each port, do we share, how do we share?
>
> People who use irrationally small packets will suffer, which is
> not a problem for the rest of us.
>
> Masataka Ohta
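The tail-probability figures quoted in this thread are easy to reproduce from the rho^B formula (a quick sketch using the same 500-packet buffer as above):

```python
# For an M/M/1 queue at utilization rho, the probability that at least
# buffer_pkts packets are in the system (i.e. an arriving packet would
# overflow a buffer of that size) is rho**buffer_pkts.

def mm1_overflow_prob(rho, buffer_pkts):
    """P(at least buffer_pkts packets in an M/M/1 system at load rho)."""
    return rho ** buffer_pkts

for rho in (0.98, 0.99):
    print(f"load {rho:.0%}: drop probability ~ {mm1_overflow_prob(rho, 500):.4%}")
# load 98%: drop probability ~ 0.0041%
# load 99%: drop probability ~ 0.6570%
```

Of course, this is only as good as the Poisson-arrivals assumption being debated above; with self-similar traffic the tail decays much more slowly than rho^B.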
Re: Segment Routing
Matt,

Just to clarify: are you asking about SR and LDP interop, or SR over LDP?
Two different things.

Thanks,
Dip

On Fri, May 18, 2018 at 3:11 AM, Matt Geary wrote:

> Hello maillist, has anyone had any experience with segment routing and
> its performance over LDP? We are evaluating the option to move to SR over
> LDP so we can label-switch across our Nexus L3 switching environment.
>
> Thanks,
> Packet Plumber
> -- Sent from iPhone
Re: BGP advertise-best-external on RR
There's no such thing as a free lunch, right? :)

If BGP Add-Path or Diverse-Path isn't an option for you, then, as you
mentioned, an Internet VRF with different RDs is the third option, but you
have to understand the implications here as well around the increase in
memory usage. Another option could be to bypass the RRs and just fully
mesh. But if none of the above is an option, then you may have to do some
unnatural acts to solve it.

On Tue, Sep 1, 2015 at 7:51 AM, Mohamed Kamal wrote:

> Hi,
>
> Diverse-path will only send the second-best path, and in my case I have
> three routes, not two. In addition, every PE will have to peer with the
> RR via a second session (on the same RR, as I will not deploy a new
> standalone shadow RR), and this will double the number of BGP sessions.
>
> Add-path will require a network-wide IOS upgrade for this BGP capability
> to be supported, which is not viable now.
>
> So, is there any other recommendation other than the Internet VRF with
> different RDs solution?
>
> Regards,
>
> Mohamed Kamal
> Core Network Sr. Engineer
>
> On 8/25/2015 11:37 AM, Jeff Tantsura wrote:
>
>> Hi,
>>
>> In your case I'd recommend using diverse path, due to its simplicity and
>> non-disruptive deployment characteristics.
>> As you know, diverse path requires an additional BGP session per
>> additional (second, next, etc.) path; in most cases not a problem,
>> however mileage might vary.
>>
>> To my memory, in Cisco land it has only been implemented in IOS, not XR;
>> please check.
>>
>> Cheers,
>> Jeff
>>
>> -----Original Message-----
>> From: Diptanshu Singh
>> Date: Monday, August 24, 2015 at 10:53 PM
>> To: Mohamed Kamal
>> Cc: "nanog@nanog.org"
>> Subject: Re: BGP advertise-best-external on RR
>>
>>> Yes. In the case of diverse path, the shadow route reflector will be
>>> the one wherever you enable the commands to trigger diverse-path
>>> computation.
>>>
>>> The good thing with diverse path is that the RR clients don't have to
>>> have any support, but the bad thing is that it can only reflect one
>>> additional best path (the second-best path).
>>>
>>> Sent from my iPhone
>>>
>>> On Aug 24, 2015, at 2:31 PM, Mohamed Kamal wrote:
>>>
>>>> It's only supported on 15.2(4)S and later, not the SRE train. I might
>>>> consider an upgrade.
>>>>
>>>> One more question regarding this: can you configure the RR to be both
>>>> the main and shadow RR?
>>>>
>>>> Mohamed Kamal
>>>> Core Network Sr. Engineer
>>>>
>>>> On 8/24/2015 9:16 PM, Diptanshu Singh wrote:
>>>>
>>>>> BGP Add-Path might be your friend. You can look at diverse-path as
>>>>> well.
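For reference, the diverse-path knobs being weighed in this thread look roughly like this on an IOS route reflector. This is a from-memory sketch with a hypothetical AS number and client address, so verify the exact commands against the diverse-path feature guide for your release:

```
router bgp 65000
 address-family ipv4
  bgp additional-paths select backup
  neighbor 192.0.2.1 advertise diverse-path backup
```

The appeal, as noted above, is that the clients need no new capability; the limitation, also noted above, is that only the second-best path can be reflected this way.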