Thanks, will look into this.

---
Colin Legendre
President and CTO
Coextro - Unlimited. Fast. Reliable.
w: www.coextro.com
e: clegen...@coextro.com
p: 647-693-7686 ext.101
m: 416-560-8502
f: 647-812-4132


On Sat, Nov 27, 2021 at 7:42 AM Tassos <ach...@forthnet.gr> wrote:

> In the past we had packet loss issues due to the SIP's PLIM buffer.
>
> The following docs may provide some guidance:
>
> https://www.cisco.com/c/en/us/support/docs/routers/asr-1000-series-aggregation-services-routers/200674-Throughput-issues-on-ASR1000-Series-rout.html
> https://www.cisco.com/c/en/us/td/docs/interfaces_modules/shared_port_adapters/configuration/ASR1000/asr1000-sip-spa-book/asr-spa-pkt-class.html
>
> --
> Tassos
>
> On 27/11/21 02:11, Fiona Weber via NANOG wrote:
>
> Hi,
>
> we see similar problems on an ASR1006-X with ESP100 and MIP100. At about
> ~45 Gbit/s of traffic (on ~30k PPPoE sessions and ~700k CGN sessions) the
> QFP utilization skyrockets from ~45% straight to ~95% :(
> I don't know whether it's the CGN sessions or the traffic/packets causing
> the load increase; the datasheet says it supports something like 10M
> sessions... but maybe not if you really intend to push packets through it?
> We have not seen such spikes with far higher pps but a lower CGN session
> count, when we had DDoS attacks against end customers.
>
> Fiona
>
> On 11/26/21 20:09, Colin Legendre wrote:
>
> Hi,
>
> We have an ASR1006 that has the following cards:
>
> 1 x ESP40
> 1 x SIP40
> 4 x SPA-1x10GE-L-V2
> 1 x 6TGE
> 1 x RP2
>
> We've been having latency and packet loss during peak periods.
>
> We notice all is good until we reach 50% utilization on the output of
>
> 'show platform hardware qfp active datapath utilization summary'
>
> Literally: 47% good... 48% good... 49% and latency to the next hop goes
> from 1 ms to 15-20 ms... 50% and we see 1-2% packet loss and 30-40 ms
> latency... 53% and we see 60-70 ms latency and 8-10% packet loss.
>
> Is this expected... that the ESP40 can only really push 20G and then
> starts to have performance issues?
>
> ---
> Colin Legendre
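
For reference, here is a minimal sketch (not from the thread itself) of how one might parse the "Processing: Load (pct)" line of 'show platform hardware qfp active datapath utilization summary' and flag load levels approaching the range where the symptoms above reportedly begin (~45-50%). The abridged sample output, its column layout, and the WARN_PCT threshold are illustrative assumptions and may differ by platform or IOS-XE release; in practice the text would come from SSH or SNMP polling rather than a hard-coded string.

#!/usr/bin/env python3
# Illustrative sketch only: parse the "Processing: Load (pct)" line of
# "show platform hardware qfp active datapath utilization summary" and
# warn before QFP load reaches the range where trouble reportedly starts.
# Sample output, column layout, and threshold are assumptions.

import re

WARN_PCT = 45  # assumed headroom threshold; symptoms reportedly start ~47-50%

# Assumed/abridged sample of the command output, for demonstration only.
SAMPLE_OUTPUT = """\
CPP 0: Subdev 0               5 secs        1 min        5 min       60 min
  Input:  Total (pps)        4520000      4410000      4300000      3900000
  Output: Total (pps)        4480000      4390000      4280000      3880000
  Processing: Load (pct)          48           47           45           41
"""


def parse_qfp_load(show_output: str) -> dict:
    """Return the 'Processing: Load (pct)' values keyed by interval."""
    match = re.search(
        r"Processing:\s+Load\s+\(pct\)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)",
        show_output,
    )
    if not match:
        raise ValueError("could not find 'Processing: Load (pct)' line")
    return dict(zip(("5s", "1m", "5m", "60m"), map(int, match.groups())))


def main() -> None:
    # In practice show_output would come from SSH/SNMP polling, not a string.
    for interval, pct in parse_qfp_load(SAMPLE_OUTPUT).items():
        flag = "WARN" if pct >= WARN_PCT else "ok"
        print(f"QFP load {interval:>3}: {pct:3d}%  [{flag}]")


if __name__ == "__main__":
    main()

Polled on an interval and graphed next to next-hop latency, this would make the 47-50% knee described above easy to spot before customers notice.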