Re: Cisco 7606 RSP720 - no SVI bit/packets counters

2010-05-26 Thread Rodney Dunn

Dean,

Probably good to move this over to cisco-...@puck.nether.net and answer 
the questions below over there.


What code is it?

Looks like a bug where the IDB counters are not being bumped on the SVI.

Is it L3 switching traffic through the SVI or L2 (intra vlan traffic 
between L2 ports)?


Rodney



On 5/26/10 4:17 AM, Dean Belev wrote:

Hi all,

Another strange Cisco behavior - or maybe an unknown one.
We created an SVI interface with a lot of traffic passing through - nothing
suspicious.
Until ...

7606#sh int vlan XXX
Vlan537 is up, line protocol is up
Hardware is EtherSVI, address is MAC (bia 001c.b0b7.6400)
Description: 0449-070C001#NetLan_Int
Internet address is IP/30
MTU 1500 bytes, BW 100 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 0/255
Encapsulation ARPA, loopback not set
Keepalive not supported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 00:08:18
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
*30 second input rate 0 bits/sec, 0 packets/sec
30 second output rate 0 bits/sec, 0 packets/sec*
L2 Switched: ucast: 25025 pkt, 2426613 bytes - mcast: 0 pkt, 0 bytes
L3 in Switched: ucast: 3597038 pkt, 3755053095 bytes - mcast: 0 pkt, 0
bytes mcast
L3 out Switched: ucast: 2000511 pkt, 602156179 bytes mcast: 0 pkt, 0 bytes
3678505 packets input, 3803335802 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
1899084 packets output, 568639522 bytes, 0 underruns
0 output errors, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out

7606#sh int vlan XXX summary

*: interface is up
IHQ: pkts in input hold queue IQD: pkts dropped from input queue
OHQ: pkts in output hold queue OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec) RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec) TXPS: tx rate (pkts/sec)
TRTL: throttle count

Interface IHQ IQD OHQ OQD RXBS RXPS TXBS TXPS TRTL


*VlanXXX 0 0 0 0 *0 0 0 0 * 0
###

All counters except RXBS/RXPS/TXBS/TXPS show that there is traffic (~400
Mbps).
It is also visible via MIB values, but the SVI bits/packets counters do not
show it.
There are 146 SVIs configured there (we have another Cisco 7609 RSP720
with 393 SVIs and no such problem).
We deleted around 10 interfaces trying to release resources - no result;
deleting and re-creating the interface made no difference either.

Is there some limit I do not know about? I tried to find any documented
limit on the number of SVIs - no result.
Any guesses?

Thank you in advance!

Best~

Dean Belev
Network Management Team
Neterra Ltd.
Sofia, Bulgaria
Phone: +359 2 974 33 11
Fax: +359 2 975 34 36
Mobile: +359 886 663 123
http://www.neterra.net/




Re: Cisco ASR

2010-05-26 Thread Rodney Dunn

Sherwin,

Let's move this specific crash/code question over to 
cisco-...@puck.nether.net.


Try the 12.2(33)XNF1 release.

If you would like to try to find the matching bug for what you are
seeing before you upgrade, email me offline with the crashinfo file and
the full logs and I'll get someone to take a look at it with you.


Thanks,
Rodney


On 5/26/10 4:10 AM, Sherwin Ang wrote:

using an ASR1006 here; had 2 automatic reboots last Friday, which is not a
good sign.

System image file is
"bootflash:/asr1000rp1-adventerprisek9.02.04.02.122-33.XND2.bin"
Last reload reason: Critical software exception, check
bootflash:crashinfo_RP_01_00_20100521-080244-XXX

the last thing i always see before the boom:

May 21 07:27:11.752 XXX: %BGP-6-BIGCHUNK: Big chunk pool request (252)
for community. Replenishing with malloc

i am starting to feel the ASR1000 series' software is not yet ready for
primetime, but there is newer software available; i will try that first,
and if it still fails i'll cancel all ASR1000 orders.




On Tue, May 25, 2010 at 8:05 AM, Elijah Savage III
  wrote:

On 5/24/10 4:00 PM, "Thomas Magill"  wrote:


Anyone using ASRs?  We are demoing one to possibly upgrade our 7206s.
We are seeing what looks like a memory leak on the RP.  Cisco is looking
at it and says they haven't seen it before.  I am wondering if anyone
else has run across this.  With the default 2G of memory the RP only had
about 1% free memory, and the router was rebooting every 5 days or so
when the RP ran out.  We upgraded and now have about 60% free on the RP,
but I still see the used memory incrementing at a pretty steady rate.
We are running IOS-XE 12.2(33)XNF.



The router is currently not even routing traffic, just acting as a BGP
peer so it has one set of full tables.  It seems to be a process on the
Linux OS side that has the leak as the IOS memory commands show
everything staying pretty static.



Thomas Magill
Network Engineer

Office: (858) 909-3777

Cell: (858) 869-9685
mailto:tmag...@providecommerce.com


provide-commerce
4840 Eastgate Mall

San Diego, CA  92121



ProFlowers| redENVELOPE
| Cherry Moon Farms
| Shari's Berries



I am using a few 1002's and I am not seeing that issue. I will get you the
IOS train later.










Re: PPP multilink help

2009-05-11 Thread Rodney Dunn
On Mon, May 11, 2009 at 10:37:25AM -0400, Andrey Gordon wrote:
> Hey folks, I'm sure to you it's peanuts, but I'm a bit puzzled (most likely
> because of the lack of knowledge, I bet).
> 
> I'm buying an IP backbone from VNZ (presumably MPLS). I get a MLPPP hand off
> on all sites, so I don't do the actual labeling and switching, so I guess
> for practical purposes what I'm trying to say is that I have no physical
> control over the other side of my MLPPP links.
> 
> When I transfer a large file over FTP (or CIFS, or anything else), I'd
> expect it to max out either one or both T1,

Most MLPPP implementations don't hash the flows at the IP layer to an
individual MLPPP member link. The bundle is a virtual L3 interface and
the packets themselves are distributed over the member links. Some people
reference it as a "load balancing" scenario vs. "load sharing", as the
traffic is given to the link that isn't currently "busy".
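That per-packet distribution can be sketched as a toy model (purely illustrative, not Cisco's actual algorithm; the dictionary fields are made up, and the link names just echo the serial interfaces in the config below):

```python
# Toy model of MLPPP per-packet distribution: each packet goes to whichever
# member link is currently least "busy", so a single flow's packets
# interleave across both T1s instead of sticking to one link.

def pick_member_link(links):
    # "Load sharing": choose the link with the smallest backlog.
    return min(links, key=lambda l: l["queued_bytes"])

def send(links, packet_len):
    link = pick_member_link(links)
    link["queued_bytes"] += packet_len
    return link["name"]

links = [{"name": "Serial0/0/0:1", "queued_bytes": 0},
         {"name": "Serial0/0/1:1", "queued_bytes": 0}]

# Four equal-size packets from one flow alternate across the member links:
order = [send(links, 1500) for _ in range(4)]
```

Contrast this with per-flow hashing, where every packet of that one FTP transfer would land on the same T1 and cap out at a single link's rate.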

  but instead utilization on the
> T1s is hovering at 70% on both and sometimes MLPPP link utilization even
> drops below 50%. What am I not getting here?

If you have Multilink fragmentation disabled it sends a packet down each
path. It could be a reordering delay causing just enough variance in
the packet stream that the application throttles back. If you have a bunch
of individual streams going you would probably see higher throughput.
Remember there is additional overhead for the MLPPP.
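As a rough back-of-the-envelope check on that overhead (the 10-byte per-packet figure below is an assumption covering PPP plus the multilink sequencing header, not an exact number; the real cost depends on negotiated options):

```python
# Rough effective-throughput arithmetic for a 2xT1 MLPPP bundle.

T1_BPS = 1_536_000        # usable rate of one T1 (24 x 64 kbps timeslots)
BUNDLE_BPS = 2 * T1_BPS
PKT_BYTES = 1500          # IP packet size
OVERHEAD_BYTES = 10       # assumption: PPP + MLPPP header per packet

efficiency = PKT_BYTES / (PKT_BYTES + OVERHEAD_BYTES)
effective_bps = BUNDLE_BPS * efficiency
# The header cost alone is small (under 1% for 1500-byte packets); a
# single-stream throughput stuck near 70% points more at reordering delay
# and sender backoff than at the per-packet overhead itself.
```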

Rodney


> 
> Tx,
> Andrey
> 
> Below is a snip of my config.
> 
> controller T1 0/0/0
>  cablelength long 0db
>  channel-group 1 timeslots 1-24
> !
> controller T1 0/0/1
>  cablelength long 0db
>  channel-group 1 timeslots 1-24
> !
> ip nbar custom rdesktop tcp 3389
> ip cef
> !
> class-map match-any VoIP
>  match  dscp ef
> class-map match-any interactive
>  match protocol rdesktop
>  match protocol telnet
>  match protocol ssh
> !
> policy-map QWAS
>  class VoIP
> priority 100
>  class interactive
> bandwidth 500
>  class class-default
> fair-queue 4096
> !
> interface Multilink1
>  description Verizon Business MPLS Circuit
>  ip address x.x.x.150 255.255.255.252
>  ip flow ingress
>  ip nat inside
>  ip virtual-reassembly
>  load-interval 30
>  no peer neighbor-route
>  ppp chap hostname R1
>  ppp multilink
>  ppp multilink links minimum 1
>  ppp multilink group 1
>  ppp multilink fragment disable
>  service-policy output QWAS
> !
> interface Serial0/0/0:1
>  no ip address
>  ip flow ingress
>  encapsulation ppp
>  load-interval 30
>  fair-queue 4096 256 0
>  ppp chap hostname R1
>  ppp multilink
>  ppp multilink group 1
> !
> interface Serial0/0/1:1
>  no ip address
>  ip flow ingress
>  encapsulation ppp
>  load-interval 30
>  fair-queue 4096 256 0
>  ppp chap hostname R1
>  ppp multilink
>  ppp multilink group 1
> 
> 
> 
> 
> -
> Andrey Gordon [andrey.gor...@gmail.com]



Re: PPP multilink help

2009-05-11 Thread Rodney Dunn
It could very well be a microburst in the flow creating congestion,
as seen in the default class:

>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 252140

>   30 second output rate 1795000 bits/sec, 243 packets/sec
> 
> Class-map: class-default (match-any)
>   27511 packets, 120951145536 bytes
>   30 second offered rate 1494000 bps, drop rate 0 bps
>   Match: any
>   Queueing
>   queue limit 64 packets
>   (queue depth/total drops/no-buffer drops/flowdrops) 0/251092/0/251092
>   (pkts output/bytes output) 276085337/122442704318
>   Fair-queue: per-flow queue limit 16

Those drops map mostly to the default class. I don't recall whether the
per-flow queue limit kicks in without congestion or not.
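A tiny sketch of how a per-flow queue limit can produce flowdrops during a microburst even when the aggregate queue has room (toy model; the limit of 16 matches the "per-flow queue limit 16" line in the output above, everything else is made up):

```python
# Toy model: a single flow bursts faster than the link drains. Once that
# flow's queue depth reaches the per-flow limit, further packets count as
# flowdrops, mirroring the (queue depth/total drops/no-buffer drops/
# flowdrops) counters in the class-map output.

PER_FLOW_LIMIT = 16  # 'Fair-queue: per-flow queue limit 16'

def burst_into_flow_queue(n_packets, limit=PER_FLOW_LIMIT):
    queued = flowdrops = 0
    for _ in range(n_packets):     # burst arrives with no drain in between
        if queued < limit:
            queued += 1
        else:
            flowdrops += 1
    return queued, flowdrops

queued, dropped = burst_into_flow_queue(40)  # a 40-packet microburst
```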

You could try a few things:

a) remove WFQ in the default class
b) add a BW statement to it to allocate a dedicated amount
c) implement WRED in the class

to see if one of those improves it.

btw, the overhead I was referring to was the additional MLPPP overhead
added to each packet, which reduces effective throughput.


Rodney





Re: anyone else seeing very long AS paths?

2009-02-17 Thread Rodney Dunn
Ivan,

It is confusing but from what I have tested you have it correct.

The confusing part comes from multiple issues.

a) The documentation about the default maxas limit being 75 appears to be
   incorrect. I'll get that fixed.

b) Prior to CSCee30718 there was a hard limit of 255. After that fix
   AS sets of more than 255 should work.

c) CSCeh13489 implemented the maxas command to mark such an update as
   invalid and not send it.


There does appear to be an issue when you cross the 255 boundary
and the next hop router sends a notification back.

I've got it recreated in the lab and we are working to clearly understand
why that is. I'll post an update once we have more.

The way to prevent it is for the upstream device that crosses the 255
boundary on sending to use the maxas-limit command to keep the path length
under 255.

It doesn't work on the device that receives the update with the AS path
larger than 255.

Rodney

On Tue, Feb 17, 2009 at 08:58:48PM +0100, Ivan Pepelnjak wrote:
> > We were dropping ALL prefixes and the eBGP session was still 
> > resetting. 
> 
> Upstream or downstream?
> 
> > 1) "bgp maxas-limit 75" had no effect mitigating this problem 
> > on the IOS we were using. That is: it was previously verified 
> > to be working just fine to drop paths longer than 75, but 
> > once we started receiving paths >
> > 255 then BGP started resetting.
> 
> I was able to receive BGP paths longer than 255 on IOS release 12.2SRC. The
> paths were generated by Quagga BGP daemon.
> 
> 12.2SRC causes the downstream session to break when the installed AS-path
> length is close to 255 and you use downstream AS-path prepending.
> 
> In your case, I'm assuming you were hit with an older bug (probably at the
> 128 AS-path length boundary). It would be very hard to generate just the
> right AS-path length to unintentionally break your upstream EBGP session (as
> I said before, it's a nice targeted attack if you know your downstream
> topology).
> 
> If your IOS is vulnerable to the older bugs that break inbound processing of
> AS paths longer than 128, there's nothing you can do on your end. The
> internal BGP checks reject the inbound update before the inbound filters (or
> bgp maxas-limit) can touch it and reset the upstream BGP session.
> 
> Hope this helps
> Ivan
> 
> Disclaimer: as I don't have internal access to Cisco, all of the above is a
> result of lab tests.
> 



Re: anyone else seeing very long AS paths?

2009-02-17 Thread Rodney Dunn
If you want to take this offline send it unicast or we could
move it to cisco-nsp.

What scenarios are you seeing that appear broken other than
when a notification is sent when a > 255 hop update is received?
That's the one I'm working on right now.

Rodney

On Tue, Feb 17, 2009 at 05:31:49PM -0500, German Martinez wrote:
> On Tue Feb 17, 2009, Rodney Dunn wrote:
> 
> Hello Rodney,
> It will be great if you can share with us your findings.  It seems
> like we are hitting different bugs in different platforms.
> 
> Thanks
> German
> 
> > Ivan,
> > 
> > It is confusing but from what I have tested you have it correct.
> > 
> > The confusing part comes from multiple issues.
> > 
> > a) The documentation about the default maxas limit being 75 appears to be
> >incorrect. I'll get that fixed.
> > 
> > b) Prior to CSCee30718 there was a hard limit of 255. After that fix
> >AS sets of more than 255 should work.
> > 
> > c) CSCeh13489 implemented the maxas command to mark it as invalid and
> >not send.
> > 
> > 
> > There does appear to be an issue when you cross the 255 boundary
> > and the next hop router sends a notification back.
> > 
> > I've got it recreated in the lab and we are working to clearly understand
> > why that is. I'll post an update once we have more.
> > 
> > The way to prevent it is the upstream device that crosses the 255 boundary
> > on sending needs to use the maxas limit command to keep it less than 255.
> > 
> > It doesn't work on the device that receives the update with the AS path
> > larger than 255.
> > 
> > Rodney
> > 
> > On Tue, Feb 17, 2009 at 08:58:48PM +0100, Ivan Pepelnjak wrote:
> > > > We were dropping ALL prefixes and the eBGP session was still 
> > > > resetting. 
> > > 
> > > Upstream or downstream?
> > > 
> > > > 1) "bgp maxas-limit 75" had no effect mitigating this problem 
> > > > on the IOS we were using. That is: it was previously verified 
> > > > to be working just fine to drop paths longer than 75, but 
> > > > once we started receiving paths >
> > > > 255 then BGP started resetting.
> > > 
> > > I was able to receive BGP paths longer than 255 on IOS release 12.2SRC. 
> > > The
> > > paths were generated by Quagga BGP daemon.
> > > 
> > > 12.2SRC causes the downstream session to break when the installed AS-path
> > > length is close to 255 and you use downstream AS-path prepending.
> > > 
> > > In your case, I'm assuming you were hit with an older bug (probably at the
> > > 128 AS-path length boundary). It would be very hard to generate just the
> > > right AS-path length to unintentionally break your upstream EBGP session 
> > > (as
> > > I said before, it's a nice targeted attack if you know your downstream
> > > topology).
> > > 
> > > If your IOS is vulnerable to the older bugs that break inbound processing 
> > > of
> > > AS paths longer than 128, there's nothing you can do on your end. The
> > > internal BGP checks reject the inbound update before the inbound filters 
> > > (or
> > > bgp maxas-limit) can touch it and reset the upstream BGP session.
> > > 
> > > Hope this helps
> > > Ivan
> > > 
> > > Disclaimer: as I don't have internal access to Cisco, all of the above is 
> > > a
> > > result of lab tests.
> > > 





Re: anyone else seeing very long AS paths?

2009-02-19 Thread Rodney Dunn
We are working on a document for Cisco.com, but in the interim
here is the bug that will fix the issue of a Cisco IOS device
sending an incorrectly formatted BGP update when, as a result
of prepending, the AS path goes over 255 hops.

Note: The Title and Release-note on bug toolkit may be a
bit different as I just updated it to be more accurate.

Of all the scenarios I've looked at (thanks to those that responded
offline) there wasn't a condition found where this could happen
without AS path prepending being used.

Please respond offline or let's move the discussion over to
cisco-nsp at this point.

CSCsx73770
Invalid BGP formatted update causes peer reset with AS prepending

Symptom:
 
 A Cisco IOS device that receives a BGP update message and as a result of AS
prepending needs to send an update downstream that would have over 255 AS hops
will send an invalidly formatted update. When received by a downstream BGP
speaker, this update triggers a NOTIFICATION back to the sender, which results
in the BGP session being reset.
 
 Conditions:
 
 This problem is seen when a Cisco IOS device receives a BGP update and
 due to a combination of either inbound, outbound, or both AS prepending it
needs to send an update downstream that has more than 255 AS hops.
 
 Workaround:
 
 The workaround is to implement  bgp maxas-limit X  on the
device that after prepending would need to send an update with over 255 AS
hops. Since IOS limits the inbound prepending value to 10, the most that
could be added is 11 AS hops (10 on ingress, 10 on egress, and 1 for normal
eBGP AS hop addition). Therefore, a conservative value to configure would be
200 to prevent this condition.
 
 

Full support of Section 5.1.2 of RFC4271 is being tracked under
CSCsx75937
Add BGP support of AS paths longer than 255 per Section 5.1.2 of RFC4271

Thanks to those that worked offline with us to verify the field results
reported.

Rodney




On Tue, Feb 17, 2009 at 05:27:01PM -0500, Rodney Dunn wrote:
> If you want to take this offline send it unicast or we could
> move it to cisco-nsp.
> 
> What scenarios are you seeing that appear broken other than
> when a notification is sent when a > 255 hop update is received?
> That's the one I'm working on right now.
> 
> Rodney
> 
> On Tue, Feb 17, 2009 at 05:31:49PM -0500, German Martinez wrote:
> > On Tue Feb 17, 2009, Rodney Dunn wrote:
> > 
> > Hello Rodney,
> > It will be great if you can share with us your findings.  It seems
> > like we are hitting different bugs in different platforms.
> > 
> > Thanks
> > German
> > 
> > > Ivan,
> > > 
> > > It is confusing but from what I have tested you have it correct.
> > > 
> > > The confusing part comes from multiple issues.
> > > 
> > > a) The documentation about the default maxas limit being 75 appears to be
> > >incorrect. I'll get that fixed.
> > > 
> > > b) Prior to CSCee30718 there was a hard limit of 255. After that fix
> > >AS sets of more than 255 should work.
> > > 
> > > c) CSCeh13489 implemented the maxas command to mark it as invalid and
> > >not send.
> > > 
> > > 
> > > There does appear to be an issue when you cross the 255 boundary
> > > and the next hop router sends a notification back.
> > > 
> > > I've got it recreated in the lab and we are working to clearly understand
> > > why that is. I'll post an update once we have more.
> > > 
> > > The way to prevent it is the upstream device that crosses the 255 boundary
> > > on sending needs to use the maxas limit command to keep it less than 255.
> > > 
> > > It doesn't work on the device that receives the update with the AS path
> > > larger than 255.
> > > 
> > > Rodney
> > > 
> > > On Tue, Feb 17, 2009 at 08:58:48PM +0100, Ivan Pepelnjak wrote:
> > > > > We were dropping ALL prefixes and the eBGP session was still 
> > > > > resetting. 
> > > > 
> > > > Upstream or downstream?
> > > > 
> > > > > 1) "bgp maxas-limit 75" had no effect mitigating this problem 
> > > > > on the IOS we were using. That is: it was previously verified 
> > > > > to be working just fine to drop paths longer than 75, but 
> > > > > once we started receiving paths >
> > > > > 255 then BGP started resetting.
> > > > 
> > > > I was able to receive BGP paths longer than 255 on IOS release 12.2SRC. 
> > > > The
> > > > paths were generated by Quagga BGP daemon.
> > > > 
> > > > 12.2SRC causes the downst

Re: anyone else seeing very long AS paths?

2009-02-20 Thread Rodney Dunn
There was a typo in one part, so I am resending to make it accurate.

>  The workaround is to implement  bgp maxas-limit X  on the
> device that after prepending would need to send an update with over 255 AS
> hops. Since IOS limits the inbound prepending value to 10 the most that
> could be added is 11 AS hops (10 on ingress, 10 on egress, and 1 for normal
> eBGP AS hop addition). Therefore, a conservative value to configure would be
> 200 to prevent this condition.

It should be 21 hops (10 in a route-map on ingress, 10 in a route-map on
egress, and 1 normal eBGP AS hop addition).
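A quick back-of-the-envelope check of the headroom behind the suggested value of 200 (simple arithmetic, not an official Cisco formula):

```python
# With up to 21 AS hops addable after receipt (10 ingress prepend +
# 10 egress prepend + 1 normal eBGP hop), a received path must stay below
# 255 - 21 for the outgoing update to remain safely encodable.

MAX_ENCODABLE_HOPS = 255
MAX_ADDED_HOPS = 10 + 10 + 1     # ingress prepend + egress prepend + eBGP

hard_ceiling = MAX_ENCODABLE_HOPS - MAX_ADDED_HOPS  # worst-case bound
SUGGESTED_MAXAS_LIMIT = 200                          # leaves extra margin
```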

Thanks to John Stuppi for pointing it out.

Rodney



On Thu, Feb 19, 2009 at 03:15:02PM -0500, Rodney Dunn wrote:
> We are working on a document for Cisco.com but in the interim
> here is the bug that will fix the issue of a Cisco IOS device
> sending an incorrectly formatted BGP update when as a result
> of prepending it goes over 255 AS hops.
> 
> Note: The Title and Release-note on bug toolkit may be a
> bit different as I just updated it to be more accurate.
> 
> Of all the scenarios I've looked at (thanks to those that responded
> offline) there wasn't a condition found where this could happen
> without AS path prepending being used.
> 
> Please respond offline or let's move the discussion over to
> cisco-nsp at this point.
> 
> CSCsx73770
> Invalid BGP formatted update causes peer reset with AS prepending
> 
> Symptom:
>  
>  A Cisco IOS device that receives a BGP update message and as a result of AS
> prepending needs to send an update downstream that would have over 255 AS hops
> will send an invalid formatted update. This update when received by a
> downstream BGP speaker triggers a NOTIFICATION back to the sender which 
> results
> in the BGP session being reset.
>  
>  Conditions:
>  
>  This problem is seen when a Cisco IOS device receives a BGP update and
>  due to a combination of either inbound, outbound, or both AS prepending it
> needs to send an update downstream that has more than 255 AS hops.
>  
>  Workaround:
>  
>  The workaround is to implement  bgp maxas-limit X  on the
> device that after prepending would need to send an update with over 255 AS
> hops. Since IOS limits the inbound prepending value to 10 the most that
> could be added is 11 AS hops (10 on ingress, 10 on egress, and 1 for normal
> eBGP AS hop addition). Therefore, a conservative value to configure would be
> 200 to prevent this condition.
>  
>  
> 
> Full support of Section 5.1.2 of RFC4271 is being tracked under
> CSCsx75937
> Add BGP support of AS paths longer than 255 per Section 5.1.2 of RFC4271
> 
> Thanks to those that worked offline with us to verify the field results
> reported.
> 
> Rodney
> 
> 
> 
> 
> On Tue, Feb 17, 2009 at 05:27:01PM -0500, Rodney Dunn wrote:
> > If you want to take this offline send it unicast or we could
> > move it to cisco-nsp.
> > 
> > What scenarios are you seeing that appear broken other than
> > when a notification is sent when a > 255 hop update is received?
> > That's the one I'm working on right now.
> > 
> > Rodney
> > 
> > On Tue, Feb 17, 2009 at 05:31:49PM -0500, German Martinez wrote:
> > > On Tue Feb 17, 2009, Rodney Dunn wrote:
> > > 
> > > Hello Rodney,
> > > It will be great if you can share with us your findings.  It seems
> > > like we are hitting different bugs in different platforms.
> > > 
> > > Thanks
> > > German
> > > 
> > > > Ivan,
> > > > 
> > > > It is confusing but from what I have tested you have it correct.
> > > > 
> > > > The confusing part comes from multiple issues.
> > > > 
> > > > a) The documentation about the default maxas limit being 75 appears to 
> > > > be
> > > >incorrect. I'll get that fixed.
> > > > 
> > > > b) Prior to CSCee30718 there was a hard limit of 255. After that fix
> > > >AS sets of more than 255 should work.
> > > > 
> > > > c) CSCeh13489 implemented the maxas command to mark it as invalid and
> > > >not send.
> > > > 
> > > > 
> > > > There does appear to be an issue when you cross the 255 boundary
> > > > and the next hop router sends a notification back.
> > > > 
> > > > I've got it recreated in the lab and we are working to clearly 
> > > > understand
> > > > why that is. I'll post an update once we have more.
> > > > 
> > > > The way to prevent it is the upstream device that crosses th