Thanks Adam, but I was only talking about the RSVP-TE signaled bandwidth reservation (not the actual QoS, which I think is what you are referring to). A guy on the NANOG mailing list answered it for me. This is what I was missing on the headend TE tunnel interface config:
interface tunnel-te1
 signalled-bandwidth 5000

...that one simple command (the value is in kbps, so 5000 signals 5 Mbps). Now all LSRs in the RSVP-TE path allocate that bandwidth, as seen with these commands.

On the headend:

RP/0/0/CPU0:r20#sh rsvp reservation detail | in ate
 Rate: 0 bits/sec. Burst: 1K bytes. Peak: 0 bits/sec.
 State expires in 0.000 sec.
 Rate: 5000000 bits/sec. Burst: 1K bytes. Peak: 5M bits/sec.
 State expires in 358.630 sec.

RP/0/0/CPU0:r20#sh rsvp int
*: RDM: Default I/F B/W % : 75% [default] (max resv/bc0), 0% [default] (bc1)

Interface                 MaxBW (bps)  MaxFlow (bps) Allocated (bps)      MaxSub (bps)
------------------------- ------------ ------------- -------------------- -------------
GigabitEthernet0/0/0/0    750M*        750M          0 ( 0%)              0*
GigabitEthernet0/0/0/1    750M*        750M          5M ( 0%)             0*

On a transit LSR in the core:

RP/0/0/CPU0:r24#sh rsvp session detail | in ate
 Tspec: avg rate=0, burst=1K, peak rate=0
 Fspec: avg rate=0, burst=1K, peak rate=0
 Tspec: avg rate=5M, burst=1K, peak rate=5M
 Fspec: avg rate=5M, burst=1K, peak rate=5M

RP/0/0/CPU0:r24#sh rsvp int
*: RDM: Default I/F B/W % : 75% [default] (max resv/bc0), 0% [default] (bc1)

Interface                 MaxBW (bps)  MaxFlow (bps) Allocated (bps)      MaxSub (bps)
------------------------- ------------ ------------- -------------------- -------------
GigabitEthernet0/0/0/0    750M*        750M          0 ( 0%)              0*
GigabitEthernet0/0/0/1    750M*        750M          5M ( 0%)             0*

-Aaron
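For context, the signalled-bandwidth one-liner normally sits inside a fuller tunnel definition. Here is a minimal headend sketch; the tunnel number, the 10.0.0.24 tail-end router ID, and the interface name are all hypothetical placeholders, not taken from Aaron's lab:

! Headend sketch -- destination and interface names are placeholders
interface tunnel-te1
 ipv4 unnumbered Loopback0
 ! 10.0.0.24 is a hypothetical tail-end TE router ID
 destination 10.0.0.24
 ! signalled-bandwidth is in kbps; 5000 signals 5 Mbps
 signalled-bandwidth 5000
 path-option 10 dynamic
!
mpls traffic-eng
 ! enable TE on the core-facing link
 interface GigabitEthernet0/0/0/1
!
rsvp
 ! enable RSVP signaling on the same link
 interface GigabitEthernet0/0/0/1
!

With a dynamic path-option, CSPF should only consider links that still have 5 Mbps unreserved in bc0, which is the point of signaling the bandwidth in the first place.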

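One related knob, shown here only as a sketch with an example value: the "75% [default]" in the sh rsvp int output is the RDM bc0 reservable ceiling (hence 750M on a GigE), and it can be overridden per interface. The bandwidth argument here is, as far as I know, also in kbps:

rsvp
 interface GigabitEthernet0/0/0/1
  ! 900000 kbps = 900 Mbps reservable, instead of the 75% default
  bandwidth 900000
 !
!

After that, the MaxBW column in sh rsvp interface should show the configured ceiling rather than the asterisk-marked default.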