IPv6 explicit BGP group configs

2012-02-08 Thread keith tokash

Hi,

I'm prepping an environment for v6 and I'm wondering what benefit, if any,
there is to splitting v4 and v6 into separate groups. We're running Junipers
and things are fairly neat and ordered; we have multiple links to a few
providers in many sites, so we group them and apply the policies at the group
level. We could stick the new v6 neighbors into the same groups and apply the
policies at the neighbor level, or create new groups (e.g. Level3 and
Level3v6).
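
For concreteness, the split-group layout would look something like this
(group and policy names are invented and the neighbor addresses are
documentation prefixes; only the AS number is Level3's real one). The
structural win is that the family and policy statements can stay at the
group level, where a mixed v4/v6 group would need neighbor-level overrides:

    protocols {
        bgp {
            /* v4 sessions and v4 policy stay together */
            group Level3 {
                type external;
                peer-as 3356;
                family inet {
                    unicast;
                }
                import level3-v4-in;
                export level3-v4-out;
                neighbor 192.0.2.1;
            }
            /* parallel group for the v6 sessions */
            group Level3v6 {
                type external;
                peer-as 3356;
                family inet6 {
                    unicast;
                }
                import level3-v6-in;
                export level3-v6-out;
                neighbor 2001:db8::1;
            }
        }
    }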

This might sound a little nit-picky, but I'm concerned that there's a nuance
I'm not thinking of right now, and I don't want to be "that guy" who puts
something in place and is cursed for a decade.

Thanks,
Keith Tokash  

Industry standard bandwidth guarantee?

2014-10-29 Thread keith tokash
Hi *, sorry if this has been answered; I did look.

Is there an industry standard regarding how much bandwidth an inter-carrier
circuit should guarantee? Specifically, I'm thinking of a sub-interface on a
shared physical interface. I've not thought much about it, but if there's a
more generally accepted guideline than "when the customers start leaving /
when you leave," I'm at least 5% ears.

Thanks,
Keith
  

RE: Industry standard bandwidth guarantee?

2014-10-29 Thread keith tokash
I'm sorry, I should have been more specific. I'm referring to the *percentage*
of a circuit's bandwidth. For example, if you order a 20Mb site-to-site circuit
and iperf shows 17Mb: that's 15% off, which sounds hefty, but I'm not sure
what's realistic to expect.

And beyond expectations, I'm wondering if there's a threshold that industry
movers/shakers generally yell at their vendor for going below, and try to get a
refund or move the link to a new port/box.
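
To put rough numbers on that example (assuming plain untagged Ethernet, a
1500-byte MTU, and IPv4/TCP with no options; guesses on my part, not anything
iperf reported):

    # How much of the 20Mb-vs-17Mb gap can framing overhead alone explain?
    ordered, measured = 20.0, 17.0            # Mb/s
    wire_bytes = 1500 + 38                    # MTU + Ethernet framing overhead
    goodput_frac = (1500 - 40) / wire_bytes   # strip IPv4 + TCP headers
    print(f"shortfall:        {1 - measured / ordered:.0%}")     # 15%
    print(f"framing predicts: {goodput_frac:.1%} of line rate")  # ~94.9%

So framing alone explains maybe five points of that 15%; the rest would have
to be TCP behavior, loss, or congestion somewhere.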




> To: ktok...@hotmail.com
> Subject: Re: Industry standard bandwidth guarantee?
> From: valdis.kletni...@vt.edu
> Date: Wed, 29 Oct 2014 19:02:53 -0400
> CC: nanog@nanog.org
> 
> On Wed, 29 Oct 2014 15:24:46 -0700, keith tokash said:
> 
> > Is there an industry standard regarding how much bandwidth an inter-carrier 
> > circuit should guarantee?
> 
> How are you going to come up with a standard that covers both the uplink from
> Billy-Bob's Bait, Fish, Tackle, and Wifi, where a fractional gigabit may be
> plenty, and the size of pipes that got clogged in the recent Netflix network
> neutrality kerfuffle?
> 
> And where your PoPs are (and how many) matters as well - if you have a peering
> agreement with another carrier, and you exchange 35Gbits/sec of traffic, the
> bandwidth at each peer point will depend on whether you peer at one location,
> or 5, or 7, or 15.
> 
  

RE: Industry standard bandwidth guarantee?

2014-10-30 Thread keith tokash
I'm willing to recommend that salespeople advertise the size of the *usable*
tube as well as the tube overall, but I'm fairly sure they won't care. Ben
rightly stated the order of operations: BS quote > disappointment > mea
culpa / level-setting.

If that fails I'll at least make sure no one quotes circuit sizes in terms of 
"movies transferred," or whatever metric is popular at the moment.

From that nice gronkulator page I see a couple of MPLS labels and a dot1q tag
bringing the theoretical limit down to around 94% (non-jumbo frames), which to
my conservatively estimating mind means customers should expect ~90% on a
normal day. That isn't factoring in latency, intermittent loss, or congestion
elsewhere on the tubes, so I'm not sure where this has gotten me. A number has
been specified, to be sure, but one that blows away with a gentle sneeze.
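
A sketch of that arithmetic (my own back-of-the-envelope, assuming a 1500-byte
MTU, standard Ethernet framing, one dot1q tag, two MPLS labels, and plain
IPv4/TCP headers; the gronkulator page may count things differently):

    # Reproducing the ~94% figure: payload efficiency of a tagged,
    # MPLS-encapsulated Ethernet frame at the standard 1500-byte MTU.
    MTU = 1500
    ETH_FRAMING = 7 + 1 + 12 + 14 + 4    # preamble, SFD, IFG, header, FCS
    TAGS = 4 + 2 * 4                     # one dot1q tag + two MPLS labels
    IP_TCP = 20 + 20                     # IPv4 + TCP headers, no options

    wire = MTU + ETH_FRAMING + TAGS      # 1550 bytes on the wire per frame
    print(f"TCP goodput: {(MTU - IP_TCP) / wire:.1%}")          # ~94.2%
    print(f"20Mb circuit: {20 * (MTU - IP_TCP) / wire:.1f}Mb")  # ~18.8Mb

Knock a few points off for real-world loss and latency and ~90% on a normal
day looks about right.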


From: raf...@gav.ufsc.br
Date: Thu, 30 Oct 2014 13:21:41 -0500
Subject: Re: Industry standard bandwidth guarantee?
To: mysi...@gmail.com
CC: bensjob...@gmail.com; ktok...@hotmail.com; nanog@nanog.org

You can't just ignore protocol overhead (or any system's overhead). If an
application requires X bits per second of actual payload, then your system
should be designed properly, taking into account overhead as well as failure
rates, peak utilization hours, and so on. This is valid for networking,
automobile production, and everything else.
On Thu, Oct 30, 2014 at 7:23 AM, Jimmy Hess  wrote:
On Wed, Oct 29, 2014 at 7:04 PM, Ben Sjoberg  wrote:



> That 3Mb difference is probably just packet overhead + congestion

Yes... however, that's also an industry standard of sorts: implying higher
performance than reality, because end users don't care about the datagram
overhead their applications never see; they just want X megabits of real-world
performance. This industry would perhaps be better off if we called a link
that can reliably deliver at best 17 megabits of goodput a "15 megabit goodput
+5 service" instead of calling it a "20 megabit service".

Or at least appended a disclaimer: "Real-world best-case download performance:
approximately 1.8 megabytes per second"; subtracting overhead and quoting that
instead of raw link speeds.

But that's not the industry standard. I believe the industry standard is to
provide the numerically highest performance number possible through best-case
theoretical testing, let the end user experience disappointment, and explain
the misunderstanding later.

End users are also more concerned about their individual download rate on
actual file transfers than about the total averaged aggregate throughput of a
network of 10 users or 10 streams downloading simultaneously, or about
characteristics transport protocols care about, such as fairness.

> control. Goodput on a single TCP flow is always less than link
> bandwidth, regardless of the link.

---
-JH
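
Ben's single-flow point has a standard rule of thumb behind it: the Mathis et
al. approximation, which bounds a single TCP flow's steady-state throughput at
roughly (MSS / RTT) * (1 / sqrt(loss)). A small illustrative sketch; the RTT
and loss numbers here are made up:

    import math

    # Mathis et al. (1997): single-flow TCP throughput is bounded by
    # (MSS / RTT) * (1 / sqrt(p)), where p is the packet loss rate.
    def mathis_ceiling_mbps(mss_bytes=1460, rtt_s=0.040, loss=1e-4):
        return (mss_bytes * 8 / rtt_s) / math.sqrt(loss) / 1e6

    print(f"{mathis_ceiling_mbps():.0f} Mb/s")  # ~29 Mb/s at 40ms RTT, 0.01% loss

Push the RTT or the loss rate up and that ceiling drops quickly, which is one
reason a single iperf stream can undershoot a link that isn't congested at all.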


  

Bandcon

2009-07-09 Thread keith tokash

We've run our circuits pretty hot and not noticed anything that would indicate
a lack of shaping/policing. And while the flame war with the attrition guys
was pretty funny (I'm a sucker for classics), I wouldn't really use that as any
kind of barometer of ... well, anything really. This email may be accurate, but
I'll pretend I'm an engineer for a moment and ask for something to back up the
allegations before I believe them. Even with the internet's stellar reputation
for accuracy via intuition.





---
Keith Tokash, CCIE #21236
Network blah blah blah, Myspace.com







-Original Message-
From: Paul Wall [mailto:pauldotw...@gmail.com]
Sent: Thursday, July 09, 2009 11:50 AM
To: Robin Rodriguez
Cc: nanog@nanog.org
Subject: Re: Bandcon

On Wed, Jul 8, 2009 at 11:52 AM, Robin Rodriguez wrote:
> I don't have any usage experience, but would be very interested from anyone
> who does as well. We have spoken with them about long-haul circuits (with
> small to no commit) and their prices are indeed incredible. The prices we
> heard were for Equinix-to-Equinix circuits (specifically CHI1 & CHI3 to DAL1
> & NJ2); they also quoted us great deals on resold IBX-link to get to IBXs
> that they don't have a physical presence in (they aren't in CHI3, for
> example). I do wonder how they can undercut everyone's price by such a
> margin. Were you seeing great quotes into non-Equinix facilities?



Simple: they're oversubscribing their transport circuits and letting users
fight for bandwidth. Basically what they're doing is buying a 10GE unprotected
wavelength from a carrier, dropping a switch on the ends, and loading up
multiple customer VLANs onto the circuit. There are no bandwidth controls, no
reservations, no traffic engineering, nothing to keep the circuit uncongested,
and these are unprotected waves, so they go down on a regular basis whenever
their carrier does a maintenance.

How they implement multi-point service is even scarier: they just slap all
your locations into one big VLAN and let unknown unicast flooding and MAC
learning sort it out. Most serious customers run screaming; I'm sure you can
find some former customers who can describe the horror in more detail
off-list.

When things break, their support is nothing to write home about. They often
brag that they have a former Level3 engineer on payroll; unfortunately he's
nowhere to be found, and their support people aren't terribly sharp on those
rare occasions when they *do* answer the phone or respond to e-mail. Like
someone else pointed out, multi-day outages aren't at all uncommon, so if you
end up going with Bandcon, make sure you have sufficient redundancy in place.

Since they can't really compete on quality, they compete instead on price.
Their sales force spams and cold-calls every website, ARIN, peeringdb, etc. on
a regular basis, and can't take "no" for an answer. The following exchange
sums it up nicely (warning: foul language):

http://attrition.org/postal/z/034/0931.html

They are currently running a $2.50/meg transit promotion, which makes me
wonder how they're doing on their Level3 and Global Crossing bandwidth commits
and whether or not they're solvent.

Drive Slow,
Paul Wall

Cisco Virtual Port-Channel experience

2009-09-02 Thread keith tokash

If anyone has deployed vPC on the Cisco Nexus 5k/7k platforms in a production
environment, I'd appreciate knowing how you feel about it (off-list, please).
This isn't a hunt for Cisco bashing; we're very interested in deploying this
feature and would like to know what new problems we may be replacing our old
ones with. Hopefully nothing serious. :)

Keith Tokash
