Content providers (e.g., Netflix, Hulu, YouTube) will always try to get their 
content delivered for little to no cost. The low-cost, web-only plan isn't 
sustainable, and the volume of Netflix traffic around the globe is a good 
example; there's a lot of traffic they aren't paying for. The free market 
only works if entities self-police, and as has been expertly stated, there's 
no money in that.

I had an idea; I'm sure it's been said before:

If we actually had solid "Tier 1 vs Tier 2 vs Tier 3" thresholds, and we could 
come up with an agreeable metric, we might be able to minimize the impact of 
bandwidth hogs (sorry Netflix, pointing at you).

So, if you are Tier 1, you are required to have at least 10 peers in 10 
locations, 5 of which must be Tier 1 providers. If you are Tier 2, those 
numbers are halved. Part of the incentive could be the "status" of being a 
Tier 1 provider, but the major benefit would be a reduction in the diameter 
of the Internet. Even done by continent, this could offer enough parallel 
paths to (potentially) help address the cost of doing business.
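
To make those thresholds concrete, here's a rough sketch (in Python) of how 
such a qualification check might look. The numbers are just the hypothetical 
ones above, and I've assumed "halved" applies to all three figures, rounding 
up; none of this reflects any real standard or registry data.

from dataclasses import dataclass


@dataclass
class Peer:
    name: str
    location: str   # e.g. a metro area or continent
    tier: int       # 1, 2, or 3


def qualifies_as_tier(peers: list[Peer], tier: int) -> bool:
    """Tier 1: >= 10 peers in >= 10 distinct locations, 5 of them Tier 1.
    Tier 2: those numbers halved (2.5 rounded up to 3)."""
    if tier == 1:
        min_peers, min_locations, min_tier1 = 10, 10, 5
    elif tier == 2:
        min_peers, min_locations, min_tier1 = 5, 5, 3
    else:
        return True  # no peering requirement assumed for Tier 3 here

    locations = {p.location for p in peers}
    tier1_count = sum(1 for p in peers if p.tier == 1)
    return (len(peers) >= min_peers
            and len(locations) >= min_locations
            and tier1_count >= min_tier1)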

I think we would need to have something similar for content providers. To 
reach Tier 1 status, you are required to have 10 peers in 10 locations, 
which together should cover a set multiple of your total bandwidth (e.g., 
1 Tb/s of peering capacity if you push 500 Gb/s, etc.). In exchange for 
reaching different tiers, providers could receive a price break on the cost 
of Internet circuits.

There would also need to be a middle ground somewhere. Circuits would either 
need to stop being unlimited or have service thresholds. For exceeding a 
threshold, the content provider would be liable for X amount per gigabit of 
bandwidth. This would force content providers to scale their business rather 
than relying on their upstream provider's upstream provider to do so.
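
As a back-of-the-envelope sketch of the content-provider side (the coverage 
multiple and the per-gigabit overage charge), something like the following; 
the 2x multiple and the dollar rate are placeholders I made up, not proposed 
values:

COVERAGE_MULTIPLE = 2.0       # e.g. 1 Tb/s of peering capacity per 500 Gb/s of traffic
OVERAGE_RATE_PER_GBPS = 1.00  # hypothetical $ per Gb/s over the committed threshold


def meets_coverage(total_traffic_gbps: float, peering_capacity_gbps: float) -> bool:
    """Does peering capacity cover the required multiple of total traffic?"""
    return peering_capacity_gbps >= COVERAGE_MULTIPLE * total_traffic_gbps


def overage_charge(actual_gbps: float, committed_gbps: float) -> float:
    """Charge for traffic above the committed circuit threshold, per gigabit."""
    excess = max(0.0, actual_gbps - committed_gbps)
    return excess * OVERAGE_RATE_PER_GBPS


# A provider pushing 500 Gb/s with 1 Tb/s of peering meets the 2x rule;
# 600 Gb/s against a 500 Gb/s commitment pays for 100 Gb/s of excess.
print(meets_coverage(500, 1000))   # True
print(overage_charge(600, 500))    # 100.0 at the $1/Gb/s placeholder rate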

It's far from perfect, but I think something like that could help.

Sincerely,

Brian A. Rettke

-----Original Message-----
From: Robert F Maxwell [mailto:rmaxw...@umd.edu]
Sent: Tuesday, June 07, 2011 7:45 AM
To: Jon Lewis
Cc: bmann...@vacation.karoshi.com; nanog@nanog.org
Subject: Re: Why don't ISPs peer with everyone?

I'd like to foster a discussion here to better understand this, not rile 
anyone up. That said, what I see so far is representative of those who do 
not recall the halcyon days before a rabid profit motive became the driving 
force behind ISPs.

Peering in its original sense is/was free. It was a swap of traffic. That 
profit motive has created the phrase "settlement-free peering" to refer to 
the original definition, so it seems like the free swap of traffic is the 
aberration. The big ISPs used to seek to balance content hosting and 
customer load to avoid having to pay for any sort of transit. AOL was known 
to acquire companies which had huge downstream traffic for this purpose.

Now we see ISPs waging an economic war with content providers, wanting to 
find a way to charge, say, Google for the privilege of passing its YouTube 
content along to the ISP's subscribers. This is the result of letting 
non-technical, profit-driven managers run the show rather than the network 
engineers, who are usually eager to cooperate and actually understand how 
this stuff works.

The problem here is that the closer you are to the end user, the harder 
you're getting screwed, and not in a good way. The very large ISPs are doing 
real peering with each other while charging smaller, end-user-focused ISPs 
high transit rates, so those smaller ISPs can't possibly compete on price 
with the big ISPs' own inferior, customer-service-impaired end-user 
offerings. The US government has 
declined to enforce any sort of rule which might require the huge ISPs to grant 
wholesale-type access to their physical networks (for better or worse depending 
on your POV) or examine any of this cartel-type behavior under the light of 
monopoly rules.

So please, short of socialism, and in light of the rampant legislation-for-sale 
culture in our government (how many FCC commissioners get jobs with huge 
ISPs?), how do we fix this?

Please note: I'm not advocating socialism. I might advocate regulation a la 
public utilities. There is universal agreement that the Internet is "critical 
infrastructure." Deregulating other utilities hasn't been uniformly 
successful, especially when measured from the consumers' point of view. 
Thoughts?

Rob

Sent from my iPad, so I can't have a fun sig.

On Jun 7, 2011, at 10:00 AM, "Jon Lewis" <jle...@lewis.org> wrote:

> On Tue, 7 Jun 2011 bmann...@vacation.karoshi.com wrote:
>
>> in this context, anyone who is a BGP speaker is an ISP.
>
> Peering costs money.  The transit bandwidth saved by peering with another
> network may not be sufficient to cover the cost of installing and
> maintaining whatever connections are necessary to peer.  Then there's the
> big networks who really don't want to peer with anyone other than
> similarly sized big networks...everyone else should be their transit
> customer.
>
> I manage a network that's primarily a hosting network.  There's a similar
> hosting network at the other end of the building.  We both have multiple
> gigs of transit.  We don't peer with each other.  Perhaps we should,
> because the cost of the connection would be negligible (I think we already
> have multiple fiber pairs between our suites), but looking at my sampled
> netflow data, I'm guessing we average about 100kbit/s or less traffic in
> each direction between us.  At that low a level, is it even worth the time
> and trouble to coordinate setting up a peering connection, much less
> tying up a gigE port at each end?
>
> Anyone from hostdime reading this?  :)
> If so, what are your thoughts?
>
> ----------------------------------------------------------------------
>  Jon Lewis, MCP :)           |  I route
>  Senior Network Engineer     |  therefore you are
>  Atlantic Net                |
> _________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
>

