Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-23 Thread michael.dillon

> > If the content senders do not want this dipping and levelling 
> > off, then they will have to foot the bill for the network capacity.
> 
> That's kind of the funniest thing I've seen today, it sounds 
> so much like an Ed Whitacre.  

> Then Ed learns that 
> the people he'd like to charge for the privilege of using 
> "his" pipes are already paying for pipes.

If they really were paying for pipes, there would be no issue.
The reason there is an issue is because network operators have
been assuming that consumers, and content senders, would not use
100% of the access link capacity through the ISP's core network.
When you assume any kind of overbooking then you are taking the 
risk that you have underpriced the service. The ideas people are
talking about, relating to pumping lots of video to every end user,
are fundamentally at odds with this overbooking model. The risk
level has changed from one in 10,000 to one in ten or one in five.
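
To put rough numbers on that shift (the figures below are purely
illustrative assumptions, not drawn from any real network), a quick
back-of-the-envelope sketch in Python:

    # Back-of-the-envelope oversubscription arithmetic. All numbers are
    # illustrative assumptions, not figures from any real ISP.

    def core_capacity_mbps(subscribers, access_mbps, concurrency):
        """Core capacity needed if `concurrency` is the fraction of
        subscribers running their access links flat out at once."""
        return subscribers * access_mbps * concurrency

    subs, access = 10000, 8   # 10,000 subscribers on 8 Mbps ADSL

    web_era = core_capacity_mbps(subs, access, 1 / 10000)   # bursty browsing
    video_era = core_capacity_mbps(subs, access, 1 / 5)     # sustained video

    print(f"web-era core estimate:   {web_era:>8,.0f} Mbps")    #        8 Mbps
    print(f"video-era core estimate: {video_era:>8,.0f} Mbps")  #   16,000 Mbps
    # A 2,000x jump in required core capacity for the same access pricing.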

> > But today, content production is cheap, and competition has 
> driven the 
> > cost of content down to zero.
> 
> Right, that's a "problem" I'm seeing too.

Unfortunately, the content owners still think that content is 
king and that they are sitting on a gold mine. They fail to see
that they are only raking in revenues because they spend an awful
lot of money on marketing their content. And the market is now
so diverse (YouTube, indie bands, immigrant communities) that
nobody can get anywhere close to 100% share. The long tail seems
to be getting a bigger share of the overall market.

> Host the video on your TiVo, or your PC, and take advantage 
> of your existing bandwidth.  (There are obvious non- 
> self-hosted models already available, I'm not focusing on 
> them, but they would work too)

Not a bad idea, provided the ADSL line's asymmetry leaves enough upstream bandwidth. But
this all goes away if we really do get the kind of distributed 
data centers that I envision, where most business premises convert
their machine rooms into generic compute/storage arrays.
I should point out that the enterprise world is moving this way,
not just Google/Amazon/Yahoo. For instance, many companies are moving
applications onto virtual machines that are hosted on relatively
generic compute arrays, with storage all in SANs. VMware has a big
chunk of this market, but Xen-based solutions, with their ability to
migrate running virtual machines, are also in use. And since a lot
of enterprise software is built with Java, clustering software like
Terracotta makes it possible to build a compute array with several
JVMs per core and scale applications with a lot less fuss than
traditional cluster operating systems. 

Since most ISPs are now owned by telcos and since most telcos have 
lots of strategically located buildings with empty space caused by
physical shrinkage of switching equipment, you would think that 
everybody on this list would be thinking about how to integrate all
these data center pods into their networks.

> So what I'm thinking of is a device that is doing the 
> equivalent of being a "personal video assistant" on the 
> Internet.  And I believe it is coming.  Something that's 
> capable of searching out and speculatively downloading the 
> things it thinks you might be interested in.  Not some 
> techie's cobbled together PC with BitTorrent and HDMI 
> outputs. 

Speculative downloading is the key here, and I believe that
cobbled together boxes will end up doing the same thing.
However, this means that any given content file will be
going to a much larger number of endpoints, which is something
that P2P handles quite well. P2P software is a form of multicast
as is a CDN (Content Delivery Network) like Akamai. Just because
IP Multicast is built into the routers, does not make it the
best way to multicast content. Given that widespread IP multicast
will *NOT* happen without ISP investment and that it potentially
impacts every router in the network, I think it is at a disadvantage
compared with P2P, or with systems that rely on a few strategically
placed middleboxes, such as caching proxies.

> The hardware specifics of this is getting a bit off-topic, at 
> least for this list.  Do we agree that there's a potential 
> model in the future where video may be speculatively fetched 
> off the Internet and then stored for possible viewing, and if 
> so, can we refocus a bit on that?

I can only see this speculative fetching working if it is properly implemented
to minimize its impact on the network. The idea of millions of unicast
streams or FTP downloads in one big exaflood will kill speculative
fetching. If the content senders create an exaflood, then the audience
will not get the kind of experience that they expect, and will go
elsewhere.

We had this experience recently in the UK when they opened a new terminal
at Heathrow airport and British Airways moved operations to T5 overnight.
The exaflood of luggage was too much for the system, and it has taken
weeks to get to a level of service that people still consid

Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-23 Thread Williams, Marc
Just an ad used to illustrate the low cost and ease of use.  The fact that it's 
QuickTime also made me realize it also covers iPods, iPhones/WiFi, and that Apple 
has web libraries ready for web site development on their Darwin boxes.  Also, 
I would imagine this device could easily be cross-connected and multicast 
into each access router, so that the only bandwidth used is the bandwidth being 
paid for by the customer, or QoS unicast streams feeding an MCU.  Rambling now, but 
happy to answer your question.



> -Original Message-
> From: Marc Manthey [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, April 22, 2008 9:07 PM
> To: nanog@nanog.org
> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
> 
> > ...is the first H.264 encoder .. designed by 
> > specifically for ... environments. It natively supports 
> the RTSP 
> > streaming media protocol.  can stream directly to .
> 
> hi marc
> so your " oskar" can rtsp multicast stream over ipv6 and 
> quicktime not , or was this just an ad ?
> 
> cheers
> 
> Marc
> 
> 
> --
> Les enfants teribbles - research and deployment Marc Manthey 
> -  Hildeboldplatz 1a D - 50672 Köln - Germany
> Tel.:0049-221-3558032
> Mobil:0049-1577-3329231
> jabber :[EMAIL PROTECTED]
> blog : http://www.let.de
> ipv6 http://www.ipsix.org
> 
> Klarmachen zum Ändern!
> http://www.piratenpartei-koeln.de/
> ___
> NANOG mailing list
> NANOG@nanog.org
> http://mailman.nanog.org/mailman/listinfo/nanog
> 

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-23 Thread Marshall Eubanks
Here is a spec sheet:



Regards
Marshall

On Apr 23, 2008, at 10:08 AM, Williams, Marc wrote:

> Just an ad used to illustrate the low cost and ease of use.  The  
> fact that it's quicktime also made me realize it's also ipods,  
> iphones/wifi, and that Apple has web libraries ready for web site  
> development on their darwin boxes.  Also, I would imagine this  
> device could easily be cross connected and multicasted into each  
> access router so that the only bandwidth used is that bandwidth  
> being paid for by customer or QoS unicast streams feeding an MCU.   
> Rambling now, but happy to answer your question.
>
>
>
>> -Original Message-
>> From: Marc Manthey [mailto:[EMAIL PROTECTED]
>> Sent: Tuesday, April 22, 2008 9:07 PM
>> To: nanog@nanog.org
>> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
>>
>>> ...is the first H.264 encoder .. designed by 
>>> specifically for ... environments. It natively supports
>> the RTSP
>>> streaming media protocol.  can stream directly to .
>>
>> hi marc
>> so your " oskar" can rtsp multicast stream over ipv6 and
>> quicktime not , or was this just an ad ?
>>
>> cheers
>>
>> Marc
>>
>>
>> --
>> Les enfants teribbles - research and deployment Marc Manthey
>> -  Hildeboldplatz 1a D - 50672 Köln - Germany
>> Tel.:0049-221-3558032
>> Mobil:0049-1577-3329231
>> jabber :[EMAIL PROTECTED]
>> blog : http://www.let.de
>> ipv6 http://www.ipsix.org
>>
>> Klarmachen zum Ändern!
>> http://www.piratenpartei-koeln.de/
>> ___
>> NANOG mailing list
>> NANOG@nanog.org
>> http://mailman.nanog.org/mailman/listinfo/nanog
>>
>
> ___
> NANOG mailing list
> NANOG@nanog.org
> http://mailman.nanog.org/mailman/listinfo/nanog


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] ATT VP: Internet to hit capacity by 2010

2008-04-23 Thread Marc Manthey
Am 23.04.2008 um 16:08 schrieb Williams, Marc:

> Just an ad

hi marc

cool. So I have 3 computers that don't do the job, and I don't have much
money; can you send me one of those ;) ?
Or the cheapest_beta_tester_non_commercial_offer you can make?

I accept offlist conversation.

thanks and sorry for my ramblings

greetings from germany

Marc


>> -Original Message-
>> From: Marc Manthey [mailto:[EMAIL PROTECTED]
>> Sent: Tuesday, April 22, 2008 9:07 PM
>> To: nanog@nanog.org
>> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
>>
>>> ...is the first H.264 encoder .. designed by 
>>> specifically for ... environments. It natively supports
>> the RTSP
>>> streaming media protocol.  can stream directly to .
>>
>> hi marc
>> so your " oskar" can rtsp multicast stream over ipv6 and
>> quicktime not , or was this just an ad ?
>>
>> cheers
>>
>> Marc
>>
>>
>> --
>> Les enfants teribbles - research and deployment Marc Manthey
>> -  Hildeboldplatz 1a D - 50672 Köln - Germany
>> Tel.:0049-221-3558032
>> Mobil:0049-1577-3329231
>> jabber :[EMAIL PROTECTED]
>> blog : http://www.let.de
>> ipv6 http://www.ipsix.org
>>
>> Klarmachen zum Ändern!
>> http://www.piratenpartei-koeln.de/
>> ___
>> NANOG mailing list
>> NANOG@nanog.org
>> http://mailman.nanog.org/mailman/listinfo/nanog
>>


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Christopher Morrow
On Tue, Apr 22, 2008 at 10:48 AM, Laird Popkin <[EMAIL PROTECTED]> wrote:
> This raises an interesting issue - should optimization of p2p traffic (P4P) 
> be based on "static" network information, or "dynamic" network information. 
> It's certainly easier for ISP's to provide a simple network map than 
> real-time network condition data, but the real-time data might be much more 
> effective. Or even if it's not real-time, perhaps there could be "static" 
> network maps reflecting conditions at different times of day?
>

100% solution + 100% more complexity vs 80% solution ?

It strikes me that often just doing a reverse lookup on the peer
address would be 'good enough' to keep things more 'local' in a
network sense. Something like:

1) prefer peers with PTR's like mine (perhaps get address from a
public-ish server - myipaddress.com/ipchicken.com/dshield.org)
2) prefer peers within my /24->/16 ?

This does depend on what you define as 'local' as well, 'stay off my
transit links' or 'stay off my last-mile' or 'stay off that godawful
expensive VZ link from CHI to NYC in my backhaul network...
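
Something along these lines would be a rough sketch of that heuristic
(Python; the helper names are made up, and a real client would still
need to learn its own public address from one of those lookup services):

    # Minimal sketch of the cheap locality heuristic suggested above. The
    # helper names are hypothetical; a real client would plug this into its
    # existing peer-selection logic.
    import ipaddress
    import socket

    def ptr_suffix(ip):
        """Last two labels of the PTR name (e.g. 'example.net'), or ''."""
        try:
            return ".".join(socket.gethostbyaddr(ip)[0].split(".")[-2:])
        except OSError:
            return ""

    def locality_score(my_ip, peer_ip):
        """Higher score = 'more local' under the two rules above."""
        score = 0
        if ptr_suffix(my_ip) and ptr_suffix(my_ip) == ptr_suffix(peer_ip):
            score += 2                            # rule 1: PTR looks like mine
        for plen in (16, 24):                     # rule 2: shared /16 or /24
            net = ipaddress.ip_network(f"{my_ip}/{plen}", strict=False)
            if ipaddress.ip_address(peer_ip) in net:
                score += 1
        return score

    # Usage: try the most 'local' candidates first.
    # candidates.sort(key=lambda p: locality_score(my_ip, p), reverse=True)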

P4P is an interesting move by Verizon, tin-hat-ness makes me think
it's a method to raise costs on the direct competitors to VZ (increase
usage on access-links where competitors mostly have shared
access-links) but I agree with Harrowell that it's sure nice to see VZ
participating in Internet things in a good way for the community.
(though see tin-hat perhaps it's short-term good and long-term
bad.../me puts away hat now)

-Chris

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Alexander Harrowell
On Wed, Apr 23, 2008 at 3:47 PM, Christopher Morrow <
[EMAIL PROTECTED]> wrote:

>
> It strikes me that often just doing a reverse lookup on the peer
> address would be 'good enough' to keep things more 'local' in a
> network sense. Something like:
>
> 1) prefer peers with PTR's like mine (perhaps get address from a
> public-ish server - myipaddress.com/ipchicken.com/dshield.org)
> 2) prefer peers within my /24->/16 ?
>
> This does depend on what you define as 'local' as well, 'stay off my
> transit links' or 'stay off my last-mile' or 'stay off that godawful
> expensive VZ link from CHI to NYC in my backhaul network...


Well, here's your problem: depending on the architecture, the IP addressing
structure doesn't necessarily map to the network's cost structure. This is
why I prefer the P4P/DillTorrent announcement model.

Alex
___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Daniel Reed
On Tue, Apr 22, 2008 at 5:12 AM, Petri Helenius <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > But there is another way. That is for software developers to build a
>  > modified client that depends on a topology guru for information on the
>  > network topology. This topology guru would be some software that is run
>  number of total participants) I fail to figure out the necessary
>  mathematics where topology information would bring superior results
>  compared to the usual greedy algorithms where data is requested from the
>  peers where it seems to be flowing at the best rates. If local peers
>  with sufficient upstream bandwidth exist, majority of the data blocks
>  are already retrieved from them.

You can think of the scheduling process as two independent problems:
1. Given a list of all the chunks that all the peers you're connected
   to have, select the chunks you think will help you complete the fastest.
2. Given a list of all peers in a cloud, select the peers you think
   will help you complete the fastest.

Traditionally, peer scheduling (#2) has been to just connect to
everyone you see and let network bottlenecks drive you toward
efficiency, as you pointed out.

However, as your chunk scheduling becomes more effective, it usually
becomes more expensive. At some point, its increasing complexity will
reverse the trend and start slowing down copies, as real-world clients
begin to block making chunk requests waiting for CPU to make
scheduling decisions.

A more selective peer scheduler would allow you to reduce the inputs
into the chunk scheduler (allowing it to do more complex things with
the same cost). The idea is, doing more math on the best data will
yield better overall results than doing less math on the best + the
worse data, with the assumption that a good peer scheduler will help
you find the best data.
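
A rough sketch of that split (Python, with assumed data shapes:
peer_chunks maps a peer id to the set of chunk indices it holds, and
peer_scores is whatever the peer scheduler ranks on):

    # Rough sketch: a selective peer scheduler feeding a rarest-first chunk
    # scheduler. Data shapes here are assumptions for illustration.
    from collections import Counter

    def schedule(peer_chunks, have, peer_scores, max_peers=20):
        # Peer scheduling (#2): keep only the top-ranked peers, shrinking
        # the input the chunk scheduler has to reason about.
        selected = sorted(peer_chunks, key=lambda p: peer_scores.get(p, 0.0),
                          reverse=True)[:max_peers]

        # Chunk scheduling (#1): rarest-first among chunks we still need.
        availability = Counter(c for p in selected for c in peer_chunks[p]
                               if c not in have)
        rarest_first = sorted(availability, key=availability.get)

        # Pair each selected peer with the rarest chunk it can serve.
        requests = {}
        for chunk in rarest_first:
            for peer in selected:
                if peer not in requests and chunk in peer_chunks[peer]:
                    requests[peer] = chunk
                    break
        return requests   # {peer_id: chunk_index}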


As seems to be a trend, Michael appears to be fixated on a specific
implementation, and may end up driving many observers into thinking
this idea is annoying :)  However, there is a mathematical basis for
including topology (and other nontraditional) information in
scheduling decisions.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Christopher Morrow
On Wed, Apr 23, 2008 at 11:39 AM, Alexander Harrowell
<[EMAIL PROTECTED]> wrote:
>
>
>
> On Wed, Apr 23, 2008 at 3:47 PM, Christopher Morrow
> <[EMAIL PROTECTED]> wrote:
> >
> > It strikes me that often just doing a reverse lookup on the peer
> > address would be 'good enough' to keep things more 'local' in a
> > network sense. Something like:
> >
> > 1) prefer peers with PTR's like mine (perhaps get address from a
> > public-ish server - myipaddress.com/ipchicken.com/dshield.org)
> > 2) prefer peers within my /24->/16 ?
> >
> > This does depend on what you define as 'local' as well, 'stay off my
> > transit links' or 'stay off my last-mile' or 'stay off that godawful
> > expensive VZ link from CHI to NYC in my backhaul network...
>
> Well. here's your problem; depending on the architecture, the IP addressing
> structure doesn't necessarily map to the network's cost structure. This is
> why I prefer the P4P/DillTorrent announcement model.
>

sure 80/20 rule... less complexity in the clients and some benefit(s).
perhaps short term something like the above with longer term more
realtime info about locality.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin


On Apr 23, 2008, at 1:45 PM, Daniel Reed wrote:
> On Tue, Apr 22, 2008 at 5:12 AM, Petri Helenius <[EMAIL PROTECTED]>  
> wrote:
>> [EMAIL PROTECTED] wrote:
>>> But there is another way. That is for software developers to build a
>>> modified client that depends on a topology guru for information on  
>>> the
>>> network topology. This topology guru would be some software that  
>>> is run
>> number of total participants) I fail to figure out the necessary
>> mathematics where topology information would bring superior results
>> compared to the usual greedy algorithms where data is requested  
>> from the
>> peers where it seems to be flowing at the best rates. If local peers
>> with sufficient upstream bandwidth exist, majority of the data blocks
>> are already retrieved from them.

It's true that in the long run p2p transfers can optimize data sources  
by measuring actual throughput, but at any given moment this approach  
can only optimize within the set of known peers. The problem is that  
for large swarms, any given peer only knows about a very small subset  
of available peers, so it may take a long time to discover the best  
peers. This means (IMO) that starting with good peers instead of  
random peers can make a big difference in p2p performance, as well as  
reducing data delivery costs to the ISP.

For example, let's consider a downloader in a swarm of 100,000 peers,  
using a BitTorrent announce once a minute that returns 40 peers. Of  
course, this is a simple case, but it should be sufficient to make the  
general point that the selection of which peers you connect to matters.

Let's look at the odds that you'll find out about the closest peers (in  
network terms) over time.

With random peer assignment, the odds of any given random peer being  
one of the 40 closest peers are 40/100,000, so if you do the math, the  
odds of finding one of the closest peers on the first announce are  
about 1.58%. Multiplying that out, you have a 38.1% chance of finding  
one of the closest peers in the first half hour, a 61.7% chance in the  
first hour, an 85.3% chance in the first two hours, and so on, following  
a geometric curve.
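
Those figures can be checked with a few lines of Python, treating each  
one-minute announce as an independent random draw of 40 peers from the  
swarm:

    # Quick check of the percentages above: each announce returns 40 random
    # peers out of 100,000, and we ask for at least one of the 40 closest.
    swarm, returned, closest = 100000, 40, 40

    p_per_announce = 1 - (1 - closest / swarm) ** returned
    print(f"one announce: {p_per_announce:.2%}")   # ~1.6% (the 1.58% above)

    for minutes in (30, 60, 120):                  # one announce per minute
        p = 1 - (1 - p_per_announce) ** minutes
        print(f"first {minutes} minutes: {p:.1%}")
    # prints roughly 38.1%, 61.7% and 85.3%, matching the figures above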

In the real world there are factors that complicate the analysis (e.g.  
most Trackers announce much less often than 1/minute, but some peers  
have other discovery mechanisms such as Peer Exchange). But as far as  
I can tell, the basic issue (that it takes a long time to find out  
about and test data exchanges with all of the peers in a large swarm)  
still holds.

With P4P, you find out about the closest peers on the first announce.

There's a second issue that I think is relevant, which is that  
measured network throughput may not reflect ISP costs and business  
policies. For example, a downloader might get data from a fast peer  
through a trans-atlantic pipe, but the ISP would really rather have  
that user get data from a fast peer on their local loop instead. This  
won't happen unless the p2p network knows about (and makes decisions  
based on) network topology.

What we found in our first field test was that random peer assignment  
moved 98% of data between ISP's and only 2% within ISP's (and for  
smaller ISP's, more like 0.1%), and that even simple network awareness  
resulted in an average of 34% same-ISP data transfers (i.e. a drop of  
32% in external transit). With ISP involvement, the numbers are even  
better.

>>
>
> You can think of the scheduling process as two independent problems:
> 1. Given a list of all the chunks that all the peers you're connected
> to have, select the chunks you think will help you complete the
> fastest. 2. Given a list of all peers in a cloud, select the peers you
> think will help you complete the fastest.
>
> Traditionally, peer scheduling (#2) has been to just connect to
> everyone you see and let network bottlenecks drive you toward
> efficiency, as you pointed out.
>
> However, as your chunk scheduling becomes more effective, it usually
> becomes more expensive. At some point, its increasing complexity will
> reverse the trend and start slowing down copies, as real-world clients
> begin to block making chunk requests waiting for CPU to make
> scheduling decisions.
>
> A more selective peer scheduler would allow you to reduce the inputs
> into the chunk scheduler (allowing it to do more complex things with
> the same cost). The idea is, doing more math on the best data will
> yield better overall results than doing less math on the best + the
> worse data, with the assumption that a good peer scheduler will help
> you find the best data.

Interesting approach. IMO, given modern computers, CPU is highly  
underutilized (PCs are 80% idle, and rarely CPU-bound when in use),  
while bandwidth is relatively scarce, so using more CPU to optimize  
bandwidth usage seems like a great tradeoff!

> As seems to be a trend, Michael appears to be fixated on a specific
> implementation, and may end up driving many observers into thinking
> this idea is annoying :)  However, there is a mathematical basis for
> including topology (and other nontraditional) information in
> scheduling decisions.

[Nanog] P2P traffic optimization Was: Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin

On Apr 23, 2008, at 2:17 PM, Christopher Morrow wrote:
> On Wed, Apr 23, 2008 at 11:39 AM, Alexander Harrowell
> <[EMAIL PROTECTED]> wrote:
>>
>>
>>
>> On Wed, Apr 23, 2008 at 3:47 PM, Christopher Morrow
>> <[EMAIL PROTECTED]> wrote:
>>>
>>> It strikes me that often just doing a reverse lookup on the peer
>>> address would be 'good enough' to keep things more 'local' in a
>>> network sense. Something like:
>>>
>>> 1) prefer peers with PTR's like mine (perhaps get address from a
>>> public-ish server - myipaddress.com/ipchicken.com/dshield.org)
>>> 2) prefer peers within my /24->/16 ?
>>>
>>> This does depend on what you define as 'local' as well, 'stay off my
>>> transit links' or 'stay off my last-mile' or 'stay off that godawful
>>> expensive VZ link from CHI to NYC in my backhaul network...
>>
>> Well. here's your problem; depending on the architecture, the IP  
>> addressing
>> structure doesn't necessarily map to the network's cost structure.  
>> This is
>> why I prefer the P4P/DillTorrent announcement model.
>>
>
> sure 80/20 rule... less complexity in the clients and some benefit(s).
> perhaps short term something like the above with longer term more
> realtime info about locality.

For the applications, it's a lot less work to use a clean network map  
from ISP's than it is to in effect derive one from lookups to ASN,  
/24, /16, pings, traceroutes, etc. The main reason to spend the effort  
to implement those tactics is that it's better than not doing  
anything. :-)

Laird Popkin
CTO, Pando Networks
520 Broadway, 10th floor
New York, NY 10012

[EMAIL PROTECTED]
c) 646/465-0570


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Crypto export restricted prefix list

2008-04-23 Thread Kevin Blackham
For the archives, my google skills returned and I found this:
http://www.countryipblocks.net/
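
As a rough illustration, a list like that can be turned into prefix-list
entries with a few lines of Python (the file name, the "country,cidr"
layout, and the IOS-style output are all assumptions here, and
geolocation-by-prefix is approximate at best, so any such list needs to
be regenerated and sanity-checked periodically):

    # Hedged sketch: convert a per-country CIDR list (e.g. a "country,cidr"
    # CSV exported from a site like the one above) into IOS-style
    # prefix-list lines. File name and column layout are assumptions.
    import csv

    BANNED = {"KP", "IR", "SY", "SD", "CU"}  # North Korea, Iran, Syria, Sudan, Cuba

    def emit_prefix_list(path, name="CRYPTO-EXPORT-DENY"):
        seq = 10
        with open(path, newline="") as f:
            for country, cidr in csv.reader(f):
                if country.strip().upper() in BANNED:
                    print(f"ip prefix-list {name} seq {seq} deny {cidr.strip()}")
                    seq += 5

    # emit_prefix_list("country_cidrs.csv")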

On Tue, Apr 22, 2008 at 4:05 PM, Kevin Blackham <[EMAIL PROTECTED]> wrote:
> Is there a prefix list available listing the IP space of cryptographic
>  export restricted countries?  My google skills are failing me.  I'm
>  required to apply a ban on North Korea, Iran, Syria, Sudan and Cuba.
>

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] [NANOG-announce] Program Committee vacancy: call for volunteers

2008-04-23 Thread Todd Underwood

Ted Seely has decided to step down from the NANOG Program Committee.
On behalf of the Program Committee, and the NANOG community as a
whole, I'd like to thank Ted for his many years of service to NANOG,
not just in his capacity as a member of the Program Committee, but as
a presenter and active participant.  

The Steering Committee has directed me to solicit candidates to fill
the remainder of his term, which ends with the fall 2008 meeting.  

As some of you know, the Program Committee is responsible for the
content of the NANOG conference.  It is important work and we really
need a diversity of backgrounds and perspectives to make the
conference relevant for conference attendees.  If you ever thought
that the content of a conference wasn't as good as it should be, this
is your chance to step up and work to make it better.

** Procedure **

To nominate yourself or someone else, please send mail to
[EMAIL PROTECTED] with the following, no later than Monday,
May 5, 2008

 - Your name
 - Nominee's name (if not you)
 - Nominee's email address
 - Nominee's phone number
 - Reasons why you believe the nominee is qualified to
   serve on the Program Committee.

A committee member will contact each of the nominees to verify
interest and possibly request additional information.

The Steering Committee, with input from the current Program Committee,
will then select the person to fill the position.

** Eligibility **

The charter states:

   "To be eligible to be appointed as a member of the Program
   Committee, an individual must have attended one NANOG meeting
   within the prior calendar year (12 months). "

** Duties **

Again quoting the charter:

   "The Program Committee is responsible for motivating/soliciting
   people to submit interesting talks, selecting the submissions which
   seem most appropriate (with some attention to presentation skills),
   and following up with speakers after acceptances to ensure that
   presentations are completed in time, with ample warning of potential
   problems with the presentation."

   "Each member of the Program Committee must review all presentations
   submitted for each meeting.   The Chair may excuse a member from
   one meeting's review cycle due to extenuating circumstances, but
   if a member misses two meetings in a row, he or she may be removed
   from the committee."

** Length of term **

This position is for the remainder of a two year term, which began after
the Fall 2006 meeting, and ends with the Fall 2008 meeting.  So, yes,
this term is only for a single NANOG meeting.  But this will not
impact the successful candidate's ability to serve two full terms.

If you have any further questions, please post to the nanog-futures list,
or contact the Steering Committee at [EMAIL PROTECTED]


Todd Underwood, Chair
Program Committee

[1] The full charter is available at http://www.nanog.org/charter.html


-- 
_
todd underwood +1 603 643 9300 x101
renesys corporationgeneral manager babbledog
[EMAIL PROTECTED]   http://www.renesys.com/blog

___
NANOG-announce mailing list
[EMAIL PROTECTED]
http://mailman.nanog.org/mailman/listinfo/nanog-announce

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] Fwd: IANA Update: Project to convert registries to XML

2008-04-23 Thread Martin Hannigan
Knowing that some here use this data for ops purposes. FYI.



-- Forwarded message --
From: Michelle Cotton <[EMAIL PROTECTED]>
Date: Wed, 23 Apr 2008 12:49:44 -0700
Subject: IANA Update: Project to convert registries to XML
To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>

IETF Community:

IANA is currently engaged in a project to convert the IETF related
registries to XML to provide the community with multiple ways of
viewing registry information.  When conversion to XML is done, XML
will become the source format for the registries and the current
formats of html and plain text will be generated from the XML
source.  Stylesheets and schemas will also be made available together
with XML. Users will be able to access the registries in new and
useful ways, while still having the ability to see the registries in
the original style.

Part of the conversion requires IANA to "clean-up" the registries in
order to fit with the XML schemas.  IANA is not changing the data in
the registries.  IANA is cleaning up the formatting including
regularizing spacing and providing consistent display of titles,
references and registration procedures.

For those registries that need extensive format changes, IANA will be
working with the appropriate working groups and area directors to
make sure that the format changes do not affect the content of the
registry.

Those registries that are required to be in specific formats, for
example the MIBs and language subtags registries, will still be
produced in the existing formats.

IANA has consulted with the IETF XML directorate to make sure that
the XML schemas are properly formulated. Certain decisions on schemas
reflect the needs of IANA in maintaining the registries moving forward.

In the coming months, cleaned-up versions of the registries will
begin appearing on the IANA website.  If you notice any content
issues with the updated versions, or if they are not accessible,
please notify IANA staff immediately and we will work with the
appropriate parties to correct any inconsistencies.

We look forward to providing the XML versions of the registries to
better serve the community's needs.  IANA will announce in advance when
the registry conversion will be completed.  After the
conversion is complete, we intend to introduce new services such as
the ability to subscribe to be notified when specific registries are
updated.


Thank you,

Michelle Cotton
IANA IETF Liaison
Email: [EMAIL PROTECTED]

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] P2P traffic optimization Was: Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Christopher Morrow
On Wed, Apr 23, 2008 at 3:50 PM, Laird Popkin <[EMAIL PROTECTED]> wrote:
>
>  On Apr 23, 2008, at 2:17 PM, Christopher Morrow wrote:
>
> > On Wed, Apr 23, 2008 at 11:39 AM, Alexander Harrowell
> > <[EMAIL PROTECTED]> wrote:
> >
> > >
> > >
> > >
> > > On Wed, Apr 23, 2008 at 3:47 PM, Christopher Morrow
> > > <[EMAIL PROTECTED]> wrote:
> > >
> > > >
> > > > It strikes me that often just doing a reverse lookup on the peer
> > > > address would be 'good enough' to keep things more 'local' in a
> > > > network sense. Something like:
> > > >
> > > > 1) prefer peers with PTR's like mine (perhaps get address from a
> > > > public-ish server - myipaddress.com/ipchicken.com/dshield.org)
> > > > 2) prefer peers within my /24->/16 ?
> > > >
> > > > This does depend on what you define as 'local' as well, 'stay off my
> > > > transit links' or 'stay off my last-mile' or 'stay off that godawful
> > > > expensive VZ link from CHI to NYC in my backhaul network...
> > > >
> > >
> > > Well. here's your problem; depending on the architecture, the IP
> addressing
> > > structure doesn't necessarily map to the network's cost structure. This
> is
> > > why I prefer the P4P/DillTorrent announcement model.
> > >
> > >
> >
> > sure 80/20 rule... less complexity in the clients and some benefit(s).
> > perhaps short term something like the above with longer term more
> > realtime info about locality.
> >
>
>  For the applications, it's a lot less work to use a clean network map from
> ISP's than it is to in effect derive one from lookups to ASN, /24, /16,
> pings, traceroutes, etc. The main reason to spend the effort to implement
> those tactics is that it's better than not doing anything. :-)
>

so.. 'not doing anything' may or may not be a good plan.. bittorrent
works fine today(tm). On the other hand, asking network folks to turn
over 'state secrets' will be a blocking factor: some folks, including
Doug's company, believe that their network diagrams/designs/paths are
in some way 'secret' or a 'competitive advantage'. Meanwhile, doing
simple/easy things initially that get the progress going seems like a
grand plan (most bittorrent swarms I've seen have <50 peers; certainly
there are more in some cases, but is the average above or below 100?
Either way, DNS lookups or bit-wise prefix comparisons seem cheap and
easy).

Being blocked for the 100% solution and not making
progress/showing-benefit seems bad :(

-Chris

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] Routing Policy Information

2008-04-23 Thread Fouant, Stefan


 

Hi folks,

 

Wondering if there is a good repository of information somewhere that
outlines the major ISPs' routing policies, such as default
local-pref treatment for customers vs. peers, handling of MEDs, allowed
prefix lengths from customers, etc., or would one have to contact each
ISP one is a customer of to ascertain this information?

 

Thanks in advance.

 

Stefan Fouant

Principal Network Engineer

NeuStar

 



 

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


[Nanog] Outlook Junk Email After Migration?

2008-04-23 Thread Matthew Evans
Hello all,

Is anyone else having a problem with the NANOG messages ending up in their 
Outlook Junk Email folder since the migration to nanog.org? It only seems to 
happen when someone replies to another post and CC's nanog@nanog.org rather 
than replying to nanog@nanog.org directly.

If anyone else has run into this, what have you done to resolve it?

Matthew Evans, MCSA
Alpha Theory | "the right decision, every time."

  2201 Coronation Blvd., Suite 140
  Charlotte, NC 28227
  www.alphatheory.com


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Routing Policy Information

2008-04-23 Thread Eric Van Tol
> -Original Message-
> From: Fouant, Stefan [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 23, 2008 5:23 PM
> To: nanog@nanog.org
> Subject: [Nanog] Routing Policy Information
>
> 
>
>
>
> Hi folks,
>
>
>
> Wondering if there is a good repository of information somewhere
> which
> outlines the various major ISPs routing policies such as default
> local-pref treatment for customers vs. peers, handling of MED,
> allowed
> prefix-lengths from customers, etc. or would one have to contact each
> ISP one was a customer of to ascertain this information.
>
> Thanks in advance.
> Stefan Fouant

Try this:

http://www.onesc.net/communities/

-evt

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Outlook Junk Email After Migration?

2008-04-23 Thread Raymond L. Corbin
I have outlook spam filtering turned off so I haven't seen a problem. You can 
always create a rule saying if "nanog.org" is in the header and 'my name is not 
in the TO box' to move to the 'nanog' folder...or something similar...

-Ray

-Original Message-
From: Matthew Evans [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 5:52 PM
To: nanog@nanog.org
Subject: [Nanog] Outlook Junk Email After Migration?

Hello all,

Is anyone else have a problem with the NANOG messages ending up in their 
Outlook Junk Email folder since the migration to nanog.org? It only seems to 
happen when someone replies to another post and CC's nanog@nanog.org rather 
than replying to nanog@nanog.org directly.

If anyone else has run into this, what have you done to resolve it?

Matthew Evans, MCSA
Alpha Theory | "the right decision, every time."

  2201 Coronation Blvd., Suite 140
  Charlotte, NC 28227
  www.alphatheory.com


___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Routing Policy Information

2008-04-23 Thread Fouant, Stefan
> -Original Message-
> From: Eric Van Tol [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 23, 2008 5:56 PM
> To: Fouant, Stefan; nanog@nanog.org
> Subject: RE: Routing Policy Information
> 
> > -Original Message-
> > From: Fouant, Stefan [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, April 23, 2008 5:23 PM
> > To: nanog@nanog.org
> > Subject: [Nanog] Routing Policy Information
> >
> > 
> >
> >
> >
> > Hi folks,
> >
> >
> >
> > Wondering if there is a good repository of information somewhere
> > which
> > outlines the various major ISPs routing policies such as default
> > local-pref treatment for customers vs. peers, handling of MED,
> > allowed
> > prefix-lengths from customers, etc. or would one have to contact
each
> > ISP one was a customer of to ascertain this information.
> >
> > Thanks in advance.
> > Stefan Fouant
> 
> Try this:
> 
> http://www.onesc.net/communities/
> 
> -evt

Perfect... This rocks!

Thanks,

Stefan Fouant
Principal Network Engineer
NeuStar

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Outlook Junk Email After Migration?

2008-04-23 Thread D'Arcy J.M. Cain
On Wed, 23 Apr 2008 17:52:07 -0400
Matthew Evans <[EMAIL PROTECTED]> wrote:
> Is anyone else have a problem with the NANOG messages ending up in their 
> Outlook Junk Email folder since the migration to nanog.org? It only seems to 
> happen when someone replies to another post and CC's nanog@nanog.org rather 
> than replying to nanog@nanog.org directly.
> 
> If anyone else has run into this, what have you done to resolve it?

Just have your local Microsoft expert redo your filter.

> Matthew Evans, MCSA

Oh, never mind.

Seriously, just fix your filters to recognize the following header:

List-Id: North American Network Operators Group 

-- 
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Outlook Junk Email After Migration?

2008-04-23 Thread Matthew Evans
Thanks, didn't realize Outlook filters would look into headers. So far that 
seems to be working.


Matthew Evans, MCSA
Alpha Theory | "the right decision, every time."

  2201 Coronation Blvd., Suite 140
  Charlotte, NC 28227
  www.alphatheory.com

ALPHA THEORY QUICK DEMO (click here)


-Original Message-
From: D'Arcy J.M. Cain [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 6:16 PM
To: Matthew Evans
Cc: nanog@nanog.org
Subject: Re: [Nanog] Outlook Junk Email After Migration?

On Wed, 23 Apr 2008 17:52:07 -0400
Matthew Evans <[EMAIL PROTECTED]> wrote:
> Is anyone else have a problem with the NANOG messages ending up in their 
> Outlook Junk Email folder since the migration to nanog.org? It only seems to 
> happen when someone replies to another post and CC's nanog@nanog.org rather 
> than replying to nanog@nanog.org directly.
>
> If anyone else has run into this, what have you done to resolve it?

Just have your local Microsoft expert redo your filter.

> Matthew Evans, MCSA

Oh, never mind.

Seriously, just fix your filters to recognize the following header:

List-Id: North American Network Operators Group 

--
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread michael.dillon
> Well. here's your problem; depending on the architecture, the 
> IP addressing structure doesn't necessarily map to the 
> network's cost structure. This is why I prefer the 
> P4P/DillTorrent announcement model.

What's with these cute, cryptic, and ultimately meaningless names?

I used the term "topology guru" because I wanted something that
halfway describes what is going on. Coining a word with "torrent" 
in it is wrong because this kind of topology guru can be used with
any P2P protocol. And P4P seems more like a brand name that tries
to leverage off the term P2P.

--Michael Dillon

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] P2P traffic optimization Was: Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin
I would certainly view the two strategies (reverse engineering network 
information and getting ISP-provided network information) as being 
complementary. As you point out, for any ISP that doesn't provide network data, 
we're better off figuring out what we can to be smarter than 'random'. So while 
I prefer getting better data from ISP's, that's not holding us back from doing 
what we can without that data.

ISP's have been very clear that they regard their network maps as being 
proprietary for many good reasons. The approach that P4P takes is to have an 
intermediate server (which we call an iTracker) that processes the network maps 
and provides abstracted guidance (lists of IP prefixes and percentages) to the 
p2p networks that allows them to figure out which peers are near each other. 
The iTracker can be run by the ISP or by a trusted third party, as the ISP 
prefers.
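
For a sense of how a client might consume that kind of guidance, here is an
illustrative sketch in Python (the guidance format and the function names
are assumptions, not the actual iTracker interface):

    # Illustrative only: bias peer selection using abstracted guidance of
    # the kind described above (IP prefixes plus percentage weights).
    import ipaddress
    import random

    def weight_for(peer_ip, guidance, default=1.0):
        """guidance example: {'192.0.2.0/24': 60.0, '198.51.100.0/22': 30.0}"""
        addr = ipaddress.ip_address(peer_ip)
        for prefix, pct in guidance.items():
            if addr in ipaddress.ip_network(prefix):
                return pct
        return default

    def pick_peers(candidates, guidance, k):
        """Sample k peers, biased toward preferred prefixes (sampling with
        replacement keeps the sketch short)."""
        weights = [weight_for(ip, guidance) for ip in candidates]
        return random.choices(candidates, weights=weights, k=k)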

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "Christopher Morrow" <[EMAIL PROTECTED]>
To: "Laird Popkin" <[EMAIL PROTECTED]>
Cc: "Alexander Harrowell" <[EMAIL PROTECTED]>, "Doug Pasko" <[EMAIL 
PROTECTED]>, nanog@nanog.org
Sent: Wednesday, April 23, 2008 5:14:12 PM (GMT-0500) America/New_York
Subject: Re: P2P traffic optimization Was: [Nanog] Lies, Damned Lies, and 
Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

On Wed, Apr 23, 2008 at 3:50 PM, Laird Popkin <[EMAIL PROTECTED]> wrote:
>
>  On Apr 23, 2008, at 2:17 PM, Christopher Morrow wrote:
>
> > On Wed, Apr 23, 2008 at 11:39 AM, Alexander Harrowell
> > <[EMAIL PROTECTED]> wrote:
> >
> > >
> > >
> > >
> > > On Wed, Apr 23, 2008 at 3:47 PM, Christopher Morrow
> > > <[EMAIL PROTECTED]> wrote:
> > >
> > > >
> > > > It strikes me that often just doing a reverse lookup on the peer
> > > > address would be 'good enough' to keep things more 'local' in a
> > > > network sense. Something like:
> > > >
> > > > 1) prefer peers with PTR's like mine (perhaps get address from a
> > > > public-ish server - myipaddress.com/ipchicken.com/dshield.org)
> > > > 2) prefer peers within my /24->/16 ?
> > > >
> > > > This does depend on what you define as 'local' as well, 'stay off my
> > > > transit links' or 'stay off my last-mile' or 'stay off that godawful
> > > > expensive VZ link from CHI to NYC in my backhaul network...
> > > >
> > >
> > > Well. here's your problem; depending on the architecture, the IP
> addressing
> > > structure doesn't necessarily map to the network's cost structure. This
> is
> > > why I prefer the P4P/DillTorrent announcement model.
> > >
> > >
> >
> > sure 80/20 rule... less complexity in the clients and some benefit(s).
> > perhaps short term something like the above with longer term more
> > realtime info about locality.
> >
>
>  For the applications, it's a lot less work to use a clean network map from
> ISP's than it is to in effect derive one from lookups to ASN, /24, /16,
> pings, traceroutes, etc. The main reason to spend the effort to implement
> those tactics is that it's better than not doing anything. :-)
>

so.. 'not doing anything' may or may not be a good plan.. bittorrent
works fine today(tm). On the other hand, asking network folks to turn
over 'state secrets' (yes some folks, including doug's company)
believe that their network diagrams/designs/paths are  in some way
'secret' or a 'competitive advantage', so that will be a blocking
factor. While, doing simple/easy things initially (most bittorrent
things I've seen have <50 peers certainly there are more in some
cases, but average? > or < than 100? so dns lookups or bit-wise
comparisons seem cheap and easy) that get the progress going seems
like a grand plan.

Being blocked for the 100% solution and not making
progress/showing-benefit seems bad :(

-Chris

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Brandon Galbraith
On 4/23/08, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > Well. here's your problem; depending on the architecture, the
> > IP addressing structure doesn't necessarily map to the
> > network's cost structure. This is why I prefer the
> > P4P/DillTorrent announcement model.
>
>
> What's with these cute cryptic and ultimately meaningless names?
>
> I used the term "topology guru" because I wanted something that
> halfway describes what is going on. Coining a word with "torrent"
> in it is wrong because this kind of topology guru can be used with
> any P2P protocol. And P4P seems more like a brand name that tries
> to leverage off the term P2P.
>
> --Michael Dillon
>

Perhaps call it TopoMaster, and make it an open protocol that any app that
needs to move lots o' bits around can use.

-brandon
___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread michael.dillon
> However, as your chunk scheduling becomes more effective, it 
> usually becomes more expensive. At some point, its increasing 
> complexity will reverse the trend and start slowing down 
> copies, as real-world clients begin to block making chunk 
> requests waiting for CPU to make scheduling decisions.

This is not a bad thing. The intent is to optimize the whole
system, not provide the fastest copies. Those who promote QoS
often talk of some kind of scavenger level of service that
sweeps up any available bandwidth after all the important users
have gotten their fill. I see this type of P2P system in a similar
light, i.e. it lets the ISP allow as much bandwidth use
as is economically feasible and block the rest. Since the end 
user ultimately relies on an ISP having a stable network that
functions in the long term (not drives the ISP to bankruptcy)
this seems to be a reasonable tradeoff.

> As seems to be a trend, Michael appears to be fixated on a 
> specific implementation, and may end up driving many 
> observers into thinking this idea is annoying :)  However, 
> there is a mathematical basis for including topology (and 
> other nontraditional) information in scheduling decisions.

There is also precedent for this in manufacturing scheduling
where you optimize the total system by identifying the prime
bottleneck and carefully managing that single point in the 
chain of operations. I'm not hung up on a specific implementation,
just trying to present a concrete example that could be a starting
point. And until today, I knew nothing about the P4P effort which
seems to be working in the same direction.

--Michael Dillon
 

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin
In case anyone's curious, there's more info on P4P at 
http://cs-www.cs.yale.edu/homes/yong/p4p/index.html.

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "michael dillon" <[EMAIL PROTECTED]>
To: nanog@nanog.org
Sent: Wednesday, April 23, 2008 6:40:11 PM (GMT-0500) America/New_York
Subject: Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: 
Internet to hit capacity by 2010]

> However, as your chunk scheduling becomes more effective, it 
> usually becomes more expensive. At some point, its increasing 
> complexity will reverse the trend and start slowing down 
> copies, as real-world clients begin to block making chunk 
> requests waiting for CPU to make scheduling decisions.

This is not a bad thing. The intent is to optimize the whole
system, not provide the fastest copies. Those who promote QoS
often talk of some kind of scavenger level of service that
sweeps up any available bandwidth after all the important users
have gotten their fill. I see this type of P2P system in a similar
light, i.e. it allows the ISP to allow as much bandwidth use
as is economically feasible and block the rest. Since the end 
user ultimately relies on an ISP having a stable network that
functions in the long term (not drives the ISP to bankruptcy)
this seems to be a reasonable tradeoff.

> As seems to be a trend, Michael appears to be fixated on a 
> specific implementation, and may end up driving many 
> observers into thinking this idea is annoying :)  However, 
> there is a mathematical basis for including topology (and 
> other nontraditional) information in scheduling decisions.

There is also precedent for this in manufacturing scheduling
where you optimize your total systems by identifying the prime
bottleneck and carefully managing that single point in the 
chain of operations. I'm not hung up on a specific implementation,
just trying to present a concrete example that could be a starting
point. And until today, I knew nothing about the P4P effort which
seems to be working in the same direction.

--Michael Dillon
 

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: [Nanog] P2P traffic optimization Was: Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Christopher Morrow
On Wed, Apr 23, 2008 at 6:30 PM, Laird Popkin <[EMAIL PROTECTED]> wrote:
> I would certainly view the two strategies (reverse engineering network 
> information and getting ISP-
> provided network information) as being complimentary. As you point out, for 
> any ISP that doesn't
> provide network data, we're better off figuring out what we can to be smarter 
> than 'random'. So while I
> prefer getting better data from ISP's, that's not holding us back from doing 
> what we can without that
> data.

ok, sounds better :) or more reasonable, or not immediately doomed to
blockage :) 'more realistic' even.

>  ISP's have been very clear that they regard their network maps as being 
> proprietary for many good
> reasons. The approach that P4P takes is to have an intermediate server (which 
> we call an iTracker)
> that processes the network maps and provides abstracted guidance (lists of IP 
> prefixes and
> percentages) to the p2p networks that allows them to figure out which peers 
> are near each other. The iTracker can be run by the ISP or by a trusted 
> third party, as the ISP prefers.

What's to keep the iTracker from being the new 'napster megaserver'? I
suppose if it just trades map info or lookups (a la DNS lookups) and
nothing about torrent/share content, things are less sensitive from a
privacy perspective, and from a single-point-of-failure network
perspective.

Latency requirements seem to be interesting for this as well... at
least dependent upon the model for sharing of the mapping data. I'd
think that a lookup model would serve the client base better (instead of
downloading many large files of maps in order to determine the best
peers to use). There's also a sensitivity to the part of the network
graph and which perspective to use for the client -> peer locality
mapping.

It's interesting at least :)

Thanks!
-Chris

(also, as an aside, your mail client seems to be making each paragraph
one long unbroken line... which drives at least pine and gmail a bit
bonkers...and makes quoting messages a much more manual process than
it should be.)

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog