Re: "Smart" hands around Dulles airport / northern VA.

2009-01-17 Thread Laird Popkin
Given the existence of water guns, you shouldn't allow anyone else into the 
entire CoLo. :-)

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "Jim Willis" 
To: "Brandon Galbraith" 
Cc: nanog@nanog.org
Sent: Saturday, January 17, 2009 10:37:30 AM (GMT-0500) America/New_York
Subject: Re: "Smart" hands around Dulles airport / northern VA.

"FAQ:
Q: What!  Are you crazy? I'd never let a stranger into my cage!
A: Huh, neither would I, but some people are less paranoid than us and / or
know and trust me."

  I wouldn't allow my wife in my cage, let alone a stranger, and I hope my
colo would deny you both as well! I suppose this may be useful for some, as
there have been two responses to your initial posting; however, we use locked
cabinets and cages for a reason. I can appreciate wanting to return trust and
community to the industry, even though the outlook for your effort looks bleak.

Cheers,
Jim

On Sat, Jan 17, 2009 at 10:56 PM, Brandon Galbraith
<brandon.galbra...@gmail.com> wrote:

> On 1/16/09, Warren Kumari  wrote:
> >
> > Hi all,
> >
> > This is a mail that I have been meaning to send ever since I moved back
> > to the NoVA area, but have only gotten around to now...
> >
> > Many years ago I used to provide emergency, smart hands type assistance
> > to those in need, but had to give this up when I moved out of the area.
> > Anyway, I'm back and am willing to start doing this again...
> >
> > This is primarily for those cases where you would normally have to fly
> > someone out to have them replace a line-card or two, hook up a few
> > cables, maybe swap a disk in an array, etc. This is not for those cases
> > where you simply need someone to push the reset button, nor for
> > rebuilding your entire cage from scratch...
> >
> > Anyway, if you have gear here and think that you might need to take me
> > up on this, drop me a mail and I'll give you my direct contact info...
> >
> > If you like this idea, and are willing to also provide this sort of
> > thing to the community (either in this, or in another area), please let
> > me know -- I'll look into setting up a website / mailing list /
> > something...
> >
>
> What Warren said. I'm in the Chicagoland area.
>
> -brandon
>
> --
> Brandon Galbraith
> Voice: 630.400.6992
> Email: brandon.galbra...@gmail.com
>



Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-22 Thread Laird Popkin
This raises an interesting issue: should optimization of p2p traffic (P4P) be
based on "static" or "dynamic" network information? It's certainly easier for
ISPs to provide a simple network map than real-time network condition data,
but the real-time data might be much more effective. Or, even if it's not
real-time, perhaps there could be "static" network maps reflecting conditions
at different times of day?
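
As a minimal, purely illustrative sketch (the map names and hour ranges below
are made up, not part of P4P), "static maps by time of day" could be as
simple as:

    # Hypothetical: pick a pre-computed "static" network map by hour of
    # day, so peak and off-peak conditions get different weights.
    from datetime import datetime

    MAPS_BY_HOUR = {          # made-up granularity; the ISP would choose
        range(0, 7): "offpeak.map",
        range(7, 18): "business.map",
        range(18, 24): "evening.map",
    }

    def current_map(now=None):
        hour = (now or datetime.now()).hour
        for hours, name in MAPS_BY_HOUR.items():
            if hour in hours:
                return name
        return "default.map"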

Since P4P came up, I'd like to mention that the P4P Working Group is putting 
together another field test, where we can quantify issues like the tradeoff 
between static and dynamic network data, and we would love to hear from any 
ISPs that would be interested in participating in that test.  If you'd like 
the details of what it would take to participate, and what data you would get 
out of it, please email me.

Of course, independently of the test, if you're interested in participating in 
the P4P Working Group, we'd love to hear from you!

- Laird Popkin, CTO, Pando Networks
  email: [EMAIL PROTECTED]
  mobile: 646/465-0570

- Original Message -
From: "Alexander Harrowell" <[EMAIL PROTECTED]>
To: "Stephane Bortzmeyer" <[EMAIL PROTECTED]>
Cc: nanog@nanog.org
Sent: Tuesday, April 22, 2008 10:10:28 AM (GMT-0500) America/New_York
Subject: Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: 
Internet to hit capacity by 2010]

Personally I consider P4P a big step forward; it's good to see Big Verizon
engaging with these issues in a non-coercive fashion.

Just to braindump for a moment: it strikes me that it would be very useful to
be able to announce preference metrics by netblock (for example, to deal with
networks with varied internal cost metrics, or to pref-in the CDN servers),
but also risky. If that were done, client developers would be well advised to
verify that the announcing network actually owns the netblock it is preffing
in (lest it send traffic via a suboptimal route, through a spook box of some
kind, or onto someone else's pain-point) or preffing out (lest it restrict
traffic from reaching somewhere); you wouldn't want a hijack, whether
malicious or clue-deficient.
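
A minimal sketch of that ownership check (the hard-coded prefixes stand in
for registry/IRR data; this is an illustration, not any client's actual
code):

    import ipaddress

    KNOWN_PREFIXES = {  # who legitimately holds what, per external data
        "ExampleISP": [ipaddress.ip_network("192.0.2.0/24"),
                       ipaddress.ip_network("198.51.100.0/22")],
    }

    def announcement_is_plausible(announcer, prefix_str):
        # Honor a pref-in/pref-out hint only if the announced netblock
        # falls inside space known to belong to the announcer.
        prefix = ipaddress.ip_network(prefix_str)
        return any(prefix.subnet_of(owned)
                   for owned in KNOWN_PREFIXES.get(announcer, []))

    assert announcement_is_plausible("ExampleISP", "192.0.2.128/25")
    assert not announcement_is_plausible("ExampleISP", "203.0.113.0/24")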

There is every reason to encourage the use of dynamic preference.


On Tue, Apr 22, 2008 at 2:54 PM, Stephane Bortzmeyer <[EMAIL PROTECTED]>
wrote:

> On Tue, Apr 22, 2008 at 02:02:21PM +0100,
>  [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote
>  a message of 46 lines which said:
>
> > This is where all the algorithmic tinkering of the P2P software
> > cannot solve the problem. You need a way to insert non-technical
> > information about the network into the decision-making process.
>
> It's strange that no one in this thread has mentioned P4P yet. Isn't there
> someone involved in P4P at Nanog?
>
> http://www.dcia.info/activities/p4pwg/
>
> IMHO, the biggest issue with P4P is the one mentioned by Alexander
> Harrowell. After users have been s.d up so many times by some
> ISPs, will they trust this service?


Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin
CPU is plentiful and bandwidth is relatively scarce, so using more CPU to
optimize bandwidth usage seems like a great tradeoff!

> As seems to be a trend, Michael appears to be fixated on a specific
> implementation, and may end up driving many observers into thinking
> this idea is annoying :)  However, there is a mathematical basis for
> including topology (and other nontraditional) information in
> scheduling decisions.

Laird Popkin
CTO, Pando Networks
520 Broadway, 10th floor
New York, NY 10012

[EMAIL PROTECTED]
c) 646/465-0570




[Nanog] P2P traffic optimization Was: Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin

On Apr 23, 2008, at 2:17 PM, Christopher Morrow wrote:
> On Wed, Apr 23, 2008 at 11:39 AM, Alexander Harrowell
> <[EMAIL PROTECTED]> wrote:
>>
>>
>>
>> On Wed, Apr 23, 2008 at 3:47 PM, Christopher Morrow
>> <[EMAIL PROTECTED]> wrote:
>>>
>>> It strikes me that often just doing a reverse lookup on the peer
>>> address would be 'good enough' to keep things more 'local' in a
>>> network sense. Something like:
>>>
>>> 1) prefer peers with PTR's like mine (perhaps get address from a
>>> public-ish server - myipaddress.com/ipchicken.com/dshield.org)
>>> 2) prefer peers within my /24->/16 ?
>>>
>>> This does depend on what you define as 'local' as well, 'stay off my
>>> transit links' or 'stay off my last-mile' or 'stay off that godawful
>>> expensive VZ link from CHI to NYC in my backhaul network...
>>
>> Well, here's your problem: depending on the architecture, the IP
>> addressing structure doesn't necessarily map to the network's cost
>> structure. This is why I prefer the P4P/DillTorrent announcement model.
>>
>
> sure 80/20 rule... less complexity in the clients and some benefit(s).
> perhaps short term something like the above with longer term more
> realtime info about locality.

For the applications, it's a lot less work to use a clean network map from
ISPs than it is to derive one, in effect, from ASN lookups, /24 and /16
comparisons, pings, traceroutes, etc. The main reason to spend the effort to
implement those tactics is that it's better than not doing anything. :-)
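
To make those 'good enough' tactics concrete, here is a minimal sketch (an
illustration under assumptions, not Pando's or anyone's actual client code)
that ranks candidate peers by shared reverse-DNS suffix, then shared /16 and
/24:

    import ipaddress
    import socket

    def rdns_suffix(ip):
        # e.g. "example.net" from "host1.city.example.net"; None if no PTR
        try:
            host = socket.gethostbyaddr(ip)[0]
            return ".".join(host.split(".")[-2:])
        except OSError:
            return None

    def locality_score(my_ip, peer_ip):
        mine = ipaddress.ip_address(my_ip)
        theirs = ipaddress.ip_address(peer_ip)
        score = 0
        suffix = rdns_suffix(my_ip)
        if suffix and suffix == rdns_suffix(peer_ip):
            score += 4                      # same provider domain, probably
        if mine.packed[:2] == theirs.packed[:2]:
            score += 2                      # shared /16
        if mine.packed[:3] == theirs.packed[:3]:
            score += 1                      # shared /24
        return score

    # candidates.sort(key=lambda p: locality_score(my_ip, p), reverse=True)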

Laird Popkin
CTO, Pando Networks
520 Broadway, 10th floor
New York, NY 10012

[EMAIL PROTECTED]
c) 646/465-0570




Re: [Nanog] P2P traffic optimization Was: Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin
I would certainly view the two strategies (reverse-engineering network
information and getting ISP-provided network information) as complementary. As
you point out, for any ISP that doesn't provide network data, we're better off
figuring out what we can in order to be smarter than 'random'. So while I
prefer getting better data from ISPs, that's not holding us back from doing
what we can without that data.

ISPs have been very clear that they regard their network maps as proprietary,
for many good reasons. The approach that P4P takes is to have an intermediate
server (which we call an iTracker) that processes the network maps and
provides abstracted guidance (lists of IP prefixes and percentages) to the p2p
networks, allowing them to figure out which peers are near each other. The
iTracker can be run by the ISP or by a trusted third party, as the ISP
prefers.
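
A sketch of how a p2p tracker might apply that abstracted guidance (the
prefix-to-weight format below is a simplification for illustration, not the
actual P4P schema):

    import ipaddress
    import random

    GUIDANCE = {  # made-up weights: higher = preferred by the ISP
        ipaddress.ip_network("192.0.2.0/24"): 0.7,
        ipaddress.ip_network("198.51.100.0/24"): 0.2,
    }
    DEFAULT_WEIGHT = 0.1  # anything the guidance doesn't mention

    def weight_for(peer_ip):
        addr = ipaddress.ip_address(peer_ip)
        for prefix, w in GUIDANCE.items():
            if addr in prefix:
                return w
        return DEFAULT_WEIGHT

    def pick_peers(candidates, k):
        # Weighted sampling: nearby peers are returned more often, but
        # distant peers are never excluded outright.
        weights = [weight_for(p) for p in candidates]
        return random.choices(candidates, weights=weights, k=k)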

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "Christopher Morrow" <[EMAIL PROTECTED]>
To: "Laird Popkin" <[EMAIL PROTECTED]>
Cc: "Alexander Harrowell" <[EMAIL PROTECTED]>, "Doug Pasko" <[EMAIL 
PROTECTED]>, nanog@nanog.org
Sent: Wednesday, April 23, 2008 5:14:12 PM (GMT-0500) America/New_York
Subject: Re: P2P traffic optimization Was: [Nanog] Lies, Damned Lies, and 
Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

On Wed, Apr 23, 2008 at 3:50 PM, Laird Popkin <[EMAIL PROTECTED]> wrote:
>
>  On Apr 23, 2008, at 2:17 PM, Christopher Morrow wrote:
>
> > On Wed, Apr 23, 2008 at 11:39 AM, Alexander Harrowell
> > <[EMAIL PROTECTED]> wrote:
> >
> > >
> > >
> > >
> > > On Wed, Apr 23, 2008 at 3:47 PM, Christopher Morrow
> > > <[EMAIL PROTECTED]> wrote:
> > >
> > > >
> > > > It strikes me that often just doing a reverse lookup on the peer
> > > > address would be 'good enough' to keep things more 'local' in a
> > > > network sense. Something like:
> > > >
> > > > 1) prefer peers with PTR's like mine (perhaps get address from a
> > > > public-ish server - myipaddress.com/ipchicken.com/dshield.org)
> > > > 2) prefer peers within my /24->/16 ?
> > > >
> > > > This does depend on what you define as 'local' as well, 'stay off my
> > > > transit links' or 'stay off my last-mile' or 'stay off that godawful
> > > > expensive VZ link from CHI to NYC in my backhaul network...
> > > >
> > >
> > > Well, here's your problem: depending on the architecture, the IP
> > > addressing structure doesn't necessarily map to the network's cost
> > > structure. This is why I prefer the P4P/DillTorrent announcement model.
> > >
> > >
> >
> > sure 80/20 rule... less complexity in the clients and some benefit(s).
> > perhaps short term something like the above with longer term more
> > realtime info about locality.
> >
>
>  For the applications, it's a lot less work to use a clean network map from
> ISPs than it is to derive one, in effect, from ASN lookups, /24 and /16
> comparisons, pings, traceroutes, etc. The main reason to spend the effort
> to implement those tactics is that it's better than not doing anything. :-)
>

So... 'not doing anything' may or may not be a good plan; bittorrent works
fine today(tm). On the other hand, asking network folks to turn over 'state
secrets' will be a blocking factor: some folks (yes, including Doug's
company) believe that their network diagrams/designs/paths are in some way
'secret' or a 'competitive advantage'. Meanwhile, doing simple/easy things
initially that get the progress going seems like a grand plan (most
bittorrent swarms I've seen have <50 peers; certainly there are more in some
cases, but is the average more or less than 100? either way, DNS lookups or
bit-wise comparisons seem cheap and easy).

Being blocked waiting for the 100% solution and not making
progress/showing-benefit seems bad :(

-Chris



Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-23 Thread Laird Popkin
In case anyone's curious, there's more info on P4P at 
http://cs-www.cs.yale.edu/homes/yong/p4p/index.html.

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "michael dillon" <[EMAIL PROTECTED]>
To: nanog@nanog.org
Sent: Wednesday, April 23, 2008 6:40:11 PM (GMT-0500) America/New_York
Subject: Re: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: 
Internet to hit capacity by 2010]

> However, as your chunk scheduling becomes more effective, it 
> usually becomes more expensive. At some point, its increasing 
> complexity will reverse the trend and start slowing down 
> copies, as real-world clients begin to block making chunk 
> requests waiting for CPU to make scheduling decisions.

This is not a bad thing. The intent is to optimize the whole
system, not to provide the fastest copies. Those who promote QoS
often talk of some kind of scavenger level of service that
sweeps up any available bandwidth after all the important users
have gotten their fill. I see this type of P2P system in a similar
light, i.e. it lets the ISP permit as much bandwidth use
as is economically feasible and block the rest. Since the end
user ultimately relies on the ISP having a stable network that
functions in the long term (one that doesn't drive the ISP to
bankruptcy), this seems to be a reasonable tradeoff.

> As seems to be a trend, Michael appears to be fixated on a 
> specific implementation, and may end up driving many 
> observers into thinking this idea is annoying :)  However, 
> there is a mathematical basis for including topology (and 
> other nontraditional) information in scheduling decisions.

There is also precedent for this in manufacturing scheduling,
where you optimize the total system by identifying the prime
bottleneck and carefully managing that single point in the
chain of operations. I'm not hung up on a specific implementation,
just trying to present a concrete example that could be a starting
point. And until today, I knew nothing about the P4P effort, which
seems to be working in the same direction.

--Michael Dillon
 



Re: [NANOG] P2P traffic optimization Was: [Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-24 Thread Laird Popkin
Replies below:

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "Christopher Morrow" <[EMAIL PROTECTED]>
To: "Laird Popkin" <[EMAIL PROTECTED]>
Cc: "Alexander Harrowell" <[EMAIL PROTECTED]>, "Doug Pasko" <[EMAIL 
PROTECTED]>, nanog@nanog.org
Sent: Wednesday, April 23, 2008 7:47:57 PM (GMT-0500) America/New_York
Subject: Re: P2P traffic optimization Was: [Nanog] Lies, Damned Lies, and 
Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

On Wed, Apr 23, 2008 at 6:30 PM, Laird Popkin <[EMAIL PROTECTED]> wrote:
> I would certainly view the two strategies (reverse-engineering network
> information and getting ISP-provided network information) as complementary.
> As you point out, for any ISP that doesn't provide network data, we're
> better off figuring out what we can in order to be smarter than 'random'.
> So while I prefer getting better data from ISPs, that's not holding us back
> from doing what we can without that data.

ok, sounds better :) or more reasonable, or not immediately doomed to
blockage :) 'more realistic' even.

- Thanks. Given that there are many thousands of ISPs, an incremental
approach seemed best. :-)

>  ISPs have been very clear that they regard their network maps as
> proprietary, for many good reasons. The approach that P4P takes is to have
> an intermediate server (which we call an iTracker) that processes the
> network maps and provides abstracted guidance (lists of IP prefixes and
> percentages) to the p2p networks, allowing them to figure out which peers
> are near each other. The iTracker can be run by the ISP or by a trusted
> third party, as the ISP prefers.

What's to keep the itracker from being the new 'napster megaserver'? I
suppose if it just trades map info or lookups (a la dns lookups) and
nothing about torrent/share content, things are less sensitive from a
privacy perspective - and from a single-point-of-failure-of-the-network
perspective.

- That's a good point. The iTracker never knows what's moving in the P2P 
network. We are comparing two recommendation models, which expose different 
levels of information. In the 'general' model, the iTracker knows nothing about 
the p2p network, but provides a recommendation matrix based purely on the ISP's 
network resources and policies. In the 'per torrent' model, the iTracker 
receives information about peer distribution (e.g. there are many seeds in NYC, 
and many downloaders in Chicago), in which case it can make peering 
recommendations based on that knowledge. The latter approach seems like it 
should be better able to 'tune' communications (to reduce maximum link 
utilization, etc.), but it requires the p2p network to provide real-time 
information about swarm distribution, which involves more communications, and 
exposes more details of the network to the iTracker, raising some privacy 
concerns. Admittedly the iTracker doesn't know what the swarm is delivering, 
but it would know (in network terms) where the users in that swarm are, for 
example.

Latency requirements seem to be interesting for this as well, at
least depending upon the model for sharing the mapping data. I'd
think that a lookup model would serve the client base better (instead
of downloading many large map files in order to determine the best
peers to use). There's also a sensitivity to which part of the network
graph, and which perspective, to use for the client -> peer locality
mapping.

- The network data is loaded into the p2p network's Tracker and used locally
there, so there's no external communication during normal p2p network
operation. The communication pattern in P4P (currently, at any rate - it's
still evolving) is that the P2P network's Tracker polls the P4P iTracker
periodically to receive updated map files. In the case of the 'general'
weight map, it could be one update every few minutes (or every day, etc.,
depending on how often the ISP cares to update network information). In the
case of 'per torrent' optimization, it's an update per swarm every few
minutes, which is much more messaging, so it might only make sense to do this
for a very small number of the most popular swarms.
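
A minimal sketch of that polling pattern (the URL, payload format, and
interval are all made up for illustration):

    import json
    import time
    import urllib.request

    ITRACKER_URL = "http://itracker.example.net/weights"  # hypothetical
    REFRESH_SECONDS = 300  # "every few minutes"

    weights = {}  # last known good weight map

    def poll_forever():
        global weights
        while True:
            try:
                with urllib.request.urlopen(ITRACKER_URL, timeout=10) as resp:
                    weights = json.load(resp)  # assumes a JSON weight map
            except OSError:
                pass  # keep the previous map until the next poll succeeds
            time.sleep(REFRESH_SECONDS)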

It's interesting at least :)

Thanks!
-Chris

(also, as an aside, your mail client seems to be making each paragraph
one long unbroken line... which drives at least pine and gmail a bit
bonkers... and makes quoting messages a much more manual process than
it should be.)

- Sorry - I reconfigured to send 'plain text' email. Does it show up OK? I'm 
using Zimbra's web mail interface.



Re: [NANOG] [Nanog] P2P traffic optimization Was: Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

2008-04-24 Thread Laird Popkin
>>> [...] information about what exactly is being transferred, by whom and
>>> for how long.

The P2P network doesn't provide this kind of information to the  
iTracker.

We're comparing two models, 'generic' and 'tuned per swarm'.

In the 'generic' model, the P2P network is given one weight matrix,
based purely on the ISP's network. In this model, the P2P network
doesn't provide any information to the iTracker at all - it just
requests an updated weight matrix periodically, so that when the ISP
changes network structure or policies, it's updated in the P2P network
automatically.

In the 'tuned per swarm' model, the P2P network provides information
about the distribution of each swarm's peers (e.g. there are seeds in
NYC and downloaders in Chicago). With this information, the iTracker
can provide a 'tuned' weight matrix for each swarm, which should in
theory be better. This is something that we're going to test in the
next field test, so we can put some numbers around it. This model
requires more communication, and exposes more of the p2p network's
information to the ISP, so it's important to be able to quantify the
benefit to decide whether it's worth it.
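
As an illustration of the kind of per-swarm summary that model implies (the
field names are made up, not the P4P wire format; note there's nothing here
about what the swarm is carrying):

    from collections import Counter

    def swarm_summary(swarm_id, peers):
        # peers: iterable of (region, is_seed) pairs, where "region" is
        # whatever abstraction the iTracker hands out (e.g. a prefix id)
        seeds = Counter(r for r, is_seed in peers if is_seed)
        leechers = Counter(r for r, is_seed in peers if not is_seed)
        return {"swarm": swarm_id,
                "seeds": dict(seeds),
                "leechers": dict(leechers)}

    # swarm_summary("abc123", [("NYC", True), ("CHI", False), ("CHI", False)])
    # -> {'swarm': 'abc123', 'seeds': {'NYC': 1}, 'leechers': {'CHI': 2}}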

BTW, if this discussion is getting off topic for the NANOG mailing  
list, we can continue the discussion offline. Does anyone think that  
we should do so?

>>> -Mike Gonnason

Laird Popkin
CTO, Pando Networks
520 Broadway, 10th floor
New York, NY 10012

[EMAIL PROTECTED]
c) 646/465-0570




Re: P2P agents for software distribution - saving the WAN from meltdown?!?

2008-06-18 Thread Laird Popkin
To address the original question, there are several p2p companies focusing on
optimizing p2p for internal distribution of software and rich media. In
particular, Kontiki and Ignite both offer such services, and between the two
have many of the Fortune 1000 as customers (Coke, Bank of America, Accenture,
McDonald's, Canon, Burger King, etc.). Their systems manage not just the (p2p)
physical delivery of the bits, but also the enterprise management aspects
(e.g. sending the right versions of the right software to the right desktops,
managing data flow in a way that works well on a corporate LAN, security,
running the installs/upgrades, etc.).

Addressing the Revision3 comment in the thread, I don't think that the "RIAA
and similar organizations" had any problem with Revision3 using the BitTorrent
protocol; the problem was that Revision3 was running an (inadvertently) open
tracker hosting 250K pirate torrents. The "attack" was pretty clearly a
MediaDefender software bug in the code that monitors pirate torrents,
multiplied by the large number of servers they run, which unfortunately kicked
in over a holiday weekend when nobody was around to fix it. Once MediaDefender
was notified of the problem, Revision3 said that it was fixed quickly. So
while you may not like what MediaDefender does for a living, it doesn't look
like they were trying to DDoS Revision3 for using p2p protocols.

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "Blaine Fleming" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Sent: Wednesday, June 18, 2008 12:20:28 PM (GMT-0500) America/New_York
Subject: Re: P2P agents for software distribution - saving the WAN from 
meltdown?!?

Christopher Morrow wrote:
> On Mon, Jun 16, 2008 at 9:53 AM, Netfortius <[EMAIL PROTECTED]> wrote:
>   
>> Has anybody used (and been successful at) a bit-torrent-like agent for fast
>> distribution of LEGAL software (install programs of large-DVD size), across
>> multiple sites, all over the globe, with bad WAN connectivity? I have read a
>> couple of references online (e.g.
>> http://torrentfreak.com/university-uses-utorrent-080306/) about such, but I
>> am a little reluctant to do it in a corporate environment, especially in the
>> light of potential misuse of such ... unless finding a way to install, use
>> and remove the P2P agent, all in one shot ... catch 22, sort of (distributing
>> the P2P agent, that is :)) ...
>> 
>
> revision3.com
>   

And we saw how it worked out for Revision3.com.  MediaDefender
considered them illegal and launched a denial-of-service attack against
them over Memorial Day weekend.  P2P is considered illegal and wrong by
people with lots of money, and that makes it hard to use for legitimate
services.  Because MediaDefender is backed by the RIAA and similar
organizations, they seem to be immune to prosecution.  However, if *I*
did the same thing, then I know I would be locked up right now.

--Blaine







Re: EC2 and GAE means end of ip address reputation industry? (Re: Intrusion attempts from Amazon EC2 IPs)

2008-06-22 Thread Laird Popkin
Normal hosting facilities let you do pretty much anything you want, unless you 
start causing problems for the ISP or their customers. You pay them to provide 
bandwidth, space, power and cooling.

There are more restrictions for shared virtual sites (i.e. the $10/month web 
sites). Usually they just let you upload PHP and HTML pages and access a MySQL 
database. But, as Brandon pointed out, even they usually let you do basic 
things in PHP such as sending email.

- Laird Popkin, CTO, Pando Networks
  mobile: 646/465-0570

- Original Message -
From: "Brandon Galbraith" <[EMAIL PROTECTED]>
To: "Nathan Ward" <[EMAIL PROTECTED]>
Cc: "nanog" <[EMAIL PROTECTED]>
Sent: Monday, June 23, 2008 1:45:03 AM (GMT-0500) America/New_York
Subject: Re: EC2 and GAE means end of ip address reputation industry? (Re: 
Intrusion attempts from Amazon EC2 IPs)

On 6/23/08, Nathan Ward <[EMAIL PROTECTED]> wrote:
>
>
> Do 'normal' web hosting providers allow customer created scripts to create
> TCP sessions out to arbitrary things?
>

Doesn't PHP provide a fair amount of TCP functionality that can be used
simply by uploading the code you need to your shared web hosting account?

-brandon



Re: So why don't US citizens get this?

2008-07-28 Thread Laird Popkin


On Jul 28, 2008, at 9:54 AM, John Levine wrote:

In article <[EMAIL PROTECTED]>  
you write:

Sort of makes one wonder how the US came to have ubiquitous roads, or
power, or water distribution...


Oh, but that's different.  They were important.


Or, to be more specific, people everywhere need power and water and  
were willing to pay for them, so other people started companies to  
provide them everywhere. Roads are a little more complicated - the  
basic roads were there due to demand, but the highways got built  
because the Army argued that without highways they couldn't move  
troops and supplies to defend the country in case of an invasion. The  
same trick got science funded for a while... :-)




Re: Great Suggestion for the DNS problem...?

2008-07-29 Thread Laird Popkin
We mainly use UDP for tracker announces, and only use TCP when we have  
to, and can confirm that the server spends far more time on the TCP  
setup/teardown than on computing the tracker response.


- LP

On Jul 29, 2008, at 12:21 PM, Mikael Abrahamsson wrote:


On Tue, 29 Jul 2008, Steven M. Bellovin wrote:

In this situation, UDP uses one query packet and one reply. TCP uses 3
to set up the connection, a query, a reply, and three to tear down the
connection. *Plus* the name server will have to keep state for
every client, plus TIMEWAIT state, etc. (Exercise left to TCP geek
readers: how few packets can you do this in? For example -- send the
query with the SYN+ACK, send client FIN with the query, send server FIN
with the answer? Bonus points for not leaving the server's side in
TIMEWAIT. Exercise for implementers: how sane can your stack be if
you're going to support that?)


The bittorrent tracker guys seem to run into problems at around 30kk
tracker requests per second (TCP), and they say it's mostly
setup/teardown (sy usage in vmstat); the tracker hash lookup doesn't
take that much.


They're trying to move to UDP; currently their workload is approx 5% UDP.


I guess TCP DNS workload would be similar in characteristics.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]
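
To put rough numbers on the UDP-versus-TCP overhead discussed above, a
minimal timing sketch (assuming the third-party dnspython package is
installed; the resolver address is a placeholder):

    import time
    import dns.message
    import dns.query

    RESOLVER = "192.0.2.53"  # replace with a resolver you're allowed to query
    query = dns.message.make_query("example.com", "A")

    for name, send in (("UDP", dns.query.udp), ("TCP", dns.query.tcp)):
        start = time.perf_counter()
        send(query, RESOLVER, timeout=2)
        print(name, "took", time.perf_counter() - start, "seconds")

    # TCP pays connection setup/teardown on every query - the same cost
    # the tracker operators above are describing.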






Re: uTorrent, IPv6

2008-08-19 Thread Laird Popkin
My recollection is that there were complaints about them reconfiguring
people's TCP stacks, so uTorrent stopped enabling IPv6.


- Laird Popkin, CTO, Pando Networks
  http://www.pandonetworks.com
  520 Broadway, 10th Floor, NY, NY, 10012
  [EMAIL PROTECTED], 646/465-0570.

Sent from my iPhone. Apologies in advance for typos.

On Aug 19, 2008, at 2:34 AM, Nathan Ward <[EMAIL PROTECTED]> wrote:


On 19/08/2008, at 6:28 PM, Mikael Abrahamsson wrote:


On Tue, 19 Aug 2008, Nathan Ward wrote:

uTorrent actively enables IPv6 on XP SP2 and Vista machines in the
install process (by default; it can be turned off). IPv6 is turned
on on lots of PCs.


We looked into this, and IPv6 is not mentioned in the install
process, and it's not selected by default as far as we can see.
There is a button in Preferences->General that says "install
IPv6/Teredo", so yes, it supports it, but it's not on by default, at
least not here.


Oh? I looked into this in a beta, and it was on by default.
I'll have another look.

--
Nathan Ward