Re: NANOG Operational Audit of IPv4+ End-to-End L3 Transport in North America
On 4/27/2010 3:02 PM, IPv3.com wrote: NANOG Operational Audit of IPv4+ End-to-End L3 Transport in North America I haven't been keeping up with NANOG in a while so perhaps I missed the discussion and/or memo. I take it that this spammer is still being allowed to send his shit to the mailing list? Justin
Springnet Underground
Does anyone have any experience with the Springnet Underground in Springfield, MO? In case people don't know, it's a working limestone mine. In the areas that have already been mined close to the entrance, they've sold or rented out space between the rock pillars that hold up the mine roof. The city of Springfield put in a data center and started selling services out of it that complemented their city-wide fiber plant. They've been suggested as a site for hosting a cabinet full of gear. I'm wondering what the connectivity options are for that facility and so far haven't been able to get a straight answer from anyone. For the most part it looks like the SpringNet folks want to sell you their own upstream, which won't cut it for our needs. AT&T serves the area of course but I will not have them as my last mile (or any mile for that matter). Does L3 or any other large carrier have facilities there? Does anyone have any experience with the facility in general? Thanks Justin
Cogent input
I'm in search of some information about Cogent, its past, present and future. I've heard bits and pieces about Cogent's past over the years but by no means have I actively been keeping up. I'm aware of some (regular?) depeering issues. The NANOG archives have given me some additional insight into that (recurring?) problem. The reasoning behind the depeering events is a bit fuzzy though. I would be interested in people's opinion on whether or not they should be considered for upstream service based on this particular issue. Are there any reasonable mitigation measures available to Cogent downstreams if (when?) Cogent were to be depeered again? My understanding is that on at least one previous depeering occasion, the depeering partner simply null-routed all prefixes being received via Cogent, essentially creating a blackhole. I also recall reading that this meant that prefixes being advertised and received by the depeering partner from other peers would still end up in the blackhole. The only solution I would see to this problem would be to shut down the BGP session with Cogent and rely on a 2nd upstream. Are there any other possible steps for mitigation in a depeering event? I also know that their bandwidth is extremely cheap. This of course creates an issue for technical folks when trying to justify other upstream options that cost significantly more but also don't have a damaging history of getting depeered. Does Cogent still have an issue with depeering? Are there any reasonable mitigation measures or should a downstream customer do anything in particular to ready themselves for a depeering event? Does their low cost outweigh the risks? What are the specific risks? Thanks Justin
Re: Cogent input
Tore Anderson wrote: advertise loopbacks, and another for the actual feed. The biggest issue we have with them is that they don't allow deaggregation. If you've been allocated a prefix of length yy, they'll accept only x.x.x.x/yy, not x.x.x.x/yy le 24. Yes, sometimes deaggregation is necessary or desirable even if only temporarily. Interesting. I requested exactly that when filling in their BGP questionnaire, and they set it up - no questions asked. It would be a show-stopper for us if they didn't let us deaggregate. We're not really wanting their service for our existing service area. We're wanting to use it to expand to a new service area that is only connected by a much lower-speed service back to the bulk of our current network for specific services like voice. Our PI space is currently broken up to 1) let us effect some measure of load-balancing with Cox (any prefixes we advertised out Cox instead of our much larger tier-1 resulted in a wildly disproportionate amount of preference given to Cox; not sure why) and 2) let this new venture get started with a reasonably-sized allotment of IP space. It will be advertised out local providers in that area and also at our main peering point with significant prepending. Vice versa for our other prefixes. We have to deaggregate a little bit to make this work (but not excessively of course). I have been promised, in writing, that they will provide us with native IPv6 transit before the end of the year. I hope at least some SPs make this commitment back in the States. I can't find any tier-1s that can provide us with native v6. Our tier-1 upstream has a best-effort test program in place that uses ipv6ip tunnels. The other upstream says that they aren't making any public IPv6 plans yet. It's hard to push the migration to v6 along when native v6 providers aren't readily available. Justin
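For what it's worth, the prepending piece is nothing exotic. A rough sketch of what I have in mind on the sessions toward the new market's local providers looks like this (the ASN, prefixes and list names are made-up placeholders, not our real ones):
ip prefix-list NEW-MARKET permit 198.51.100.0/23
ip prefix-list LEGACY permit 203.0.113.0/22
!
route-map LOCAL-UPSTREAM-OUT permit 10
 match ip address prefix-list NEW-MARKET
route-map LOCAL-UPSTREAM-OUT permit 20
 match ip address prefix-list LEGACY
 set as-path prepend 64500 64500 64500
!
router bgp 64500
 neighbor 192.0.2.1 route-map LOCAL-UPSTREAM-OUT out
The route-map at our main peering point would just be the mirror image: the legacy prefixes go out clean and the new market's prefix gets the prepends.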
Re: Cogent input
John van Oppen wrote: NTT (2914) and GBLX (3549) both do native v6... most everyone else on the tier1 list does tunnels. :( There are some nice tier2 networks who do native v6, tiscali and he.net come to mind. Let me rephrase that. :-) I know of no tier-Ns that offer any native v6 services here in the Midwest (central Kansas), including L3, which only has a best-effort pilot program using tunnels. There might be more options in KC or OKC but not here that I'm aware of... Justin
Re: Cogent input
Paul Timmins wrote: GlobalCrossing told me today I can order native IPv6 anywhere on their network. Don't know if they count as Tier 1 on your list, though. VZB has given me tunnels for a while, hopefully they'll get their pMTU issue fixed so we can do more interesting things with it. I'd love to have GBLX but I'm positive that they aren't available here. We'd have to pay for transport to a much larger market to go get them. That may be more feasible when the state network gets built but that's a few years off. Until then I'll have to dream about GBLX... Justin
Re: Level 3 - "legacy" Wiltel/Looking Glass bandwidth
Scott Howard wrote: We're looking at getting connectivity via Level 3 in a particular datacenter, but we're being told that it's "legacy Wiltel/Looking Glass" rather than "true" Level 3. Given that both of these acquisitions occurred years ago should I be worried, or is this "legacy" connectivity the same as L3 at any other datacenter? We were initially homed to the old Telcove network. Never had any trouble with those guys. When Level3 bought them they canned all the local IP folks. That forced you to work with the remaining overworked IP folks back on the East coast (Angela and a guy whose name I forget). Their local transport techs are good but there are very few of them left now, compared to the days when we'd see dozens of their trucks roaming the streets daily. We eventually asked to be moved off of 19094 and onto 3356. The extra Telcove hop made for some less preferred and inefficient routing. All they did was extend 3356 to the local 7600 though. The single Wichita 7600 gets on the old ring in a very fugly way. Working != correct, proper, reliable or SP-grade. We just turned up a new 200Mbps circuit to them. Wichita was flagged as not allowing any more high-speed circuits so they provisioned our circuit on a new ring to St Louis. I'm actually glad that's the case. I'm hoping that it's more stable than the KC-Wichita-Houston-Dallas ring has been in the past. We had several complete and partial outages (read: a dozen plus in 2 years' time) on that ring. The most recent was a few months ago when we suddenly lost all but about 2000 routes from L3. I spent close to 6 hours on that problem in the wee hours, trying to get someone to diagnose the issue. When we turned up the new 200Mbps circuit we asked for a way to do a speedtest on it. We got nothing. Apparently L3 forced Telcove to take down their own speedtest site, which we were pointed to after we turned up the 100Mbps circuit. L3 apparently doesn't offer a speedtest site of their own. I find that to be completely unacceptable. Every time we tried to take this position we got the same old line of "we've got everything in the path configured correctly; you'll get the full 200Mbps" to which I'd reply with a reminder that we got the same assurance when we turned up the 100Mbps with them a year prior only to later discover a cap of around 50Mbps somewhere in the middle. Our account team's hands are tied. There isn't anything that they can do about it. I've got it documented in email so if we suddenly flatline again at some percentage under 100% we'll raise unholy hell with them. We've also had significant problems getting some planned maintenance notifications after the fact (i.e., after the window and what appears like an outage to us). YMMV but if I had the choice I'd try to get connected to the real L3 backbone and not that of an acquisition. The acquisition networks were probably much more reliable before they got bought. Now that they've been stripped of their resources the legacy edges are showing signs of old age, Alzheimer's, and senile dementia. It's a shame to see them reduced to that. Justin
Re: Traffic Statistics for Yesterday
Shon Elliott wrote: Does anyone have any data on how the memorial event for Michael Jackson affected the global backbones? This was seen as another inaugural type of traffic day to most of the people I've talked to. 99.99% of my userbase is in the rural Midwest. Needless to say I saw no increase in bandwidth consumption. Now if it was a streaming memorial for George Strait, Garth Brooks, Little Jimmy Dickens or Willie Nelson I suspect the consumption would have been noticeably higher. I'm sure I would have had much higher bandwidth usage if the PBR National Championship was made available for streaming. Justin
Re: BGP Growth projections
Mark Radabaugh wrote: I'm looking for new core routers for a small ISP and having a hard time finding something appropriate and reasonably priced. We don't have huge traffic levels (<1Gb) and are mostly running Ethernet interfaces to upstreams rather than legacy interfaces (when did OC3 become legacy?). Lots of choices for routers that can handle the existing BGP tables - but not so much in small platforms (1-10Gb traffic) if you assume that IPv6 is going to explode the routing table in the next 5 years. The manufacturers still seem to think low traffic routers don't need much memory or CPU. What projections are you using regarding the default free zone over the next 5 years when picking new hardware? I'll give you the Cisco product answer since that's what I know. I'd go with the ASR 1000 product line. At 1-10Gbps you've exceeded what a 7200 (even the G2) can handle. The largest of the ISRs (the 3845) tops out at 1/2 Gbps at max CPU in theory (far less in reality). You don't want a software router though, especially for an SP and especially not for an Internet edge router. The ASR forwards in hardware. The 1002 with no internal hardware redundancy can handle 5 or 10 Gbps and costs a little more than a 7206 w/ NPE-G2 or a 7201 (with the 5Gbps ESP). This is one option I'm considering for replacing my edge 7200s. The 1004 version currently scales to 20Gbps and can handle redundant RPs. The 1006 also currently scales to 20Gbps but can handle redundant RPs and ESPs. All the ASRs have internal software redundancy so crashes should be relatively painless in theory, even with a single RP. http://www.cisco.com/en/US/products/ps9343/index.html I'm looking at using the 1002 for my Internet edge and the 1006 for the core at smaller remote POPs. The platform has been out for a year or so and appears to be fairly solid. Justin
Re: cisco.com
Didn't you hear? Cisco EoLed BGP this time last week. I guess they really meant it! Justin deles...@gmail.com wrote: So cisco has no BGP is that what I'm hearing... Oh the irony :) --Original Message-- From: Aaron Millisor To: R. Benjamin Kessler Cc: nanog@nanog.org Subject: Re: cisco.com Sent: Aug 4, 2009 10:45 AM Not sure the ETA but the network that the address for cisco.com resolves to (198.133.219.0/24) is no longer in BGP.
Re: Follow up to previous post regarding SAAVIS
Jared Mauch wrote: I've come to the conclusion that if someone put a nice web2.0+ interface on creating and managing these objects it would be a lot easier. I've looked into IRR several times, usually after events like PCCW. Each time the amount of work to 1) figure out how to implement IRR and 2) actually implement it far outweighed the benefit of doing it for my network. I would love to implement it and looking towards the future I will someday have to. Until it becomes much easier to do so, I don't foresee smaller SPs like myself allocating the necessary resources to an IRR project until we're forced to because of our individual growth or an idiot PCCW-type event that generates lots of bad PR. There are lots of leaks all the time, as can be evidenced by the leak detection stuff I set up here: http://puck.nether.net/bgp/leakinfo.cgi Crossing fingers hoping I'm not on that list... Justin
Re: Ready to get your federal computer license?
Steven M. Bellovin wrote: On Sun, 30 Aug 2009 19:46:19 -0400 (EDT) Sean Donelan wrote: On Sun, 30 Aug 2009, Jeff Young wrote: The more troubling parts of this bill had to do with the President, at his discretion, classifying parts of public networks as "critical infrastructure" and so on. Whatever your opinion, get involved. Let your representatives know about your better ideas. I strongly second this. To quote a bumper sticker/slogan I've seen, "if you didn't vote, you shouldn't complain". "Democracy is not a spectator's sport" Justin Shore
Re: Network Ring
Rod Beck wrote: What is EAPS? A joke of a "standard" and something to be avoided at all costs. I would echo the last part about Extreme switches too. Justin
Re: Repeated Blacklisting / IP reputation
Jason Bertoch wrote: Suresh Ramasubramanian wrote: That said most of the larger players already attend MAAWG - that leaves rural ISPs, small universities, corporate mailservers etc etc that dont have full time postmasters, and where you're more likely to run into this issue. I've found the opposite to hold true more often. Smaller organizations can use public blacklists for free, due to their low volume, and so have little incentive to run their own local blacklist. I've typically seen the larger organizations run their own blacklists and are much more difficult to contact for removal. Take for example GoDaddy's hosted email service. They are using a local, outdated copy of SORBS that has one of my personal servers listed in it. It was an open proxy for about a week nearly 3 years ago and still they have it listed. The upside is that I've demonstrated GoDaddy's email incompetence to potential customers and gotten them to switch to our own mail services. Their loss, my gain. Justin
Re: Repeated Blacklisting / IP reputation
Wayne E. Bouchard wrote: Best practices for the public or subscription RBLs should be to place a TTL on the entry of no more than, say, 90 days or thereabouts. Best practices for manual entry should be to either keep a list of what and when or periodically to simply blow the whole list away and start anew to get rid of stale entries. Of course, that is probably an unreal expectation. I've had to implement something similar for my RTBH trigger router. After manually adding nearly 20,000 static routes of hosts that scanned for open proxies or attacked SSH daemons on my network I had to trim the block list considerably because many of my older PEs couldn't handle that many routes without problems. I already named each static with a reason for the block (SSH, Telnet, Proxy-scan, etc) but ended up prepending a date to that string as well: 20090908-SSH-Scan. That way I can parse the config later on and create config to negate everything that's older than 3-4 months. If one of those old IPs is still trying to get to me after 4 months then it will get re-added the next time I process my log entries. If they aren't trying to hit me then they'll no longer be consuming space in my RIB. Justin
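To illustrate (the addresses here are documentation-range examples, not real entries), the trigger router entries end up looking like this:
ip route 198.51.100.23 255.255.255.255 Null0 name 20090908-SSH-Scan
ip route 203.0.113.87 255.255.255.255 Null0 name 20091002-Proxy-Scan
The cleanup pass then just greps the archived config for names with a date older than the cutoff and emits the matching "no ip route ..." lines back at the router.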
Re: Network Ring
sth...@nethelp.no wrote: Rod Beck wrote: What is EAPS? A joke of a "standard" and something to be avoided at all costs. I would echo the last part about Extreme switches too. Disagree. I don't believe anybody would claim EAPS is a "standard" just because an RFC has been published. Pannaway does. That was one of the very arguments I used against their product when they were brought in. They claimed that it was a standard because it had an RFC. I tried to explain the difference between an Informational RFC and a Standards Track RFC to no avail. Of course this also came from the Pannaway SE that gave me 3 quotes I repeat as often as possible to as many people as possible. He said: 1) that we didn't need to run an IGP across our network because we weren't big enough to need one. This was in response to my query about their lack of support for IS-IS. He said that he'd seen SP networks many times our size get by perfectly well with static routes. 2) that we didn't need QoS on our network if our links weren't saturated. I won't get into the holy war over serialization delay, micro bursts, and queuing here. It's been hashed out many times before on NANOG I'm sure. 3) that IPv6 was just a fad and that it would never be implemented in the US. I got our /32 in 2008 and am working on the deployment now. I'm certainly not breaking new ground here either. It may not be the most common thing in the US but it is picking up steam for everyone not running Pannaway products since they don't support IPv6 (the BASs and BARs that we ended up buying at least). As for Extreme switches - they have their strengths and weaknesses, just like any other product. We use lots of Summit X450/X450a, for L2 only, and have been generally reasonably happy with them. If I could buy a similarly featured product from Cisco, for a similar price, I might well choose Cisco. But at least in our case Cisco *doesn't* have a competitive product (case in point: ME3400 - too few ports, too few MAC addresses, funky licensing even if you just want to do simple QinQ). I don't have any experience with the ME3400 unfortunately. A mix of vendors isn't a bad thing if you have the knowledge, depth and time to keep up with each of them so you can support the device adequately (adequate staffing is involved here too). When one buys a budget switch just to save a few bucks they tend to get what they paid for and none of what they didn't (training, experience for their staff, printed third-party references, reliable online support groups for example). I'm in a situation right now where a vendor has proposed a basic L2 switch solution to redundantly connect 2 of our sites. They come in cheaper than the Cisco equivalent (4 4948-10GEs) but we also have absolutely no experience with that vendor. That means interop testing, future finger pointing in the heat of an outage, double training staff, inevitable config errors and typos thanks to the differences between the vendor we're used to and the one that is being proposed for this one-off connection. The better fool-proof solution costs a bit more and I have to convince management not to save a short-term buck in a way that costs many long-term bucks. Sometimes you really do get what you pay for. Justin
Re: Repeated Blacklisting / IP reputation
Jay Hennigan wrote: By the way, among the members... Experian CheetahMail ExactTarget, Inc Responsys, Inc. Vertical Response, Inc Yesmail Have you been reading from my blacklist again, Jay? Justin
Re: Repeated Blacklisting / IP reputation
Frank Bulk wrote: With scarcity of IPv4 addresses, organizations are more desperate than ever to receive an allocation. If anything, there's more of a disincentive than ever before for ARIN to spend time on netblock sanitization. I do think that ARIN should inform the new netblock owner if it was previously owned or not. But if ARIN tried to start cleaning up a netblock before releasing it, there would be no end to it. How could they check against the probably hundreds of thousands of private blocklists? They could implement a process by which they announce to a mailing list of DNSBL providers that a given assignment has been returned to the RIR and that it should be cleansed from all DNSBLs. At this point the RIR has done their due diligence for notifying the blacklist community of the change and the onus is on the DNSBL maintainers to update their records. Of course this does nothing to cleanse the assignment in the hundreds of thousands of MTAs around the world. However this could be a good reason to not blacklist locally (or indefinitely at least) and to instead rely on a DNSBL maintained by people responsible for wiping returned assignments from their records when RIRs give the word. I suppose the mailing list could even be expanded to include mailing list admins if need be so that they could also receive the info and wipe their own internal DNSBLs. The list should be an announcement-only list with only the RIRs being able to post to it in a common and defined format. The announcement should be made as soon as the assignment is returned to the RIR, allowing a cool-off period for personal blacklists to catch up to the official ones. I would think that would be a fairly simple process to implement. It's not fool-proof by any means but it's better than doing nothing. It's a thought. Justin
Re: Repeated Blacklisting / IP reputation
Martin Hannigan wrote: Well, I haven't even had coffee yet and... Get the removals: curl -ls http://lists.arin.net/pipermail/arin-issued/2009-September/000270.html | grep Remove | grep -v "" Get the additions: mahannig$ curl -ls http://lists.arin.net/pipermail/arin-issued/2009-September/000270.html | grep Add | grep -v "" That appears to be it. I've also been told that there is an RSS feed of the same thing. My understanding is that a posting is made to the mailing list or RSS feed when a new subnet is assigned. I'd like to see them do something when the assignment is first returned to ARIN, not months later when the assignment is ready to be handed out again. I think the extra time would help those people that download copies of the DNSBL zone files and manually import them once a week or less often. Lots of places still use the zone files. Personally I prefer to do so too, rather than tie my mail system reliability to an outside source that may or may not tell me when they have problems that affect my service. GoDaddy and their hosted mail service would be a great example since they can't be bothered to update their DNSBL zone files. Their mail admins are using a copy of SORBS that is 3 years old. 3 damn years old. How do I know this? 3 years ago a mistake in a Squid configuration turned one of my services into an open proxy for about a week. Even today mail from that server to a domain with mail hosted at GoDaddy results in a bounce citing the ancient SORBS listing as the reason. Thanks for the pointer. Looks like they've already thought of what I suggested and implemented a solution. I still vote for announcing returned assignments instead of announcing when an old assignment gets reassigned. Thanks Justin
Re: Dutch ISPs to collaborate and take responsibility for botted clients
Gadi Evron wrote: Apparently, marketing departments like the idea of being able to send customers that need to pay them to a walled garden. It also saves on tech support costs. Security being the main winner isn't the main supporter of the idea at some places. I would love to do this both for non-pays and security incidents. I'd like to do something similar to let customers update their provisioning information for static IP changes so cable source verify doesn't freak out. Unfortunately I haven't been able to find any open source tools to do this. I can't even think of commercial ones off the top of my head. It's a relatively simple concept. Some measure of integration into the DHCP provisioning system(s) would be needed to properly route the customer's traffic to the walled garden and only to the walled garden. Once the problem is resolved the walled garden fixes the DHCP so the customer can once again pull a public IP and possibly flushes ARP caches if your access medium makes that a problem to be dealt with. I would think that the walled garden portion could be handled well-enough with Squid and some custom web programming to perform tasks to reverse the provisioning issues. I'm sure people have written internal solutions for SPs before but I haven't found anyone that has made that into an OSS project and put it on the Web. I'd love to make this a project but there is little financial gain to my small SP so if it costs much money it won't get management support. Justin
Re: Does Internet Speed Vary by Season?
Hank Nussbacher wrote: http://www.wired.com/gadgets/miscellaneous/magazine/17-10/ts_burningquestion It's an interesting theory, that temperature affects overall throughput. Their assumptions on other conditions that affect bandwidth consumption are off IMHO. Our own data directly refutes what Wired reported in this article. The summer months are our most heavily utilized on our network, on average. For SPs heavily laden with residential subs I think this is probably the norm. Then school starts and you have a pronounced drop in traffic (that includes a major dip when college begins and again when primary school begins). The rates slowly increase back to their summertime highs until the holiday season begins, where they either remain steady or taper off slightly. The theory here is that the high-bandwidth users are too busy with holiday affairs to play games, download music/porn, etc. That is until after X-mas when consumption suddenly spikes in a very pronounced way (new computers for X-mas). This also corresponds to our biggest month for new service turnups and speed increases in our bundles. Late winter varies from fairly constant to slight growth. Our single biggest days are the ones preceding a major winter storm, or if the storm doesn't cut power to large swaths of our service area then the days in the middle of the winter storms come out on top. Spring growth depends on the weather. Good weather means less consumption for us. Bad weather means more consumption. Our least busy month is May when the kids are the most busy. June and July again show a major turnaround. Bandwidth consumption is directly tied to your user demographics. If your SP is primarily business circuits then your traffic patterns will vary wildly from those of an SP with primarily residential circuits. Every SP is a little bit different. That's why some SPs set personal records for bandwidth consumption when Michael Jackson's memorial service was broadcast (including SPs less than an hour away from me) and other SPs (mine for example) didn't have a single user stream the broadcast and otherwise had a normal bandwidth day. Other than Wired making an assumption that all SPs have nearly identical traffic patterns, the article is otherwise ok. Justin
Re: ISP customer assignments
Dan White wrote: How are other providers approaching dial-up? I would presume we are in the same boat as a lot of other folks - we have aging dial-up equipment that does not support IPv6 (3com Total Control). Our customer base has dropped quite a bit, and we have even kicked around the idea of dropping that service and forcing customers to purchase broadband service or go elsewhere. What are other providers doing, or considering doing? I'd like to beat this dead horse some more if you gents don't mind. I think there's still some life left in the beast before we haul it off to the glue factory. I'm actually taking an IPv6 class right now and the topic of customer assignments came up today (day 1). The instructor was suggesting dynamically allocating /127s to residential customers. I relayed the gist of this thread to him (/48, /56 and /64). I expect to dive deeper into this in the following days in the class. What are providers doing for residential customers and how does it differ from business customers? At what point are you assigning bigger blocks? To go along with Dan's query from above, what are the preferred methods that other SPs are using to deploy IPv6 with non-IPv6-capable edge hardware? We too have a very limited number of dialup customers and will never sink another dollar into the product. Unfortunately I also have brand-new ADSL2+ hardware that doesn't support IPv6 and according to the vendors (Pannaway) never will. We also have CMTSs that don't support IPv6, even though they too are brand-new. Those CMTSs top out at DOCSIS 2.0 and the vendor decided not to allow IPv6 to the CPEs regardless of the underlying CM's IPv6 support or lack thereof (like Cisco allowed for example). Are providers implementing tunneling solutions? Pros/cons of the various solutions? Thanks Justin
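For what it's worth, the tunnel approach on the Cisco side is simple enough. Something along these lines is the general shape of it (addresses are documentation-space placeholders and the far end is whatever your upstream or tunnel broker hands you):
ipv6 unicast-routing
!
interface Tunnel0
 description IPv6-in-IPv4 tunnel to upstream/broker
 ipv6 address 2001:DB8:FFFF::2/64
 tunnel source Loopback0
 tunnel destination 192.0.2.1
 tunnel mode ipv6ip
!
ipv6 route ::/0 Tunnel0
The obvious downsides are the MTU hit and the fact that your v6 is only as reliable as the v4 path to the tunnel endpoint.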
Re: ISP customer assignments
Doug Barton wrote: Out of curiosity who is conducting this class and what was their rationale for using /127s? It's a GK class. The instructor seems to be fairly knowledgeable and has a lengthy history consulting on and deploying IPv6. The class seems to be geared much more towards enterprise though instead of service providers. That's very unfortunate considering that every one of the 15 people in this class either works for or contracts to a SP. Still the instructor has some SP knowledge so he fills in the blanks when possible. We're all thinking like SPs though and we ask the SP-oriented questions which usually helps us steer the course the direction we want. I wish GK and other training companies would start offering classes geared towards SPs. They could easily take the existing courseware, add a couple days at the end and interject useful SP info into the earlier days. All that extra info could be specifically aimed at SPs. He didn't really give much of a reason for the /127s yet. I think it's coming up in a later session. I think it basically boiled down to whether or not the customer would actually use anything bigger. I'll write back when we get into that discussion. Justin
Re: ISP customer assignments
George Michaelson wrote: As a point of view on this, a member of staff from APNIC was doing a Masters of IT in the last 3-4 years, and had classful A/B/C addressing taught to her in the networks unit. She found it quite a struggle to convince the lecturer that reality had moved on and they had no idea about CIDR. I'm ok with teaching it to beginners to explain where we came from but that should be it. It should be made excruciatingly clear in the training that it's no longer done that way because we found a MUCH better way of doing things. That said I still occasionally refer to networks in classful terms and I can think of several network engineers who have years of enterprise experience that still don't understand CIDR. Justin
Re: ISP customer assignments
Dan White wrote: I don't recall if Pannaway is a layer 3 or layer 2 DSLAM, but we have a mix of Calix C7 (ATM) and Calix E5 (Ethernet) gear in our network. We're kinda in the same boat, but we expect to be able to gracefully transition to dual stacked IPv4/IPv6 without having to replace DSL modems, by reconfiguring the modems into bridged mode and leaving the layer 3 up to the customer's router. We're also in the process of budgeting for a new broadband aggregation router next year that will handle IPv6. Ask Pannaway if they can bridge traffic (either ATM PVC, or Ethernet QinQ/VLAN per subscriber) up to a broadband aggregator, like a Redback or Cisco. In Pannaway's case they want to pretend to be in the router business and we ended up buying their BARs. Their DSLAMs (BASs) are aggregated into their BARs and the BAR ring terminates on my Cisco core. I would love to eliminate the BARs from the equation but that's not an option. I've been told by several (dozen) people that their Minnesota Pannaway office stopped selling the BARs and instead worked with Cisco for aggregation. I've also been told by Cisco folks that numerous sites are trying to get Cisco to take the Pannaway BARs in on trade-in. It would appear that no one likes the BARs. Occam did it right. They didn't try to pretend to be in the router business. They stuck with L2. We're also in a bit of a pickle because we're using "smart" DSL modems/routers. I've argued for years for dumb DSL modems that had no routing/NAT capabilities at all. Unfortunately I didn't win that argument. Now we have what amount to CPEs that do not support IPv6. Whether they'll even pass IPv6 packets in a bridging mode is yet to be determined. Since some of the modems are Pannaway, and given my experience trying to bridge things over Pannaway gear without it messing with the traffic in the middle (a VLAN carrying IS-IS, for example), I fully expect problems... It's no secret; I do not recommend Pannaway products based on my firsthand experience. Their SE actually told us 2 years ago that IPv6 was a fad and would never be adopted. Justin
Re: ISP customer assignments
Dan White wrote: Occam did it partially right. They're half-bridging only - not true layer 2 to an aggregator (which is not necessary in their scenario). The problem with the access vendor doing half-bridging is that they have to be very layer-3 smart, and Occam was not quite there for IPv6 last time I worked with them (about 6 months ago). When we did an RFP with them they didn't support v6 yet but they also wouldn't get in the way of passing v6 over them (minus the DHCP snooping/learning features of course). That was 2 years ago. I haven't looked at them since but they said that they'd work on it. I haven't really been happy with any DSL vendor's response to my questions about IPv6. We happened to choose Calix, which is not particularly IPv6 friendly, but were successful in getting commitments from them to support IPv6 pass through. None of the FTTH vendors we vetted supported v6 but at least a few said that they'd work on it. Pannaway's response though was priceless. I have little doubt that Pannaway could implement IPv6, they just need to get enough demand from customers to make it worth their while. Pannaway was bought a while back by Enablence. Hopefully they will drive a bit more clue into the products. Hopefully that SE isn't there anymore or if he is hopefully he's not driving product development. His other 2 answers, that we didn't need QoS because our links weren't sustaining saturation (microbursts, anyone?) and that we didn't need an IGP because our network wasn't big enough and static routing would do (for just shy of 100 routing devices in 3 POPs), were the icing on the cake. Unfortunately the decision was made to eat the cake anyway. Justin
Re: DreamHost admin contacts
Andy Ringsmuth wrote: Barring that, what recommendations might the NANOG community have for an extremely rock-solid e-mail hosting company? I realize that may mean self-promotion, but hey, bring it on. I would strongly recommend against GoDaddy's hosted email. See my earlier post on 9/8 about their idiotic use of ancient SORBS data. Justin
Webcasts of NANOG47
Does anyone know if there will be video streams of the events from rooms other than what's in the Grand room? For example I would like to see the ISP Security Track BOF or the one tomorrow on Peering. I don't see a way to select those specific feeds though. Thanks Justin
Re: Webcasts of NANOG47
I'd love to! However it's sometimes hard enough getting funds allocated for training events. Industry meetings aren't the easiest thing to convince people to let you attend... :-( Justin Warren Bailey wrote: Or just fly out ;) lol - Original Message - From: Leigh Porter To: Justin Shore ; NANOG Sent: Mon Oct 19 14:06:17 2009 Subject: RE: Webcasts of NANOG47 Hey, I don't know for sure but I think only the Grand Room is televised. Get somebody there with a webcam to do ustream.tv or livestream.com or whatever ;-)
Re: ISP port blocking practice
Zhiyun Qian wrote: Hi all, What is the common practice for enforcing port blocking policy (or what is the common practice for you and your ISP)? More specifically, when ISPs try to block certain outgoing port (port 25 for instance), they could do two rules: 1). For any outgoing traffic, if the destination port is 25, then drop the packets. 2). For any incoming traffic, if the source port is 25, then drop the packets. I block on both generally. I block inbound and outbound for residential customers in dynamic pools. I block inbound only for residential with statically-assigned IPs. That way a customer can request (and pay for) a static IP and be able to get around our outbound SMTP block. Few companies use the MSP port (tcp/587). I'm not sure why more don't make the effort but most don't. To make up for that we allow static residential customers to evade that filter with a static IP. We still block inbound though. We also allow them to use our SMTP servers and SmartHosts if they want with no requirements on source domains (like some providers require, essentially requiring the customer to advertise for you). The inbound block isn't really all that useful as you allude to. However I use it more often than not to look for people scanning our ranges for open relays. I use that data to feed my RTBH trigger router and drop the spammer's traffic on the floor (or that of the poor, unfortunate owner of the compromised PC that's been 0wned). We block several other things too. Netbios traffic gets dropped both ways. MS-SQL traffic gets dropped both ways (a few users have complained about this but very few stick to their guns when you point out that their traffic is traversing the web completely unencrypted). I block default and common proxy ports such as 3128, 7212, and 6588 in both directions. Squid is too easy to misconfigure (done it myself). GhostSurf and WinGate have both been abusable as open proxies in various releases. I also block 8000, 8080 and 8081 towards the customers. These are some of our most commonly scanned ports (usually all 3 at once plus some or all of the 80xx ones). I've encountered many compromised residential CPEs that the users either enabled themselves or were enabled by default. I don't block those 3 ports on outbound flows though; too many false positives. And finally we also block several different types of ICMPs. First off we block ICMP fragments. Then we permit echo, echo-reply, packet-too-big, and time-exceeded. The rest get explicitly dropped. IPv6 will change this list dramatically. I haven't had time to research ICMPv6 thoroughly enough to say any more than that. Basically I just pick out some of the really bad ports and block them. This gives me a wealth of data with which to null-route compromised PCs scanning my networks. Also, is it common that the rules are based on tcp flags (e.g. SYN, SYN-ACK)? One would think blocking SYN packets is good enough. I don't get that deep into it. Forged packets of types other than SYN can still wreak havoc on existing flows. I think it's better to block all and move on. Justin
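For the curious, a stripped-down sketch of the dynamic-pool residential filter looks roughly like this (abbreviated, and obviously not our literal ACL):
ip access-list extended RES-DYNAMIC
 deny   tcp any any eq smtp
 deny   tcp any any range 135 139
 deny   udp any any range 135 139
 deny   tcp any any eq 445
 deny   tcp any any eq 1433
 deny   udp any any eq 1434
 deny   tcp any any eq 3128
 deny   tcp any any eq 7212
 deny   tcp any any eq 6588
 deny   icmp any any fragments
 permit icmp any any echo
 permit icmp any any echo-reply
 permit icmp any any packet-too-big
 permit icmp any any time-exceeded
 deny   icmp any any
 permit ip any any
A near mirror image (plus the 8000/8080/8081 entries) applies in the customer-facing direction.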
Re: ISP port blocking practice
Zhiyun Qian wrote: 1). For any outgoing traffic, if the destination port is 25, then drop the packets. 2). For any incoming traffic, if the source port is 25, then drop the packets. It's been pointed out that I glossed over the wording of #2, specifically missing the "source port" part of it, thus giving the right answer to the wrong question. :-) To answer your question, all our tcp/25 filters are based on destination port. I could use source port but really I'm more concerned with my customers not running SMTP servers in one direction and them not being able to send spam in the other. Using source port needlessly complicates those goals IMHO. Someone might have a specific reason to use it but I don't in my case at least. Justin
Re: ISP port blocking practice
Lyndon Nerenberg (VE6BBM/VE7TFX) wrote: Few companies use the MSP port (tcp/587). Can you elaborate. Is this based on analysis you've conducted on your own network? And if so, is the data (anonymized) available for the rest of us to look at? My experience is that port 587 isn't used because ISPs block it out-of-hand. Or in the case of Rogers in (at least) Vancouver, hijack it with a proxy that filters out the AUTH parts of the EHLO response, making the whole point of using the submission service ... pointless. I can't speak for Rogers but I have analyzed our netflow captures on a semi-regular basis for several things before flushing them, use of the MSP port being one of them. I've never seen any MSP port traffic on my SP network that didn't fall into 1 of 2 categories: 1) inbound scanning for MTAs listening on the MSP port, or 2) my own MSP traffic, or that of family members who happen to use one of my personal servers for their own email hosting, running across my SP network. I can also speak from experience with the enterprise customers I worked with on the consulting side of my SP before returning to the SP side. Not a one of them made use of the MSP port. The vast majority, I'm sorry to say, used Microsoft Exchange which to the best of my knowledge doesn't support RFC2476. I did a little Googling just now and couldn't find any hits to say they did either. Some utilized RPC-over-HTTP. Most at the time didn't, requiring direct SMTP access or VPN. I wish more people would use it though. My users wouldn't have cause to get so upset when I tell them that they have to pay monthly for a static IP to use tcp/25. It would reduce my hassles a wee bit. Justin
Re: ISP port blocking practice
Joe Maimon wrote: You can configure exchange to use additional smtp virtual servers and bind them to specific ports. You can also require authentication to access the ports and you can restrict it to users. You can also enable it for STARTTLS. That I did not know. Last time I'd looked there wasn't a decent workaround unless you wanted to run a 2nd Exchange server in a cluster of sorts on a 2nd box and change its default port to 587. Then let Exchange clustering move the mail around on the back end. This is good to know. I have many a time recommended consulting customers to follow up with their mail provider to see if they have any plans to support the RFC standard, but I don't share much enthusiasm for complete adoption. I do believe it is getting better. I'm sorry to say that the larger SP that we outsourced our customer mail service to doesn't support MSP. They don't support much of anything outside of the very basics. They require SMTP AUTH but until relatively recently they didn't support any AUTH options other than plaintext (I was actually shocked just now when I doublechecked because I have looked before). No, I'm not kidding. They do rDNS checks on every IP listed in a Received line. They also do DNSBL DUL checks on all IPs on the Received lines (dumb because of course the first one will match if the SP has their customer dynamic pools listed in a DUL-type list). Things will change on their end and the way we find out is because of user complaints. The decision to switch to them wasn't a technical one I'm afraid. If you're an Internet *Service* Provider you should probably provide your own services. Justin
Re: ISP port blocking practice
Owen DeLong wrote: Blocking ports that the end user has not asked for is bad. I was going to ask for a clarification to make sure I read your statement correctly but then again it's short enough I really don't see any room to misinterpret it. Do you seriously think that a typical residential user has the required level of knowledge to call their SP and ask for them to block tcp/25, tcp & udp/1433 and 1434, and a whole list of common open proxy ports? While they're at it they might ask the SP to block the C&C ports for Bobax and Kraken. I'm sure all residential users know that they use ports 447 and 13789. If so then send me some of your users. You must be serving users around the MIT campus. Doing it and refusing to unblock is worse. How do you propose we pull a customer's dynamically-assigned IP out of a DHCP pool so we can treat it differently? Not all SPs use customer-facing AUTH. I can think of none that do for CATV though I'm sure someone will now point out an oddball SP that I've never heard of before. Some ISPs have the even worse practice of blocking 587 and a few even go to the horrible length to block 465. I would call that a very bad practice. I haven't personally seen a mis-configured MTA listening on the MSP port so I don't think they can make the claim that the MSP port is a common security risk. I would call tcp/587 a very safe port to have traverse my network. I think those ISPs are either demonstrating willful ignorance or marketing malice. A few hotel gateways I have encountered are dumb enough to think they can block TCP/53 which is always fun. The hotel I stayed in 2 weeks ago that housed a GK class I took had just such a proxy. It screwed up DNS but even worse it completely hosed anything trying to tunnel over HTTP. OCS was dead in the water. My RPC-over-HTTP Outlook client couldn't work either. Fortunately they didn't mess with IPSec VPN or SSH. Either way it didn't matter much since the network was unusable (12 visible APs from my room, all on overlapping 802.11b/g channels). The average throughput was .02Mbps. Lovely for you, but, not particularly helpful to your customers who may actually want to use some of those services. I take a hard line on this. I will not let the technical ignorance of the average residential user harm my other customers. There is absolutely no excuse for using Netbios or MS-SQL over the Internet outside of an encrypted tunnel. Any user smart enough to use a proxy is smart enough to pick a non-default port. Any residential user running a proxy server locally is in violation of our AUP anyway and will get warned and then terminated. My filtering helps 99.99% of my userbase. The .001% that find this basic security filter intolerable can speak with their wallets. They can find themselves another provider if they want to use those ports or pay for a business circuit where we filter very little on the assumption that they as a business have the technical competence to handle basic security on their own. (The actual percentage of users that have raised concerns in the past 3 years is .0008%. I spoke with each of them and none decided to leave our service.) We've been down the road of no customer-facing ingress ACLs. We've fought the battles of getting large swaths of IPs blacklisted because of a few users' technical incompetence. We've had large portions of our network null-routed in large SPs.
Then we got our act together and stopped acting like those ISPs who we all love to bitch about, that do not manage their customer traffic, and are poor netizens of this shared resource we call the Internet. Our problems have all but gone away. Our residential and business users no longer call in on a daily basis to report blacklisting problems. We no longer have reachability issues with networks that got fed up with the abuse coming from our compromised users and null-routed us. I stand by our results as proof that what we're doing is right. Our customers seem to agree and that's what matters. Justin
Re: ISP port blocking practice
Dan White wrote: On 23/10/09 17:58 -0400, James R. Cutler wrote: Blocking the well known port 25 does not block sending of mail. Or the message content. It does block incoming SMTP traffic on that well known port. Then the customer should have bought a class of service that permits servers. I think the relevant neutrality principle is that traffic is not blocked by content. My personal definition doesn't quite gel with that. You're deciding for the customer how they can use their connection, before you have any evidence of nefarious activity. They decided for themselves when they bought a residential connection instead of a business circuit. Just because someone bought themselves a Camry doesn't mean that Toyota is deciding for them that they can't haul 1000lbs of concrete with it. The customer did when they decided to buy a car and not a pickup. Would you consider restricting a customer's outgoing port 25 traffic to a specific mail server a step over the net neutrality line? I do this all the time. For example I don't let my customers send or receive mail (or any traffic for that matter) from prefixes originating from AS32311 (Colorado spammer Scott Richter). Now if I was blocking mail to dnc.org, gop.com, greenpeace.org, etc or restricting Vonage to .05% of my bandwidth then yeah that would violate net neutrality principles. The difference is one stifles speech and is anti-competitive. The other mitigates a network security and stability risk. I see this same argument on Slashdot all too often. It's usually bundled with an argument against providers doing any sort of traffic aggregation ("if I buy 1.5Mbps then it should be a dedicated pipe straight to the Internet!") Unfortunately that's simply not reality. You can either live with a small level of controls on your traffic for the sake of stability and security or you can have wide-open ISPs with no security prohibitions whatsoever. The support costs for the ISPs go through the roof and of course that gets passed onto the customer. Your 5 9s SLA gets replaced with "use it while you can before it goes down again". Everyone pays a penalty for having a digital Wild West. Not to start another thread on a completely OT topic but the same concept can be applied to other things like health care. Either everyone can pay a little bit for all to have good service or many average consumers can pay lots to make up the losses for those that can't pay at all. Justin
Re: dealing with bogon spam ?
Michiel Klaver wrote: I would suggest to report that netblock to SpamHaus to have it included at their DROP list, and also use that DROP list as extra filter in addition to your bogon filter setup at your border routers. The SpamHaus DROP (Don't Route Or Peer) list was specially designed for this kind of abuse of stolen 'hijacked' netblocks and netblocks controlled entirely by professional spammers. As a brief off-shoot of the original topic, has anyone scripted the use of Spamhaus's DROP list in an RTBH, ACLs, null-routes, etc? I'm not asking if people think it's safe; that's up to the network wanting to deploy it. I'm wondering if anyone has any scripts for pulling down the DROP list, parsing it into whatever you need (static routes on an RTBH trigger router or ACLs on a border router) and then deploying the config change(s). I don't want to reinvent the wheel if someone else has already done this. Thanks Justin
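To be clear about what I'm after: fetching the list is the trivial part (last I looked it's a flat text file of CIDR prefixes with ';' comment lines and an SBL reference per entry), and each entry would just become another named null-route on the trigger router, something like:
ip route 192.0.2.0 255.255.255.0 Null0 name 20091115-Spamhaus-DROP
It's the glue (scheduled pull, diffing against the previous copy, generating the adds and removes, and pushing them to the router) that I was hoping someone had already written.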
Re: Go daddy mail services admin
Jeff Kinz wrote: Based on their long term refusal to adjust their policy to conform to PBL intended usage of the list I suspect this issue cannot be corrected. The only answer I have found is to inform the affected people they have to move from GoDaddy to a company that does a better job to correct the problem. GoDaddy is about as worthless a mail provider as it gets. I can't count the number of times I've had customers get themselves blacklisted by GoDaddy and not be able to get unlisted. Finding a contact number for them used to be damn near impossible. Finding a competent mail admin on the other end actually was impossible. My own company got blacklisted by GoDaddy a little over a year ago. A user with an infected laptop relayed infected email out through the corporate firewall's NAT pool (no longer blindly permitted). GoDaddy's response? The entire /24 used by our corporate firewall was blacklisted intermittently for about 6 months. Our recommendation to our clients and our SP customers is to not use GoDaddy's mail services. Pick a mail provider that's known for being responsive. Justin
Re: Go daddy mail services admin
Raymond Corbin wrote: Yeah they usually simply do /24 blocks. From what I remember in the blacklist 550 response it says a removal link? Something like http://unblock.secureserver.net/?ip=x.x.x.x right? I believe that's correct. It's a shame it doesn't accomplish anything (or it never has for me before). I always had to dig until I found a number for them to call and complain. Even then I only succeeded 1 out of every 10 tries or so. Justin
Re: Sprint / Cogent
Nick Hilliard wrote: And they'll do it to others in future peering spats. It's just a bullying tactic - entertaining if you're on the sideline; irritating if you're Sprint. Cogent reminds me of Ethan Coen's poem, which starts: The loudest has the final say, The wanton win, the rash hold sway, The realist's rules of order say The drunken driver has the right of way. So why do SPs keep depeering Cogent? Serious question, why? I'm not aware of any Intercage-like issues with them. I've actually considered them as a potential upstream when we expand into a market they serve. Justin
Re: McColo: Are the 'Lights On" at Telia?
If we all dropped routes from 26780 at the edge, I wonder how long it would be before their prefixes popped up somewhere else. Justin Paul Ferguson wrote: On Sat, Nov 15, 2008 at 7:22 PM, Paul Ferguson <[EMAIL PROTECTED]> wrote: If they are, then I sure wish that someone would explain reconnecting McColo: http://www.cidr-report.org/cgi-bin/as-report?as=as26780 With all of the evidence of criminal activity there, I would like to assume that this is just a case of ignorance. - ferg Apparently, the badness is still located in the same SJ Data Center:
%traceroute 208.66.194.22
Tracing route to 208.66.194.22 over a maximum of 30 hops
  1     2 ms     5 ms     1 ms  208.66.194.22
[snip]
  6    14 ms    14 ms    13 ms  xe-11-1-0.edge1.sanjose1.level3.net [4.79.43.133]
  7    14 ms    16 ms    16 ms  vlan69.csw1.sanjose1.level3.net [4.68.18.62]
  8    21 ms    36 ms    37 ms  ae-63-63.ebr3.sanjose1.level3.net [4.69.134.225]
  9    24 ms    21 ms    33 ms  ae-2.ebr3.losangeles1.level3.net [4.69.132.10]
 10    36 ms    21 ms    31 ms  ae-73-73.csw2.losangeles1.level3.net [4.69.137.38]
 11     *       22 ms    27 ms  ae-23-79.car3.losangeles1.level3.net [4.68.20.69]
 12    27 ms    26 ms    41 ms  telia-level3-ge.losangeles1.level3.net [4.68.110.222]
 13    35 ms    39 ms    35 ms  sjo-bb1-link.telia.net [213.248.80.18]
 14    35 ms    36 ms    36 ms  giglinx-ic-122068-sjo-bb1.c.telia.net [213.248.84.210]
 15    35 ms    35 ms    35 ms  vl-701.rt02.sjc.mccolo.com [208.66.192.26]
 16     *        *        *     Request timed out.
 17     *
^C
FYI, - ferg
Managing CE eBGP details & common/accepted CE-facing BGP practices
Does anyone have any preferred ways to manage their customer-facing BGP details? I'm thinking about the customer's ASN (SP-assigned private ASN or RIR-assigned ASN), permitted prefixes, etc. While I'm sure this could be easily stored in a spreadsheet I'm not sure if there is any merit to storing some of these details outside of the configuration on the PE (assuming of course that the PE's config is regularly archived). Now if the PE's BGP config was auto-generated via a script then it would make sense for all the details to be stored off in a DB in the NOC. Beyond that, is there a good reason to archive it in a textual format off of the PE, and if there is a sound reason to do it, is there a good or preferred way to accomplish this? We're moving beyond our typical residential and very small SMB service to larger customers over the next few months. These areas have larger, more advanced customers and I'm sure we'll run into multi-homed environments and customers who will expect BGP peering options. I would like to be prepared with sound practices before we get our first customer that wants to get a default route via BGP, wants full tables, or has their own ASN and is bringing their own PI space with them. Some of this of course implies multiple processes to confirm that the ASN belongs to the customer in question, that the PI space belongs to the customer in question, notifying our upstreams to accept the customer's PI space, etc. It's hammering out the scalable and best practice config details that I'm concerned with at the moment. When assigning private ASNs to customers, are there any gotchas to be aware of? Is it possible to use the same private ASN for more than one customer on the same PE? What are common and accepted CE-facing BGP practices? MD5 AUTH, GTSM, max prefix limits? Which is preferred, route-maps or prefix-lists for controlling advertised and/or received routes? Do any SPs utilize AS-path ACLs to check that prefixes received from a customer actually claim to originate from that customer's ASN? Are there any SPs out there offering BFD support for BGP on CE-facing peering sessions? Should we have the customer announce their PA space to us or do we advertise it for them (redist a static)? Do SPs restrict access to tcp/179 on the CE from the Internet in the CE-facing ACL? Do SPs block access to the PE-CE subnet from the outside world like what was described in the Router Security Strategies book (pages 189-193)? What about dropping incoming traffic to everything but the CE IP? While I don't predict our CE-facing BGP load to be terribly significant at this point, I would like to establish sound practices now rather than down the road once we're neck deep in temporary production workarounds. Is there any consensus on what's best practice for CE-facing BGP? I imagine most SP engineers' BGP practices could be better equated to a religious holy war on par with Chevy vs Ford or Mac vs PC. I would be interested in hearing what they are though and learning from the group's expertise. Thanks Justin
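To make the question concrete, the sort of per-neighbor skeleton I have in mind looks roughly like this (the ASNs, addresses and list names are placeholders, and the knobs shown are exactly the ones I'm asking about):
router bgp 64500
 neighbor 192.0.2.2 remote-as 65001
 neighbor 192.0.2.2 description CUST-EXAMPLE
 neighbor 192.0.2.2 password <md5-secret>
 neighbor 192.0.2.2 ttl-security hops 1
 neighbor 192.0.2.2 remove-private-as
 neighbor 192.0.2.2 prefix-list CUST-EXAMPLE-IN in
 neighbor 192.0.2.2 prefix-list DEFAULT-ONLY out
 neighbor 192.0.2.2 maximum-prefix 10 restart 30
 neighbor 192.0.2.2 default-originate
!
ip prefix-list CUST-EXAMPLE-IN seq 5 permit 198.51.100.0/24
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
Whether the inbound filter should stay a bare prefix-list, a route-map wrapping one, or both is part of what I'm trying to settle on.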
Re: Managing CE eBGP details & common/accepted CE-facing BGP practices
Suresh Ramasubramanian wrote: Heck, you could store all that in Rancid .. even cvs/svn I should have said it earlier when I mentioned config backups. I'm already a heavy user of RANCID, archiving my configs hourly. Been using it since right around v2.0-2.1 which would be several years ago (feels like a lifetime). So my config backups are more than taken care of. What I'm interested in is if I should also document the PE-CE BGP config details elsewhere or if I should just leave them in the PE and let my backups cover me. Part of what's driving this is my desire to create a book of templates for our assorted product offerings that covers both PEs and CEs. Eventually I won't be able to handle everything myself and staff will have to be added. Eventually we'll have to separate operations, engineering, security and maybe even install/turnup tasks. I'd like to have solid practices established and documented in a solutions bible of sorts before that happens. My brain can only store so much info and I can only do so much in a day. Plus having all these details ironed out sooner rather than later, and documented, will help keep me honest (i.e., no band-aids that I plan on removing when I get time). The other added benefit is that as I figure out how to do something rather fancy or in a simple and elegant manner I can document it for my own benefit and others. So back to the original topics, does anyone have suggestions for CE-facing BGP config or the management and documentation of the CE details? I'm experimenting with peer-policy and peer-session templates right now. I'm sure with dozens or hundreds of peers their benefits would be more evident. So far they only seem to reduce my default-only test peers by 3 lines of config each. I'm sure it would save more lines of config if I were doing something fancier. Thanks for the input Justin
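For what it's worth, the template experiment looks roughly like this. Again just a sketch with placeholder names, ASNs and addresses rather than anything from our production config:
router bgp 64496
 template peer-session CUST-EBGP
  description CE-facing session defaults
  password <per-customer-secret>
  ttl-security hops 1
 exit-peer-session
 !
 template peer-policy CUST-DEFAULT-ONLY
  prefix-list DEFAULT-ONLY out
  maximum-prefix 10 75 restart 5
  default-originate
 exit-peer-policy
 !
 neighbor 192.0.2.2 remote-as 64512
 neighbor 192.0.2.2 inherit peer-session CUST-EBGP
 address-family ipv4
  neighbor 192.0.2.2 activate
  neighbor 192.0.2.2 inherit peer-policy CUST-DEFAULT-ONLY
With a default-only test peer that only saves a handful of lines per neighbor, which matches what I'm seeing; the payoff presumably shows up once the policies get fancier.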
Re: Managing CE eBGP details & common/accepted CE-facing BGP practices
Evening, Justin. Thanks for the reply. Justin M. Streiner wrote: You could certainly store all of the relevant config details in a database of some sort, and it certainly can't hurt to do so. Same goes for backing up your device configurations - always a good idea. As far as storing things like ASNs, allowed prefixes, etc, you may want to look at storing that information in an RPSL format. Many providers require their customers to register route/AS/policy objects either in one of the Internet Routing Registries (RADB, AltDB, etc...), or in a similar system that's operated by the provider. They then use this information for pushing out configuration changes such as the list of prefixes that a customer is allowed to announce, etc. RPSL could definitely be useful when we start reaching that class of customer. It's probably too grand for our users at this point (especially having them register anything on their own). I'll definitely do more research on this though. It's good that you're thinking about this now, particularly the "is this customer legitimately allowed to announce prefix ABC, or source their traffic from AS XYZ"? Most larger providers put at least some limitations on what they will allow a customer to announce, though the level of investigation done to attempt to establish legitimacy for those announcements varies. Some providers require customers to provide some sort of Letter of Authorization/Agency for the prefixes they want to announce. The process hasn't been established yet for validating a request to permit a prefix announcement. I expect it will be a manual verification process involving WHOIS lookups on the prefix, route-view queries to see if it's currently being advertised and probably a historical check on that prefix to see if it was previously advertised and from where. The best way to avoid network abuse issues is to not let them happen to begin with. I think we will also require some sort of written legal agreement stating that the prefix in question belongs to the customer and that we're authorized to permit its advertisement across our network. If anyone has any sample documents for use in this process I would be interested in seeing them. Yes. As long as the organizations that are using the private AS aren't 1. trying to advertise the same space, 2. trying to connect directly to each other, or 3. expecting to be able to connect to multiple upstream providers, then you should be OK. VZB (former UUNET) did something similar, using AS7046 (not a private AS, but the principle is the same), and I believe other carriers have had similar arrangements for customers. Ah, yes connecting to each other could be a problem. I would think that it would only be an issue if I carried the private ASN across my iBGP infrastructure, each customer received full routes from me and I let them see the private ASNs as well. I could mitigate that problem with remove-private-as, I believe. I'd need to think on that some more. If the customer wants to be multi-homed and expects reachability then they should get an ASN. Otherwise both SPs are advertising their prefixes and the customer won't have much or possibly any control over which inbound path is preferred. MD5 is good, but most providers I've seen make this an opt-in feature - they don't force their customers to use it. Setting a reasonable max-prefix limit and adjusting it as the number of prefixes a customer announces grows is always a good idea, and I'd consider it to be a best practice. 
Prefix lists and route maps can do different things, or accomplish the same task in different ways. It also depends on what functionality you want to offer your customers. Do you plan to publish and support a consistent set of customer-settable BGP communities for doing things like selective AS prepends? Do you plan to tag incoming advertisements with communities to identify them as customer routes, and pass those communities to your customers and peers? Some providers use AS path ACLs, and others avoid them at all costs. I think I'll make MD5 part of the default config and let the customer ask for it to be removed if they choose to not have it. Same for GTSM. I'm a fan of max-prefixes. I think double the routes I expect to receive, 75% warning and a restart interval of 5m would be a good place to start. That would let me catch things happening before they got out of hand (in normal circumstances) and give me a fail-safe in case they decide to get crazy. I do plan on implementing a BGP community solution but for now I'm going to keep it simple. I have bigger fish to fry at the moment but I'll try to get it done before we get asked for it by a customer. I will tag transit and customer routes. The ISP Essentials book had some good insight on that if memory serves me correctly. How fancy it gets will depend on my time and customer demand. I've s
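To illustrate the two knobs above in IOS terms (documentation-range ASNs and addresses, and an example prefix count only; the private-AS stripping sits on the upstream session while the max-prefix limit sits on the customer session):
router bgp 64496
 ! CE on an SP-assigned private ASN: limit at roughly 2x the expected
 ! prefix count, warn at 75%, tear down and retry after 5 minutes
 neighbor 192.0.2.2 remote-as 64512
 neighbor 192.0.2.2 maximum-prefix 20 75 restart 5
 !
 ! strip private ASNs from the AS path before announcements head upstream
 neighbor 203.0.113.1 remote-as 64497
 neighbor 203.0.113.1 description placeholder upstream
 neighbor 203.0.113.1 remove-private-as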
Re: IPv6: IS-IS or OSPFv3
Kevin Oberman wrote: I would hope you have a backbone well enough secured that you don't need to rely on this, but it does make me a bit more relaxed and makes me wish we were using ISIS for IPv4, as well. The time and disruption involved in converting is something that will keep us running OSPF for IPv4 for a long time, though. I remember the 'fun' of converting from IGRP to OSPF about 13 years ago and I'd prefer to retire before a repeat. I did the OSPF --> IS-IS migration some time back and here's some of the info I found at the time. http://www.nanog.org/meetings/nanog29/abstracts.php?pt=Njg2Jm5hbm9nMjk=&nm=nanog29 Vijay did a nice presentation on AOL's migration to IS-IS. IIRC AOL migrated everything in 2 days. Day 1 was to migrate their test POP and hone their script. All remaining POPs were migrated on Day 2. I believe he said it went well. There have been several other documented migrations too: http://www.geant.net/upload/pdf/GEANT-OSPF-to-ISIS-Migration.pdf http://www.ripe.net/ripe/meetings/ripe-47/presentations/ripe47-eof-ospf.pdf I migrated my SP from a flat OSPF network (end to end area 0) to IS-IS. The OSPF setup was seriously screwed up. Someone got the bright idea to change admin distances on some OSPF speakers, introduce a default in some places with static defaults in others, redistribute like it was going out of style, redist a static for a large customer subnet on P2 instead of P1 which is what PE1 actually connected to (and not advertise the route from PE1 for some unknown reason), etc. The old setup was a nightmare. The IS-IS migration went fairly well after I got some major bugs worked out on our 7600s. I implemented IS-IS over top of OSPF. Some OSPF speakers had admin distances of 80 and some were default. IS-IS slipped in over top with no problems. I raised IS-IS to 254 for the initial phase anyway just to be safe. Once I had IS-IS up I verified it learned all the expected routes via IS-IS. Then I lowered its admin distance back to the default and bumped OSPF up to 254. Shortly thereafter I nuked OSPF from each device. It was hitless. I never could get IS-IS to work with multiple areas. The 7600s made a smelly mess on the CO floor every time I tried. In the end I went with a L2-only IS-IS network. Still it works well for the most part. I've had about as much trouble with IS-IS as I have had with OSPF. Occasionally some random router will get a burr under its saddle and jack up the MTU on the CLNS packets beyond the interface's max. The receiving router will drop the padded frame as too big. Fixing this can sometimes happen with a shut/no shut. Sometimes I can nuke the entire IS-IS config and re-add the config. Other times I simply have to reboot. This doesn't happen too often; it's usually several hours after I rock the IS-IS boat so to speak. Still, I wouldn't go back to OSPF for this SP. Justin
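For anyone planning a similar move, the ships-in-the-night overlay boils down to juggling administrative distances, roughly like this in IOS syntax (placeholder NET, process tags and interface; verify the distances actually in use on your own OSPF speakers before flipping anything):
! phase 1: bring up IS-IS everywhere but keep it less preferred than OSPF
router isis CORE
 net 49.0001.0000.0000.0001.00
 is-type level-2-only
 metric-style wide
 distance 254 ip
!
interface GigabitEthernet0/1
 ip router isis CORE
!
! phase 2: once the IS-IS LSDB carries all the expected routes, flip the preference
router isis CORE
 no distance 254 ip
router ospf 1
 distance 254
!
! phase 3: after a soak period, remove OSPF entirely (no router ospf 1)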
Re: Ethical DDoS drone network
David Barak wrote: Consider for a moment a large retail chain, with several hundred or a couple thousand locations. How big a lab should they have before deciding to roll out a new network something-or-other? Should their lab be 1:10 scale? A more realistic figure is that they'll consider themselves lucky to be between 1:50 and 1:100, and that lab is probably understaffed at best. Having a dedicated lab manager is often seen as an expensive luxury, and many businesses don't have the margin to support it. At the very least they should have a complete mock location (from an IT perspective) in a lab. Identical copies of all local servers and a carbon copy of their official template network. This is how AOL does it. Every change is tested in the mock remote site before the official template is changed and the template is pushed out to all the production sites. Justin
Re: Global Blackhole Service
Jens Ott - PlusServer AG wrote: Therefore I had the following idea: Why not taking one of my old routers and set it up as blackhole-service. Then everyone who is interested could set up a session to there and I do something similar on our network with a RTBH trigger router. I peer with it from my edges that are capable of handling that many BGP routes. I feed into it hosts that scan our networks looking for running SSH daemons and open proxies on specific default ports. With uRPF on all our edges it will drop traffic whether the target IP is the source or the destination. Works slick. The Cisco Press "Router Security Strategies" book has good examples. A trustworthy source for BGP blacklists of sorts would be an excellent thing IMHO. I'd love to be able to reliably drop traffic from malicious hosts before they scan our network and end up in my netflow logs. Trust would be a big issue though. Justin
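For anyone who hasn't built one, the trigger-router arrangement looks roughly like this in IOS terms. The tag, community, blackhole next-hop and the scanning host below are placeholders from the documentation ranges, not our real values:
! on the trigger router: one tagged static per malicious host, redistributed
! into iBGP with the next-hop rewritten to the blackhole address
ip route 192.0.2.66 255.255.255.255 Null0 tag 666
!
route-map RTBH-TRIGGER permit 10
 match tag 666
 set ip next-hop 192.0.2.1
 set community no-export
 set origin igp
!
router bgp 64496
 redistribute static route-map RTBH-TRIGGER
!
! on every edge router: the blackhole next-hop resolves to Null0, so traffic
! *to* the listed hosts is dropped, and loose-mode uRPF also drops traffic
! sourced *from* them
ip route 192.0.2.1 255.255.255.255 Null0
interface GigabitEthernet0/0
 ip verify unicast source reachable-via any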
Re: IPv6 Confusion
Steven Lisson wrote: Hi, I find it a shame that NAT-PT has become deprecated, with people talking about carrier-grade NATs I think combining these with NAT-PT could help with the transition after we run out of IPv4 space. For me the bigger problem is how do I enable IPv6 on my assorted CE-facing edges when management is still buying edge hardware that cannot and will not ever support IPv6. Our CATV solution can't support IPv6 (it's not a DOCSIS 3 thing; it's that the specific model of CMTS we're buying cannot do IPv6). Our shiny new ADSL2+ solution can't do IPv6 and support in hardware can't be added down the road. For that matter our FTTH solution which is L2 only doesn't really support IPv6 with their pseudo L3 security options. So 1) how does one get management of a US SP to realize that IPv6 is coming well within the lifetime of the brand-new equipment that we're purchasing, and that a lack of IPv6 support should be a show-stopper in purchasing decisions, and 2) how do we get common equipment manufacturers to build IPv6 support into the equipment we're buying? During our FTTH dog and pony show process with half a dozen different vendors, I asked each of them about their IPv6 support and they all unanimously claimed that there was no demand for it from their customers. At this point I'm looking at doing 6to4 tunnels far into the future. Justin
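For reference, the 6to4 fallback is just the standard router-side arrangement, sketched here in IOS syntax. The tunnel source is a documentation address; the 2002::/48 prefix is derived from whatever public IPv4 address is really in use, and the default points at the well-known relay anycast address:
interface Tunnel2002
 description 6to4 until the edge gear can do native IPv6
 tunnel source 192.0.2.10
 tunnel mode ipv6ip 6to4
 ! 192.0.2.10 in hex is C000:020A, hence the 2002:C000:020A::/48 prefix
 ipv6 address 2002:C000:020A::1/64
!
ipv6 route 2002::/16 Tunnel2002
! everything else via the 6to4 relay anycast address (192.88.99.1)
ipv6 route ::/0 2002:C058:6301::1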
Re: IPv6 Confusion
Mikael Abrahamsson wrote: Well, considering how very few vendors actually support IPv6, it's hard to find proper competition. Even at the companies who do support IPv6 very well in some products, not all their BUs do so in their own products (you know who you are :P ). Even worse is when the BU charges an insane price ($40k for 20 GigE ports for example) for a license to give a piece of their equipment IPv6 capabilities. I'm looking at you, ES20 linecards. Adoption of IPv6 would be better in my opinion if vendors didn't force us to pay a premium to use IPv6. It's hard enough to convince management that there is a need to implement IPv6. It's even harder when you tell them how much it costs. And when they ask what they're getting for their dollars, they are none too pleased to hear that the bulk of it is going to a damn license. Justin
Re: [NANOG] fair warning: less than 1000 days left to IPv4 exhaustion
Suresh Ramasubramanian wrote: > Let's think smaller. /16 shall we say? > > Like the /16 here. Originally the SRI / ARPANET SF Bay Packet Radio > network that started back in 1977. Now controlled by a shell company > belonging to a shell company belonging to a "high volume email > deployer" :) > > http://blog.washingtonpost.com/securityfix/2008/04/a_case_of_network_identity_the_1.html Which leads me to ask an OT but slightly related question. How do other SPs handle the blacklisting of ASNs (not prefixes but entire ASNs)? The /16 that Suresh mentioned here is being originated by a well-known spam factory. All prefixes originating from that AS could safely be assumed to be undesirable IMHO and can be dropped. A little Googling for that /16 brings up a lot of good info including: http://groups.google.com/group/news.admin.net-abuse.email/msg/5d3e3f89bb148a4c Does anyone have any good tricks for filtering on AS path that they'd like to share? I already have my RTBH set up so setting the next-hop for all routes originating from a given ASN to one of my blackhole routes (to null0, a sinkhole or scrubber) would be ideal. Not accepting the route period and letting uRPF drop traffic would be ok too. Justin
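One way to do it, sketched in IOS terms with a placeholder ASN, a placeholder upstream, and the same documentation-space blackhole next-hop as the RTBH setup (illustrative only, not our actual filter):
ip as-path access-list 66 permit _64666$
!
route-map TRANSIT-IN permit 10
 description blackhole anything originated by the listed ASN
 match as-path 66
 set ip next-hop 192.0.2.1
route-map TRANSIT-IN permit 20
!
router bgp 64496
 neighbor 203.0.113.1 remote-as 64497
 neighbor 203.0.113.1 route-map TRANSIT-IN in
Changing the first stanza to a deny would drop the routes outright instead and leave uRPF to handle the rest.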
NANOG 43 presentation video content?
Is there an ETA for the recordings of the presentations to be posted to the website? It's possible that I'm just missing them though I've found the presentation docs. No rush, but I'm itching to see what I missed. Thanks Justin
Re: DNS problems to RoadRunner - tcp vs udp
Jon Kibler wrote: Various hardening documents for Cisco routers specify the best practices are to only allow 53/tcp connections to/from secondary name servers. Plus, from all I can tell, Cisco's 'ip inspect dns' CBAC appears to only handle UDP data connections and anything TCP would be denied. From what you are saying, the hardening recommendations are wrong and that CBAC may break some DNS responses. Is this correct? A number of Cisco defaults from years gone by would break DNS, today, in its current form. Such as how PIXs and ASAs with fixup/DPI would block udp/53 packets larger than 512 bytes, not permitting EDNS packets through. Also, other than "That's what the RFCs call for," why use TCP for data exchange instead of larger UDP packets?
Re: DNS problems to RoadRunner - tcp vs udp
Justin Shore wrote: Jon Kibler wrote: Various hardening documents for Cisco routers specify the best practices are to only allow 53/tcp connections to/from secondary name servers. Plus, from all I can tell, Cisco's 'ip inspect dns' CBAC appears to only handle UDP data connections and anything TCP would be denied. From what you are saying, the hardening recommendations are wrong and that CBAC may break some DNS responses. Is this correct? A number of Cisco defaults from years gone by would break DNS, today, in its current form. Such as how PIXs and ASAs with fixup/DPI would block udp/53 packets larger than 512 bytes, not permitting EDNS packets through. Thunderbird apparently thought that I was ready to send my message before I did. I was going to add some ASA config as an example.
policy-map type inspect dns migrated_dns_map_1
 parameters
  message-length maximum 2048
I don't have an IOS CBAC example but there's surely something similar. Justin
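For completeness, that inspection map still has to be attached somewhere; on an ASA that's typically the default global policy, roughly like this (assuming the default global_policy and inspection_default names):
policy-map global_policy
 class inspection_default
  inspect dns migrated_dns_map_1
!
service-policy global_policy global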
Re: Latest instalment of the "hijacked /16s" story
Is the whole AS (33302) rogue like the AS advertising the SF Bay Packet Radio block is? Looking at the WHOIS for some of the prefixes advertised by both ASs, I see some common company names. That would lead me to believe that 33302 is no better than 33211 but I can't confirm that. Any takers? Justin Suresh Ramasubramanian wrote: Another legacy /16, after the previous one - the sf bay packet radio /16 http://www.47-usc-230c2.org/chapter3.html This time 128.168/16 - and by the same group that seems to have acquired control of the earlier one. --srs
Re: P2P agents for software distribution - saving the WAN from meltdown?!?
Nathan Ward wrote: There was a product around that would keep track of torrents and fudge the tracker responses to direct you to on-net peers where possible. Not sure what it's called. Inline box thing, much like Sandvine, Allot, etc. I imagine you could either inject the details of a local seed you're running, or keep track of on-net users and inject those. Out of curiosity, how many SPs out there have local Akamai servers on their network? I inquired about it last Fall and our average bandwidth to Akamai wasn't enough at the time to warrant placing hardware on our site, from their perspective anyway. The bandwidth though accounted for roughly 1/10th of our overall bandwidth. I wonder what it would be today. Our Internet bandwidth is just over 4x what it was last Fall. Justin
Re: Techniques for passive traffic capturing
I stumbled across these last night. http://www.dovebid.com/assets/display.asp?ItemID=cne11811 I don't know anything about them and haven't done any research. The auction description would however lead me to believe that they might be useful in this case. There are many of them listed in the main auction catalog. Justin Ross Vandegrift wrote: Hello everyone, Over the past two years, there's been a trend toward doing more and more analysis and reporting based on passive traffic analysis. We started out using SPAN sessions to produce an extra copy of all of our transit links for these purposes. But the Cisco limit of two SPAN sessions per device (on our platforms) is a major limitation. Does anyone have a better solution for more flexible data collection? I've been thinking about a move to a system based on optical taps of each of the links. I'd aggregate these links into something like a 3750 and use remote-span VLANs to pass the traffic onto servers that are sniffing on their interfaces on that 3750. Do products like the NetOptics Matrix Switches offer a substantial advantage? Comments or suggestions?
Re: easy way to scan for issues with path mtu discovery?
Darden, Patrick S. wrote: Hi all, Does anyone know of an easy way to scan for issues with path mtu discovery along a hop path? E.g. if you think someone is ICMP black-holing along a route, or even on the endpoint host, could you use some obscure nmap flag to find out for sure, and also to identify the offending hop/router/host? What tool would you use to test for this, and how would you do such a test? Is there any probing tool that does checks like this automatically? Seems to me this happens often enough that someone has probably already figured it out, so I am trying not to reinvent the wheel. All I can think of would be to handcraft packets of steadily increasing sizes and look for replies from each hop on the route (which would be laborious at best). Google has not been kind to my researches so far. Take a look at tracepath. http://www.google.com/search?hl=en&q=tracepath&btnG=Google+Search I haven't done much of anything with it but it may be of use to you. Justin
Re: REJECT-ON-SMTP-DATA (Re: Mail Server best practices - was: Pandora's Box of new TLDs)
Chris Owen wrote: The lack of a spam folder is one of the problems with such a solution. Having a middle ground quarantine is actually quite nice. However, the biggest problem is these solutions are global in nature. We allow individual customers considerable control over the process. They can each set their own block and quarantine levels, configure their own white and blacklists and even turn the spam controls completely off. For various reasons none of that would be possible with this solution and all the implementations you link to all run with a single global configuration. Chris, I can think of one spam filter that does give both you and your users individual control over all of these settings while still rejecting mail during the SMTP dialog including the DATA phase: CanIt-Pro. http://www.roaringpenguin.com/ CanIt-Pro is a mail filter or 'milter' in Sendmail-speak. It essentially connects into Sendmail from the side. Sendmail calls on it during the SMTP dialog with the remote MTA, giving CanIt-Pro the opportunity to work its magic before the message is accepted for delivery, which allows for rejecting mail right up until the last second RFC 2821 permits it. I use CanIt-Pro for this very reason. Each user can have their own individual mail "stream" in CanIt terminology. Each user can define white/blacklists by senders, domains and hosts. Users can block or permit by MIME types or perform actions based on attachment suffixes. They can write their own rules with regexs against the headers or body as well as checking to see if a sending domain matches that of the relaying MTA (not always accurate but often is; ebay.com is a good example). Users can enable or disable individually configured DNSBLs or change the score. They can even define rules based on SPF values. Each user gets their own bayesian DB as well. You as an admin can disable any of the above features on a per-user basis so you can make it as simple or as complex as you want. You can also pre-define streams with specific settings that users can subscribe to if they don't want the more fine-grained control. I created a stream that only tags suspect spam. I also created 3 streams with varying levels of aggressiveness. Have you ever heard the phrase "a pilot's plane"? Well I would liken CanIt to being the equivalent for mail admins and their spam filters. I first started using the OSS predecessor to CanIt, MIMEDefang, back in late 2000 or so. MD is still the underpinnings of CanIt. When you buy CanIt you also get the source code so you have the ability to code in custom things if you have the need and desire. It's perfect for SPs. BTW, I'm not a Roaring Penguin employee. I'm just an impressed user of their products so they've earned my loyalty. Justin
Re: REJECT-ON-SMTP-DATA (Re: Mail Server best practices - was: Pandora's Box of new TLDs)
Phil Vandry wrote: On Tue, Jul 01, 2008 at 11:54:46AM +0200, Jeroen Massar wrote: The magic keyword: REJECT-ON-SMTP-DATA. [snip description on how to reject during DATA phase] Unfortunately there is also a side-effect, partially, one has to have all inbound servers use this trick, and it might be that they need to be a bit heavier to process and scan all that mail. Then again, you can More than that: you also need to have all users in the domain (indeed all users who share an MX server) agree on the accept/reject policy. If users are free to use different spam filtering techniques and tune them to their liking (e.g. someone uses SpamAssassin with a low threshold, someone else uses it with a high threshold, someone else uses bogofilter instead) then what do you do with mails that are addressed to more than one user? You can have some users reject the message during the RCPT phase and others accept it, but if you've waited until the DATA phase, it's too late for that. Phil, This is a non-problem if you use the right spam filter. I mentioned CanIt earlier in the thread. It individually applies filtering rules to incoming mail and can apply different rules and take actions on a per-user basis. It handles messages with multiple recipients by feeding copies of the message into an individual user's stream where that user's settings dictate what actions are taken. A user may have an aggressive spam score or an extremely conservative score, message rejection with SpamHaus and SORBS or no DNSBLs at all, tons of custom rules and lots of bells and whistles or spam filtering disabled completely. They've already anticipated all the possible problems that have been brought up in this thread. Arrange for a demo and give it a try. I don't think you'd be disappointed. http://mailman.nanog.org/pipermail/nanog/2008-July/001884.html Justin
Re: REJECT-ON-SMTP-DATA (Re: Mail Server best practices - was: Pandora's Box of new TLDs)
I'd have to think about this one. I'm not sure what CanIt would do in such a case. An NDR may be the only way in that scenario. I'll sleep on it. Justin Skywing wrote: I think the problem that was being raised here was that past the DATA phase, if one recipient is going to receive the message and another is going to reject it, you have lost the ability to communicate this back to the sender (at least without an NDR). Thus the problem of mails disappearing into spam folder black holes is back in the multirecipient case when one is dealing with DATA and recipients have differing spam policies. - S
Re: REJECT-ON-SMTP-DATA (Re: Mail Server best practices - was: Pandora's Box of new TLDs)
Jean-François Mezei wrote: Blocking messages as early as possible also greatly reduces the load on your system, disk storage requirements etc. Rejecting during the SMTP dialog but before you signal that you've accepted the DATA also pushes the responsibility for sending a DSN to the sending MTA. If it's a spammer then they'll drop the DSN. If it's a compromised PC running Storm Worm or the like it won't generate DSNs anyway. If it's a legit but poorly-configured MTA acting as an open relay it will generate the DSN and eventually get itself blacklisted. Sending a DSN to a spoofed envelope From is considered spam in and of itself and will get an MTA blacklisted. You could always not send DSNs, in which case the sender of a legit message that had a few too many !!!s in it will not get a bounce and will not know that their message was blocked. It disappears into an email blackhole. Few things piss off users like disappearing email. It's best all around to force the sending MTA to send the bounce. Your MTA doesn't get blacklisted, spammers' relays are forced to do a little extra work, and senders of legit mail that's a false-positive get a DSN telling them that their message didn't go through (and hopefully why). Everyone wins. Block early and block often. Justin
OT: 2-post rack security covers
Somewhere I've seen what amounts to a concave cover that you can mount over the face of gear racked in a 2-post. The cover I saw had a bracket that mounted to the 2-post before any equipment was installed and it had a couple knobs sticking out (basically consuming a U on each end). Then you racked up the equipment through the holes in the bracket you mounted earlier. Finally the cover (which sticks out about 6") slips over the front of all the gear, down onto the knobs and locks in place with a mechanism that grasps the metal knobs. I've also seen ones that don't lock. The point of this is to provide additional security in a cageless co-lo environment like what you'd find in most COs providing RLEC services as well as prevent accidental damage to the cabling on the front of equipment. Does anyone have a specific name for what I'm talking about? Vendor or product reference? I've looked through all my catalogs and can't find what I'm looking for. I'd like to think that no one would mess with our wiring or devices in those COs but of course I can't guarantee that. The damage someone could do to our cross-connects in a few seconds' time is something I'd like to avoid. An SFP or GBIC can be stolen quickly too with minimal effort. No sense in tempting people if it can be helped. Thanks Justin
Re: Is it time to abandon bogon prefix filters?
Randy Bush wrote: serious curiosity: what is the proportion of bad stuff coming from unallocated space vs allocated space? real measurements, please. and are there longitudinal data on this? are the uw folk, gatech, vern, ... measuring? I still have 2 of my borders using an inbound ACL to filter BOGONs vs null routes. For the ACLs I've broken down the BOGONs to nothing larger than a /8. I see a number of hits on those entries, especially on 94/8 and 0/8. While some of the other hits are accidental I'm sure, I would seriously doubt if those 2 /8s are. Justin
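For reference, the ACL-style filter looks something like the fragment below, trimmed to a few well-known entries; the real list also carries whichever /8s are still unallocated at the time, which is exactly the part that has to be kept current:
ip access-list extended BOGON-IN
 deny   ip 0.0.0.0 0.255.255.255 any
 deny   ip 10.0.0.0 0.255.255.255 any
 deny   ip 127.0.0.0 0.255.255.255 any
 deny   ip 169.254.0.0 0.0.255.255 any
 deny   ip 172.16.0.0 0.15.255.255 any
 deny   ip 192.168.0.0 0.0.255.255 any
 deny   ip 224.0.0.0 31.255.255.255 any
 permit ip any any
!
interface GigabitEthernet0/0
 description transit uplink
 ip access-group BOGON-IN in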
Re: Is it time to abandon bogon prefix filters?
Leo Bicknell wrote: Have bogon filters outlived their use? Is it time to recommend people go to a simpler bogon filter (e.g. no 1918, Class D, Class E) that doesn't need to be updated as frequently? In my opinion no; BOGON filters are still very useful. Back when only 5% of the IP space was allocated we didn't have the same kinds of serious threats to our networks and our users that we have today. We didn't have spammers hijacking unallocated space (can it be considered hijacking when the block hasn't been allocated yet?) to mass mail our users, host phishing servers, run C&C servers for botnets, etc. Today we do, and the use of what few networks are still unallocated for bad purposes is prevalent. For my users I only recommend that they use dynamic methods of keeping up to date with changes in the BOGON list. While I still do much of my BOGON work manually, as I'm sure many of us do, I have my local BOGON lists updated within a few hours of learning of a new allocation (sometimes even before the bogon-announce email arrives). For those that aren't uber network geeks I recommend using something automated. Look at it this way: you have what's essentially a mostly static list of netblocks from which all traffic is unquestionably malicious. Wouldn't you block it if you could for the sake of your network security and that of your users? Justin
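The automated method I usually point people at is a BGP feed from a bogon route-server, dropped onto a blackhole next-hop exactly like an RTBH trigger. Roughly, in IOS terms (the peer address, ASN and password are placeholders; the real values come from whoever operates the feed):
router bgp 64496
 neighbor 198.51.100.250 remote-as 65333
 neighbor 198.51.100.250 description bogon route-server feed
 neighbor 198.51.100.250 ebgp-multihop 255
 neighbor 198.51.100.250 password <feed-secret>
 neighbor 198.51.100.250 route-map BOGON-FEED-IN in
 neighbor 198.51.100.250 prefix-list DENY-ALL out
!
route-map BOGON-FEED-IN permit 10
 ! blackhole next-hop, statically routed to Null0 on every edge
 set ip next-hop 192.0.2.1
 set community no-export
!
ip prefix-list DENY-ALL seq 5 deny 0.0.0.0/0 le 32
ip route 192.0.2.1 255.255.255.255 Null0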
Re: Is it time to abandon bogon prefix filters?
Rob Evans wrote: I see a number of hits on those entries, especially on 94/8 and 0/8. You do know that 94/8 has been assigned to the RIPE NCC, right? :-) I knew I should have logged into a production box to look at the ACL counters. But no, I thought the former border that I was already logged into was good enough. Apparently not! :-) I stopped updating its BOGON list when it was decommissioned and retasked. I could have sworn that was just this past April and the only change since then was 112 and 113/8 which I accounted for mentally. Apparently it was longer ago than I thought! Justin
Re: Hardware capture platforms
Jay R. Ashworth wrote: And, note carefully: some "dual-speed hubs" are actually a 10BT hub and a 100BT hub *with a switch between them*. I forget which brand I caught this on, but it bit me a couple of years back. 3COM Dual-Speed 10/100 hubs were this way. Got bit by that too back in the day. Technically I think all hubs supporting both 10 and 100 would have to do this. I can't think of any technical way of getting around the problem without doing this. Justin
Re: impossible circuit
Laurence F. Sheldon, Jr. wrote: George Carey wrote: > I have not pencil-and-papered this to see if there is anything to it, but I was wondering what would happen if you put a layer-two bridge into a back-bone fabric and turned off "learning" so every packet is flooded to every port. Though not the same circumstances or symptoms as the OP's problem, I saw this happen once at a University I used to work for. A systems administrator insisted on having a hub between the SP's router and our core campus switch so he could sniff traffic. Since the hub was there and I couldn't eliminate it I went ahead and used it myself for my own traffic capture point in the network with an OS X box running EtherPeek. I did an OS update on the box one morning and went to a meeting. During the meeting it was reported that the network was down. I started looking into the problem at that point. All Internet traffic was dead except SSH connections. So I started sniffing on my NOC server for that server's traffic. All my outbound TCP connections from the NOC were getting an RST from one L2 host and a SYN-ACK from another. The MAC address sending the RST looked familiar but I couldn't identify it. After searching through the network for the MAC I found it on the interface facing our border router and that damn hub. The MAC was my OS X sniffing box. The other MAC was the backside of the provider's router. The OS X update I applied was the one that installed a host-based firewall. The update automatically turned on the FW and permitted all local servers that were configured to run, in my case SSH, with everything else being denied. The FW on the OS X box normally wouldn't see packets not destined for it until you put a nic in promisc mode such as what happens when you run EtherPeek. The OS X box's FW was getting hits from traffic denied by its ACL and was sending TCP RSTs faster than hosts on the 'Net could respond. It did this for everything except SSH which it permitted (but higher up the IP stack it was ignored because the IP packet wasn't addressed to the local box). This isn't in any way related to the problem at hand but it does demonstrate that weird things happen when devices in unusual places flood out all ports. Justin
Re: impossible circuit
This is just a WAG but what the hell. Jon Lewis wrote: I've got this private line DS3. It connects cisco 7206 routers in Orlando (at our data center) and in Ocala (a colo rack in the Embarq CO). According to the DLR, it's a real circuit, various portions of it ride varying sized OC circuits, and then it's handed off to us at each end the usual way (copper/coax) and plugged into PA-2T3 cards. Are you sure that they are not crossing some channels in the middle and accidentally handing them to a different customer? You mention above that various portions of the DS3 ride different transport circuits in the middle. That always creates the potential for someone to not put it back together correctly on either end. I've seen DLCs get crossed before. I could easily see a transport provider crossing portions of a circuit, especially if they break it into pieces in the middle and have to put it back together on the ends. I think it makes sense too. Somebody's getting traffic off a T1 that isn't destined for them. Their router sees it, says WTF and sends an ICMP dest unreachable via their default route through Sprint. Same thing goes for a traceroute; it simply follows its default route to reply to your packets with the expiring TTL. Taking a path through a different provider would be expected since it doesn't have a connected route to the source of the traceroute (since it's not the far end of your T1 that you're expecting). The site getting your crossed T1 could be using the T1 as a PtP to a branch office and has Internet through a different circuit that hasn't been hosed. I would be curious to hear if Sprint is having any problems with a circuit connected to sl-bb20-dc-6-0-0.sprintlink.net, what the router is and if any directly connected customers are having T1 problems. If nothing else Sprint should be able to track down the source of the traceroute return packets and contact the customer. The T1 could be part of a bundle at their site and they may not even realize that the bundle dropped a path. Last Tuesday, at about 2:30PM, "something bad happened." We saw a serious jump in traffic to Ocala, and in particular we noticed one customer's connection (a group of load sharing T1s) was just totally full. We quickly assumed it was a DDoS aimed at that customer, but looking at the traffic, we couldn't pinpoint anything that wasn't expected flows. Are you sure that the traffic being received by each of the T1s is theirs? Do you have any way of getting flows or packets off of individual T1s and not the bundle as a whole? Tracing through you to your upstream... 7 andc-br-3-f2-0.atlantic.net (209.208.9.138) 47.951 ms 56.096 ms 56.154 ms 8 ocalflxa-br-1-s1-0.atlantic.net (209.208.112.98) 56.199 ms 56.320 ms 56.196 ms 9 * * * The circuit gets crossed onto the wrong customer. The wrong site receives a packet with an expiring TTL and goes to send a reply. The destination IP isn't on a connected route so the site sends the reply via its default route on Sprint. 10 sl-bb20-dc-6-0-0.sprintlink.net (144.232.8.174) 80.774 ms 81.030 ms 81.821 ms 11 sl-st20-ash-10-0.sprintlink.net (144.232.20.152) 75.731 ms 75.902 ms 77.128 ms The reply traverses Sprint to L3 and on to you. 
12 te-10-1-0.edge2.Washington4.level3.net (4.68.63.209) 46.548 ms 53.200 ms 45.736 ms 13 vlan69.csw1.Washington1.Level3.net (4.68.17.62) 42.918 ms vlan79.csw2.Washington1.Level3.net (4.68.17.126) 55.438 ms vlan69.csw1.Washington1.Level3.net (4.68.17.62) 42.693 ms 14 ae-81-81.ebr1.Washington1.Level3.net (4.69.134.137) 48.935 ms ae-61-61.ebr1.Washington1.Level3.net (4.69.134.129) 49.317 ms ae-91-91.ebr1.Washington1.Level3.net (4.69.134.141) 48.865 ms 15 ae-2.ebr3.Atlanta2.Level3.net (4.69.132.85) 59.642 ms 56.278 ms 56.671 ms 16 ae-61-60.ebr1.Atlanta2.Level3.net (4.69.138.2) 47.401 ms 62.980 ms 62.640 ms 17 ae-1-8.bar1.Orlando1.Level3.net (4.69.137.149) 40.300 ms 40.101 ms 42.690 ms 18 ae-6-6.car1.Orlando1.Level3.net (4.69.133.77) 40.959 ms 40.963 ms 41.016 ms 19 unknown.Level3.net (63.209.98.66) 246.744 ms 240.826 ms 239.758 ms 20 andc-br-3-f2-0.atlantic.net (209.208.9.138) 39.725 ms 37.751 ms 42.262 ms 21 ocalflxa-br-1-s1-0.atlantic.net (209.208.112.98) 43.524 ms 45.844 ms 43.392 ms 22 * * * 23 sl-bb20-dc-6-0-0.sprintlink.net (144.232.8.174) 63.752 ms 61.648 ms 60.839 ms 24 sl-st20-ash-10-0.sprintlink.net (144.232.20.152) 66.923 ms 65.258 ms 70.609 ms 25 te-10-1-0.edge2.Washington4.level3.net (4.68.63.209) 67.106 ms 93.415 ms 73.932 ms 26 vlan99.csw4.Washington1.Level3.net (4.68.17.254) 88.919 ms 75.306 ms vlan79.csw2.Washington1.Level3.net (4.68.17.126) 75.048 ms 27 ae-61-61.ebr1.Washington1.Level3.net (4.69.134.129) 69.508 ms 68.401 ms ae-71-71.ebr1.Washington1.Level3.net (4.69.134.133) 79.128 ms 28 ae-2.ebr3.Atlanta2.Level3.net (4.69.132.85) 64.048 ms 67.764 ms 67.704 ms 29 ae-71-70.ebr1.Atlan
Re: SLAAC(autoconfig) vs DHCPv6
Charles Wyble wrote: This was especially a question when L2 was "in" and routing was out: how do you ping a MAC address? l2ping works on bluetooth devices on Linux. Might work for other stuff as well. Not sure what Cisco offers in this regard. The ideal solution would be OAM. Of course not everything supports that and it's not on by default either. Of all the things to turn off by default, this is one thing that I'd like to see on. Justin
Re: Native v6 with Level(3)?
That's good to know. Do you know if there are any rate-limits that would apply to this trial service? Any idea where the tunnel head-end is? Will they do a backup tunnel to another router? I'll have to give them a holler as soon as I'm ready to make the IPv6 jump. Thanks Justin Craig Pierantozzi wrote: No native service available but there is a trial tunneled IPv6 service with best effort support with *no SLA* available to current Level 3 Internet customers. IPv6 is currently being provided via IPv4 tunnels to the customer's existing router and supported by a handful of engineers. There is a simple service agreement addendum and form to fill out for relevant config bits. -Craig On Aug 22, 2008, at 5:22 PM, Kyle Murray wrote: Here is the response I got from L3 when I inquired about IPV6: "The answer to your questions is "no", we have not yet inplemented IPV6 for our customers yet. IPV4 is the de facto on our backbone nad alledge router on which customers connectc." Poor spelling aside, it seems they have not implemented it yet. If someone manages to get them to implement, I would really like to hear about it. -kyle Kyle Murray Network Manager Digital Forest, Inc.
Re: GLBX De-Peers Intercage [Was: RE: Washington Post: Atrivo/Intercag e, w hy are we peering with the American RBN?]
Paul Ferguson wrote: My next question to the peanut gallery is: What do you suggest we should do about other hosting IP blocks that are continuing to host criminal activity, even in the face of abuse reports, etc.? Seriously -- I think this is an issue which needs to be addressed here. ISPs cannot continue to sweep this issue under the proverbial carpet. Is this an issue that network operations folk don't really care about? IMHO policy should only be dictated by the edge, never upstream of that point. Now whether the edge is defined as the edge provider or the actual end-user is up for debate. I don't want my upstreams to decide what my SP and thus my customers can get to. My customers can't contact my upstream and argue for listing or delisting a given IP like they can with me. They can't speak with their dollars to my upstream like they can with me, their edge provider. Then again should I as the edge provider filter for my customers? Value-add service or a bonus service? It depends on your point of view. Justin
Re: Teleglobe appears to be spam-source zombie network?
Randy Bush wrote: why don't we just have dick cheney bomb them? We could send in the Trojan Moose. Justin
Re: InterCage, Inc. (NOT Atrivo)
Patrick W. Gilmore wrote: There is no law or even custom stopping me from asking you to prove you are worthy to connect to my network. There may not be a law preventing you from asking him for proof of legitimate customers, but there is a law preventing him from answering you. Google for CPNI and "red flag". Justin
Re: prefix hijack by ASN 8997
Looking up some of my prefixes in PHAS and BGPPlay, I too see my prefixes being advertised by 8997 for a short time. It looks like it happened around 1222091563 according to PHAS. Was this a mistake or something else? Justin Christian Koch wrote: I received a phas notification about this today as well... I couldn't find any relevant data confirming the announcement of one of my /19 blocks, until a few minutes ago when i checked the route views bgplay (ripe bgplay turns up nothing) and can now see 8997 announcing and quickly withdrawing my prefix On Mon, Sep 22, 2008 at 9:06 PM, Scott Weeks <[EMAIL PROTECTED]> wrote: I am hoping to confirm a short-duration prefix hijack of 72.234.0.0/15 (and another of our prefixes) by ASN 8997 ("OJSC North-West Telecom" in Russia) in using ASN 3267 (Russian Federal University Network) to advertise our space to ASN 3277 (Regional University and Scientific Network (RUSNet) of North-Western and Saint-Petersburg Area of Russia). Is that what I'm seeing when I go to "bgplay.routeviews.org/bgplay", put in prefix 72.234.0.0/15 and select the dates: 22/9/2008 9:00:00 and 22/9/2008 15:00:00 If so, am I understanding it correctly if I say ASN 3267 saw a shorter path from ASN 8997, so refused the proper announcement from ASN 36149 (me) it normally hears from ASN 174 (Cogent). If the above two are correct, would it be correct to say only the downstream customers of ASN 3267 were affected? scott
Re: InterCage, Inc. (NOT Atrivo)
[EMAIL PROTECTED] wrote: On Mon, 22 Sep 2008 17:00:35 CDT, Justin Shore said: There may not be a law preventing you from asking him for proof of legitimate customers, but there is a law preventing him from answering you. Google for CPNI and "red flag". Hmm... I'm not sure how "Yes, XYZ is a customer of mine" qualifies as a "red flag" question for identity theft. I'm also not sure how "XYZ is a customer" qualifies as CPNI, which (according to the first few pages of Google hits) comprises things like calling/billing records. Nope. Doesn't seem like "xyz is a customer" qualifies there... Hmm... "xyz is a customer" doesn't seem to run afoul of that either. Feel free to enlighten me about what I missed here? Given the unfortunate vagueness of the FCC on their directive, consultants have interpreted CPNI differently and have given their customers (SP and CS organizations) wildly varying instructions. However every interpretation that I've been privy to extends far beyond the call records that many people believe CPNI is limited to. Our CPNI consultants instructed us to not even reveal that Company X is a customer (which is laughable given the size of the communities we serve, but I digress). They did however tell us that we can trust all phone numbers listed on an account both for instant information providing and for callbacks. Cox's interpretation is that only the primary number listed on the account is valid for callbacks and that the PIN is required regardless (something our consultants told us was only required if the caller couldn't be reached on a valid callback number). Everybody has different instructions to work with. To answer the question the list is asking, the SP isn't simply stating that Company X is a customer of SP ABC. They are stating that Company X is a customer and that they believe Company X is a valid, non-malicious customer in good standing. While that's not a call record, it implies certain things about Company X's relationship with the SP. They are essentially stating that they haven't received spam or other abuse complaints regarding the customer. They're implying that they are a customer in good standing. That could even be construed to imply that their account is in good standing. That's more than just saying that Company X is a customer of SP ABC. Our consultants advised us against saying anything of the sort. Think of it like HIPAA for SPs. It's splitting hairs but that's the unfortunate situation that CPNI has put all of us in. Instead of a common sense response we get to deal with the knee-jerk response from the FCC thanks chiefly to the Patty Dunn scandal. Justin
Where to move the Intercage/Atrivo discussion (was: the Intercage mess)
David W. Hankins wrote: I think the current state of the art in civilized, peaceful, extralegal negotiation of reasonable behaviour expected of businessmen and their peers is a form of social ostracism given its name in 1880 when the Irish Land League bade everyone in Mayo county, Ireland not to engage economically or otherwise with Captain Charles Boycott...a land owner who had set his rent very high, and was evicting anyone who deigned to complain of it (fully within his legal authority, but outside the realms of what the people saw as reasonable). If anyone can think of better, we'll have to call it "Intercaging". Since the usefulness of this thread to NANOG is becoming less and less as the thread wears on, where would the NANOG community suggest that it be moved to? What are the good SP operational security mailing lists? What groups or forums would one find threads like this? The NANOG ISP security BOF group? I would like to do a much better job of keeping up on things of this nature. I already spend a great deal of time on it but I know that I'm missing a plethora of other security issues. What group would be interested in knowing that whois.estdomains.com (83.171.76.99) is now being hosted by as31353 via as8997 (didn't we have a small problem with 8997 the other day?)? I'd love to find the good lists and forums for this type of discussion, preferably with a SP slant. Perhaps that info will help move the discussion to more appropriate places. Thanks Justin
Re: rackmount managed PDUs
Justin M. Streiner wrote: I have some Tripp Lite PDUMH30NETs that work well and are reasonably priced, but they have a few quirks (no RS-232 console port, web interface seems to be a little shaky with Firefox, etc) that would become more annoying when scaled up to several rows of new rack footprints. I'm also open to using managed vertically mounted PDUs. The plan is for each footprint to have "A" and "B" feeds, so two PDUMH30NETs would take up 4U per footprint, which is a bit much... One thing to be aware of with the vertical PDUs is where they get mounted. A number of vertical Emerson PDUs were purchased for our DC. However only one of the Liebert cabinets was purchased with the 6" extension on the rear. The PDUs mount on each side of the cabinet door frame with the receptacles facing the opposing PDU. I.e., both PDUs face inward towards each other, not towards the front or rear of the cabinet. They stick out about 2" plus the power cords stick out at least another 2", more depending on how hard you fight to force the power cables into bundles and wire-tie them off to the frame. The rails of the servers barely clear the PDUs. The cabling on the back of the servers is made all the harder by the bundle of power cables in the way. It's a difficult physical problem to work around. The 6" cabinet extension would have allowed the PDUs to be moved further from the servers and would have allowed for a little more robust form of wire management for the power cables. Now in our CO we bought a cabinet that's 28" wide (Cooper/B-Line). There's enough space on each side for 4" of vertical wire management. Vertical PDUs would be viable in that scenario. Just something to consider. Justin PS==> The Emerson PDUs are supposed to be manageable, though I've never seen the GUI.
L3 route flapping
Is anyone else seeing 72.237.248.0/22 flapping? As of about 10 minutes ago Oregon-IX reported that it had flapped 8 times in 50 minutes. We have a production phone system on that network that's going crazy. Thanks Justin
Re: Who has AS 1712?
Hank Nussbacher wrote: At 18:29 24/11/2009 +0900, Randy Bush wrote: > RIS Routing History for AS1712 since 2001: on what date was AS1712 assigned to the current RIPE holder? Based on: ftp://ftp.ripe.net/pub/stats/ripencc/delegated-ripencc-latest it doesn't show AS1712 ever being allocated to Renater (probably why the inter-RIR mistake happened) but the surrounding ASNs give you an idea of the timeframe: ripencc|IL|asn|1680|1|19930901|allocated ripencc|EU|asn|1707|1|19930901|allocated ripencc|EU|asn|1729|1|19930901|allocated ripencc|EU|asn|1732|1|19930901|allocated Since IANA says that the ASN is ARIN's to assign wouldn't that preclude another RIR from assigning it? http://www.iana.org/assignments/as-numbers/ Of course if it was already assigned when IANA said that (no dates on the link above) then maybe the fault is more IANA's for telling another RIR that they could allocate an ASN that another RIR already allocated. Who knows. It should be an interesting one to watch play out though. Justin
Re: Ethernet over DS3 Converters
Brad Fleming wrote: My company is searching for some Ethernet over DS3 converters / adaptors for a specific installation. I see several options from Adtran, RAD-Direct, and a couple other (smaller) vendors and was wondering if anyone out there has suggestions or insights. Our needs are pretty simple: We'll need to pass multiple VLANs unless that's simply not possible. We'll need copper 10/100 interfaces on each side. Hey Brad. We're doing this with Overture 2200s and 5100s. However, as others have pointed out, they have some issues. Their redundant PSUs for models below the 5x00s are a joke. A single DC PSU for those models requires 1U of space. The PSU has 2 power sources but is still a single PSU. A redundant PSU requires another 1U of space. Inside the 19" x roughly 8" 1U chassis is a PCB that's about the size of a wallet. Why they couldn't incorporate that into a modular PSU or make the external PSU chassis modular so a 2nd PSU didn't take up any more space I do not know. The CLI in the 2200 and 5100 can do a lot. I must admit that I still do not understand it. They just work and I don't have to mess with them very often so I struggle each time I get into one. I found their VLAN grooming to be confusing. Even tech support wasn't able to help in some cases. The ISG models (34, 45, 140, 180 for example) are completely different from the 2200s and 5x00s (I don't know about the ISG 2x models). They were acquired from another company. What the others said about there being no CLI is right. They only have a web GUI. You can't pull off their config with common CLI tools like RANCID, CatTools, COSI tools, etc. That's a big deal for us. That to me makes them feel like non-telco grade equipment. You can certainly book-end them back to back but be absolutely certain that you get a config dump from each end every time a tech gets into one. I believe the 34, 45 and 140 models use the same PSU as the 2200 above. They can only connect to a single PSU though (the 180 supports 2). Same caveats as above. They are generally feature rich; I'll give that to them. They could be an excellent solution if the product was more mature and honed. Anyone wanting to bond T1s with MLPPP on the 140 and 180 back to a router BEWARE. They require BCP. On most platforms (anything that doesn't use a SPA) that requires disabling routing (research BCP configs on Cisco.com). They will work but understand the caveats before trying them. I'm sure that OV will send you demo units if you ask. I'll send you a picture of a 2200 with the PSU setup later tonight. Justin
Re: AT&T SMTP Admin contact?
Brad Laue wrote: Ah, very true. Still really hoping to get in touch with someone from AT&T. :-) Good luck. You might get a better response from posting a video complaint on YouTube. "AT&T Breaks Guitars" perhaps. :-) Justin
Re: I got a live one! - Spam source
Russell Myba wrote: Let's say our direct customer is CustomerA. They seem to buy rackspace from BusinessB. CustomerA seems to retain BusinessC for "IT Solutions" even though all three entities purport to be IT solutions providers. BusinessC came into the picture after the spamming started, saying a wholly different /24 (different from the spam source) "doesn't work". It routes fine on our end. I have a feeling they've been added to some RBLs but I haven't found them listed yet. Just a simple ethernet handoff in a colo. We delegated rDNS to the servers of their choice and haven't heard a peep out of them until now. I think it's an absolute crying shame that a freak bolt of lightning somehow fried their rackspace in the colo and didn't affect any of the surrounding neighbors. I hate it when that happens. It's karma I think... Justin
Re: FTTH Active vs Passive
Luke Marrott wrote: I'm wondering what everyone's thoughts are in regard to FTTH using Active Ethernet or Passive. I work for a FTTH Provider that has done Active Ethernet on a few networks so I'm always biased in discussions, but I don't know anyone with experience in PON. Active is the way to go. Passive is merely a stepping stone on the way to active. Passive only makes sense (in some cases) if you are 1) fiber poor and 2) not doing a greenfield deployment. If you have the fiber to work with or if you are building a FTTH plant from scratch go with active. The only real proponents of PONs are the RBOCs who are exceedingly cheap, slow to react, and completely unable to think ahead (i.e., putting in an abundance of fiber for future use instead of just enough to get by) and some MSOs who don't dread and loathe shared network mediums like CATV and PON (whereas those from a networking background would never ever pick such a technology). I've read before that almost all PON technology is proprietary, locking you into a specific hardware vendor. However I think this is changing or has already changed, opening PON up for interoperability. Can anyone confirm this? There are several actual PON standards out there: http://en.wikipedia.org/wiki/Passive_optical_network Few vendors will ever admit that they interop with another vendor's gear though. They don't want you to buy their optical switches (which have a small markup) and someone else's ONTs (which typically have a much greater markup). In some cases, even though they adhere to the standards to a point, they diverge and go proprietary for things like integrating voice or video into the system. That could cause management and/or support issues for you at some point in the life of the product. Personally I'd go with a vendor that offers the complete solution instead of piecing one together. PON has some popularity in MDUs. The splits are easy to manage because they're all in one location. Bandwidth needs are typically on the low end in MDUs due to a lack of businesses (bandwidth being a severe future-proofing problem for PON). PON's biggest limitation for us is distance. We're deploying FTTH in the rural countryside, not in a dense residential neighborhood. PON has very specific distance limitations for each split and cumulative across all splits that make rural deployments extremely difficult. The price difference between Active and PON is negligible at this point and in many cases active is actually cheaper. Go with active for FTTH. You won't regret it. Justin
Re: FTTH Active vs Passive
Dan White wrote: All valid points. Deploying a strand to each customer from the CO/cabinet is a good way to future-proof your plant. However, there are some advantages to GPON - particularly if you're deploying high-bandwidth video services. PON ONTs share 2.4Gb/s of bandwidth downstream, which means you can support more than a gig of video on each PON, if deploying in dense mode. That's true, but I'd hope it wouldn't be needed. A single residence wouldn't get anywhere near needing 1Gbps of video bandwidth. Even with MPEG-2 and 50 HD STBs at 19Mbps, that would still leave 50Mbps for Internet. I don't know of anyone needing that much BW for video. PON does present the possibility of doing an RF overlay, though, which makes traditional RF delivery possible. That's something our CATV guy talks about often. The RF wavelength gets spun off at the NID and output as traditional RF on coax. I've heard of similar things with limited WDM from the egress side of the active Ethernet switch to the NID, but I haven't seen any in production. Another big advantage is in CO equipment. A 4-PON blade in a cabinet is going to support on the order of 256 ONTs. This is something that I don't think many people have dealt with before. In our rural Active FTTH environment we're not hubbing all the fiber out of COs. Most of it hubs back to cabinets on the side of the road and from there gets put on an Ethernet ring which ultimately terminates in the COs. Because of this, while we may have tens of thousands of strands out in the field, we don't have anywhere near that many in a single cabinet or CO. A lot of people think that Active FTTH means home-running every strand back to a single CO, and that's not generally the case. LECs usually deploy a distributed model with aggregation out in the field in cabinets or huts and then backhaul that back to the COs. This also means that fewer individual fiber ports get served out of any one location. So a cabinet might have 3-4 blades in individual chassis, or it might have a 13-slot chassis with only as many slots populated as demand requires. It seems to work well. I see what you mean, though, with the port density and space savings. I think most deployments manage to avoid the hassle, but I can see where extremely dense locations could run into trouble. Good points. Justin
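For what it's worth, here is the downstream arithmetic spelled out as a tiny Python sketch, since the shared-PON point is easy to gloss over. The 2.4 Gb/s and 19 Mb/s figures are just the numbers quoted above; the 32-home, 3-stream case is a made-up example for illustration, not anyone's actual take rate.

    # Back-of-the-envelope check of the downstream numbers in the post.
    # Assumes a nominal 2.4 Gb/s GPON downstream and the ~19 Mb/s MPEG-2 HD
    # stream rate mentioned above; illustrative figures only.

    PON_DOWNSTREAM_MBPS = 2400
    HD_MPEG2_MBPS = 19

    # The single-home case from the post: 50 HD streams against a 1 Gb/s drop.
    print(1000 - 50 * HD_MPEG2_MBPS, "Mb/s left for Internet on a 1 Gb/s drop")

    # The shared-PON view: 32 homes each pulling 3 HD streams at once.
    homes, streams = 32, 3
    video_total = homes * streams * HD_MPEG2_MBPS
    print((PON_DOWNSTREAM_MBPS - video_total) / homes,
          "Mb/s per home left on a fully loaded 1:32 PON")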
Historical traceroute logging
Does anyone know of any tools that can do repeated traceroutes over time to a remote IP and log the results for later viewing/comparison? I'd like to do a traceroute several times a day and store the details in CVS or somewhere else accessible down the road. Alerting on major path changes would be nice but not critical. The ability to compare traceroute output down the road would also be nice, but also not critical. I'm more interested in the path than the individual hops' RTTs. What's prompting this is a major change in RTTs for several hours yesterday to an ITSP with a site in the south. We share a common upstream (L3) and have in the past always transited that provider to get to each other. I saw a route change for the specific /23 in question in my border routers' RIBs. The adjacent /23, originating from the same ITSP but in a different part of the country, did not change (and neither did RTTs to the hosts we monitor in that /23). The site claimed nothing changed on their end and that they know of no changes upstream. BGP Play shows a route change from Level3 to Internap during the time in question (though the times don't line up exactly), which most likely caused the more-than-doubled RTTs we were seeing. My Cacti Advanced Ping graphs caught the problem in all its glory, and Nagios alerted me to the high RTTs as well. What I didn't get during that period of time was a traceroute to the site in question. I'd like to run a traceroute several times a day and find some way to store the output and work with it later if needed. I'd prefer OSS, but commercial apps would be considered too. I'm sure I'm not the first to need to check traceroutes like this. How do the rest of you handle it? Thanks Justin
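In case it helps frame the question, here is a minimal sketch of the sort of thing being asked for, assuming a plain cron-driven Python script rather than an existing tool: call the system traceroute, archive the raw output with a timestamp, and flag when the hop sequence differs from the previous run. The target address, log directory, and alert hook are placeholders, not anything in production.

    #!/usr/bin/env python3
    # Minimal traceroute-logging sketch: archive timestamped output and
    # note hop-path changes. Target and paths are placeholders.

    import os
    import re
    import subprocess
    import time

    TARGET = "192.0.2.1"            # placeholder target prefix of interest
    LOGDIR = "/var/log/tracepath"   # assumed log location

    def run_traceroute(target):
        out = subprocess.run(["traceroute", "-n", target],
                             capture_output=True, text=True, timeout=120).stdout
        # keep only the hop IPs so path comparison ignores RTT jitter
        hops = [m.group(1) for m in re.finditer(r"^\s*\d+\s+(\S+)", out, re.M)]
        return out, hops

    def main():
        os.makedirs(LOGDIR, exist_ok=True)
        raw, hops = run_traceroute(TARGET)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        with open(os.path.join(LOGDIR, f"{TARGET}-{stamp}.txt"), "w") as f:
            f.write(raw)

        last = os.path.join(LOGDIR, f"{TARGET}-last-path.txt")
        previous = []
        if os.path.exists(last):
            with open(last) as f:
                previous = f.read().split()
        if previous and previous != hops:
            # swap this print for a Nagios passive check or an email alert
            print(f"path change to {TARGET}: {previous} -> {hops}")
        with open(last, "w") as f:
            f.write("\n".join(hops))

    if __name__ == "__main__":
        main()

Run it from cron a few times a day and you end up with a directory of raw traceroutes to diff later, plus a cheap path-change alert; checking the archive into a version control system for history would be a natural next step.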