RE: Seattle NANOG 88 things to see
I'm also willing to try to fit in a few real CO tours around the event if people are so inclined. We operate the ILEC territory both north and east of the venue, and it is somewhat unique in that it is former GTE territory, not Bell System. I also recommend the telecom museum.

-----Original Message-----
From: NANOG On Behalf Of William Herrin
Sent: Wednesday, May 31, 2023 4:59 PM
To: nanog@nanog.org
Subject: Seattle NANOG 88 things to see

Howdy,

We're a couple weeks out from NANOG 88, so I thought I'd repost a list of things I think folks with computer and engineering backgrounds might enjoy doing up here in Seattle.

1. The Connections Museum is a must-see for telecom enthusiasts (which I assume you are, since you're attending a NANOG meeting). Six different phone switches (some electromechanical) and a boatload of other telecom gear taking up a floor and a half of a "central office" building, all in good working order. You can see and, to some extent, touch.
https://www.telcomhistory.org/connections-museum-seattle/
Beware: it's only open on Sundays from 10 am to 3 pm, so if you want to check it out, you'll have to come in early for it.

2. The monorail (https://www.seattlemonorail.com/) is a well-maintained, German-engineered 1960s vision of the future. It departs from Westlake Center, about 3 blocks from the hotel, and runs to the Space Needle and MoPOP (the Museum of Pop Culture), which are also worth seeing. Both the monorail and the Space Needle were built for the 1962 World's Fair. Buy tickets for the Space Needle the day before; sunset is particularly nice.

3. Snoqualmie Falls hydroelectric museum and power plant:
https://www.pse.com/en/pages/tours-and-recreation/snoqualmie-tours
Beware that Snoqualmie Falls is a half hour or so outside of the city.

4. Northwest Railway Museum (also near Snoqualmie Falls):
https://www.trainmuseum.org/

5. Museum of Flight (this is Boeing's home town, so it's a high-quality aircraft museum):
https://www.museumofflight.org/

6. Pike Place Market, about 10 blocks from the hotel, is a Seattle icon.

7. Mt. Rainier, if you want to check it out, is a full-day trip: 2.5 hours to get there, 2.5 hours to get back, plus the time you spend in the park. They finally cleared the snow from the roads last weekend, so it's open, but it's too far to catch in an afternoon. Decent odds of getting a shirtsleeves-on-the-snowpack picture like this one:
https://bill.herrin.us/pictures/20210627-rainier/img-20210627-145745.jpg
If you've been to Rainier before, Diablo Lake, North Cascades National Park and Washington Pass in the opposite direction are also beautiful.

Some things to know about Seattle:

* Summer weather is good weather in Seattle. Expect sunshine, mild to warm temperatures in the day, crisp mornings, and light if any rain. 5 am sunrise, 9 pm sunset.

* Downtown Seattle parking spaces are super-tight. If you rent a car, get a small one.

* Seattle is -very- dog friendly. You'll encounter our generally well-behaved canine companions on the street, in stores, and possibly even in the hotel and event venues. Pack your allergy medication if you need it.

Regards,
Bill Herrin

--
William Herrin
b...@herrin.us
https://bill.herrin.us/
RE: Do ISP's collect and analyze traffic of users?
As a decent-sized North American ISP, I think I have to totally agree with this post. There simply is not any economically justifiable reason to collect customer data: doing so is expensive and, unless you are trying to traffic-shape like a cell carrier, has zero economic benefit. In our case we do 1:4000 netflow samples and that is literally it; we use that data for peering analytics and failure modeling.

This is true for both large ISPs I've been involved with, and in both cases I would have overseen the policy.

What I see in this thread is a bunch of folks guessing who clearly have not been involved in large eyeball ISP operations.

-----Original Message-----
From: NANOG On Behalf Of Saku Ytti
Sent: Tuesday, May 16, 2023 7:56 AM
To: Tom Beecher
Cc: nanog@nanog.org
Subject: Re: Do ISP's collect and analyze traffic of users?

I can't tell what large is. But I've worked for enterprise ISPs and consumer ISPs, and none of the shops I worked for had the capability to monetise the information they had. And the information they had was increasingly low resolution. Infra providers are notoriously bad at monetising even their infra.

I'm sure some do monetise. But generally service providers do not have interested or active shareholders, so there is very little pressure to make more money; hence firesales happen all the time, with infrastructure increasingly seen as a liability, not an asset. They are generally boring companies, and internally no one has an incentive to monetise data, as it wouldn't improve their personal compensation. And regulations like GDPR create problems people would rather not solve unless pressured.

Technically, most people started 20 years ago with some netflow sampling ratio, and they still use the same sampling ratio despite many orders of magnitude more packets. Meaning that previously the share of flows captured was a magnitude higher than today; now only very few flows are seen in very typical applications, and netflow is largely for volumetric DDoS and high-level ingressAS=>egressAS metrics.
Hardware offered increasingly does IPFIX as if it were sflow, that is, with zero cache, exported immediately after sampling, because you'd need something like 1:100 or higher resolution to have any significant luck hitting the same flow twice. PTX has stopped supporting flow-cache entirely because of this: at a sampling rate where the cache would do something, the cache would overflow.

Of course there are other monetisation opportunities via mechanisms other than data-in-the-wire, like DNS.

On Tue, 16 May 2023 at 15:57, Tom Beecher wrote:
>
> Two simple rules for most large ISPs.
>
> 1. If they can see it, as long as they are not legally prohibited, they'll collect it.
> 2. If they can legally profit from that information, in any way, they will.
>
> Now, their privacy policies will always include lots of nice-sounding clauses, such as 'We don't see your personally identifiable information'. This of course allows them to sell 'anonymized' sets of that data, which sounds great, except, as researchers have proven, it's pretty trivial to scoop up multiple discrete anonymized data sets and cross-reference them to identify individuals. Netflow data may not be as directly 'valuable' as other types of data, but it can be used in the blender too.
>
> Information is the currency of the realm.
>
>
>
> On Mon, May 15, 2023 at 7:00 PM Michael Thomas wrote:
>>
>>
>> And maybe try to monetize it? I'm pretty sure that they can be compelled to do that, but do they do it for their own reasons too? Or is this way too much overhead to be doing en masse? (I vaguely recall that netflow, for example, can make routers unhappy if there is too much "flow".)
>>
>> Obviously this is likely to depend on local laws but since this is NANOG we can limit it to here.
>>
>> Mike

--
++ytti
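[Editor's note: Saku's point about sampling resolution can be made concrete with a little binomial arithmetic. The sketch below is illustrative only; the 1,000-packet flow size is a made-up example, and the 1:4000 and 1:100 ratios come from the thread.]

```python
from math import comb

def p_sampled_at_least(packets: int, ratio: int, times: int = 2) -> float:
    """Probability a flow of `packets` packets is sampled at least
    `times` times under independent 1-in-`ratio` packet sampling
    (simple binomial model; real samplers may be deterministic)."""
    p = 1.0 / ratio
    # Sum the probabilities of being sampled fewer than `times` times,
    # then take the complement.
    missed = sum(comb(packets, k) * p**k * (1 - p)**(packets - k)
                 for k in range(times))
    return 1.0 - missed

# A 1,000-packet flow at 1:4000 is almost never seen twice (~3%),
# while 1:100 catches it twice nearly every time (>99%).
print(f"1:4000 -> {p_sampled_at_least(1000, 4000):.3f}")
print(f"1:100  -> {p_sampled_at_least(1000, 100):.3f}")
```

This is why a cache buys nothing at 1:4000: almost no flow is ever hit twice, so there is nothing to aggregate.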
Re: Do ISP's collect and analyze traffic of users?
On Sat, Jun 10, 2023 at 9:46 AM John van Oppen wrote:
>
> As a decent sized north American ISP I think I need totally agree with this post. There simply is not any economically justifiable reason to collect customer data, doing so is expensive, and unless you are trying to traffic shape like a cell carrier

They shape? News to me...

> has zero economic benefit. In our case we do 1:4000 netflow samples and that is literally it, we use that data for peering analytics and failure modeling.
>
> This is true for both large ISPs I've been involved with and in both cases I would have overseen the policy.
>
> What I see in this thread is a bunch of folks guessing that clearly have not been involved in large eyeball ISP operations.

The smaller (mostly rural) WISPs I work with do not have the time or desire to monetize traffic either! Pretty much all of them have their hands full just solving tech support problems. They do collect extensive metrics on bandwidth, packet loss, latency, SNMP stats of all sorts, airtime, interference, CPU stats, and routing info (common tools are things like UISP, Splynx, and OpenNMS), and keep amazingly good (LiDAR, even) maps of the local terrain. If the bigger ISPs are only doing netflow once in a while, no wonder the little WISPs survive.

The ones shaping via libreqos.io now are totally in love [1] with our in-band RTT metrics, as that is giving them insight into their backhaul behaviors in rain and snow and sleet, instead of out-of-band SNMP, as well as insight into when it is the customer wifi that is the real problem. It is the combination of all these metrics that helps narrow down problems. But the only monetization that happens is the monthly bill. Most of these cats are actually rather ornery and *very* insistent about protecting their customers' privacy, from all comers, and resistant to cloud-based applications in general.
There are some bad apples in the WISP world that do want to rate-limit (via DPI) Netflix above all else when running low on backhaul, but they are not in my customer base.

[1] We (and they) *are* passionately interested in identifying the characteristics of multiple traffic types and mitigating attacks, and a couple are publishing some anonymized movies of what traffic looks like:
https://www.youtube.com/@trendaltoews7143/videos
Re: BGP routing ARIN space in APNIC region
On 6/9/23 21:54, Matt Harris wrote:
> I would also note that, from an end-user perspective, if we're talking about ISP services to customers on both ends here, you may run into geolocation issues where some geolocation providers decide that many/all of your users are in one location or the other, creating problems for them both with performance when they are misdirected to the wrong frontend servers, as well as in terms of convenience if they are being served content in the wrong location, or service issues related to access to streaming services, etc.

This is solvable by slicing your IPv4 prefixes into /24's and assigning them the correct country code in the ARIN WHOIS database. Yes, you might need to call a few geolocation providers to fix their back-ends manually, but this is possible. Even simpler for IPv6.

Mark.
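[Editor's note: beyond per-/24 WHOIS records, a common way to tell geolocation providers where prefixes live is a self-published geofeed in the RFC 8805 CSV format (prefix, ISO 3166-1 country, ISO 3166-2 region, city, optional postal code). The entries below are a hypothetical sketch using documentation prefixes from RFC 5737 and RFC 3849, not real assignments.]

```csv
# ip_prefix,alpha2code,region,city,postal_code
192.0.2.0/24,US,US-WA,Seattle,
198.51.100.0/24,CA,CA-BC,Vancouver,
2001:db8::/32,US,US-WA,,
```

Publishing the feed at a stable HTTPS URL and referencing it from the relevant RIR records lets providers pick up per-/24 (or finer) locations automatically instead of requiring manual back-end fixes.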