On Sun, Nov 07, 2010 at 01:42:33AM -0700, George Bonser wrote:
> >
> > > I guess you didn't read the links earlier. It has nothing to do with
> > > stack tweaks. The moment you lose a single packet, you are toast. And
> >
> > TCP SACK.
>
>
Certainly helps but still has limitations. If you have too many packets
in flight, it can take too long to locate the SACKed packets.
>> The mapping server idea that several proposals use does not appear to keep
>> the smartness at the edges; rather, they seem to try to make a smarter core
>> network.
>
> Is a DNS server core or edge? ILNP aims to use the DNS as its mapping
> service.
> ---
On 09/11/2010 13:46, Tony Finch wrote:
> Is a DNS server core or edge? ILNP aims to use the DNS as its mapping
> service.
This is one of several reasons that ILNP is destined to fail - imho.
Nick
> If you think peering points are the "middle" portion of the internet that all
> packets have to traverse, then this thread is beyond hope.
>
>
> -- Niels.
Making sweeping generalizations into thin air is fun!
This statement could easily be true, just as it could easily be false.
Nathan
--- d...@dotat.at wrote:
From: Tony Finch
On Mon, 8 Nov 2010, Scott Weeks wrote:
> The mapping server idea that several proposals use does not appear to keep
> the smartness at the edges; rather, they seem to try to make a smarter core
> network.
Is a DNS server core or edge? ILNP aims to use the DNS as its mapping
service.
--- b...@herrin.us wrote:
really would. Maybe you can tell me the page number, 'cause I just
can't wade through the rest of it.
-
Don't read anything until around chapter 6 or 7. Also, skip the last one.
Thanks for the responses.
scott
On Mon, 8 Nov 2010, Scott Weeks wrote:
>
> The mapping server idea that several proposals use does not appear to keep
> the smartness at the edges; rather, they seem to try to make a smarter core
> network.
Is a DNS server core or edge? ILNP aims to use the DNS as its mapping
service.
Tony.
--
f.anthony.n.finch <d...@dotat.at> http://dotat.at/
* gbon...@seven.com (George Bonser) [Mon 08 Nov 2010, 17:54 CET]:
I wasn't talking about changing anything at any of the edges. The
idea was just to get the "middle" portion of the internet, the
peering points, to a place that would support frames larger than
1500. It is practically impossible to change every edge network at once.
On Mon, Nov 8, 2010 at 6:02 PM, Scott Weeks wrote:
>> And so, "...the first principle of our proposed new network architecture:
>> Layers are recursive."
>
> : Anyone who has bridged an ethernet via a TCP based
> : IPSec tunnel understands that layers are recursive.
>
> WRT the paper I'm having t
--- d...@dotat.at wrote:
The point of a clean slate design is to rethink the foundations of your
architecture, and get rid of constraints that set you up to fail.
--
Yes, and I thought this idea could be the beginning of one way to do that,
and became interested.
On Mon, 8 Nov 2010, Scott Weeks wrote:
> From: Tony Finch
>
> : I note that he doesn't actually describe how to implement
> : a large-scale addressing and routing architecture. It's all
> : handwaving.
>
> There is more discussed in the book.
I have bought and read the book. It's an interesting a
--- eu...@leitl.org wrote:
From: Eugen Leitl
Networks are much too smart still, what you need is the barest decoration
upon the raw physics of this universe.
--
Yes, that's one thing I note. The mapping server idea that several proposals
use does not appear to keep the smartness at the edges; rather, they seem to
try to make a smarter core network.
Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
Date: Mon, Nov 08, 2010 at 10:08:53PM +0000 Quoting Nick Hilliard (n...@foobar.org):
> On 08/11/2010 21:51, valdis.kletni...@vt.edu wrote:
> > So there's empirical data that It Does Indeed Matter (at least to some
> > people).
--- d...@dotat.at wrote:
From: Tony Finch
: I note that he doesn't actually describe how to implement
: a large-scale addressing and routing architecture. It's all
: handwaving.
There is more discussed in the book. The paper was written by another person
and had to hit only the highlights.
> Been unexpectedly gone for the weekend, apologies for the delay. Wow,
> can subjects get hijacked quickly here. I think it happened within one or two
> emails. It was just for weekend fun anyway...
So... You tossed a cow into a pool (that you knew was) filled with piranhas,
waited a few days
On 11/8/2010 4:08 PM, Nick Hilliard wrote:
Anyway, all of the arguments for it, both pro and con, have been rehashed
on this thread. The bottom line is that for most companies, it simply
isn't worth the effort, but that for some NRENs, it is.
I think a lot of that is misinformation and confusion.
Been unexpectedly gone for the weekend, apologies for the delay. Wow, can
subjects get hijacked quickly here. I think it happened within one or two
emails. It was just for weekend fun anyway...
--- b...@herrin.us wrote:
From: William Herrin
> And so, "...the first principle of our propos
Once upon a time, valdis.kletni...@vt.edu said:
> That's right up there with the sites that blackhole their abuse@
> address, and then claim they never actually see any complaints.
What about telcos that disable error counters and then say "we don't see
any errors"?
--
Chris Adams
Systems and Network Administrator
On 08/11/2010 21:51, valdis.kletni...@vt.edu wrote:
> So there's empirical data that It Does Indeed Matter (at least to some
> people).
It certainly does. However, there is lots more empirical data to suggest
that It Does Not Matter to most service providers. We tried introducing it
to INEX several years ago.
On Mon, 08 Nov 2010 19:36:49 +0100, Mans Nilsson said:
> Given this empirical data, clearly pointing to the fact that It Does
> Not Matter, I think we can stop this nonsense now.
That's right up there with the sites that blackhole their abuse@
address, and then claim they never actually see any complaints.
On 11/8/2010 12:36 PM, Mans Nilsson wrote:
I'd concur that links where routers exchange very large routing tables
benefit from PMTUD (most) and larger MTU (to some degree), but I'd
argue that most IXPen see few prefixes per peering, up to a few
thousand max. The large tables run via PNI and paid transit.
Subject: RE: RINA - scott whaps at the nanog hornets nest :-) Date: Mon, Nov
08, 2010 at 08:53:47AM -0800 Quoting George Bonser (gbon...@seven.com):
> Even if larger MTUen are interesting (but most of the time not worth
> the work) the sole reason I like SDH as my WAN technology is the
> presence of signalling -- so that both ends of a link are aware of its
> status near-instantly (via protocol parts like RDI etc). In GE it is
> > legal to not signal link faults to the far end.
On 11/8/2010 9:56 AM, Tony Finch wrote:
I note that he doesn't actually describe how to implement a large-scale
addressing and routing architecture. It's all handwaving.
That's an extremely hard problem to address. While there are many
proposals, they usually do away with features which we use today.
On Mon, Nov 08, 2010 at 03:56:17PM +, Tony Finch wrote:
> I note that he doesn't actually describe how to implement a large-scale
> addressing and routing architecture. It's all handwaving.
I'm probably vying for nanog-kook status as well, but in high-dimensional
spaces blocking is arbitraril
On Sun, 7 Nov 2010, William Herrin wrote:
>
> > http://www.ionary.com/PSOC-MovingBeyondTCP.pdf
>
> The last time this was discussed in the Routing Research Group, none
> of the proponents were able to adequately describe how to build a
> translation/forwarding table in the routers or whatever passes for them.
Subject: RE: RINA - scott whaps at the nanog hornets nest :-) Date: Sun, Nov
07, 2010 at 12:34:56AM -0700 Quoting George Bonser (gbon...@seven.com):
>
> Yes, I really don't understand that either. You would think that the
> investment in developing and deploying all that SONET infrastructure
> has been paid back by now and they can lower the prices dramatically.
On Sun, 7 Nov 2010 01:49:20 -0600
Richard A Steenbergen wrote:
> On Sun, Nov 07, 2010 at 08:02:28AM +0100, Mans Nilsson wrote:
> >
> > The only reason to use (10)GE for transmission in WAN is the
> > completely baroque price difference in interface pricing. With todays
> > line rates, the components and complexity of a line card are pretty
> > much equal between SDH and GE.
On Sun, 7 Nov 2010 01:07:17 -0700
"George Bonser" wrote:
> > >
> > > Yes, I really don't understand that either. You would think that the
> > > investment in developing and deploying all that SONET infrastructure
> > > has been paid back by now and they can lower the prices dramatically.
> >
On 11/08/2010 07:57 GMT+08:00, William Herrin wrote:
> On Fri, Nov 5, 2010 at 6:32 PM, Scott Weeks wrote:
>> It's really quiet in here. So, for some Friday fun let
>> me whap at the hornets nest and see what happens... >;-)
>>
> > And so, "...the first principle of our proposed new network architecture:
> > Layers are recursive."
On Fri, Nov 5, 2010 at 6:32 PM, Scott Weeks wrote:
> It's really quiet in here. So, for some Friday fun let
> me whap at the hornets nest and see what happens... >;-)
>
> And so, "...the first principle of our proposed new network architecture:
> Layers are recursive."
Hi Scott,
Anyone who has bridged an ethernet via a TCP based IPSec tunnel
understands that layers are recursive.
On 11/7/2010 3:45 AM, Will Hargrave wrote:
> I used to run a large academic network; there was a vanishingly small
> incidence of edge ports supporting >1500byte MTU.
I run a moderately sized academic network, and know some details of our
"other" campus infrastructure (some larger, some smaller)
>
> I used to run a large academic network; there was a vanishingly small
> incidence of edge ports supporting >1500byte MTU. It's possibly even
> more tricky than the IX situation to support in an environment where
> you commonly have mixed devices at different speeds (most 100mbit
> devices will not support jumbo frames).
On 7 Nov 2010, at 08:24, George Bonser wrote:
> It will happen on its own as more and more networks configure internally
> for larger frames and as more people migrate out of academia where 9000
> is the norm these days into industry.
I used to run a large academic network; there was a vanishingly small
incidence of edge ports supporting >1500byte MTU.
>
> > I guess you didn't read the links earlier. It has nothing to do with
> > stack tweaks. The moment you lose a single packet, you are toast. And
>
> TCP SACK.
Certainly helps but still has limitations. If you have too many packets
in flight, it can take too long to locate the SACKed packets.
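The arithmetic behind "too many packets in flight": the number of
unacknowledged segments a sender's SACK scoreboard must track scales with
the bandwidth-delay product divided by the segment size. A minimal Python
sketch; the 10 Gbps rate and 40 ms RTT are illustrative assumptions, not
figures from this message:

    # How many segments are "in flight" on a fast, long path? A SACK
    # scoreboard has to track every one of them, so loss-recovery work
    # grows with BDP/MSS.
    def segments_in_flight(rate_bps: float, rtt_s: float, mss_bytes: int) -> float:
        bdp_bytes = rate_bps / 8 * rtt_s      # bandwidth-delay product in bytes
        return bdp_bytes / mss_bytes

    for mss in (1460, 8960):                  # 1500- and 9000-byte MTUs minus 40B of headers
        n = segments_in_flight(10e9, 0.040, mss)   # 10 Gbps, 40 ms RTT
        print(f"MSS {mss}: ~{n:,.0f} unacked segments to track")
    # MSS 1460 -> ~34,000 segments; MSS 8960 -> ~5,600. Larger frames shrink
    # the window both ends must search when a hole is SACKed.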
On 6 Nov 2010, at 20:29, Matthew Petach wrote:
>> There is no reason why we are still using 1500 byte MTUs at exchange points.
> Completely agree with you on that point. I'd love to see Equinix, AMSIX,
> LINX, DECIX, and the rest of the large exchange points put out statements
> indicating their ability to transparently support jumbo frames through
> their fabrics, or at least indicate a roadmap and a timeline to when they
> will support it.
>
> On Sun, 7 Nov 2010, George Bonser wrote:
>
> > I guess you didn't read the links earlier. It has nothing to do with
> > stack tweaks. The moment you lose a single packet, you are toast. And
>
> TCP SACK.
>
> I'm too tired to correct your other statements that lack basis in
> reality (or at least in my reality).
> >
> > Yes, I really don't understand that either. You would think that the
> > investment in developing and deploying all that SONET infrastructure
> > has been paid back by now and they can lower the prices dramatically.
> > One would think the vendors would be practically giving it away,
> > p
On Sun, 7 Nov 2010, George Bonser wrote:
I guess you didn't read the links earlier. It has nothing to do with
stack tweaks. The moment you lose a single packet, you are toast. And
TCP SACK.
I'm too tired to correct your other statements that lack basis in reality
(or at least in my reality).
>
> Oh, come on. Get real. The world TCP speed record is 10GE right now,
> it'll go higher as soon as there are higher interface speeds to be had.
You can buy 100G right now. I also believe there are some 40G
available, too.
Also, check this:
http://media.caltech.edu/press_releases/13216
On Sun, Nov 07, 2010 at 12:34:56AM -0700, George Bonser wrote:
>
> Yes, I really don't understand that either. You would think that the
> investment in developing and deploying all that SONET infrastructure
> has been paid back by now and they can lower the prices dramatically.
> One would think the vendors would be practically giving it away.
On Sun, Nov 07, 2010 at 08:02:28AM +0100, Mans Nilsson wrote:
>
> The only reason to use (10)GE for transmission in WAN is the
> completely baroque price difference in interface pricing. With todays
> line rates, the components and complexity of a line card are pretty
> much equal between SDH and GE.
On Sun, 7 Nov 2010, George Bonser wrote:
True, but TCP is what we are stuck with for right now. Different
protocols could be developed to handle the small packets better.
We're not "stuck" with TCP; TCP is being developed all the time.
http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm
I never even considered this aspect of the MTU issue.
--
Brielle Bruns
http://www.sosdg.org / http://www.ahbl.org
-Original Message-
From: "George Bonser"
Date: Sun, 7 Nov 2010 00:19:03
To:
Subject: RE: RINA - scott whaps at the nanog hornets nest :-)
>
> Also, if we
>
> The only reason to use (10)GE for transmission in WAN is the completely
> baroque price difference in interface pricing. With todays line rates,
> the components and complexity of a line card are pretty much equal
> between SDH and GE. There is no reason to overcharge for the better
> interface.
>
> Also, if we're going to go for bigger MTUs, going from 1500 to 9000 is
> basically worthless, if we really want to do something, we should go
> for
> 64k or even bigger.
I agree but we need to work with what we have. Practically everything
currently appearing at a peering point will support frames of 9000 bytes
or more.
Subject: RE: RINA - scott whaps at the nanog hornets nest :-) Date: Sat, Nov
06, 2010 at 08:38:33PM -0700 Quoting George Bonser (gbon...@seven.com):
> No wonder there is still so much transport
> using SONET. Using Ethernet reduces your effective performance over
> long distance paths.
On Sat, 6 Nov 2010, George Bonser wrote:
And by that I mean using 1500 MTU is what degrades the performance, not
the ethernet physical transport. Using MTU 9000 would give you better
performance than SONET. That is why Internet2 pushes so hard for people
to use the largest possible MTU and t
> > * gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
> > >It just seems a shame that two servers with FDDI interfaces using SONET
> > >
> > Earth to George Bonser: IT IS NOT 1998 ANYMORE.
>
> Exactly my point. Why should we adopt newer technology while using
> configuration parameters from 1998?
> -Original Message-
> From: Niels Bakker [mailto:niels=na...@bakker.net]
> Sent: Saturday, November 06, 2010 8:32 PM
> To: nanog@nanog.org
> Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
>
> * gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
On 11/6/2010 10:31 PM, Niels Bakker wrote:
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
It just seems a shame that two servers with FDDI interfaces using SONET
Earth to George Bonser: IT IS NOT 1998 ANYMORE.
We don't fly SR-71s or use bigger MTU interfaces. Get with the times.
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
It just seems a shame that two servers with FDDI interfaces using SONET
Earth to George Bonser: IT IS NOT 1998 ANYMORE.
-- Niels.
> I'd like to order a dozen of those 40ms RTT LA to NYC wavelengths,
> please.
>
> If you could just arrange a suitable demonstration of packet-level
> delivery time of 40ms from Los Angeles to New York and back, I'm sure
> there would be a *long* line of people behind me, checks in hand.
>
> I prefer much less packet loss in a majority of my transmissions, which
> in turn brings those numbers closer together.
>
>
> Jack
True, though the fact that it greatly reduces packets in flight for a
given amount of data gives a lot of benefit, particularly over high
latency connections.
On Sat, Nov 6, 2010 at 5:21 PM, George Bonser wrote:
...
> (quote)
> Let's take an example: New York to Los Angeles. Round Trip Time (rtt) is
> about 40 msec, and let's say packet loss is 0.1% (0.001). With an MTU of
> 1500 bytes (MSS of 1460), TCP throughput will have an upper bound of
> about 6.5 Mbps!
On 11/6/2010 7:21 PM, George Bonser wrote:
(quote)
Let's take an example: New York to Los Angeles. Round Trip Time (rtt) is
about 40 msec, and let's say packet loss is 0.1% (0.001). With an MTU of
1500 bytes (MSS of 1460), TCP throughput will have an upper bound of
about 6.5 Mbps! And no, that is not a typo.
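The ceiling quoted above is the Mathis et al. approximation,
rate <= C * MSS / (RTT * sqrt(loss)). A minimal Python sketch; the constant
C ~= 0.7 is an assumption chosen because it reproduces the quoted 6.5 Mbps
figure (values up to ~1.22 appear in the literature depending on ACK
behavior):

    from math import sqrt

    def mathis_bound_bps(mss_bytes, rtt_s, loss, c=0.7):
        """Approximate upper bound on steady-state TCP throughput (bits/s)."""
        return c * (mss_bytes * 8) / (rtt_s * sqrt(loss))

    rtt, loss = 0.040, 0.001        # the NYC <-> LA example above
    for mss in (1460, 8960):        # 1500- and 9000-byte MTUs minus 40B of headers
        print(f"MSS {mss}: {mathis_bound_bps(mss, rtt, loss) / 1e6:.1f} Mbps")
    # MSS 1460 -> ~6.5 Mbps; MSS 8960 -> ~39.7 Mbps, the ~6x boost cited
    # later in the thread. Tolerating ~36x the loss at the same rate follows
    # from the square root: (8960/1460)^2 ~= 38.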
On Sat, 06 Nov 2010 11:45:01 -0500
Jack Bates wrote:
> On 11/5/2010 5:32 PM, Scott Weeks wrote:
> >
> > It's really quiet in here. So, for some Friday fun let me whap at the
> > hornets nest and see what happens...>;-)
> >
> >
> > http://www.ionary.com/PSOC-MovingBeyondTCP.pdf
> >
>
> SCTP is a great protocol.
> So if you consider >5x performance boost to be "minimal" yeah, I guess.
> Or being able to operate at todays transfer rates in the face of 36x
> more packet loss to be "minimal" improvement, I suppose.
And those improvements in performance get larger the longer the latency
of the connection. F
>
> On the contrary. You're proposing to fuck around with the one place
> on the whole Internet that has pretty clear and well adhered-to rules
> and expectations about MTU size supported by participants, and
> basically re-live the problems from MAE-East and other shared
> Ethernet/FDDI platforms.
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 00:30 CET]:
Re: large MTU
One place where this has the potential to greatly improve
performance is in transfers of large amounts of data such as vendors
supporting the downloading of movies, cloud storage vendors, and
movement of other large content and streaming.
On Nov 6, 2010, at 10:38 AM, Mark Smith wrote:
> On Fri, 5 Nov 2010 21:40:30 -0400
> Marshall Eubanks wrote:
>
>>
>> On Nov 5, 2010, at 7:26 PM, Mark Smith wrote:
>>
>>> On Fri, 5 Nov 2010 15:32:30 -0700
>>> "Scott Weeks" wrote:
>>>
It's really quiet in here. So, for some Friday fun let me whap at the
hornets nest and see what happens... >;-)
On Sat, Nov 06, 2010 at 03:49:19PM -0700, George Bonser wrote:
>
> When the TCP/IP connection is opened between the routers for a routing
> session, they should each send the other an MSS value that says how
> large a packet they can accept. You already have that information
> available. TCP p
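The MSS exchange George describes is already part of TCP's handshake: each
side advertises an MSS option in its SYN. A minimal Linux Python sketch of
the per-socket knob; the value 8960 is an assumption for a 9000-byte MTU
minus 40 bytes of IP+TCP headers:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Cap the MSS we advertise in our SYN; must be set before connect().
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 8960)
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG))
    # After connect() to a peer (say, a BGP neighbor on port 179), the
    # kernel clamps the effective MSS to min(ours, theirs, route MTU - 40).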
Re: large MTU
One place where this has the potential to greatly improve performance is
in transfers of large amounts of data such as vendors supporting the
downloading of movies, cloud storage vendors, and movement of other
large content and streaming. The *first* step in being able to realize
those gains is getting the middle of the network to support larger frames.
>
> and that verified that the problem was an MTU black hole. A little
> reading revealed why Solaris wasn't having the problem but Linux did.
> Setting the Linux ip_no_pmtu_disc sysctl to 1 resulted in the Linux
> behavior matching the Solaris behavior.
Oops, meant tcp_mtu_probing
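For reference, a minimal sketch of toggling that sysctl on Linux; the
values are per the kernel's ip-sysctl documentation (0 = off, 1 = probe
only after a black hole is suspected, 2 = always probe), and writing
requires root:

    PROBING = "/proc/sys/net/ipv4/tcp_mtu_probing"

    with open(PROBING) as f:          # read the current setting
        print("tcp_mtu_probing =", f.read().strip())

    with open(PROBING, "w") as f:     # needs root; enables RFC 4821 probing
        f.write("1")                  # probe on black-hole detection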
>
> The only thing this adds is a trial-and-error probing mechanism per
> flow, to try and recover from the infinite blackholing that would occur
> if your ICMP is blocked in classic PMTUD. If this actually happened at
> any scale, it would create a performance and overhead penalty that is
> far worse.
On 11/6/2010 3:14 PM, George Bonser wrote:
It ships with Microsoft Windows as "Blackhole
Router Detection" and is on by default since Windows 2003 SP2.
The first item returned on a blekko search is the following article
which indicates that it is on by default in Windows
2008/Vista/2003/XP/2000.
On Sat, Nov 06, 2010 at 02:21:51PM -0700, George Bonser wrote:
>
> That is not a new problem. That is also true to today with "last
> mile" links (e.g. dialup) that support <1500 byte MTU. What is
> different today is RFC 4821 PMTU discovery which deals with the "black
> holes".
>
> RFC 4821
> >
> While it reads well, what implementations are actually in use? As with
> most protocols, it is useless if it doesn't have a high penetration.
>
> Jack
Solaris 10, in use and on by default. Available on Windows for a very
long time as "blackhole router detection"; it was off by default originally.
>
> As long as the implementations are few and far between:
>
> https://www.psc.edu/~mathis/MTU/
> http://www.ietf.org/mail-archive/web/rrg/current/msg05816.html
>
> the traditional ICMP-based PMTUD is what most of use face today.
>
> Steinar Haug, Nethelp consulting, sth...@nethelp.no
On 06/11/10 15:56 -0500, Jack Bates wrote:
On 11/6/2010 3:36 PM, Richard A Steenbergen wrote:
#2. The major vendors can't even agree on how they represent MTU sizes,
so entering the same # into routers from two different vendors can
easily result in incompatible MTUs. For example, on Juniper when you
type "mtu 9192", this is INCLUSIVE of the L2 header.
On 11/6/2010 4:52 PM, George Bonser wrote:
That is also somewhat mitigated in that it operates in two modes. The
first mode is what I would call "passive" mode and only comes into play
once a black hole is detected. It does not change the operation of TCP
until a packet disappears. The second mode probes actively from the start.
> > > RFC 4821 PMTUD is that "negotiation" that is "lacking". It is there.
> > > It is deployed. It actually works. No more relying on someone sending
> > > the ICMP packets through in order for PMTUD to work!
> >
> > For some value of "works". There are way too many places filtering
> > ICMP for PMTUD to work consistently.
>
> He was referring to the updated RFC 4821.
>
> " In the absence of ICMP messages, the proper MTU is determined by
> starting
> with small packets and probing with successively larger packets.
> The
> bulk of the algorithm is implemented above IP, in the transport
> layer
> (e.g., T
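A toy Python sketch of the search the excerpt describes. Here probe() is a
hypothetical stand-in for "send a DF-marked segment of this size and see
whether it is acknowledged"; no ICMP feedback is consulted:

    from typing import Callable

    def search_pmtu(probe: Callable[[int], bool], floor: int = 1024, ceil: int = 9000) -> int:
        """Bisect between a size known to work and one known to fail."""
        good, bad = floor, ceil + 1
        while bad - good > 1:
            mid = (good + bad) // 2
            if probe(mid):
                good = mid     # this size got through; raise the floor
            else:
                bad = mid      # silently black-holed; lower the ceiling
        return good

    # Example: a path that silently drops anything over 4470 bytes.
    print(search_pmtu(lambda size: size <= 4470))   # -> 4470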
>
> While I think 9k for exchange points is an excellent target, I'll
> reiterate that there's a *lot* of SONET interfaces out there that won't
> be going away any time soon, so practically speaking, you won't really
> get more than 4400 end-to-end, even if you set your hosts to 9k as well.
Agreed.
On 11/6/2010 4:40 PM, sth...@nethelp.no wrote:
For some value of "works". There are way too many places filtering
ICMP for PMTUD to work consistently. PMTUD is *not* the solution,
unfortunately.
He was referring to the updated RFC 4821.
" In the absence of ICMP messages, the proper MTU is d
> -Original Message-
> From: sth...@nethelp.no [mailto:sth...@nethelp.no]
> Sent: Saturday, November 06, 2010 2:40 PM
> To: George Bonser
> Cc: r...@e-gerbil.net; nanog@nanog.org
> Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
>
> > RFC 4821
> RFC 4821 PMTUD is that "negotiation" that is "lacking". It is there.
> It is deployed. It actually works. No more relying on someone sending
> the ICMP packets through in order for PMTUD to work!
For some value of "works". There are way too many places filtering
ICMP for PMTUD to work consistently. PMTUD is *not* the solution,
unfortunately.
> Completely agree with you on that point. I'd love to see Equinix, AMSIX,
> LINX, DECIX, and the rest of the large exchange points put out statements
> indicating their ability to transparently support jumbo frames through
> their fabrics, or at least indicate a roadmap and a timeline to when they
> will support it.
On Sat, Nov 6, 2010 at 2:21 PM, George Bonser wrote:
>
...
> As for the configuration differences between units, how does that change
> from the way things are now? A person configuring a Juniper for 1500
> byte packets already must know the difference, as that quirk of including
> the headers is not new.
On Saturday, November 6, 2010 at 13:29 -0700, Matthew Petach wrote:
> On Sat, Nov 6, 2010 at 1:22 PM, George Bonser wrote:
> >> >
> >> > Last week I asked the operator of fairly major public peering points
> >> > if they supported anything larger than 1500 MTU. The answer was "no".
> >> >
> >>
> >>
> It's perfectly safe to have the L2 networks in the middle support the
> largest MTU values possible (other than maybe triggering an obscure
> Force10 bug or something :P), so they could roll that out today and you
> probably wouldn't notice. The real issue is with the L3 networks on
> either end.
On Saturday, November 6, 2010 at 13:01 -0700, Matthew Petach wrote:
> On Sat, Nov 6, 2010 at 12:32 PM, George Bonser wrote:
> >> I doubt that 1500 is (still) widely used in our Internet... Might be,
> >> though, that most of us don't go all the way to 9k.
> >>
> >> mh
> >
> > Last week I asked the operator of fairly major public peering points
> > if they supported anything larger than 1500 MTU. The answer was "no".
On 11/6/2010 2:15 PM, George Bonser wrote:
I believe SCTP will become more widely used in the mobile device world. You can have
several different streams so you can still get an IM, for example, while you are
streaming a movie. Eliminating the "head of line" blockage on thin connections
is really valuable.
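A toy model of that head-of-line point, not real SCTP: with one ordered
byte stream, everything behind a lost segment stalls, while with
independently sequenced streams only the stream that took the loss waits.
The stream names and sequence numbers below are made up for illustration:

    msgs = [("im", 1), ("movie", 1), ("movie", 2), ("im", 2), ("movie", 3)]
    lost = {("movie", 1)}            # one movie segment is dropped

    # Single TCP-like ordered stream: delivery stops at the hole.
    tcp_delivered = []
    for m in msgs:
        if m in lost:
            break
        tcp_delivered.append(m)

    # SCTP-like independent streams: only the "movie" stream stalls.
    sctp_delivered, stalled = [], set()
    for stream, seq in msgs:
        if (stream, seq) in lost or stream in stalled:
            stalled.add(stream)
            continue
        sctp_delivered.append((stream, seq))

    print(tcp_delivered)   # [('im', 1)] -- the second IM is stuck too
    print(sctp_delivered)  # [('im', 1), ('im', 2)] -- IMs keep flowing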
On 11/6/2010 3:36 PM, Richard A Steenbergen wrote:
#2. The major vendors can't even agree on how they represent MTU sizes,
so entering the same # into routers from two different vendors can
easily result in incompatible MTUs. For example, on Juniper when you
type "mtu 9192", this is INCLUSIVE of
>
> Completely agree with you on that point. I'd love to see Equinix, AMSIX,
> LINX, DECIX, and the rest of the large exchange points put out statements
> indicating their ability to transparently support jumbo frames through
> their fabrics, or at least indicate a roadmap and a timeline to when they
> will support it.
On Sat, Nov 06, 2010 at 12:32:55PM -0700, George Bonser wrote:
> > I doubt that 1500 is (still) widely used in our Internet... Might be,
> > though, that most of us don't go all the way to 9k.
>
> Last week I asked the operator of fairly major public peering points
> if they supported anything larger than 1500 MTU. The answer was "no".
On Sat, Nov 6, 2010 at 1:22 PM, George Bonser wrote:
>> >
>> > Last week I asked the operator of fairly major public peering points
>> > if they supported anything larger than 1500 MTU. The answer was "no".
>> >
>>
>> There's still a metric buttload of SONET interfaces in the core that
>> won't go above 4470.
> >
> > Last week I asked the operator of fairly major public peering points
> > if they supported anything larger than 1500 MTU. The answer was "no".
> >
>
> There's still a metric buttload of SONET interfaces in the core that
> won't go above 4470.
>
> So, you might conceivably get 4k MTU at some point in the future, but
> it's really, *really* unlikely you'll get to 9k MTU any time in the
> next decade.
> 1500 was fine for 10G
I meant, of course, 10M ethernet.
>
> There's still a metric buttload of SONET interfaces in the core that
> won't go above 4470.
>
> So, you might conceivably get 4k MTU at some point in the future, but
> it's really, *really* unlikely you'll get to 9k MTU any time in the
> next
> decade.
>
> Matt
Agreed. But even 4470 is better than 1500.
On Sat, Nov 6, 2010 at 12:32 PM, George Bonser wrote:
>> I doubt that 1500 is (still) widely used in our Internet... Might be,
>> though, that most of us don't go all the way to 9k.
>>
>> mh
>
> Last week I asked the operator of fairly major public peering points if they
> supported anything larger than 1500 MTU. The answer was "no".
> I doubt that 1500 is (still) widely used in our Internet... Might be,
> though, that most of us don't go all the way to 9k.
>
> mh
Last week I asked the operator of fairly major public peering points if they
supported anything larger than 1500 MTU. The answer was "no".
On Saturday, November 6, 2010 at 12:15 -0700, George Bonser wrote:
> > Sent: Saturday, November 06, 2010 9:45 AM
> > To: nanog@nanog.org
> > Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
> >
> > On 11/5/2010 5:32 PM, Scott Weeks wrote:
> Sent: Saturday, November 06, 2010 9:45 AM
> To: nanog@nanog.org
> Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
>
> On 11/5/2010 5:32 PM, Scott Weeks wrote:
> >
> > It's really quiet in here. So, for some Friday fun let me whap at
> > the hornets nest and see what happens... >;-)
On 11/5/2010 5:32 PM, Scott Weeks wrote:
It's really quiet in here. So, for some Friday fun let me whap at the hornets
nest and see what happens...>;-)
http://www.ionary.com/PSOC-MovingBeyondTCP.pdf
SCTP is a great protocol. It has already been implemented in a number of
stacks. With the
On Fri, 5 Nov 2010 21:40:30 -0400
Marshall Eubanks wrote:
>
> On Nov 5, 2010, at 7:26 PM, Mark Smith wrote:
>
> > On Fri, 5 Nov 2010 15:32:30 -0700
> > "Scott Weeks" wrote:
> >
> >>
> >>
> >> It's really quiet in here. So, for some Friday fun let me whap at the
> >> hornets nest and see what happens... >;-)
Subject: RINA - scott whaps at the nanog hornets nest :-) Date: Fri, Nov 05,
2010 at 03:32:30PM -0700 Quoting Scott Weeks (sur...@mauigateway.com):
>
>
> It's really quiet in here. So, for some Friday fun let me whap at the
> hornets nest and see what happens...
On Nov 5, 2010, at 7:26 PM, Mark Smith wrote:
> On Fri, 5 Nov 2010 15:32:30 -0700
> "Scott Weeks" wrote:
>
>>
>>
>> It's really quiet in here. So, for some Friday fun let me whap at the
>> hornets nest and see what happens... >;-)
>>
>>
>> http://www.ionary.com/PSOC-MovingBeyondTCP.pdf
>
--- r...@e-gerbil.net wrote:
From: Richard A Steenbergen
On Fri, Nov 05, 2010 at 03:32:30PM -0700, Scott Weeks wrote:
> It's really quiet in here. So, for some Friday fun let me whap at the
> hornets nest and see what happens... >;-)
Arguments about locator/identifier splits aside (which I
--- na...@85d5b20a518b8f6864949bd940457dc124746ddc.nosense.org wrote:
From: Mark Smith
> http://www.ionary.com/PSOC-MovingBeyondTCP.pdf
Whoever wrote that doesn't know what they're talking about. LISP is
not the IETF's proposed solution (the IETF don't have one, the IRTF do),
and streaming media