Robert Wessel <[email protected]> writes:
> That's not a very valid comparison.  SDLC is mostly a link level
> protocol; IP, UDP and TCP are not.  In many cases there is
> considerable error recovery on links that IP is run over - if for no
> other reason than the end-to-end error recovery in TCP works poorly if
> there are too many errors.  In any event, the lack of end-to-end error
> checking and recovery in SNA is a major failing, as it requires near
> perfect error management on every link and node between the two
> endpoints.

a big issue with tcp throughput is slow-start as the mechanism for
congestion control/avoidance ... i.e. in an enormously large heterogeneous
network with dozens of hops end-to-end and bursty traffic ... there is a
relatively high probability of periodic congestion. dropping a packet
and then restarting slow-start from scratch can enormously cut throughput.
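a toy sketch of that effect (my own illustrative model and numbers, not from any of the papers mentioned): a Tahoe-style sender whose congestion window restarts at one segment on every loss delivers far less when losses are frequent.

```python
# toy model (illustrative only): slow-start where every packet loss
# resets the congestion window (cwnd, in segments) back to one.

def slow_start_throughput(rtts, loss_every, ssthresh=64):
    """segments delivered over `rtts` round trips, with a loss
    every `loss_every` round trips."""
    cwnd, sent = 1, 0
    for rtt in range(1, rtts + 1):
        sent += cwnd
        if rtt % loss_every == 0:
            cwnd = 1            # loss: restart slow-start from scratch
        elif cwnd < ssthresh:
            cwnd *= 2           # exponential-growth phase
        # (linear congestion-avoidance growth above ssthresh omitted)
    return sent

# frequent congestion drops cut the delivered volume drastically
print(slow_start_throughput(100, loss_every=100))  # rare loss
print(slow_start_throughput(100, loss_every=5))    # periodic congestion
```

with a loss every 5 round trips the sender never gets past a 16-segment window, so most of the link capacity goes unused.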

note that on my internal backbone in hsdt ... we were already doing
rate-based pacing ... well before slow-start was deployed.
http://www.garlic.com/~lynn/subnetwork.html#hsdt
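a minimal sketch of the idea (my illustration, not the actual hsdt design): instead of bursting a whole window per round trip, a rate-based sender spaces packets at a fixed inter-packet gap derived from an estimated bottleneck rate ... which is exactly why it needs fine-grained timer support from the platform.

```python
import time

# sketch of rate-based pacing (illustrative, not the hsdt implementation):
# space transmissions at pkt_bits/rate_bps seconds apart rather than
# bursting a window's worth of packets at once.

def paced_send(packets, rate_bps, pkt_bits=12_000, send=lambda p: None):
    gap = pkt_bits / rate_bps          # seconds between transmissions
    for p in packets:
        start = time.monotonic()
        send(p)
        # sleep until the next transmission slot; needs a timer with
        # resolution much finer than the gap to work at high rates
        remaining = gap - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
```

on a platform whose clock only ticks every few tens of milliseconds, the gap computation above is meaningless at lan speeds ... hence the fallback to window-based slow-start on those stacks.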

at approx. the same time that slow-start was presented at an IETF meeting
... there was also an acm sigcomm meeting with a couple of papers of
interest ... one showed that slow-start was unstable in a large,
heterogeneous, real-world bursty network. I've periodically pointed out
that rate-based pacing requires at least some rudimentary system timing
facilities ... and in this time period, tcp/ip stacks were being deployed
on low-end platforms with insufficient timer support ... somewhat
accounting for being forced to fall back to slow-start.

there have been some recent papers claiming that a rate-based tcp
running over a 56kbit/sec dial-up link can have higher end-to-end
throughput than a standard slow-start tcp running over a 1.5mbit/sec
link (given various congestion scenarios) ... in any case, the legacy
justification for slow-start is long gone.
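a back-of-envelope illustration of how that can happen (my own assumed numbers, not taken from those papers): if the slow-start sender on the fast link hits a retransmission timeout every couple of round trips, its average goodput can fall below that of a paced sender keeping a slow link steadily full.

```python
# assumed numbers for illustration only
SEG_BITS = 12_000        # 1500-byte segment
RTT = 0.1                # seconds
RTO = 1.0                # retransmission timeout stall after a loss

def slow_start_goodput(rtts_between_losses):
    # cwnd doubles each RTT from 1 segment until the loss, then the
    # sender stalls for one RTO (small windows, so the 1.5mbit/sec
    # link-capacity cap never binds here)
    delivered = sum(2 ** i for i in range(rtts_between_losses))
    elapsed = rtts_between_losses * RTT + RTO
    return delivered * SEG_BITS / elapsed   # bits/sec

paced_56k = 56_000       # paced sender keeping a 56kbit link full

print(f"slow-start on 1.5mbit, loss every 2 RTTs: {slow_start_goodput(2):.0f} bit/s")
print(f"paced on 56kbit dial-up:                  {paced_56k} bit/s")
```

with a loss every 2 round trips the slow-start sender averages about 30kbit/sec ... half what the paced 56kbit link delivers.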

another interesting paper from the same acm sigcomm meeting was about
ethernet throughput. it showed a typical 30-station ethernet lan with
all stations running a low-level device driver app constantly
transmitting minimum-sized ethernet packets ... and the effective
throughput dropping off only to 8mbits/sec (which is still higher
effective throughput than 16mbit/sec token-ring).
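for comparison, the textbook slotted-contention approximation of csma/cd (my sketch, not the measurement method of the sigcomm paper): N always-ready stations, each minimum 64-byte frame at 10mbit/sec taking one 51.2us slot time, with per-slot success probability maximized at p = 1/N. this worst-case analytic bound comes out even more pessimistic than the measured numbers above ... real binary exponential backoff and realistic traffic do better ... yet it is still nowhere near the ridiculously low figures in the FUD.

```python
# textbook worst-case csma/cd contention model (illustrative sketch):
# N stations always ready, each attempting with probability p = 1/N
# per contention slot; a minimum frame occupies one slot time.

def csma_cd_efficiency(n_stations, frame_slots=1.0):
    a = (1 - 1 / n_stations) ** (n_stations - 1)  # P(exactly one sender)
    wasted = (1 - a) / a                          # mean contention slots
    return frame_slots / (frame_slots + wasted)

eff = csma_cd_efficiency(30)
print(f"{eff:.2f} -> {eff * 10:.1f} mbit/sec effective")
```

even this pessimistic all-collisions-all-the-time model keeps a 30-station 10mbit ethernet at several mbit/sec of effective throughput.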

this was in the period that the 16mbit token-ring people were publishing
lots of FUD, comparing against some ridiculously low ethernet throughput
numbers. one of my conjectures about how they came up with those numbers
is that they used the very early ethernet prototype that ran at
3mbits/sec (not 10mbits/sec) and didn't support listen-before-transmit
(10mbit ethernet with the csma/cd standard had significantly better
throughput than the earliest ethernet prototype).

the new almaden ibm research bldg had been extensively wired with CAT5
anticipating 16mbit/sec token-ring ... but they found that running
10mbit/sec ethernet over the CAT5 gave both higher effective aggregate
LAN throughput and lower message latency.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
