On Mon, Jun 16, 2008 at 12:40:47PM +0930, Glen Turner wrote:
> "Enable TCP window scaling and time stamps by using the Registry Editor
> to browse to location
> [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters]
> and add the key
> Tcp1323Opts
> with value
> 3"
>
> are "h
afaik the limit is 10 *sessions* not 10 *tcp connections*. there should be
nothing limiting you from opening 10,000 tcp connections in a single app.
eg 10 smb shares, 10 sql sessions, etc.
-Dan
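For reference, the registry change quoted above can also be scripted; a
minimal sketch using Python's standard winreg module (Windows-only, needs
Administrator rights; the value is a REG_DWORD, and 3 sets both option bits):

    import winreg  # standard library, Windows only

    PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PARAMS, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 3 = bit 0 (window scaling, RFC 1323) + bit 1 (timestamps)
        winreg.SetValueEx(key, "Tcp1323Opts", 0, winreg.REG_DWORD, 3)

    # A reboot is generally needed before the TCP stack picks up the change.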
On Mon, 16 Jun 2008, Bob Bradlee wrote:
I have tested it with Icecast using audio streams and it is 100, not 10.
We moved to a W2K server and the glass wall at 100 streams went away.
Bob
On Mon, 16 Jun 2008 10:25:18 -0700 (PDT), [EMAIL PROTECTED] wrote:
>On Mon, 16 Jun 2008, Glen Turner wrote:
>> Then there's the deliberate nobbling of the
It's 10 half-open (SYN_SENT) outbound TCP connections as I recall.
- S
-----Original Message-----
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Sent: Monday, June 16, 2008 12:26
To: Glen Turner <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Subject: Re: Best u
On Mon, 16 Jun 2008, Glen Turner wrote:
Then there's the deliberate nobbling of the TCP implementation,
such as the restriction to ten connections in Windows XP SP3.
Apparently you're meant to buy Windows Server if you are running
P2P applications :-)
are you quite sure it is *10 tcp connect
Glen Turner <[EMAIL PROTECTED]> writes:
> Fedora 8 and 9 and Ubuntu 8.04 include the upstream OpenSSH, which includes
> large window patches. OpenSSH 4.7 ChangeLog contains:
>
>> Other changes, new functionality and fixes in this release:
> ...
>> * The SSH channel window size has been increased,
[EMAIL PROTECTED] wrote:
It's actually not that hard on Windows.
Don't make me laugh. Instructions that start
"Enable TCP window scaling and time stamps by using the Registry Editor
to browse to location
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters]
and add the key
Robert E. Seastrom wrote:
As a user of hpn-ssh for years, I have to wonder if there is any
reason (aside from the sheer cussedness for which Theo is infamous)
that the window improvements at least from hpn-ssh haven't been
backported into mainline openssh? I suppose there might be
portability co
On 2008-06-12, Kevin Oberman <[EMAIL PROTECTED]> wrote:
> The idea is to use tuned proxies that are close to the source and
> destination and are optimized for the delay.
OpenBSD has relayd(8), a versatile tool which can be used here.
There is support for proxying TCP connections. These can be mod
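To illustrate the proxy idea (this is not relayd itself): a minimal Python
sketch of a TCP relay that asks for BDP-sized socket buffers, so local hosts
talk to a nearby, well-tuned middlebox instead of across the long path.
Listener port, far-end host and buffer size are made-up values:

    import socket
    import threading

    BUF = 4 * 1024 * 1024  # roughly the BDP of a 1 Gb/s path at ~30 ms RTT

    def pump(src, dst):
        # Copy one direction until EOF, then pass the FIN along.
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

    def relay(client, remote_addr):
        remote = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Buffers must be requested before connect() so a suitable window
        # scale is negotiated in the handshake; the OS may clamp the values.
        remote.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
        remote.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
        remote.connect(remote_addr)
        threading.Thread(target=pump, args=(remote, client), daemon=True).start()
        pump(client, remote)

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # On most stacks, accepted sockets inherit the listener's buffer sizes.
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
    listener.bind(("0.0.0.0", 9000))
    listener.listen(16)
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=relay,
                         args=(conn, ("far-end.example.net", 9000)),
                         daemon=True).start()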
> Date: Fri, 13 Jun 2008 12:40:48 -0400
> From: Robert Boyle <[EMAIL PROTECTED]>
>
> At 12:01 PM 6/13/2008, Kevin Oberman wrote:
> >Clearly you have failed to try very hard or to check into what others
> >have done. We routinely move data at MUCH higher rates over TCP at
> >latencies over 50 ms. o
On Fri, 13 Jun 2008, Robert Boyle wrote:
Let me refine my post then...
In our experience, you can't get to line speed with over 20-30ms of latency
using TCP on _Windows_ regardless of how much you tweak it. >99% of the
servers in our facilities are Windows based. I should have been more
specif
Many thanks for great replies on and off-list.
The suggestions basically ranged from these options:
1. tune TCP on all hosts you wish to transfer between (see the sketch after this list)
2. create tuned TCP proxies and transfer through those hosts
3. setup a socat (netcat++) proxy and send through this host
4. use an alternative transfer protocol
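Option 1 doesn't have to be system-wide: a transfer tool can request
BDP-sized buffers per socket. A minimal Python sketch (host, port and
window size are illustrative, not from the thread):

    import socket

    WINDOW = 4 * 1024 * 1024  # ~BDP for 1 Gb/s at 30 ms RTT

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request big buffers before connect() so the window scale is agreed
    # during the handshake; the OS may clamp them to its configured maximum.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WINDOW)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WINDOW)
    s.connect(("receiver.example.net", 9000))
    with open("bigfile.bin", "rb") as f:
        s.sendfile(f)  # streams the file in chunks, zero-copy where supported
    s.close()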
Robert Boyle wrote:
At 12:01 PM 6/13/2008, Kevin Oberman wrote:
Clearly you have failed to try very hard or to check into what others
have done. We routinely move data at MUCH higher rates over TCP at
latencies over 50 ms. one way (>100 ms. RTT). We find it fairly easy to
move data at over 4 G
At 12:01 PM 6/13/2008, Kevin Oberman wrote:
Clearly you have failed to try very hard or to check into what others
have done. We routinely move data at MUCH higher rates over TCP at
latencies over 50 ms. one way (>100 ms. RTT). We find it fairly easy to
move data at over 4 Gbps continuously.
Tha
> Date: Thu, 12 Jun 2008 19:26:56 -0400
> From: Robert Boyle <[EMAIL PROTECTED]>
>
> At 06:37 PM 6/12/2008, you wrote:
> >I'm looking for input on the best practices for sending large files
> >over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
> >I'd like to avoid modif
"Kevin Oberman" <[EMAIL PROTECTED]> writes:
>> From: "Robert E. Seastrom" <[EMAIL PROTECTED]>
>> Date: Thu, 12 Jun 2008 21:15:49 -0400
>>
>>
>> Randy Bush <[EMAIL PROTECTED]> writes:
>>
>> > and for those of us who are addicted to simple rsync, or whatever over
>> > ssh, you should be aware of
Hi Sean,
since Thursday we have copied some ~300 GB of files from Prague to San
Diego (~200 ms delay, 10 GE flat Ethernet, end machines connected via
1 GE) using RBUDP, which worked great.
Each scenario needs some planning. You have to answer several questions:
1) What is the performance of
> From: "Robert E. Seastrom" <[EMAIL PROTECTED]>
> Date: Thu, 12 Jun 2008 21:15:49 -0400
>
>
> Randy Bush <[EMAIL PROTECTED]> writes:
>
> > and for those of us who are addicted to simple rsync, or whatever over
> > ssh, you should be aware of the really bad openssh windowing issue.
>
> As a use
> Date: Fri, 13 Jun 2008 09:02:31 +0900
> From: Randy Bush <[EMAIL PROTECTED]>
>
> > The idea is to use tuned proxies that are close to the source and
> > destination and are optimized for the delay. Local systems can move data
> > through them without dealing with the need to tune for the
> > del
Randy Bush <[EMAIL PROTECTED]> writes:
> and for those of us who are addicted to simple rsync, or whatever over
> ssh, you should be aware of the really bad openssh windowing issue.
As a user of hpn-ssh for years, I have to wonder if there is any
reason (aside from the sheer cussedness for which
> I'm looking for input on the best practices for sending large files over
> a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
provided you have RFC 1323-type extensions enabled on a semi-decent OS, a 4MB
TCP window should be more than sufficient to fill a GbE pipe over 30ms.
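That figure checks out against the bandwidth-delay product; a quick
back-of-the-envelope in Python:

    # BDP = bandwidth * RTT; values taken from the message above.
    link_bps = 1e9   # GigE
    rtt = 0.030      # 30 ms round trip
    bdp_bytes = link_bps * rtt / 8
    print(round(bdp_bytes / 2**20, 2), "MiB")  # ~3.58 MiB, so 4MB suffices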
> Hi,
>
> I'm looking for input on the best practices for sending large
> files
There are both commercial products (fastcopy)
and various "free"(*) products (bbcp, bbftp,
gridftp) that will send large files. While
they can take advantage of larger windows
they also have the capability of using
And while I certainly like open source solutions, there are plenty of
commercial products that do things to optimize this. Depending on the type
of traffic the products do different things. Many of the serial-byte
caching variety (e.g. Riverbed/F5) now also do connection/flow optimization
and pro
Karl Auerbach wrote:
> Randy Bush wrote:
>> and for those of us who are addicted to simple rsync, or whatever over
>> ssh, you should be aware of the really bad openssh windowing issue.
> I was not aware of this. Do you have a pointer to a description?
see the work by rapier and stevens at psc
Take a look at some of the stuff from Aspera.
Mark
On Thu, Jun 12, 2008 at 03:37:47PM -0700, Sean Knox wrote:
> Hi,
>
> I'm looking for input on the best practices for sending large files over
> a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
> I'd like to avoid modify
> The idea is to use tuned proxies that are close to the source and
> destination and are optimized for the delay. Local systems can move data
> through them without dealing with the need to tune for the
> delay-bandwidth product. Note that this "man in the middle" may not
> play well with many sec
At 06:37 PM 6/12/2008, you wrote:
I'm looking for input on the best practices for sending large files
over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
I'd like to avoid modifying TCP windows and options on end hosts
where possible (I have a lot of them). I've seen pr
> Date: Thu, 12 Jun 2008 15:37:47 -0700
> From: Sean Knox <[EMAIL PROTECTED]>
>
> Hi,
>
> I'm looking for input on the best practices for sending large files over
> a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
> I'd like to avoid modifying TCP windows and options on e
Hi,
I'm looking for input on the best practices for sending large files over
a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
I'd like to avoid modifying TCP windows and options on end hosts where
possible (I have a lot of them). I've seen products that work as
"transfe