> sockbuf  datalen  snd_time  rcv_time
> -------  -------  --------  --------
> 16384     15000    0.000      0.617
> 15           14    0.003      4.021
> 50       495000    0.015     14.083
> 100      995000    0.042     28.577
> 150      149000
Folks-
Lots of interesting thoughts on this thread already. But, we have
not yet figured it out. So, a further data point...
I have been playing this evening on my machine at home -- a way old
p5 running freebsd 4.7. I am seeing the same problem as we see at
GRC on the freebsd 4.1 boxes. As
> Your suggestion of increasing the -l seems to have made a positive
> impact -- tests this morning with a larger buffer length of 8192
> gave us a better throughput of 44Mbps. Now the time sequence plot
> shows a window usage of 1.5MB as opposed to the previous 1MB usage.
>
> We still don't
Try retrieving a very large file via ftp. The sendfile() code seems
more efficient than ttcp, and if performance improves that may be
a clue that the problem lies in the user/kernel interface. If not,
probably in the stack. Could it conceivably be a resonance effect
between the actual rtt and th
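For reference, the sendfile(2) path being suggested looks roughly like this on FreeBSD -- a minimal sketch with a placeholder function name and descriptors and simplified error handling, not a drop-in replacement for ttcp:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Send a whole file down a connected socket from inside the kernel,
     * avoiding the per-write copy through user space that ttcp makes. */
    int
    push_file(const char *path, int sock)
    {
        off_t sent = 0;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return (-1);
        /* nbytes == 0 means "send until end of file". */
        if (sendfile(fd, sock, 0, 0, NULL, &sent, 0) < 0) {
            close(fd);
            return (-1);
        }
        close(fd);
        return (0);
    }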
> Are you sure you're not hitting the top of the pipe and bouncing
> around in congestion avoidance? Unless your window size limits
> your bw at exactly the correct amount, you'll never get the steady
> state
We're not bouncing around. We see no loss, which would indicate
that either we should
> > From: Mark Allman [mailto:mallman@grc.nasa.gov]
> > Thanks! Other ideas?
>
> What MSS is advertised on each end?
1500 byte packets (from looking at the trace file).
allman
--
Mark Allman -- BBN/NASA GRC -- http://roland.grc.nasa.gov/~mallman/
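For what it's worth, 1500-byte packets on an Ethernet-style MTU imply an MSS of roughly 1460 bytes (1500 less 40 bytes of IP and TCP headers), a little less again if the timestamp option is carried, so the MSS does not look unusually small here.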
> From: Mark Allman [mailto:mallman@grc.nasa.gov]
> Thanks! Other ideas?
What MSS is advertised on each end?
--don ([EMAIL PROTECTED] www.sandvine.com)
Are you sure you're not hitting the top of the pipe and bouncing
around in congestion avoidance? Unless your window size limits
your bw at exactly the correct amount, you'll never get the steady state
bw you want.
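As a rough sanity check on the window sizing: the window needed to keep a pipe full is bandwidth times round-trip time. The 3,125,000-byte net.inet.tcp.sendspace/recvspace values quoted later in the thread work out to exactly 100 Mbit/s x 250 ms, so the buffers look like they were sized for a 100 Mbit/s path at 250 ms of delay, with no headroom beyond that.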
> Have you checked that both sides are negotiating SACK?
No SACK in 4.1. But, there is no loss in the connection.
> And both sides are negotiating a window scale option sufficiently
> large? (sounds like you need a window scale option of at least 5
> bits?)
We're seeing a shift of 6.
> And the
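On the window scale point: a shift of 6 allows an advertised window of up to 65535 x 2^6 = 4,194,240 bytes, comfortably above the 3,125,000-byte receive space configured here, while a shift of 5 would top out around 2 MB and would not be enough. So the negotiated scale factor itself should not be what is capping the window.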
> From: Fran Lawas-Grodek [mailto:Fran.Lawas-Grodek@grc.nasa.gov]
> Well... our development code that we are to ultimately test was
> developed on 4.1, thus we really need to try to stick with 4.1.
> It does not look like either of the above parameters are available
> until 4.7.
No worries.
Have
On Fri, Nov 01, 2002 at 11:18:50AM -0500, Don Bowman wrote:
>
> Perhaps
> sysctl net.inet.tcp.inflight_enable=1
> will help?
>
> you may wish to also change tcp.inflight_max.
> See tcp(4) as of 4.7.
Hello Don,
Well... our development code that we are to ultimately test was
developed on 4.1, thus we really need to try to stick with 4.1.
It does not look like either of the above parameters are available
until 4.7.
Hello Bakul,
Your suggestion of increasing the -l seems to have made a positive
impact -- tests this morning with a larger buffer length of 8192
gave us a better throughput of 44Mbps. Now the time sequence plot
shows a window usage of 1.5MB as opposed to the previous 1MB usage.
We still don't
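Assuming the 250 ms path delay mentioned elsewhere in the thread, these numbers line up with a window-limited transfer: 1 MB outstanding per round trip is 1,000,000 / 0.25 = 4 MB/s, i.e. 32 Mbit/s, and 1.5 MB per round trip is about 48 Mbit/s, close to the 44 Mbps now being observed. The open question is then why the connection never uses the full 3,125,000-byte buffer.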
> From: Fran Lawas-Grodek [mailto:Fran.Lawas-Grodek@grc.nasa.gov]
Perhaps
sysctl net.inet.tcp.inflight_enable=1
will help?
you may wish to also change tcp.inflight_max.
See tcp(4) as of 4.7.
--don ([EMAIL PROTECTED] www.sandvine.com)
These are our sysctl settings:
kern.ipc.maxsockbuf=4194304
net.inet.tcp.sendspace=3125000
net.inet.tcp.recvspace=3125000
net.inet.ip.intr_queue_maxlen=500
nmbclusters=32768
After reading your suggestion, we were able to achieve a
slightly better throughput from 32Mbps on the 250ms delayed
ne
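A side note on where the buffer sizes actually come from: the sendspace/recvspace sysctls above only set the defaults; a test program can also request buffers explicitly (ttcp's -b option works this way), and requests above kern.ipc.maxsockbuf are refused. A minimal sketch, with the function name and the 3,125,000-byte figure used purely as an illustration:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Ask for large send/receive buffers on an already-created socket.
     * The kernel rejects requests above kern.ipc.maxsockbuf. */
    int
    set_big_buffers(int sock)
    {
        int bytes = 3125000;

        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) == -1)
            return (-1);
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == -1)
            return (-1);
        return (0);
    }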
you might want to have a look at the sysctl variable
kern.ipc.sockbuf_waste_factor too.
Remember that memory is charged to socket buffers depending on how
many clusters are allocated, even if they are not fully used.
E.g. in your example you are probably doing 1KB writes each of
which consumes a 2KB cluster.
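As a rough illustration of the point: if each 1KB write occupies its own 2KB mbuf cluster (plus an mbuf header), the memory charged to the socket buffer is more than double the payload actually queued, so the buffer's memory limit can be hit long before 3,125,000 bytes of data are in flight. Larger writes from ttcp (-l) pack clusters much more fully, which fits with the improvement reported after raising -l.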
Hello,
Hopefully someone might have some advice on our problem.
We are setting up a testbed consisting of FreeBSD 4.1 on the sender
and receiver machines. (This older version of FreeBSD is necessary
due to subsequent TCP development patches that are to be tested.)
The problem that we are seeing