Re: [9fans] fossil+venti performance question

2015-05-08 Thread David du Colombier
I've enabled tcp, tcpwin and tcprxmt logs, but there isn't anything very interesting.

tcpincoming s 127.0.0.1!53150/127.0.0.1!53150 d 127.0.0.1!17034/127.0.0.1!17034 v 4/4

Also, the issue is definitely related to the loopback. There is no problem when using an address on /dev/ether0.

cpu% cat /n

Re: [9fans] fossil+venti performance question

2015-05-08 Thread Charles Forsyth
On 8 May 2015 at 17:13, David du Colombier <0in...@gmail.com> wrote:

> Also, the issue is definitely related to the loopback.
> There is no problem when using an address on /dev/ether0.

oh. possibly the queue isn't big enough, given the window size. it's using qpass on a Queue with Qmsg and if

Re: [9fans] fossil+venti performance question

2015-05-08 Thread David du Colombier
> oh. possibly the queue isn't big enough, given the window size.
> it's using qpass on a Queue with Qmsg and if the queue is full,
> Blocks will be discarded.

I tried to increase the size of the queue, but no luck.

-- 
David du Colombier

Re: [9fans] fossil+venti performance question

2015-05-08 Thread David du Colombier
I've finally figured out the issue. The slowness only appears on the loopback, because it provides a 16384-byte MTU. There is an old bug in the Plan 9 TCP stack, where the TCP MSS doesn't take into account the MTU for incoming connections. I originally fixed this issue in January 2015 for the Plan 9

Re: [9fans] fossil+venti performance question

2015-05-08 Thread Steve Simon
I confirm - my old performance is back. Thanks very much David. -Steve

Re: [9fans] fossil+venti performance question

2015-05-08 Thread Bakul Shah
On Fri, 08 May 2015 21:24:13 +0200 David du Colombier <0in...@gmail.com> wrote:

> On the loopback medium, I suppose this is the opposite issue.
> Since the TCP stack didn't fix the MSS in the incoming
> connection, the programs sent multiple small 1500 bytes
> IP packets instead of large 16384 IP p

Re: [9fans] fossil+venti performance question

2015-05-08 Thread cinap_lenrek
do we really need to initialize tcb->mss to tcpmtu() in procsyn()? as i see it, procsyn() is called only when tcb->state is Syn_sent, which only should happen for client connections doing a connect, in which case tcpsndsyn() would have initialized tcb->mss already no? -- cinap

Re: [9fans] fossil+venti performance question

2015-05-08 Thread lucio
> do we really need to initialize tcb->mss to tcpmtu() in procsyn()?
> as i see it, procsyn() is called only when tcb->state is Syn_sent,
> which only should happen for client connections doing a connect, in
> which case tcpsndsyn() would have initialized tcb->mss already no?

tcb->mss may still ne