This seems weird to me:

cpu% echo $sysname
atom
cpu% ndb/query sys atom ip
192.168.23.25
cpu% ndb/query sys localhost ip
127.0.0.1
cpu% srv atom atom /n/atom
post...
cpu% srv localhost localhost /n/localhost
post...
cpu% time cp /n/atom/386/9pccpuf /dev/null
0.00u 0.02s 0.37r        cp /n/atom/386/9pccpuf /dev/null
cpu% time cp /n/localhost/386/9pccpuf /dev/null
0.00u 0.00s 24.54r       cp /n/localhost/386/9pccpuf /dev/null
cpu% time cp /n/atom/386/9pccpuf /dev/null
0.00u 0.00s 0.14r        cp /n/atom/386/9pccpuf /dev/null
cpu% time cp /n/localhost/386/9pccpuf /dev/null
0.00u 0.00s 24.87r       cp /n/localhost/386/9pccpuf /dev/null

Intuitively I would have thought the loopback interface would
be more efficient than going through the ethernet driver.
Certainly not two orders of magnitude slower.
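
Back of the envelope (my guesses only: the kernel image is ~4MB and the mount's msize is 8192, i.e. roughly 512 Treads per copy):

cpu% echo '24.54/512' | hoc   # ~48ms per 9P round trip on loopback
cpu% echo '0.37/512' | hoc    # ~0.7ms per round trip over ethernet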

This is on a 386-based cpu + fossil server. The same experiment
on arm and cwfs gives similar results.

It's not about raw bandwidth; streaming in one direction seems ok:

cpu% tcptest & tcptest -i atom -n 10000
cpu% tcp!192.168.23.25!34061 count 10000; 81920000 bytes in 2016503491 ns @ 40.6 MB/s (0ms)

cpu% tcptest & tcptest -i localhost -n 10000
cpu% tcp!127.0.0.1!49090 count 10000; 81920000 bytes in 551924026 ns @ 148 MB/s (0ms)

So the problem seems to be the latency of individual 9P transactions, not
throughput. Could it be an artifact of TCP flow control not adapting well
to the loopback interface? Can anyone offer any insight?

