Hi,

I've just had my DSL line reprovisioned for Annex M, and the line speed currently showing on the modem is 20 Mbps downstream with 2 Mbps upstream.

After the reprovisioning my ISP asked me to run a speed test, and it showed only ~5 Mbps downstream, with upstream at ~700 kbps.


The setup is a VDSL2/ADSL2+ (Annex M) router running in RFC 2684 ATM-to-Ethernet bridge mode, connected to a Sun Microsystems Netra T1 105 (440 MHz, 360 MB RAM). With the Netra I have tested various routing workloads, including inter-VLAN and dynamic routing with OSPF, and it managed 80-90 Mbps transfer rates for large files.

I'm using pppoe(4), with /etc/hostname.pppoe0 pretty much taken from the man page:

inet 0.0.0.0 255.255.255.255 NONE mtu 1492 \
        pppoedev hme0 authproto chap \
        authname <authuser> authkey <authkey> \
        up
dest 0.0.0.1
!/sbin/route add default -ifp pppoe0 0.0.0.1
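
(For reference, the session state and the MTU actually in use can be sanity-checked with plain ifconfig/route; the debug flag is optional and just makes the negotiation visible in the kernel log, if the driver supports it:)

ifconfig pppoe0          # state should read "session", mtu 1492
route -n show -inet      # default route should point out pppoe0
ifconfig pppoe0 debug    # optional: extra PPPoE/LCP negotiation logging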

running over the hme0 interface (/etc/hostname.hme0):

up mtu 1500
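
One thing I haven't ruled out yet is a speed/duplex mismatch between hme0 and the 887VA's switchport, since hme is only 10/100 and a mismatch would give exactly this kind of throughput collapse. Purely as a sketch, if the Cisco side were hard-set, hostname.hme0 could pin the media to match (otherwise both ends should stay on auto):

# only if the 887VA port is hard-set the same way
up mtu 1500 media 100baseTX mediaopt full-duplex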


Everything is working fine and I have internet connectivity.

Packet Filter is of course taking care of the NAT/PAT side of things and the firewalling.
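
(The NAT rule itself is nothing unusual; roughly this shape, with the interface macros being illustrative rather than a paste of the real config:)

ext_if = "pppoe0"
int_if = "hme1"
# parentheses so the rule tracks pppoe0's address if the session renegotiates
match out on $ext_if inet from $int_if:network to any nat-to ($ext_if)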

Doing a bit of research on the net I came across a few postings:

http://forums.whirlpool.net.au/archive/481579

http://forums.whirlpool.net.au/forum-replies.cfm?t=116165

http://comments.gmane.org/gmane.os.openbsd.misc/196051


which suggested editing /etc/sysctl.conf with some values that are no longer available in OpenBSD 5.1, as they are now sized dynamically.

This is what I've got towards the bottom of my sysctl.conf file:

net.inet.tcp.mssdflt=1452
#net.inet.tcp.recvspace=131072
#net.inet.tcp.sendspace=131072
net.inet.udp.recvspace=139264
net.inet.udp.sendspace=139264
net.inet.ip.mforwarding=1
ddb.panic=0
kern.bufcachepercent=60
net.inet.ip.ifq.maxlen=512
net.inet.tcp.rfc3390=1
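
(To see whether the IP input queue is actually overflowing at these rates, the ifq counters can be read back at runtime; drops staying at zero would rule that out:)

# input queue length / limit / drops (drops should stay at 0)
sysctl net.inet.ip.ifq
# mbuf shortages would show up here too
netstat -m | grep -i denied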

I have also made the necessary adjustments to the pf.conf file:

match on $ext_if scrub (max-mss 1440)
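
Just to spell out the arithmetic: MSS = MTU - 20 (IP header) - 20 (TCP header), so a 1492-byte PPPoE MTU gives 1452, matching the mssdflt above; 1440 is just slightly more conservative than it needs to be:

# 1492 - 20 - 20 = 1452; clamping lower only wastes a few bytes per segment
match on $ext_if scrub (max-mss 1452)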

however, my speeds are still really low :-(


The router/modem I'm using is a Cisco 887VA, which should be fairly high-performance, but I don't trust Ciscos with NAT: they keep running out of memory on me once the number of connections goes up, and then they crash.


Is there anything else I can do to get better performance, or is this simply it for the Netra?

I've checked bandwidth with tools such as nload and darkstat, and they confirm what the speed-test site reported.
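
To separate the Netra/PPPoE path from the DSL line itself, the next thing I can think of is a raw TCP test between a LAN box and the Netra, e.g. with tcpbench(1) from base (or iperf from ports); the address below is a placeholder:

# on the Netra
tcpbench -s
# on a LAN client (replace 10.0.0.10 with the Netra's LAN address)
tcpbench 10.0.0.10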

Running netstat -m I get:

# netstat -m
111 mbufs in use:
        20 mbufs allocated to data
        62 mbufs allocated to packet headers
        29 mbufs allocated to socket names and addresses
19/88/4096 mbuf 2048 byte clusters in use (current/peak/max)
0/8/4096 mbuf 4096 byte clusters in use (current/peak/max)
0/8/4096 mbuf 8192 byte clusters in use (current/peak/max)
0/14/4102 mbuf 9216 byte clusters in use (current/peak/max)
0/8/4096 mbuf 12288 byte clusters in use (current/peak/max)
0/8/4096 mbuf 16384 byte clusters in use (current/peak/max)
0/8/4096 mbuf 65536 byte clusters in use (current/peak/max)
536 Kbytes allocated to network (12% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines


vmstat -i shows:

# vmstat -i
interrupt                       total     rate
com0                             4988        1
hme0                            82113       19
siop0                            9026        2
hme1                            86683       21
clock                          412347       99
Total                          595157      144


netstat -i doesn't show any errors on any interface either, and top doesn't show much CPU usage at all.
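
One more data point that might be worth having: TCP retransmission counters, since loss on the upstream path would explain this kind of collapse even with an idle CPU and clean interface counters:

# retransmits / duplicate acks point at loss somewhere on the path
netstat -s -p tcp | grep -i -e retrans -e 'duplicate ack'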


Surely the Netra must be able to do more than 4.5 Mbps down with 0.7 Mbps up? And what about my LAN routing test, which got 80-90 Mbps?


Can anyone suggest anything?


Thanks.
