Re: vlan patch

2005-10-20 Thread Gleb Smirnoff
  Andrew,

On Wed, Oct 19, 2005 at 11:25:59PM +1300, Andrew Thompson wrote:
A> It has always bugged me how the vlan code traverses the linked-list for
A> each incoming packet to find the right ifvlan, I have this patch which
A> attempts to fix this.
A>   
A> What it does is replace the linear search for the vlan with a constant
A> time lookup. It does this by allocating an array for each vlan enabled
A> parent interface so the tag can be directly indexed.
A>   
A> This has an overhead of ~16kb on 32bit, this is not too bad as there is
A> usually only one physical interface when using a large number of vlans.
A>   
A> I have measured a 1.6% pps increase with 100 vlans, and 8% with 500, and
A> yes, some people use this many in production.
A>   
A> It also has the benefit of enforcing unique vlan tags per parent which   
A> the current code doesn't do.

Although the memory overhead is not noticeable on modern i386 and amd64
PCs, I don't think that we should waste so much memory. We should keep
in mind the existence of embedded architectures with little memory.

In most cases people use 10 - 30 VLANs. I suggest using a hash, as is
already done in ng_vlan(4). This hash makes every sixteenth VLAN fall
into the same slot. Since most people allocate VLAN ids contiguously, the
hash distribution should be good.

Moreover, I suggest that Yar and Ruslan work together and make the hash code
shared between vlan(4) and ng_vlan(4), rather than copy-and-pasted.
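
For illustration only, here is roughly what such a fixed-width folding
hash looks like (a sketch with made-up names and a 16-slot table; this is
not the actual ng_vlan(4) code):

/* Hypothetical 16-slot hash over the 12-bit VLAN tag (0..4095). */
#define VLAN_HASH_SIZE  16                      /* must be a power of two */
#define VLAN_HASH_MASK  (VLAN_HASH_SIZE - 1)

/* XOR-fold the three 4-bit nibbles of the tag into one slot index. */
#define VLAN_HASH(tag) \
        ((((tag) >> 8) ^ ((tag) >> 4) ^ (tag)) & VLAN_HASH_MASK)

With this folding, contiguously allocated tags spread well: for example,
tags 100 through 115 land in 16 distinct slots.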

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: PPPoE and Radius on 6.0RC1

2005-10-20 Thread Gleb Smirnoff
On Wed, Oct 19, 2005 at 11:51:11PM +0200, Marcin Jessa wrote:
M> It seems like PPPoE stopped working with support for radius on 6.0.
M> The log of pppoe and freeradius does not show pppoe attempting to even
M> talk to the radius server.
M> Additionally this message pops up when enabling pppoed:
M> WARNING: attempt to net_add_domain(netgraph) after domainfinalize()
M> My setup worked fine before on FreeBSD 5.x
M> Is that a known issue and is it being worked on?

Please show your PPPoE server configuration. Do you use pppoed or mpd?

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Network performance 6.0 with netperf

2005-10-20 Thread Michael VInce
Here is probably my final round of tests, which I thought could possibly 
be useful to others.


I have enabled polling on the interfaces and discovered some of the 
master secret holy grail sysctls that really make this stuff work.

I now get over 900mbits/sec router performance with polling.

Having either net.isr.direct=1 or net.inet.ip.fastforwarding=1 set gave 
roughly an extra 445mbits of performance according to the netperf tests. 
Because my tests aren't really lab-strict, I still haven't been able to 
see a clear difference between net.isr.direct=1 and 0 while 
net.inet.ip.fastforwarding is set to 1. It does appear that 
net.isr.direct=1 might be taking over the job of the 
net.inet.ip.fastforwarding sysctl, because with 
net.inet.ip.fastforwarding=0 and net.isr.direct=1 on the gateway I still 
get the 905.48mbit/sec route speed listed below.


From the client machine (A) through the gateway (B with polling 
enabled) to the server (C)

With net.isr.direct=1 and net.inet.ip.fastforwarding=1
A> /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344

Elapsed Throughput - 10^6bits/sec: 905.48

With net.isr.direct=0 and net.inet.ip.fastforwarding=0
Elapsed Throughput - 10^6bits/sec: 460.15

Apache get 'fetch' test.
A> fetch -o - > /dev/null http://server-C/file1gig.iso
- 100% of 1055 MB   67 MBps 
00m00s


Interestingly, when testing from the gateway itself (B) directly to the server 
(C), having 'net.isr.direct=1' slowed performance down to 583mbits/sec.
B> /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344

Elapsed Throughput - 10^6bits/sec: 583.57

Same test with 'net.isr.direct=0'
Elapsed Throughput - 10^6bits/sec: 868.94
I have to ask: how can this be possible when, used as a router with 
net.isr.direct=1, it passes traffic at over 900mbits/sec?
Having net.inet.ip.fastforwarding=1 doesn't affect the performance in 
these B-to-C tests.


I believe faster performance may still be possible, as another rack of 
gear I have with another AMD64 6.0 RC1 Dell 2850 (Kes) gives me up 
to 930mbits/sec in Apache fetch tests. I believe it's even faster here 
because it's an AMD64 Apache server, or possibly it just has slightly 
better quality ethernet cables; as I mentioned before, the Apache server 
for box "C" in the above tests is i386 on 6.0RC1.


This fetch test is only on a switch with no router between them.
spin> fetch -o - > /dev/null http://kes/500megs.zip
- 100% of  610 MB   93 MBps

So far from this casual testing I have discovered these things on my 
servers:
Using 6.0 on SMP servers gives a big boost in network performance over 
5.x SMP, using either i386 or AMD64.
FreeBSD as a router on gigabit ethernet with the use of polling gives over 
2x the performance with the right sysctls.
It needs more testing, but it appears that AMD64 FreeBSD might be better 
than i386 for Apache2 network performance on SMP kernels.
In single-interface speed tests from the router itself, having polling 
enabled with 'net.isr.direct=1' appears to hurt performance.


Regards,
Mike

Michael VInce wrote:


Robert Watson wrote:



On Fri, 14 Oct 2005, Michael VInce wrote:

I been doing some network benchmarking using netperf and just simple 
'fetch' on a new network setup to make sure I am getting the most 
out of the router and servers, I thought I would post some results 
in case some one can help me with my problems or if others are just 
interested to see the results.




Until recently (or maybe still), netperf was compiled with 
-DHISTOGRAM by our port/package, which resulted in a significant 
performance drop.  I believe that the port maintainer and others have 
agreed to change it, but I'm not sure if it's been committed yet, or 
which packages have been rebuilt.  You may want to manually rebuild 
it to make sure -DHISTOGRAM isn't set.


You may want to try setting net.isr.direct=1 and see what performance 
impact that has for you.


Robert N M Watson



I reinstalled the netperf to make sure its the latest.

I have also decided to upgrade Server-C (the i386 5.4 box) to 6.0RC1 
and noticed it gave a large improvement of network performance with a 
SMP kernel.


As with the network setup ( A --- B --- C  ) with server B being the 
gateway, doing a basic 'fetch' from the gateway (B) to the Apache 
server (C) it gives up to 700mbits/sec transfer performance, doing a 
fetch from server A thus going through the gateway gives slower but 
still decent performance of up to 400mbits/sec.


B> fetch -o - > /dev/null http://server-c/file1gig.iso
- 100% of 1055 MB   69 
MBps 00m00s



A> fetch -o - > /dev/null http://server-c/file1gig.iso
- 100% of 1055 MB   39 
MBps 00m00s


Netperf from the gateway directly to the apache server (C) 916mbits/sec


Re: vlan patch

2005-10-20 Thread Ragnar Lonn

Gleb Smirnoff wrote:


Although the memory overhead is not noticable on modern i386 and amd64
PCs I don't think that we should waste so much memory. We should keep
in mind the existence of embedded architectures with little memory.

In most cases people use 10 - 30 VLANs. I suggest to use a hash, like it
is already done in ng_vlan(4). This hash makes every sixteenth VLAN to fall
into same slot. Since most people allocate VLAN ids contiguously the hash
distribution should be good.

Moreover, I suggest Yar and Ruslan to work together and make the hash code
shared between vlan(4) and ng_vlan(4), not copy-and-pasted.
 



It looks as if ng_vlan implements a standard hash. Wouldn't a hash tree 
be a good compromise between speed and memory usage?  Of course, a 
16-slot hash is a lot better than no hash at all :-)

 /Ragnar
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vlan patch

2005-10-20 Thread Yar Tikhiy
On Thu, Oct 20, 2005 at 11:00:54AM +0400, Gleb Smirnoff wrote:
> 
> On Wed, Oct 19, 2005 at 11:25:59PM +1300, Andrew Thompson wrote:
> A> It has always bugged me how the vlan code traverses the linked-list for
> A> each incoming packet to find the right ifvlan, I have this patch which
> A> attempts to fix this.
> A>   
> A> What it does is replace the linear search for the vlan with a constant
> A> time lookup. It does this by allocating an array for each vlan enabled
> A> parent interface so the tag can be directly indexed.
> A>   
> A> This has an overhead of ~16kb on 32bit, this is not too bad as there is
> A> usually only one physical interface when using a large number of vlans.
> A>   
> A> I have measured a 1.6% pps increase with 100 vlans, and 8% with 500, and
> A> yes, some people use this many in production.
> A>   
> A> It also has the benefit of enforcing unique vlan tags per parent which   
> A> the current code doesn't do.
> 
> Although the memory overhead is not noticable on modern i386 and amd64
> PCs I don't think that we should waste so much memory. We should keep
> in mind the existence of embedded architectures with little memory.

Agreed.  On amd64 or another 64-bit platform each physical interface
carrying vlans will consume 32k of wired kernel memory, which is a
rather valuable resource.  We may not spend memory in the kernel as
generously as in userland programs because in this case it is usually
physical memory being spent, not virtual memory.

> In most cases people use 10 - 30 VLANs. I suggest to use a hash, like it

I'd rather not limit our consideration to 10-30 VLANs.  People are
running networks with hundreds of VLANs terminated at a FreeBSD
gateway.  Perhaps the hash table should be roughly adjustable to
the current number of configured VLANs.

> is already done in ng_vlan(4). This hash makes every sixteenth VLAN to fall
> into same slot. Since most people allocate VLAN ids contiguously the hash
> distribution should be good.

Apropos, the XOR-folding used in ng_vlan isn't that dumb; it doesn't
make every 16th VLAN fall into the same slot.  Of course, you will
start getting collisions at most on the 16th VLAN added since 16
is the size of the hash table.

> Moreover, I suggest Yar and Ruslan to work together and make the hash code
> shared between vlan(4) and ng_vlan(4), not copy-and-pasted.

The hash code consists of literally a couple of #define's.  And the
difference between ng_vlan(4) and vlan(4) is that each ng_vlan node
gets its own instance of the hash table.  OTOH, in vlan(4) we need
to decide if the hash table will be per parent interface or a single
global instance.  In the latter case we could hash by a combination
of the VLAN tag and parent's ifindex.  Perhaps this approach will
yield more CPU cache hits during hash table lookups.  In addition,
it will be thriftier in using memory.  Locking the global hash table
should not be an issue as we can use an sx lock in this case for
optimal read access.
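
To make the idea concrete, a rough sketch of hashing by the tag plus the
parent's ifindex for a single global table (names and table size are only
illustrative, not a proposed interface):

#include <sys/types.h>

#define VLAN_GHASH_SIZE 64              /* illustrative; power of two */
#define VLAN_GHASH_MASK (VLAN_GHASH_SIZE - 1)

/*
 * Fold the parent's interface index into the 12-bit VLAN tag so that
 * the same tag on different parents tends to land in different slots.
 */
static __inline int
vlan_ghash(u_int16_t tag, u_int16_t ifindex)
{
        u_int32_t v = ((u_int32_t)ifindex << 12) ^ tag;

        return ((v ^ (v >> 6) ^ (v >> 12)) & VLAN_GHASH_MASK);
}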

-- 
Yar
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vlan patch

2005-10-20 Thread Andrew Thompson
On Thu, Oct 20, 2005 at 11:00:54AM +0400, Gleb Smirnoff wrote:
>   Andrew,
> 
> On Wed, Oct 19, 2005 at 11:25:59PM +1300, Andrew Thompson wrote:
> A> It has always bugged me how the vlan code traverses the linked-list for
> A> each incoming packet to find the right ifvlan, I have this patch which
> A> attempts to fix this.
> A>   
> A> What it does is replace the linear search for the vlan with a constant
> A> time lookup. It does this by allocating an array for each vlan enabled
> A> parent interface so the tag can be directly indexed.
> A>   
> A> This has an overhead of ~16kb on 32bit, this is not too bad as there is
> A> usually only one physical interface when using a large number of vlans.
> A>   
> A> I have measured a 1.6% pps increase with 100 vlans, and 8% with 500, and
> A> yes, some people use this many in production.
> A>   
> A> It also has the benefit of enforcing unique vlan tags per parent which   
> A> the current code doesn't do.
> 
> Although the memory overhead is not noticable on modern i386 and amd64
> PCs I don't think that we should waste so much memory. We should keep
> in mind the existence of embedded architectures with little memory.
> 

I agree. Did you see the revised patch that sets a threshold before
allocating the memory? Do you think that's sufficient?


Andrew
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Network performance 6.0 with netperf

2005-10-20 Thread Robert Watson


On Thu, 20 Oct 2005, Michael VInce wrote:

Interestingly when testing from the gateway it self (B) direct to server 
(C) having 'net.isr.direct=1' slowed down performance to 583mbits/sec


net.isr.direct works to improve performance in many cases because it (a) 
reduces latency, and (b) reduces CPU usage.  However, there are cases 
where it can effectively reduce performance because it reduces the 
opportunity for parallelism in those cases.  Specifically, by constraining 
computation in the in-bound IP path to occurring in a single thread rather than 
two (ithread vs. ithread and netisr), it prevents that computation from 
being executed on more than one CPU at a time.  Understanding these cases 
is complicated by the fact that there may be multiple ithreads involved. 
Let me propose a scenario, which we may be able to confirm by looking at 
the output of top -S on the system involved:


In the two-host test case, your experimental host is using three threads 
to process packets: the network interface ithread, the netisr thread, and 
the netserver thread.  In the three host test case, where your 
experimental host is the forwarding system, you are also using three 
threads: the two interface ithreads, and the netisr.


For the two-host case without net.isr.direct, work is split over these 
threads usefully, such that they form an execution pipeline passing data 
from CPU to CPU, and getting useful parallelism.  Specifically, you are 
likely seeing significant parallelism between the ithread and the netisr. 
By turning on net.isr.direct, the in-bound IP stack processing occurs 
entirely in the ithread, with no work in the netisr, so parallelism is 
reduced, reducing the rate of work performed due to more synchronous 
waiting for CPU resources.  Another possible issue here is increased 
delays in responding to interrupts due to high levels of work occurring in 
the ithread, and therefore more packets dropped from the card.


In the three host case with net.isr.direct, all work occurs in the two 
ithreads, so IP processing in both directions can occur in parallel, 
whereas without net.isr.direct, all the IP processing happens in a single 
thread, limiting parallelism.


The test to run is to have top -S running on the boxes, and see how much 
CPU is used by various threads in various test scenarios, and what the 
constraining resource is on the boxes.  For example, if in the 
net.isr.direct scenario with two hosts the ithread for your ethernet 
interface is between 95% and 100% busy, but with net.isr.direct=0 the work 
is better split over threads, it might confirm the above description.  On 
the other hand, if in both scenarios, the CPUs and threads aren't maxed 
out, it might suggest a problem with responsiveness to interrupts and 
packets dropped in the card, in which case card statistics might be useful 
to look at.


Robert N M Watson

B> /usr/local/netperf/netperf -l 10 -H server-C -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344
Elapsed Throughput - 10^6bits/sec: 583.57

Same test with 'net.isr.direct=0'
Elapsed Throughput - 10^6bits/sec: 868.94
I have to ask how can this be possible if when its being used as a router 
with net.isr.direct=1 it passes traffic at over 900mbits/sec
Having net.inet.ip.fastforwarding=1 doesn't affect the performance in these B 
to C tests.


I believe faster performance may still be possible as another rack of gear I 
have that has another AMD64 6.0 RC1 Dell 2850 (Kes) gives me up to 
930mbits/sec in apache fetch tests, I believe its even faster here because 
its an AMD64 Apache server or its possible it could just have a bit better 
quality ether cables, as I mentioned before the Apache server for box "C" in 
above tests is i386 on 6.0RC1.


This fetch test is only on a switch with no router between them.
spin> fetch -o - > /dev/null http://kes/500megs.zip
- 100% of  610 MB   93 MBps

So far from this casual testing I have discovered these things on my servers.
Using 6.0 on SMP servers gives a big boost in network performance over 5.x 
SMP using i386 or AMD64
FreeBSD as router on gigabit ethernet with the use of polling gives over x2 
performance with the right sysctls.
Needs more testing but it appears using AMD64 FreeBSD might be better then 
i386 for Apache2 network performance on SMP kernels.
Single interface speeds tests from the router with polling enabled and with 
'net.isr.direct=1' appears to affect performance.


Regards,
Mike

Michael VInce wrote:


Robert Watson wrote:



On Fri, 14 Oct 2005, Michael VInce wrote:

I been doing some network benchmarking using netperf and just simple 
'fetch' on a new network setup to make sure I am getting the most out of 
the router and servers, I thought I would post some results in case some 
one can help me with my problems or if others are just interested to see 
the results.




Until recently (or maybe still), netperf was compiled with -DHISTOGRAM by 
our port/packag

Re: PPPoE and Radius on 6.0RC1

2005-10-20 Thread Marcin Jessa
On Thu, 20 Oct 2005 11:01:45 +0400
Gleb Smirnoff <[EMAIL PROTECTED]> wrote:

> On Wed, Oct 19, 2005 at 11:51:11PM +0200, Marcin Jessa wrote:
> M> It seems like PPPoE stoped working with support for radius on 6.0
> M> The log of pppoe and freeradius does not show pppoe attempting to
> M> even talk to the radius server. Additionally this message pops up
> M> when enabling pppoed: WARNING: attempt to net_add_domain(netgraph)
> M> after domainfinalize() My setup worked fine before on FreeBSD 5.x
> M> Is that a known issue and is it being worked on?
> 
> Please show your PPPoE server configuration. Do you use pppoed or mpd?

I use pppoed.
Adding 
netgraph_load="YES"
ng_socket_load="YES"
to /boot/loader.conf fixed it on my 6.0RC1 ( thanks Julian Elischer )
Frankly I don't understand why this is needed since pppoed loads those
modules when it starts up.

This is my ppp.conf, just for the record:

default:
 #set log Chat Command Phase        #turn on some logging. See man ppp.conf for info
 set log Chat Command Phase hdlc lqm ipcp
 enable mschapv2 mschap chap mppe   #turn on chap and pap accounting
 #enable pap mschapv2 mschap chap mppe  #turn on chap and pap accounting
 #enable pap                        #turn on chap and pap accounting
 allow mode direct                  #turn on ppp bridging
 enable proxy                       #turn on ppp proxyarping (redundant of above???)
 disable ipv6cp                     #we don't use ipv6, don't want the errors
 set mru 1472                       #set mru below 1500 (PPPoE MTU issue)
 set mtu 1472                       #set mtu below 1500 (PPPoE MTU issue)
 set timeout 0                      #no mins time restriction on users
 #set timeout never
 set mppe 128 *
 set ifaddr 192.168.2.8 192.168.2.100-192.168.1.120 255.255.255.255
 set log phase ipcp lcp debug       #additional debugging
 nat enable yes
 set dns 192.168.2.45 192.168.2.8
 #set speed sync
 set cd 3                           #checks for the existence of carrier once per second for 5 seconds
 #set cd 5!
 #enable echo
 enable lqr
 set reconnect 1 5                  #should the line drop unexpectedly, a connection will be re-established after the given timeout

 #Specify my wifi gateway IP as well as DHCP pool range
 set radius /etc/ppp/radius.conf    #turn on radius auth and use this file
 accept dns                         #turn on dns caching/forwarding

 #enable pap mschapv2 mschap chap mppe  #turn on chap and pap accounting
 disable pap pred1 deflate          #disable pred1 and deflate compression along with pap
 deny pap pred1 deflate             #refuse when asked for it
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vlan patch

2005-10-20 Thread Yar Tikhiy
On Thu, Oct 20, 2005 at 10:19:34PM +1300, Andrew Thompson wrote:
> On Thu, Oct 20, 2005 at 11:00:54AM +0400, Gleb Smirnoff wrote:
> >   Andrew,
> > 
> > On Wed, Oct 19, 2005 at 11:25:59PM +1300, Andrew Thompson wrote:
> > A> It has always bugged me how the vlan code traverses the linked-list for
> > A> each incoming packet to find the right ifvlan, I have this patch which
> > A> attempts to fix this.
> > A>   
> > A> What it does is replace the linear search for the vlan with a constant
> > A> time lookup. It does this by allocating an array for each vlan enabled
> > A> parent interface so the tag can be directly indexed.
> > A>   
> > A> This has an overhead of ~16kb on 32bit, this is not too bad as there is
> > A> usually only one physical interface when using a large number of vlans.
> > A>   
> > A> I have measured a 1.6% pps increase with 100 vlans, and 8% with 500, and
> > A> yes, some people use this many in production.
> > A>   
> > A> It also has the benefit of enforcing unique vlan tags per parent which   
> > A> the current code doesn't do.
> > 
> > Although the memory overhead is not noticable on modern i386 and amd64
> > PCs I don't think that we should waste so much memory. We should keep
> > in mind the existence of embedded architectures with little memory.
> 
> I agree. Did you see the revised patch that sets a threshold before
> allocating the memory? do you think thats sufficient?

I'm afraid that the simple approach of setting a threshold isn't
much better than no such threshold at all.  The number of vlans in
use tends to grow over time in most cases, but it will still be several
times less than the maximum, 4096, which defines the full tag table
size.  I believe we should keep this in mind.

-- 
Yar
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


mbuf passed to if_output() in ip_output()

2005-10-20 Thread kamal kc
Dear everybody,

I have encountered a problem that is too big for me to handle.
I am trying to compress the data contained in the IP packets.

I see that the data to be passed by IP to the link layer
output routine is contained in mbuf *m.

What I tried to do is copy the original mbufs to new mbufs,
pass the new mbufs to if_output() instead of the old ones, and
release the old mbufs myself.

For this purpose I wrote the routine below,
but it doesn't seem to work.

The problem I encounter is that the mbuf in ip_output()
does not seem to be released and gets printed (by my
custom routine) infinitely.

What is the problem???

Please help.

kamal

PS. Below is the code with which I tried to replace
the original mbufs.


void copy_the_memorybuffer(struct mbuf **m)
{   struct mbuf *tmp_mbuf;

struct mbuf *mbuf_pointer=*m;  //original mbuf header
struct mbuf **mnext;
struct mbuf **second_mbuf_pointer;  

struct mbuf **second_packet_pointer;
struct mbuf **next_packet;

unsigned int ipheaderlength;
unsigned int packet_length;
unsigned char *ip_char_buffer;

int i; //loop variable

second_packet_pointer=&mbuf_pointer->m_nextpkt;
next_packet=&mbuf_pointer->m_nextpkt;

for(;*next_packet;*next_packet=(*next_packet)->m_nextpkt)

{   
if(((*next_packet)->m_flags & M_PKTHDR)!=0) 
{ struct ip *dest_ip;
  dest_ip=mtod((*next_packet),struct ip *); 
  dest_ip->ip_tos=66;
  printf("\nDestination ip
(next_packet)=%s\n",inet_ntoa(dest_ip->ip_dst));  
}

second_mbuf_pointer=&(*next_packet)->m_next;
mnext=&(*next_packet)->m_next; //keep the first
packet as it is
for(;*mnext;*mnext=(*mnext)->m_next)  //loop until
the end of memory buffer
{
   printf("\ninside the for loop(mnext)\n");
   if(((*mnext)->m_flags & M_PKTHDR)!=0) 
   //this is the start of the packet
 { MGETHDR(tmp_mbuf,M_WAIT,(*mnext)->m_type);
   
   if(((*mnext)->m_flags & M_EXT)!=0)  //uses a
cluster
  MCLGET(tmp_mbuf,M_WAIT); 
   
   struct ip *my_ip;
   my_ip=mtod(*mnext,struct ip *);
   my_ip->ip_tos=55;   
   
   printf("\nDestination
ip=%s\n",inet_ntoa(my_ip->ip_dst));
   ipheaderlength=my_ip->ip_hl<<2; //4*ip header
length= real header length 
   
   packet_length=(*mnext)->m_len;
   ip_char_buffer=(unsigned char *)my_ip;

   for(i=0;i<ipheaderlength;i++)
   {  tmp_mbuf->m_data[i]=(*mnext)->m_data[i];
   }

   for(i=ipheaderlength;i<packet_length;i++)
   {  tmp_mbuf->m_data[i]=(*mnext)->m_data[i];
   }
   tmp_mbuf->m_pkthdr.len=(*mnext)->m_pkthdr.len;
   tmp_mbuf->m_pkthdr.rcvif=(struct ifnet *)0;
  
tmp_mbuf->m_pkthdr.header=(*mnext)->m_pkthdr.header;
  
tmp_mbuf->m_pkthdr.csum_flags=(*mnext)->m_pkthdr.csum_flags;
  
tmp_mbuf->m_pkthdr.csum_data=(*mnext)->m_pkthdr.csum_data;
 }
   else  //the mbuf is not the packet header so it
does not contain the ip header
 { MGET(tmp_mbuf,M_WAIT,(*mnext)->m_type);  
   
  
   if(((*mnext)->m_flags & M_EXT)!=0)
  MCLGET(tmp_mbuf,M_WAIT); 

   packet_length=(*mnext)->m_len;
   for(i=0;i<packet_length;i++)
   {  tmp_mbuf->m_data[i]=(*mnext)->m_data[i];
   }
 }  
   tmp_mbuf->m_len=(*mnext)->m_len;
   tmp_mbuf->m_flags=(*mnext)->m_flags;
   tmp_mbuf->m_nextpkt=(*mnext)->m_nextpkt;
   
   tmp_mbuf->m_next=(*mnext)->m_next;
   *mnext=tmp_mbuf;
   }
   m_freem(*second_mbuf_pointer);
 }
 *m=mbuf_pointer;
 return;
}
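
For comparison, a much shorter way I considered is to let the stock mbuf
code do the copying via m_dup(9); just a sketch of that idea (error
handling kept to a minimum):

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/mbuf.h>

/*
 * Replace *m with a deep copy of the packet and free the original.
 * m_dup() copies the packet header and all the data, allocating
 * clusters as needed; it returns NULL if allocation fails.
 */
static int
replace_with_copy(struct mbuf **m)
{
        struct mbuf *copy;

        copy = m_dup(*m, M_DONTWAIT);
        if (copy == NULL)
                return (ENOBUFS);       /* keep the original on failure */
        m_freem(*m);
        *m = copy;
        return (0);
}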


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vlan patch

2005-10-20 Thread Yar Tikhiy
On Thu, Oct 20, 2005 at 10:51:00AM +0200, Ragnar Lonn wrote:
> Gleb Smirnoff wrote:
> 
> >Although the memory overhead is not noticable on modern i386 and amd64
> >PCs I don't think that we should waste so much memory. We should keep
> >in mind the existence of embedded architectures with little memory.
> >
> >In most cases people use 10 - 30 VLANs. I suggest to use a hash, like it
> >is already done in ng_vlan(4). This hash makes every sixteenth VLAN to fall
> >into same slot. Since most people allocate VLAN ids contiguously the hash
> >distribution should be good.
> >
> >Moreover, I suggest Yar and Ruslan to work together and make the hash code
> >shared between vlan(4) and ng_vlan(4), not copy-and-pasted.
> 
> It looks as if ng_vlan implements a standard hash. Wouldn't a hashtree 
> be a good
> compromise between speed and memory usage?  Of course, a 16-slot hash is 
> a lot
> better than no hash at all :-)

The only problem with the hash currently used in ng_vlan is that
it is fixed-width.  I think it will be easy to teach it to cope
with a variable hash bit-width using the same xor-folding technique.
I hope I'll have free time this weekend to test the performance
of the approaches discussed, since implementing them is no problem
at all.
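
Something like the following folding would do for a variable width (just
a sketch, with a made-up name):

#include <sys/types.h>

/*
 * Fold a 12-bit VLAN tag down to 'bits' bits (1 <= bits <= 12) by
 * repeatedly XORing the high part onto the low part.
 */
static __inline u_int16_t
vlan_fold(u_int16_t tag, int bits)
{
        u_int16_t h = tag & 0x0fff;
        u_int16_t mask = (1 << bits) - 1;

        while (h > mask)
                h = (h & mask) ^ (h >> bits);
        return (h);
}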

-- 
Yar
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: PPPoE and Radius on 6.0RC1

2005-10-20 Thread Marcin Jessa
On Thu, 20 Oct 2005 13:15:49 +0200
Marcin Jessa <[EMAIL PROTECTED]> wrote:

> On Thu, 20 Oct 2005 11:01:45 +0400
> Gleb Smirnoff <[EMAIL PROTECTED]> wrote:
> 
> > On Wed, Oct 19, 2005 at 11:51:11PM +0200, Marcin Jessa wrote:
> > M> It seems like PPPoE stoped working with support for radius on 6.0
> > M> The log of pppoe and freeradius does not show pppoe attempting to
> > M> even talk to the radius server. Additionally this message pops up
> > M> when enabling pppoed: WARNING: attempt to net_add_domain
> > M> (netgraph) after domainfinalize() My setup worked fine before on
> > M> FreeBSD 5.x Is that a known issue and is it being worked on?
> > 
> > Please show your PPPoE server configuration. Do you use pppoed or
> > mpd?
> 
> I use pppoed.
> Adding 
> netgraph_load="YES"
> ng_socket_load="YES"
> to /boot/loader.conf fixed it on my 6.0RC1 ( thanks Julian Elischer )
> Frankly I don't understand why this is needed since pppoed loads those
> modules when it starts up.

Just tested the same setup on 7.0 built tonight and it did not work.
The pppoed daemon never sends any requests to freeradius...
I additionally tried with these modules listed in loader.conf, with no
luck:
ng_ether_load="YES"
ng_pppoe_load="YES"


> This is my ppp.conf, just for the record:
> 
> default:
>  #set log Chat Command Phase        #turn on some logging. See man ppp.conf for info
>  set log Chat Command Phase hdlc lqm ipcp
>  enable mschapv2 mschap chap mppe   #turn on chap and pap accounting
>  #enable pap mschapv2 mschap chap mppe  #turn on chap and pap accounting
>  #enable pap                        #turn on chap and pap accounting
>  allow mode direct                  #turn on ppp bridging
>  enable proxy                       #turn on ppp proxyarping (redundant of above???)
>  disable ipv6cp                     #we don't use ipv6, don't want the errors
>  set mru 1472                       #set mru below 1500 (PPPoE MTU issue)
>  set mtu 1472                       #set mtu below 1500 (PPPoE MTU issue)
>  set timeout 0                      #no mins time restriction on users
>  #set timeout never
>  set mppe 128 *
>  set ifaddr 192.168.2.8 192.168.2.100-192.168.1.120 255.255.255.255
>  set log phase ipcp lcp debug       #additional debugging
>  nat enable yes
>  set dns 192.168.2.45 192.168.2.8
>  #set speed sync
>  set cd 3                           #checks for the existence of carrier once per second for 5 seconds
>  #set cd 5!
>  #enable echo
>  enable lqr
>  set reconnect 1 5                  #should the line drop unexpectedly, a connection will be re-established after the given timeout
> 
>  #Specify my wifi gateway IP as well as DHCP pool range
>  set radius /etc/ppp/radius.conf    #turn on radius auth and use this file
>  accept dns                         #turn on dns caching/forwarding
> 
>  #enable pap mschapv2 mschap chap mppe  #turn on chap and pap accounting
>  disable pap pred1 deflate          #disable pred1 and deflate compression along with pap
>  deny pap pred1 deflate             #refuse when asked for it
> ___
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to
> "[EMAIL PROTECTED]"
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Network performance 6.0 with netperf

2005-10-20 Thread Michael VInce

Sten Daniel Sørsdal wrote:


Michael VInce wrote:

 


I reinstalled the netperf to make sure its the latest.

I have also decided to upgrade Server-C (the i386 5.4 box) to 6.0RC1 and
noticed it gave a large improvement of network performance with a SMP
kernel.

As with the network setup ( A --- B --- C  ) with server B being the
gateway, doing a basic 'fetch' from the gateway (B) to the Apache server
(C) it gives up to 700mbits/sec transfer performance, doing a fetch from
server A thus going through the gateway gives slower but still decent
performance of up to 400mbits/sec.
   



Are you by any chance using PCI NIC's? PCI Bus is limited to somewhere around 1 
Gbit/s.
So if you consider;
Theoretical maxium = ( 1Gbps - pci_overhead )

 

The 4 ethernet ports on the Dell server are all built-in so I am 
assuming they are on the best bus available.


Mike
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: PPPoE and Radius on 6.0RC1

2005-10-20 Thread Gleb Smirnoff
On Thu, Oct 20, 2005 at 12:31:59PM +, Marcin Jessa wrote:
M> > > On Wed, Oct 19, 2005 at 11:51:11PM +0200, Marcin Jessa wrote:
M> > > M> It seems like PPPoE stoped working with support for radius on 6.0
M> > > M> The log of pppoe and freeradius does not show pppoe attempting to
M> > > M> even talk to the radius server. Additionally this message pops up
M> > > M> when enabling pppoed: WARNING: attempt to net_add_domain
M> > > M> (netgraph) after domainfinalize() My setup worked fine before on
M> > > M> FreeBSD 5.x Is that a known issue and is it being worked on?
M> > > 
M> > > Please show your PPPoE server configuration. Do you use pppoed or
M> > > mpd?
M> > 
M> > I use pppoed.
M> > Adding 
M> > netgraph_load="YES"
M> > ng_socket_load="YES"
M> > to /boot/loader.conf fixed it on my 6.0RC1 ( thanks Julian Elischer )
M> > Frankly I don't understand why this is needed since pppoed loads those
M> > modules when it starts up.
M> 
M> Just tested the same setup on 7.0 built tonight and it did not work.
M> The pppoed daemon never sends any requests to freeradius...
M> I tried to additionaly with those modules listed in loader.conf with no
M> luck:
M> ng_ether_load="YES"
M> ng_pppoe_load="YES"

This is very strange, because I have recently upgraded one router running
pppoed to 6.0-RC1 and I have no problems.

Please read the pppoed logs and see what the problem is. If everything
is fine in the pppoed logs, then go on to the ppp logs. "Stopped working" is not
really informative.

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


mpd disconnect due to LCP echo not responding

2005-10-20 Thread Dominic Marks
Hello,

I have a number of staff who use pptp links to VPN (Windows to FreeBSD mpd
server) into the office. Sometimes, when the link is busy, the LCP echoes
won't pass over the link quickly enough and the connection will terminate.

Is there anything I can do to prevent this, or at least make it less
likely to happen. Options I have thought of are:

Increasing the number of echos mpd waits before disconnecting.

Increasing the time out for LCP echos.

Using traffic shaping to prioritise the LCP traffic over other traffic.

Are any of these possible, or is there another way of achieving the same
goal?

Some example log output:

Oct 19 15:18:38 billy mpd: [ng5] LCP: no reply to 1 echo request(s)
Oct 19 15:18:48 billy mpd: [ng5] LCP: no reply to 2 echo request(s)
Oct 19 15:18:58 billy mpd: [ng5] LCP: no reply to 3 echo request(s)
Oct 19 15:19:08 billy mpd: [ng5] LCP: no reply to 4 echo request(s)
Oct 19 15:19:18 billy mpd: [ng5] LCP: no reply to 5 echo request(s)
Oct 19 15:19:18 billy mpd: [ng5] LCP: peer not responding to echo requests

Thanks,
Dominic Marks
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: mpd disconnect due to LCP echo not responding

2005-10-20 Thread Gleb Smirnoff
  Dominic,

On Thu, Oct 20, 2005 at 02:29:19PM +0100, Dominic Marks wrote:
D> server) in to the Office. Some times when the link is busy the LCP echos
D> won't pass over the link quickly enough and the connection will terminate.
D> 
D> Is there anything I can do to prevent this, or at least make it less
D> likely to happen. Options I have thought of are:
D> 
D> Increasing the number of echos mpd waits before disconnecting.
D> Increasing the time out for LCP echos.

So have you tried the two above options or are they not configurable in mpd?

D> Using traffic shaping to prioritise the LCP traffic over other traffic.
D> 
D> Are any of these possible, or is there another way of acheiving the same
D> goal?
D> 
D> Some example log output:
D> 
D> Oct 19 15:18:38 billy mpd: [ng5] LCP: no reply to 1 echo request(s)
D> Oct 19 15:18:48 billy mpd: [ng5] LCP: no reply to 2 echo request(s)
D> Oct 19 15:18:58 billy mpd: [ng5] LCP: no reply to 3 echo request(s)
D> Oct 19 15:19:08 billy mpd: [ng5] LCP: no reply to 4 echo request(s)
D> Oct 19 15:19:18 billy mpd: [ng5] LCP: no reply to 5 echo request(s)
D> Oct 19 15:19:18 billy mpd: [ng5] LCP: peer not responding to echo requests
D> 
D> Thanks,
D> Dominic Marks
D> ___
D> freebsd-net@freebsd.org mailing list
D> http://lists.freebsd.org/mailman/listinfo/freebsd-net
D> To unsubscribe, send any mail to "[EMAIL PROTECTED]"

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


em(4) patch for test

2005-10-20 Thread Gleb Smirnoff
  Colleagues,

  since the if_em problem was taken as a late showstopper for 6.0-RELEASE,
I am asking you to help with testing of the fixes made in HEAD.

  Does your em(4) interface wedge for some time?
  Do you see a lot of errors in 'netstat -i' output? Do these errors
  increase not monotonically, but in peaks?

If the answer is yes, then the attached patch is likely to fix your problem.
If the answer is no, then you are still encouraged to help with testing
and install the patch to check that no regressions are introduced. If you
skip this, then you may encounter regressions after release, so you have
been warned.

  So, in short: please test! Thanks in advance!

The patch is against fresh RELENG_6.

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
Index: if_em.c
===
RCS file: /home/ncvs/src/sys/dev/em/if_em.c,v
retrieving revision 1.65.2.5
retrieving revision 1.81
diff -u -r1.65.2.5 -r1.81
--- if_em.c 7 Oct 2005 14:00:03 -   1.65.2.5
+++ if_em.c 20 Oct 2005 09:55:49 -  1.81
@@ -31,7 +31,7 @@
 
 ***/
 
-/*$FreeBSD: src/sys/dev/em/if_em.c,v 1.65.2.5 2005/10/07 14:00:03 glebius Exp 
$*/
+/*$FreeBSD: src/sys/dev/em/if_em.c,v 1.81 2005/10/20 09:55:49 glebius Exp $*/
 
 #ifdef HAVE_KERNEL_OPTION_HEADERS
 #include "opt_device_polling.h"
@@ -45,13 +45,6 @@
 int em_display_debug_stats = 0;
 
 /*
- *  Linked list of board private structures for all NICs found
- */
-
-struct adapter *em_adapter_list = NULL;
-
-
-/*
  *  Driver version
  */
 
@@ -326,11 +319,6 @@
adapter->unit = device_get_unit(dev);
EM_LOCK_INIT(adapter, device_get_nameunit(dev));
 
-   if (em_adapter_list != NULL)
-   em_adapter_list->prev = adapter;
-   adapter->next = em_adapter_list;
-   em_adapter_list = adapter;
-
/* SYSCTL stuff */
 SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
 SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
@@ -511,6 +499,7 @@
 err_tx_desc:
 err_pci:
 em_free_pci_resources(adapter);
+   EM_LOCK_DESTROY(adapter);
 return(error);
 
 }
@@ -543,14 +532,11 @@
em_stop(adapter);
em_phy_hw_reset(&adapter->hw);
EM_UNLOCK(adapter);
-#if __FreeBSD_version < 50
-ether_ifdetach(adapter->ifp, ETHER_BPF_SUPPORTED);
-#else
 ether_ifdetach(adapter->ifp);
-   if_free(ifp);
-#endif
+
em_free_pci_resources(adapter);
bus_generic_detach(dev);
+   if_free(ifp);
 
/* Free Transmit Descriptor ring */
 if (adapter->tx_desc_base) {
@@ -564,19 +550,8 @@
 adapter->rx_desc_base = NULL;
 }
 
-   /* Remove from the adapter list */
-   if (em_adapter_list == adapter)
-   em_adapter_list = adapter->next;
-   if (adapter->next != NULL)
-   adapter->next->prev = adapter->prev;
-   if (adapter->prev != NULL)
-   adapter->prev->next = adapter->next;
-
EM_LOCK_DESTROY(adapter);
 
-   ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE);
-   ifp->if_timer = 0;
-
return(0);
 }
 
@@ -637,12 +612,7 @@
 }
 
/* Send a copy of the frame to the BPF listener */
-#if __FreeBSD_version < 50
-if (ifp->if_bpf)
-bpf_mtap(ifp, m_head);
-#else
BPF_MTAP(ifp, m_head);
-#endif
 
 /* Set timeout in case hardware has problems transmitting */
 ifp->if_timer = EM_TX_TIMEOUT;
@@ -797,11 +767,13 @@
struct adapter * adapter;
adapter = ifp->if_softc;
 
+   EM_LOCK(adapter);
/* If we are in this routine because of pause frames, then
 * don't reset the hardware.
 */
if (E1000_READ_REG(&adapter->hw, STATUS) & E1000_STATUS_TXOFF) {
ifp->if_timer = EM_TX_TIMEOUT;
+   EM_UNLOCK(adapter);
return;
}
 
@@ -809,11 +781,10 @@
printf("em%d: watchdog timeout -- resetting\n", adapter->unit);
 
ifp->if_drv_flags &= ~IFF_DRV_RUNNING;
-
-   em_init(adapter);
-
ifp->if_oerrors++;
-   return;
+
+   em_init_locked(adapter);
+   EM_UNLOCK(adapter);
 }
 
 /*
@@ -996,51 +967,57 @@
 static void
 em_intr(void *arg)
 {
-u_int32_t   loop_cnt = EM_MAX_INTR;
-u_int32_t   reg_icr;
-struct ifnet*ifp;
-struct adapter  *adapter = arg;
+   struct adapter  *adapter = arg;
+   struct ifnet*ifp;
+  

Re: Network performance 6.0 with netperf

2005-10-20 Thread Brad Knowles

At 10:49 PM +1000 2005-10-20, Michael VInce wrote:


 The 4 ethernet ports on the Dell server are all built-in so I am assuming
 they are on the best bus available.


	In my experience, the terms "Dell" and "best available" very 
rarely go together.


	Dell has made a name for themselves by shipping the absolutely 
cheapest possible hardware they can, with the thinnest possible 
profit margins, and trying to make up the difference in volume. 
Issues like support, ease of management, freedom from overheating, 
etc... get secondary or tertiary consideration, if they get any 
consideration at all.


But maybe that's just me.

--
Brad Knowles, <[EMAIL PROTECTED]>

"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."

-- Benjamin Franklin (1706-1790), reply of the Pennsylvania
Assembly to the Governor, November 11, 1755

  SAGE member since 1995.  See  for more info.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Network performance 6.0 with netperf

2005-10-20 Thread Karl Denninger
I think that's unfair.

I have a couple of Dell machines and my biggest complaint with them has been
their use of proprietary bolt patterns for their motherboards and similar
tomfoolery, preventing you from migrating their hardware as your needs grow.

This also guarantees that your $75 power supply becomes a $200 one once the
warranty ends - good for them, not good for you.

Other than that, I've been pretty happy with their stuff.  Sure beats a lot
of other "PC" vendors out there in terms of reliability, heat management,
BIOS updates, etc.

--
-- 
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights Activist
http://www.denninger.netMy home on the net - links to everything I do!
http://scubaforum.org   Your UNCENSORED place to talk about DIVING!
http://genesis3.blogspot.comMusings Of A Sentient Mind

On Thu, Oct 20, 2005 at 04:26:31PM +0200, Brad Knowles wrote:
> At 10:49 PM +1000 2005-10-20, Michael VInce wrote:
> 
> > The 4 ethernet ports on the Dell server are all built-in so I am assuming
> > they are on the best bus available.
> 
>   In my experience, the terms "Dell" and "best available" very 
> rarely go together.
> 
>   Dell has made a name for themselves by shipping the absolutely 
> cheapest possible hardware they can, with the thinnest possible 
> profit margins, and trying to make up the difference in volume. 
> Issues like support, ease of management, freedom from overheating, 
> etc... get secondary or tertiary consideration, if they get any 
> consideration at all.
> 
>   But maybe that's just me.
> 
> -- 
> Brad Knowles, <[EMAIL PROTECTED]>
> 
> "Those who would give up essential Liberty, to purchase a little
> temporary Safety, deserve neither Liberty nor Safety."
> 
> -- Benjamin Franklin (1706-1790), reply of the Pennsylvania
> Assembly to the Governor, November 11, 1755


___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Network performance 6.0 with netperf

2005-10-20 Thread Brad Knowles

At 9:57 AM -0500 2005-10-20, Karl Denninger wrote:


 Other than that, I've been pretty happy with their stuff.  Sure beats a lot
 of other "PC" vendors out there in terms of reliability, heat management,
 BIOS updates, etc.


	Have you tried Rackable or IronSystems?  I've heard that they've 
been pretty successful at building servers to compete pretty well on 
price with Dell, while also providing much better customer service, 
including custom-building servers to your precise requirements.


--
Brad Knowles, <[EMAIL PROTECTED]>

"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."

-- Benjamin Franklin (1706-1790), reply of the Pennsylvania
Assembly to the Governor, November 11, 1755

  SAGE member since 1995.  See  for more info.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Network performance 6.0 with netperf

2005-10-20 Thread Robert Watson


On Thu, 20 Oct 2005, Michael VInce wrote:

Are you by any chance using PCI NIC's? PCI Bus is limited to somewhere 
around 1 Gbit/s. So if you consider; Theoretical maxium = ( 1Gbps - 
pci_overhead )


The 4 ethernet ports on the Dell server are all built-in so I am 
assuming they are on the best bus available.


At the performance levels you're interested in, it is worth spending a bit 
of time digging up the specs for the motherboard.  You may find, for 
example, that you can achieve higher packet rates using specific 
combinations of interfaces on the box, as it is often the case that a 
single PCI bus will run to a pair of on-board chips.  By forwarding on 
separate busses, you avoid contention, interrupt issues, etc.  We have a 
number of test systems in our netperf test cluster where you can measure 
20% or more differences on some tests depending on the combinations of 
interfaces used.


Robert N M Watson
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Network performance 6.0 with netperf

2005-10-20 Thread Michael VInce



On Thu, Oct 20, 2005 at 04:26:31PM +0200, Brad Knowles wrote:
 


At 10:49 PM +1000 2005-10-20, Michael VInce wrote:

   


> The 4 ethernet ports on the Dell server are all built-in so I am assuming
> they are on the best bus available.
 



	In my experience, the terms "Dell" and "best available" very 
rarely go together.


	Dell has made a name for themselves by shipping the absolutely 
cheapest possible hardware they can, with the thinnest possible 
profit margins, and trying to make up the difference in volume. 
Issues like support, ease of management, freedom from overheating, 
etc... get secondary or tertiary consideration, if they get any 
consideration at all.


But maybe that's just me.

--
Brad Knowles, <[EMAIL PROTECTED]>
   


I think that's unfair.


I have a couple of Dell machines and my biggest complaint with them has been
their use of proprietary bolt patterns for their motherboards and similar
tomfoolery, preventing you from migrating their hardware as your needs grow.

This also guarantees that your $75 power supply becomes a $200 one once the
warranty ends - good for them, not good for you.

Other than that, I've been pretty happy with their stuff. Sure beats a lot
of other "PC" vendors out there in terms of reliability, heat management,
BIOS updates, etc.

--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights 
Activist


I have to agree, Karl.
Those slots aren't proprietary; they're PCI Express.
When I went to open the machine up to put in a PCI multi-serial card, all 
I saw were those little modern, mean-looking PCI Express slots, which have 
the ability to scare any techie; there are no old PCI slots on it. I had 
to dump my serial card and change over to USB-to-serial converters by 
loading the ucom and uplcom kernel modules, so I could use tip over USB 
into the single serial port on the Dell machines when the ethernet is 
down. That ended up working out great; I will never need clunky old (and 
pricey) multi-port PCI serial cards again.


If you look at the Intel E7520 chipset of the Dell 1850/2850 (the 2850 
is really just a bigger-case machine to hold more drives),
http://www.intel.com/design/chipsets/embedded/e7520.htm
you will see it only has PCI Express as a minimum, which is 
64bit/133MHz and does a minimum of 2.5GB/sec in one direction, and it's a 
switch-based bus technology where there is no sharing of the lanes;
there are no old-school 32bit/33MHz PCI buses.
http://www.pcstats.com/articleview.cfm?articleid=1087&page=3

As for service, I actually ordered two much smaller Dell 750's, but 
because they were out of them for a couple of weeks, due to some big 
company ordering 500 of them, I had a bit of an argument with the Dell guy 
on the phone and got 1850s with SCSI RAID 1 out of him for the same price.
It's been Dell that has shown me how good (and maybe a bit evil) big 
companies can be.



___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Trying to make a Host into a gigabit hub for testing

2005-10-20 Thread Shawn Saunders


Chris,

Thanks for the quick response.  It looked good, but when I execute each 
command, I receive an Error on the following:


ngctl connect sf0: o2m lower many0

Returns: ngctl: send msg: No such file or directory

Did I miss something?

Shawn


From: Chris Dionissopoulos <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Shawn Saunders <[EMAIL PROTECTED]>
CC: freebsd-net@freebsd.org
Subject: Re: Trying to make a Host into a gigabit hub for testing
Date: Thu, 20 Oct 2005 03:27:41 +0300

SS>I am setting up a test environment with multiple IDS's.  ngctl looks 
like a solution but it is not broadcasting all packets to all interfaces as 
the documentation appears to state it should.  I've probably made some 
error in configuration.

SS>
SS>My goal is to put em0 into a spanned port in promiscuous mode and 
broadcast all traffic from that port out the other network interfaces.  I 
plan on having em0 (gigabit) and 6 other gigabit interfaces.  Each will 
then echo the same traffic to six other machines (IDS's) for testing.

SS>
SS>The proof of concept with a gigabit (EM0) and 4 10/100 ethernets (sfx).  
The 10/100's will be replaced for implementation.

SS>
SS>Any help would be appreciated.  My config follows:

Hi,
Why to use ng_fec and ng_one2many together?
how about something simplier, like:

                 +----------+ -->-sf0:lower--->wire
wire>--em:lower->| one2many | -->-sf1:lower--->wire
                 |          | -->-sf2:lower--->wire
                 +----------+ -->-sf3:lower--->wire

ngctl mkpeer em0: one2many lower one
ngctl name em0:lower o2m
ngctl connect sf0:  o2m lower many0
ngctl connect sf1: o2m lower many1
ngctl connect sf2: o2m lower many2
ngctl connect sf3: o2m lower many3
ngctl msg o2m setconfig "{ xmitAlg=2 failAlg=1 enabledLinks=[1 1 1 1 1] }"

ngctl msg sf0: setpromisc 1
ngctl msg sf0: setautosrc 0
ngctl msg sf1: setpromisc 1
ngctl msg sf1: setautosrc 0
ngctl msg sf2: setpromisc 1
ngctl msg sf2: setautosrc 0
ngctl msg sf3: setpromisc 1
ngctl msg sf3: setautosrc 0
ngctl msg em0: setpromisc 1
ngctl msg em0: setautosrc 0

This keeps kernel-stack isolated from traffic, I think
(and all interfaces involved layer2 unreachable from outsiders).

Just tell us if its working for you.

Chris.


http://www.freemail.gr - δωρεάν υπηρεσία ηλεκτρονικού ταχυδρομείου.
http://www.freemail.gr - free email service for the Greek-speaking.



___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Trying to make a Host into a gigabit hub for testing

2005-10-20 Thread Chris Dionissopoulos



ngctl connect sf0: o2m lower many0

Returns: ngctl: send msg: No such file or directory

It's just a syntax error. Replace "o2m" with "o2m:" in every "connect" 
command (only).

Sorry, my fault.

Chris.


http://www.freemail.gr - free email service for the Greek-speaking.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


re(4) works in 5.3-RELEASE, not in 6.0-RC1

2005-10-20 Thread george+freebsd
I have a RealTek 8169S which works just fine under 5.3-RELEASE.  I've
been experimenting with 6.0-RC1 booting from CD.  It appears to come up
with no difficulty, but ping fails, and shortly thereafter, I get
"re0: watchdog timeout" in my messages.  Although I can't ping other
machines and other machines can't ping me, "arp -a" shows the MAC
address of other machines on the ethernet segment, and arp on the
other machines I tried to ping shows the MAC address of the 6.0
machine.  I see the RCS version has gone from 1.28.2.5 in 5.3-RELEASE
to 1.46.2.7 in 6.0-RC1, and the changes look numerous but not very
big.  Should I try compiling the 1.28.2.5 version into a 6.0 kernel
and seeing what happens? -- George Mitchell
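
A rough sketch of one way I could try that, assuming my source tree is a CVS
checkout (the paths and the module rebuild step are from memory and may need
adjusting):

cd /usr/src/sys/dev/re
cvs update -r 1.28.2.5 if_re.c       # pin the 5.3-RELEASE revision of the driver
cd /usr/src/sys/modules/re
make clean && make && make install   # rebuild just the re(4) module
kldload if_re                        # or rebuild the kernel if re(4) is compiled in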

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Dependency between interfaces

2005-10-20 Thread Wojciech A. Koszek
Hello,

Is EVENTHANDLER(9) the proper way to notify a standalone driver about
network interface attach/detach operations? I've met a simple problem in
ef(4), which causes a machine freeze in the following situation: load the NIC
driver -> load if_ef -> unload the NIC driver -> some activity on the interface.
Although the network interface driver no longer exists, if_ef does not know
about it and continues its operation.

I've seen a similar situation for example in ng_fec(4): a piece of code needs to
call some cleanup routines in order to keep pointers in a valid state. I think
this situation is almost the same as the current one in if_bridge(4). Just
repeat the situation described above for ef(4), but with if_bridge(4).

Regards,
-- 
* Wojciech A. Koszek && [EMAIL PROTECTED]
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Dependency between interfaces

2005-10-20 Thread Andrew Thompson
On Thu, Oct 20, 2005 at 08:20:34PM +, Wojciech A. Koszek wrote:
> Hello,
> 
> Is EVENTHANDLER(9) proper way of notification for standalone driver about
> network interface attach/detach operations? I've met simple problem in
> ef(4), which causes machine freeze in following situation: load NIC driver
> -> load if_ef -> unload NIC driver -> some activity with interface.
> Althought driver of network interface no longer exists, if_ef does not know
> about it and continues it's operation.
> 
> I've seen similar situation for example in ng_fec(4): piece of code needs to
> call some cleanup routines in order to keep pointers in valid state. I think
> this situation is almost the same like this current in if_bridge(4). Just
> repeat situation described above for ef(4), but with if_bridge(4).
> 

if_bridge(4) now hooks into ether_detach to get notified of a vanishing
interface, as of r1.26 (and now in RELENG_6*)

  Use bridge_ifdetach() to notify the bridge that a member has been
  detached.  The bridge can then remove it from its interface list and
  not try to send out via a dead pointer.

Is it still a problem or did you test on a pre r1.26 kernel?

Andrew
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Dependency between interfaces

2005-10-20 Thread Brooks Davis
On Thu, Oct 20, 2005 at 08:20:34PM +, Wojciech A. Koszek wrote:
> Hello,
> 
> Is EVENTHANDLER(9) proper way of notification for standalone driver about
> network interface attach/detach operations? I've met simple problem in
> ef(4), which causes machine freeze in following situation: load NIC driver
> -> load if_ef -> unload NIC driver -> some activity with interface.
> Althought driver of network interface no longer exists, if_ef does not know
> about it and continues it's operation.
> 
> I've seen similar situation for example in ng_fec(4): piece of code needs to
> call some cleanup routines in order to keep pointers in valid state. I think
> this situation is almost the same like this current in if_bridge(4). Just
> repeat situation described above for ef(4), but with if_bridge(4).

It looks like you could hook into the ifnet_departure_event pretty
easily.  The one gotcha is that it's called during interface renames so
you need to consider that possibility.

-- Brooks

-- 
Any statement of the form "X is the one, true Y" is FALSE.
PGP fingerprint 655D 519C 26A7 82E7 2529  9BF0 5D8E 8BE9 F238 1AD4




IPSec session stalls

2005-10-20 Thread Volker
Hi!

A few days ago I've managed to setup two IPSec tunnels (3 machines
involved) between FreeBSD 5.4R hosts.

While I do not fully understand all the options and knobs of IPSec, it
was easy to setup (thanks to the handbook guys!).

Although the tunnels work properly in the first place, there's one issue (on
both tunnels). Whenever there's a large amount of traffic in a TCP or UDP
session, the session stalls.

For example, I've tried to scp a 1.4M file through one of these tunnels;
scp starts to transfer the file and stalls at exactly 49152 bytes
transferred. PcAnywhere (using UDP) sessions going through the tunnel
work for a few minutes and then the PcAw connection breaks between host
and remote. I guess both issues are the same, as both generate a lot of
traffic in the tunnel.

The tunnel itself seems to be stable. I've tried to scp a huge file while
pinging the other host in another session, and no packet loss appeared.

what I did:

- gif tunnel created on both sides
- spd policies set up to encrypt (ipencap) traffic between both machines
(in + out)
- racoon installed and key lifetime set to 1 hour
- route set into the tunnel
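
For reference, a purely hypothetical sketch of that kind of setup (all
addresses below are placeholders, not the real ones, and the exact policies in
use may differ):

ifconfig gif0 create
ifconfig gif0 tunnel 192.0.2.1 198.51.100.1     # local / remote gateway
ifconfig gif0 inet 10.0.1.1 10.0.2.1

setkey -c <<'EOF'
spdadd 192.0.2.1/32 198.51.100.1/32 ipencap -P out ipsec esp/tunnel/192.0.2.1-198.51.100.1/require;
spdadd 198.51.100.1/32 192.0.2.1/32 ipencap -P in  ipsec esp/tunnel/198.51.100.1-192.0.2.1/require;
EOF

route add -net 10.0.2.0/24 10.0.2.1             # route the far network into the tunnel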

The racoon debug output did not show anything which would lead me to an
issue with racoon.

Where do I have to look? How do I debug this problem? Did anybody
experience similar problems?

Thanks,

Volker
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Trying to make a Host into a gigabit hub for testing

2005-10-20 Thread Shawn Saunders

Chris,

Now the traffic is going out all the ports, thanks.  Only one issue: it is 
also being echoed back out the em0 interface.  When I put this under a 
full gigabit load, six interfaces feeding back what was just sent to them will 
kill my primary em0 interface.


Is there a way to make the echo from em0 to all the other interfaces go only 
one way, rather than em0 also being part of the group and receiving 
everything it sends back again?


Shawn



From: Chris Dionissopoulos <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Shawn Saunders <[EMAIL PROTECTED]>
CC: freebsd-net@freebsd.org
Subject: Re: Trying to make a Host into a gigabit hub for testing
Date: Thu, 20 Oct 2005 20:52:28 +0300



ngctl connect sf0: o2m lower many0

Returns: ngctl: send msg: No such file or directory

Is just a syntax error. Replace "o2m" with "o2m:" in every "connect" 
command (only).

Sorry my fault.

Chris.


http://www.freemail.gr - free email service for the Greek-speaking.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"



___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: IPSec session stalls

2005-10-20 Thread Volker
hmm, I hate replying to myself

I've just checked another thing:

When disabling pf on both IPSec endpoints, (even large) file transfers
work fine.

I'm using pf and altq with cbq.

Removing the pf 'scrub' rules didn't solve it. In the firewall I let
gif traffic pass with rules like:

pass quick on $if_ext proto { ah, esp } from  to any keep
state queue q_h1
pass quick on $if_ext proto { ah, esp } from any to  keep
state queue q_h1
pass quick on $if_ext proto ipencap from  to any keep state
pass quick on $if_ext proto ipencap from any to  keep state

I guess, as everything works fine while pf is disabled, this is a pf issue, right?
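
One thing I can still try, to check whether pf itself is dropping the tunnel
traffic (just a debugging sketch; it assumes my block rules log to pflog0 and
that pflogd is running), is to watch the pflog interface while reproducing the
stall:

tcpdump -n -e -ttt -i pflog0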

Thanks,

Volker


On 2005-10-20 22:12, Volker wrote:
> Hi!
> 
> A few days ago I've managed to setup two IPSec tunnels (3 machines
> involved) between FreeBSD 5.4R hosts.
> 
> While I do not fully understand all the options and knobs of IPSec, it
> was easy to setup (thanks to the handbook guys!).
> 
> As the tunnels work properly in the first place, there's one issue (on
> both tunnels). Whenever there's a large amount of traffic per tcp or udp
> session, the tcp or udp session stalls.
> 
> For example, I've tried to scp a 1.4M file through one of these tunnels,
> scp starts to transfer the file and stalls exactly at 49152 bytes being
> transfered. PcAnywhere (using udp) sessions going through the tunnel
> work for a few minutes and then the PcAw connection breaks between host
> and remote. I guess both issues are equal as it generates a lot of
> traffic in the tunnel.
> 
> The tunnel itself seems to be stable. I've tried to scp a huge file and
> ping'ed the other host in another session and no packet loss did appear.
> 
> what I did:
> 
> - gif tunnel created on both sides
> - spd policies setup to encrypt (ipencap) traffic between both machines
> (in + out)
> - racoon installed and key timelife set to 1 hour
> - route set into the tunnel
> 
> The racoon debug output did not show anything which would lead me to an
> issue with racoon.
> 
> Where do I have to look for? How do I debug this problem? Did anybody
> experience similar problems?
> 
> Thanks,
> 
> Volker
> 
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Trying to make a Host into a gigabit hub for testing

2005-10-20 Thread Shawn Saunders

Chris,

Ignore the last note.  It is working with the correction you gave me below.
Working great.


Thanks for all your help.

Shawn



From: Chris Dionissopoulos <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Shawn Saunders <[EMAIL PROTECTED]>
CC: freebsd-net@freebsd.org
Subject: Re: Trying to make a Host into a gigabit hub for testing
Date: Thu, 20 Oct 2005 20:52:28 +0300



ngctl connect sf0: o2m lower many0

Returns: ngctl: send msg: No such file or directory

Is just a syntax error. Replace "o2m" with "o2m:" in every "connect" 
command (only).

Sorry my fault.

Chris.


http://www.freemail.gr - free email service for the Greek-speaking.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"



___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: urgent: RELENG_5 ipfw/nat/IPSEC Problem..

2005-10-20 Thread Vince Hoffman



On Wed, 19 Oct 2005, Holm Tiffe wrote:


Hi,

I have a currently big problem with the following setup:

A FreeBSD box running 5-STABLE is connected with one interface to the
public, with the other to a NATed subnet with private address space.
I need to allow at least one host from inside the private network access
to an outside Cisco VPN concentrator. I've learned in the meantime that
allowing UDP connections from inside to the outside net and vice versa isn't
doing the job.
(I've stumbled in the meantime over tcpdump output that shows isakmp
packets leaving the external interface, but they don't really do this..)

What exactly do I have to do to get this working?



What I found I needed to do to connect to the Cisco VPN at work through my 
FreeBSD NAT firewall was to tell it not to NAT the source port of the 
isakmp packets, as isakmp needs to have source and destination port 500.

using pf the command is (taken from man pf.conf)

# Map outgoing packets' source port to an assigned proxy port instead of
# an arbitrary port.
# In this case, proxy outgoing isakmp with port 500 on the gateway.
nat on $ext_if inet proto udp from any port = isakmp to any -> ($ext_if) \
   port 500

Not sure about the same for ipfw/natd, but I'm sure it's doable.
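
A hypothetical starting point for the natd(8) case (untested here, and the
interface name is just a placeholder) would be natd's same_ports option, which
asks natd to preserve source ports where possible, e.g. in /etc/rc.conf:

natd_enable="YES"
natd_interface="fxp0"       # placeholder for the external interface
natd_flags="-same_ports"    # try to keep isakmp on source port 500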


Vince



The FreeBSD box is out of reach (around 50km from here), I can't access the
hosts on the inside network and I don't have access to the Cisco
concentrator, so I can't test different setups ..

Can please anyone help?

Regards,

Holm

PS: please Cc me, I'm currently not subscribed to this list.
--
L&P::Kommunikation GbR  Holm Tiffe  * Administration, Development
FreibergNet.de Internet Systems phone +49 3731 419010
Bereich Server & Technik fax +49 3731 4196026
D-09599 Freiberg * Am St. Niclas Schacht 13 http://www.freibergnet.de

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: PPPoE and Radius on 6.0RC1

2005-10-20 Thread fooler
- Original Message - 
From: "Marcin Jessa" <[EMAIL PROTECTED]>

To: "Marcin Jessa" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, October 20, 2005 8:31 PM
Subject: Re: PPPoE and Radius on 6.0RC1



Just tested the same setup on 7.0 built tonight and it did not work.
The pppoed daemon never sends any requests to freeradius...


Take note that pppoed is the one processing PPPoE frames, while user ppp is 
the one communicating with your RADIUS server.



 set radius /etc/ppp/radius.conf#turn on radius auth and use


What does your radius.conf say?
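
For comparison, a made-up minimal example in the radius.conf(5) format that
ppp's libradius expects (server address and shared secret are placeholders):

# /etc/ppp/radius.conf
# service  server[:port]  shared-secret  [timeout]  [retries]
auth 192.0.2.10 mysecret 3 3
acct 192.0.2.10 mysecret 3 3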

fooler. 


___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: em(4) patch for test

2005-10-20 Thread Gleb Smirnoff
On Thu, Oct 20, 2005 at 06:02:00PM +0400, Gleb Smirnoff wrote:
T>   Colleagues,
T> 
T>   since the if_em problem was taken as a late showstopper for 6.0-RELEASE,
T> I am asking you to help with testing of the fixes made in HEAD.
T> 
T>   Does your em(4) interface wedge for some time?
T>   Do you see a lot of errors in 'netstat -i' output? Do these errors
T>   increase not monotonically, but with peaks?
T> 
T> If the answer is yes, then the attached patch is likely to fix your problem.
T> If the answer is no, then you are still encouraged to help with testing
T> and install the patch to check that no regressions are introduced. If you
T> skip this, then you may encounter regressions after release, so you have
T> been warned.
T> 
T>   So, in short: please test! Thanks in advance!
T> 
T> The patch is against fresh RELENG_6.

   Christian has prepared a patch for RELENG_5:

http://people.freebsd.org/~csjp/if_em.c.1129840898.diff

   This is the most important bit of all the changes to em.

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vlan patch

2005-10-20 Thread Gleb Smirnoff
On Thu, Oct 20, 2005 at 12:57:21PM +0400, Yar Tikhiy wrote:
Y> The hash code consists of literally a couple of #define's.  And the
Y> difference between ng_vlan(4) and vlan(4) is that each ng_vlan node
Y> gets its own instance of the hash table.  OTOH, in vlan(4) we need
Y> to decide if the hash table will be per parent interface or a single
Y> global instance.  In the latter case we could hash by a combination
Y> of the VLAN tag and parent's ifindex.  Perhaps this approach will
Y> yield more CPU cache hits during hash table lookups.  In addition,
Y> it will be thriftier in using memory.  Locking the global hash table
Y> should not be an issue as we can use an sx lock in this case for
Y> optimal read access.

The sx lock is slow. We'd better use a per-interface hash, and thus
get locking instantly, with a per-vlan lock. In the other case, we would
acquire the per-vlan lock + the sx lock on every packet. An sx lock
operation actually means mtx_lock+mtx_unlock, thus we would make 3 mutex
operations instead of one.

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Trying to make a Host into a gigabit hub for testing

2005-10-20 Thread Gleb Smirnoff
On Thu, Oct 20, 2005 at 09:31:15PM +, Shawn Saunders wrote:
S> Chris,
S> 
S> Now the traffic is going out all the ports, thanks.  Only one issue, is 
S> that it is also being echo'd back the em0 interface.  When I put this under 
S> a full GIGABIT load, 6 interfaces feeding back what was just sent them, 
S> will kill my primary em0 interface.
S> 
S> Is there a way to make the echo from em0 to all other interfaces only go 
S> one-way, rather than em0 also being part of the group and receiving 
S> everthing it sends back again?

I haven't yet understood the graph you built, but the Subject line
tells me that you should use ng_hub(4), not ng_one2many(4) or ng_fec(4).
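
A rough, untested sketch of what that could look like with ng_hub(4) (same
interface names as earlier in the thread; the linkN hook names are arbitrary,
since as far as I remember ng_hub accepts any unique hook name):

ngctl mkpeer em0: hub lower link0
ngctl name em0:lower hub
ngctl connect sf0: hub: lower link1
ngctl connect sf1: hub: lower link2
ngctl connect sf2: hub: lower link3
ngctl connect sf3: hub: lower link4
# plus the same setpromisc/setautosrc messages as in the one2many setup

ng_hub sends each frame arriving on one hook out all the other hooks only, so
nothing should be echoed back to em0 itself.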

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vlan patch

2005-10-20 Thread Yar Tikhiy
On Fri, Oct 21, 2005 at 09:30:33AM +0400, Gleb Smirnoff wrote:
> On Thu, Oct 20, 2005 at 12:57:21PM +0400, Yar Tikhiy wrote:
> Y> The hash code consists of literally a couple of #define's.  And the
> Y> difference between ng_vlan(4) and vlan(4) is that each ng_vlan node
> Y> gets its own instance of the hash table.  OTOH, in vlan(4) we need
> Y> to decide if the hash table will be per parent interface or a single
> Y> global instance.  In the latter case we could hash by a combination
> Y> of the VLAN tag and parent's ifindex.  Perhaps this approach will
> Y> yield more CPU cache hits during hash table lookups.  In addition,
> Y> it will be thriftier in using memory.  Locking the global hash table
> Y> should not be an issue as we can use an sx lock in this case for
> Y> optimal read access.
> 
> The sx lock is slow. We'd better use per interface hash, and thus
> get locking instantly, with per-vlan lock. In other case, we will
> acquire per-vlan lock + the sx lock on every packet. The sx lock
> actually means mtx_lock+mtx_unlock, thus we will make 3 mutex
> operations instead of one.

OK, let's forget about sx locks.  However, a per-interface hash is
associated with a _physical_ interface, hence we must find the vlan
to lock using the hash first.  If there were a physical interface
lock held by its driver in each case, it could protect the hash as
well.  Can we rely on this?

-- 
Yar
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vlan patch

2005-10-20 Thread Gleb Smirnoff
On Fri, Oct 21, 2005 at 10:06:55AM +0400, Yar Tikhiy wrote:
Y> On Fri, Oct 21, 2005 at 09:30:33AM +0400, Gleb Smirnoff wrote:
Y> > On Thu, Oct 20, 2005 at 12:57:21PM +0400, Yar Tikhiy wrote:
Y> > Y> The hash code consists of literally a couple of #define's.  And the
Y> > Y> difference between ng_vlan(4) and vlan(4) is that each ng_vlan node
Y> > Y> gets its own instance of the hash table.  OTOH, in vlan(4) we need
Y> > Y> to decide if the hash table will be per parent interface or a single
Y> > Y> global instance.  In the latter case we could hash by a combination
Y> > Y> of the VLAN tag and parent's ifindex.  Perhaps this approach will
Y> > Y> yield more CPU cache hits during hash table lookups.  In addition,
Y> > Y> it will be thriftier in using memory.  Locking the global hash table
Y> > Y> should not be an issue as we can use an sx lock in this case for
Y> > Y> optimal read access.
Y> > 
Y> > The sx lock is slow. We'd better use per interface hash, and thus
Y> > get locking instantly, with per-vlan lock. In other case, we will
Y> > acquire per-vlan lock + the sx lock on every packet. The sx lock
Y> > actually means mtx_lock+mtx_unlock, thus we will make 3 mutex
Y> > operations instead of one.
Y> 
Y> OK, let's forget about sx locks.  However, a per-interface hash is
Y> associated with a _physical_ interface, hence we must find the vlan
Y> to lock using the hash first.  If there were a physical interface
Y> lock held by its driver in each case, it could protect the hash as
Y> well.  Can we rely on this?

Oops. When speaking about a per-vlan lock, I really meant a per-trunk lock.

Or are you going to implement a per-vlan lock? Is that going to be a benefit?
Since all packets on a trunk are serialized by the NIC driver, can there be any
benefit in creating a mutex per vlan interface rather than per vlan trunk?

-- 
Totus tuus, Glebius.
GLEBIUS-RIPN GLEB-RIPE
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"