Troubles with 'em' driver and UDP packets

2015-03-20 Thread Vaidas Damoševičius
Hello,

I have 2 boxes with FreeBSD 10.1-RELEASE/amd64 and "Intel(R) PRO/1000 Network 
Connection 7.4.2" NICs directly connected to each other. I noticed a strange 
problem: I'm losing small UDP packets under high load. I've tried to test it 
with iperf and got the following:

---

vd@v0s4:~ % iperf3 -u -c 1.2.3.4
Connecting to host 1.2.3.4, port 5201
[  4] local 1.2.3.3 port 64254 connected to 1.2.3.4 port 5201
[ ID] Interval   Transfer Bandwidth   Total Datagrams
[  4]   0.00-1.01   sec   120 KBytes   976 Kbits/sec  15  
[  4]   1.01-2.01   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   2.01-3.01   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   3.01-4.01   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   4.01-5.01   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   5.01-6.01   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   6.01-7.01   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   7.01-8.00   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   8.00-9.01   sec   128 KBytes  1.05 Mbits/sec  16  
[  4]   9.01-10.01  sec   128 KBytes  1.05 Mbits/sec  16  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bandwidth   JitterLost/Total 
Datagrams
[  4]   0.00-10.01  sec  1.24 MBytes  1.04 Mbits/sec  0.325 ms  0/159 (0%)  
[  4] Sent 159 datagrams

Any advice on how to solve it?

Thank you.


Re: Troubles with 'em' driver and UDP packets

2015-03-20 Thread Mehmet Erol Sanliturk
On Fri, Mar 20, 2015 at 2:23 AM, Vaidas Damoševičius  wrote:

> Hello,
>
> I have 2 boxes with FreeBSD 10.1-RELEASE/amd64 and "Intel(R) PRO/1000
> Network Connection 7.4.2" NICs directly connected to each other. I noticed
> a strange problem: I'm losing small UDP packets under high load. I've tried
> to test it with iperf and got the following:
>
> ---
>
> vd@v0s4:~ % iperf3 -u -c 1.2.3.4
> Connecting to host 1.2.3.4, port 5201
> [  4] local 1.2.3.3 port 64254 connected to 1.2.3.4 port 5201
> [ ID] Interval   Transfer Bandwidth   Total Datagrams
> [  4]   0.00-1.01   sec   120 KBytes   976 Kbits/sec  15
> [  4]   1.01-2.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   2.01-3.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   3.01-4.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   4.01-5.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   5.01-6.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   6.01-7.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   7.01-8.00   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   8.00-9.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   9.01-10.01  sec   128 KBytes  1.05 Mbits/sec  16
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bandwidth   JitterLost/Total
> Datagrams
> [  4]   0.00-10.01  sec  1.24 MBytes  1.04 Mbits/sec  0.325 ms  0/159 (0%)
> [  4] Sent 159 datagrams
>
> Any advice on how to solve it?
>
> Thank you.


I think you need to use a Gigabit crossover cable (Cat 5e or Cat 6).
A crossover cable is required when the connection is directly from computer to computer.

Just a reminder.

Thank you very much.

Mehmet Erol Sanliturk

Re: Troubles with 'em' driver and UDP packets

2015-03-20 Thread Vaidas Damoševičius
It's not a cabling problem :)

Another example, with -b and -i:

vd@v0s4:~ % iperf3 -u -c 1.2.3.4 -i4 -b1000m -P1
Connecting to host 1.2.3.4, port 5201
[  4] local 1.2.3.3 port 10672 connected to 1.2.3.4 port 5201
[ ID] Interval   Transfer Bandwidth   Total Datagrams
[  4]   0.00-4.00   sec   446 MBytes   935 Mbits/sec  1761605  
[  4]   4.00-8.00   sec   457 MBytes   958 Mbits/sec  1809551  
[  4]   8.00-10.00  sec   228 MBytes   958 Mbits/sec  900740  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bandwidth   JitterLost/Total 
Datagrams
[  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec  770.668 ms  0/35 (0%)  
[  4] Sent 35 datagrams

The result is totally different.

> On 20 Mar 2015, at 11:29, Mehmet Erol Sanliturk  
> wrote:
> 
> 
> 
> On Fri, Mar 20, 2015 at 2:23 AM, Vaidas Damoševičius  wrote:
> Hello,
> 
> I have 2 boxes with FreeBSD 10.1-RELEASE/amd64 and "Intel(R) PRO/1000 Network 
> Connection 7.4.2" NICs directly connected to each other. I noticed a strange 
> problem: I'm losing small UDP packets under high load. I've tried to test 
> it with iperf and got the following:
> 
> ---
> 
> vd@v0s4:~ % iperf3 -u -c 1.2.3.4
> Connecting to host 1.2.3.4, port 5201
> [  4] local 1.2.3.3 port 64254 connected to 1.2.3.4 port 5201
> [ ID] Interval   Transfer Bandwidth   Total Datagrams
> [  4]   0.00-1.01   sec   120 KBytes   976 Kbits/sec  15
> [  4]   1.01-2.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   2.01-3.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   3.01-4.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   4.01-5.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   5.01-6.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   6.01-7.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   7.01-8.00   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   8.00-9.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   9.01-10.01  sec   128 KBytes  1.05 Mbits/sec  16
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bandwidth   JitterLost/Total 
> Datagrams
> [  4]   0.00-10.01  sec  1.24 MBytes  1.04 Mbits/sec  0.325 ms  0/159 (0%)
> [  4] Sent 159 datagrams
> 
> Any advice on how to solve it?
> 
> Thank you.
> 
> 
> 
> 
> I think you need to use a Gigabit crossover cable (Cat 5e or Cat 6).
> A crossover cable is required when the connection is directly from computer to computer.
> 
> Just a reminder.
> 
> 
> 
> Thank you very much.
> 
> Mehmet Erol Sanliturk
> 
> 
> 
> 


Re: Troubles with 'em' driver and UDP packets

2015-03-20 Thread Olivier Cochard-Labbé
On Fri, Mar 20, 2015 at 10:23 AM, Vaidas Damoševičius  wrote:

> Hello,
>

Hi,


>
> I have 2 boxes with FreeBSD 10.1-RELEASE/amd64 and "Intel(R) PRO/1000
> Network Connection 7.4.2" NICs directly connected to each other. I noticed
> a strange problem: I'm losing small UDP packets under high load. I've tried
> to test it with iperf and got the following:
>
> ---
>
> vd@v0s4:~ % iperf3 -u -c 1.2.3.4
> Connecting to host 1.2.3.4, port 5201
> [  4] local 1.2.3.3 port 64254 connected to 1.2.3.4 port 5201
> [ ID] Interval   Transfer Bandwidth   Total Datagrams
> [  4]   0.00-1.01   sec   120 KBytes   976 Kbits/sec  15
> [  4]   1.01-2.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   2.01-3.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   3.01-4.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   4.01-5.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   5.01-6.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   6.01-7.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   7.01-8.00   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   8.00-9.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   9.01-10.01  sec   128 KBytes  1.05 Mbits/sec  16
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bandwidth   JitterLost/Total
> Datagrams
> [  4]   0.00-10.01  sec  1.24 MBytes  1.04 Mbits/sec  0.325 ms  0/159 (0%)
> [  4] Sent 159 datagrams
>
>

The result displays a Lost/Total ratio of 0/159: where do you see missing
UDP packets?
By default iperf sends only 1 Mbit/sec in UDP mode: I don't see any problem
in these stats.

Regards,

Olivier


Re: Troubles with 'em' driver and UDP packets

2015-03-20 Thread Vaidas Damoševičius
If I start freeradius under high load, packets don't reach the destination: 
on the sender side I see the packets going out (with tcpdump), while on the 
receiver side tcpdump sees only part of these packets - some of them are missing.

> On 20 Mar 2015, at 11:49, Olivier Cochard-Labbé  wrote:
> 
> On Fri, Mar 20, 2015 at 10:23 AM, Vaidas Damoševičius  wrote:
> Hello,
> 
> Hi,
> 
> I have 2 boxes with FreeBSD 10.1-RELEASE/amd64 and "Intel(R) PRO/1000 Network 
> Connection 7.4.2" NICs directly connected to each other. I noticed a strange 
> problem: I'm losing small UDP packets under high load. I've tried to test 
> it with iperf and got the following:
> 
> ---
> 
> vd@v0s4:~ % iperf3 -u -c 1.2.3.4
> Connecting to host 1.2.3.4, port 5201
> [  4] local 1.2.3.3 port 64254 connected to 1.2.3.4 port 5201
> [ ID] Interval   Transfer Bandwidth   Total Datagrams
> [  4]   0.00-1.01   sec   120 KBytes   976 Kbits/sec  15
> [  4]   1.01-2.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   2.01-3.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   3.01-4.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   4.01-5.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   5.01-6.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   6.01-7.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   7.01-8.00   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   8.00-9.01   sec   128 KBytes  1.05 Mbits/sec  16
> [  4]   9.01-10.01  sec   128 KBytes  1.05 Mbits/sec  16
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bandwidth   JitterLost/Total 
> Datagrams
> [  4]   0.00-10.01  sec  1.24 MBytes  1.04 Mbits/sec  0.325 ms  0/159 (0%)
> [  4] Sent 159 datagrams
> 
> 
> 
> The result displays a Lost/Total ratio of 0/159: where do you see missing UDP 
> packets?
> By default iperf sends only 1 Mbit/sec in UDP mode: I don't see any problem 
> in these stats.
> 
> Regards,
> 
> Olivier
> 
> 
>


Re: Fragment questions

2015-03-20 Thread Emeric POUPON
Hello,

Yes indeed, it has already been fixed!
However, the second point still seems to be there...

Regards,

Emeric

- Original Message -
From: "Hans Petter Selasky"
To: "Emeric POUPON" , "freebsd-net"
Sent: Thursday, 19 March 2015 13:54:33
Subject: Re: Fragment questions

On 03/19/15 12:38, Emeric POUPON wrote:
> Hello,
>
> I noticed two questionable things in the fragmentation code:
> - in ip_fragment, we do not copy the flowid from the original mbuf to the 
> fragmented mbuf. Therefore we may output very desynchronized fragments (the first 
> fragment emitted far later than the second fragment, etc.)
> - in the ip_newid macro, we do "htons(V_ip_id++)" if we do not use a 
> randomized id. On multi-core systems, we may emit successive packets with the 
> same id.
>
> Both problems combined lead to bad packet reassembly on the remote host.
>
> What do you think?
>

Hi,

I think this issue is already fixed:

https://svnweb.freebsd.org/base/head/sys/netinet/ip_output.c?revision=278103&view=markup

--HPS


Re: Fragment questions

2015-03-20 Thread Hans Petter Selasky

On 03/20/15 14:31, Emeric POUPON wrote:

Hello,

Yes indeed, it has already been fixed!
However, the second point still seems to be there...

Regards,

Emeric



Can you suggest a patch for the second issue?

--HPS



Another fragment question / patch

2015-03-20 Thread Karim Fodil-Lemelin

Hi,

While reading through a previous comment on this list about fragments 
I've noticed that mbuf tags aren't being copied from the leading 
fragment (header) to the subsequent fragment packets. In other words, 
one would expect all fragments of a packet to carry the same 
tags that were set on the original packet. I have built a simple test 
where I use ipfw with ALTQ and send a large packet (bigger than the MTU) off 
that BSD machine. I have observed that the leading fragment (m0) packet 
goes through the right class, while the next fragments hit 
the default class for unclassified packets.


Here is a patch that makes things work as expected (all fragments carry 
the ALTQ tag):


diff --git a/freebsd/sys/netinet/ip_output.c b/freebsd/sys/netinet/ip_output.c
index d650949..7d8f041 100644
--- a/freebsd/sys/netinet/ip_output.c
+++ b/freebsd/sys/netinet/ip_output.c
@@ -1184,7 +1184,10 @@ smart_frag_failure:
 			ipstat.ips_odropped++;
 			goto done;
 		}
-		m->m_flags |= (m0->m_flags & M_MCAST) | M_FRAG;
+
+		m->m_flags |= (m0->m_flags & M_COPYFLAGS) | M_FRAG;
+		m_tag_copy_chain(m, m0, M_NOWAIT);
+
 		/*
 		 * In the first mbuf, leave room for the link header, then
 		 * copy the original IP header including options. The payload

diff --git a/freebsd/sys/sys/mbuf.h b/freebsd/sys/sys/mbuf.h
index 2efff38..6ad8439 100644
--- a/freebsd/sys/sys/mbuf.h


I hope this helps,

Karim.




Re: Another fragment question / patch

2015-03-20 Thread Hans Petter Selasky

On 03/20/15 16:18, Karim Fodil-Lemelin wrote:

Hi,

While reading through a previous comment on this list about fragments
I've noticed that mbuf tags aren't being copied from the leading
fragment (header) to the subsequent fragment packets. In other words,
one would expect all fragments of a packet to carry the same
tags that were set on the original packet. I have built a simple test
where I use ipfw with ALTQ and send a large packet (bigger than the MTU) off
that BSD machine. I have observed that the leading fragment (m0) packet
goes through the right class, while the next fragments hit
the default class for unclassified packets.

Here is a patch that makes things work as expected (all fragments carry
the ALTQ tag):

diff --git a/freebsd/sys/netinet/ip_output.c b/freebsd/sys/netinet/ip_output.c
index d650949..7d8f041 100644
--- a/freebsd/sys/netinet/ip_output.c
+++ b/freebsd/sys/netinet/ip_output.c
@@ -1184,7 +1184,10 @@ smart_frag_failure:
 ipstat.ips_odropped++;
 goto done;
 }
-   m->m_flags |= (m0->m_flags & M_MCAST) | M_FRAG;
+
+   m->m_flags |= (m0->m_flags & M_COPYFLAGS) | M_FRAG;
+   m_tag_copy_chain(m, m0, M_NOWAIT);
+
 /*
  * In the first mbuf, leave room for the link header, then
  * copy the original IP header including options. The payload
diff --git a/freebsd/sys/sys/mbuf.h b/freebsd/sys/sys/mbuf.h
index 2efff38..6ad8439 100644
--- a/freebsd/sys/sys/mbuf.h



Hi,

I see your point about copying the tags. I'm not sure, however, that 
M_COPYFLAGS is correct, because it also copies M_RDONLY, which is not 
relevant for this case. Can you explain which flags need copying in 
addition to M_MCAST? Maybe we need to define these flags separately.


Thank you!

--HPS



Re: Fragment questions

2015-03-20 Thread Hans Petter Selasky

On 03/20/15 14:31, Emeric POUPON wrote:

- in the ip_newid macro, we do "htons(V_ip_id++)" if we do not use a randomized 
id.

> On multi-core systems, we may emit successive packets with the same id.

Will using a mutex or an atomic macro fix this issue when incrementing 
the V_ip_id?


--HPS


Re: Fragment questions

2015-03-20 Thread Adrian Chadd
On 20 March 2015 at 10:58, Hans Petter Selasky  wrote:
> On 03/20/15 14:31, Emeric POUPON wrote:
>>
>> - in the ip_newid macro, we do "htons(V_ip_id++)" if we do not use a
>> randomized id.
>
>> On multi-core systems, we may emit successive packets with the same id.
>
> Will using a mutex or an atomic macro fix this issue when incrementing the
> V_ip_id ?

It will, but it'll ping-pong between multiple cores and slow things
down at high pps.


-a


em resource allocation fails on SunFire X4500

2015-03-20 Thread rondzierwa
I am using 10.1-RELEASE on a SunFire X4500 (thumper). It has 4 em devices, of 
which only the first two work due to a resource failure: 

em0:  port 0xcc00-0xcc3f mem 
0xfdae-0xfdaf irq 52 at device 1.0 on pci7 
em0: Ethernet address: 00:14:4f:21:09:94 
em1:  port 0xc800-0xc83f mem 
0xfdac-0xfdad irq 53 at device 1.1 on pci7 
em1: Ethernet address: 00:14:4f:21:09:95 
em2:  mem 
0xfdbe-0xfdbf irq 61 at device 1.0 on pci8 
em2: 0x40 bytes of rid 0x20 res 4 failed (0, 0x). 
em2: Unable to allocate bus resource: ioport 
em2: Allocation of PCI resources failed 
em2:  mem 
0xfdbc-0xfdbd irq 62 at device 1.1 on pci8 
em2: 0x40 bytes of rid 0x20 res 4 failed (0, 0x). 
em2: Unable to allocate bus resource: ioport 
em2: Allocation of PCI resources failed 

A previous bug, #196501, was closed by setting 'hint.agp.0.disabled=1' in 
loader.conf, but this had no effect on the X4500. 

I have attached the pciconf output. 

Thanks in advance for any help or suggestions! 

ron. 



pciconf.out
Description: Binary data

Re: Fragment questions

2015-03-20 Thread Hans Petter Selasky

On 03/20/15 19:02, Adrian Chadd wrote:

On 20 March 2015 at 10:58, Hans Petter Selasky  wrote:

On 03/20/15 14:31, Emeric POUPON wrote:


- in the ip_newid macro, we do "htons(V_ip_id++)" if we do not use a
randomized id.



On multi-core systems, we may emit successive packets with the same id.


Will using a mutex or an atomic macro fix this issue when incrementing the
V_ip_id?


It will, but it'll ping-pong between multiple cores and slow things
down at high pps.



Hi,

Maybe we can have the V_ip_id per CPU and use the lower 8 bits as the 
CPU core number?
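
A rough sketch of that idea, purely for illustration (the per-CPU array, its
name, and the use of curcpu here are assumptions on my side, not existing code):

#include <sys/param.h>		/* MAXCPU */
#include <sys/pcpu.h>		/* curcpu */
#include <netinet/in.h>		/* htons() */

/*
 * Illustrative sketch only: one 8-bit counter per CPU, with the CPU
 * number encoded in the low 8 bits of the ID, so two cores can never
 * hand out the same value at the same time.  In a real patch this
 * would want to be per-VNET (and probably DPCPU) storage.
 */
static uint8_t ip_id_pcpu[MAXCPU];

static __inline uint16_t
ip_newid_percpu(void)
{
	u_int cpu = curcpu;

	return (htons((uint16_t)(ip_id_pcpu[cpu]++ << 8) | (cpu & 0xff)));
}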


OK?

--HPS



Re: Fragment questions

2015-03-20 Thread Adrian Chadd
On 20 March 2015 at 11:56, Hans Petter Selasky  wrote:
> On 03/20/15 19:02, Adrian Chadd wrote:
>>
>> On 20 March 2015 at 10:58, Hans Petter Selasky  wrote:
>>>
>>> On 03/20/15 14:31, Emeric POUPON wrote:


 - in the ip_newid macro, we do "htons(V_ip_id++)" if we do not use a
 randomized id.
>>>
>>>
 On multi-core systems, we may emit successive packets with the same id.
>>>
>>>
>>> Will using a mutex or an atomic macro fix this issue when incrementing
>>> the
>>> V_ip_id ?
>>
>>
>> It will, but it'll ping-pong between multiple cores and slow things
>> down at high pps.
>>
>
> Hi,
>
> Maybe we can have the V_ip_id per CPU and use the lower 8 bits as the CPU
> core number?

Hm, someone with more cycles to spend on analysing the repercussions
from this should investigate it.

I think in the short term using an atomic is fine, as it's no worse
than what is currently there. But as the stack gets further unlocked and
we push higher pps, we may need to fix it.
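
For reference, a minimal sketch of that short-term atomic variant; it assumes
V_ip_id is (or is changed to be) a u_int so that atomic_fetchadd_int() applies,
which is an assumption on my side rather than how the tree looks today:

#include <machine/atomic.h>	/* atomic_fetchadd_int() */
#include <netinet/in.h>		/* htons() */
#include <netinet/ip_var.h>	/* where V_ip_id / ip_newid() live (assumption) */

/*
 * atomic_fetchadd_int() returns the value *before* the add, so two
 * concurrent callers always get distinct IDs (modulo 16-bit wrap).
 */
static __inline uint16_t
ip_newid_atomic(void)
{

	return (htons(atomic_fetchadd_int(&V_ip_id, 1) & 0xffff));
}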



-adrian


Re: Another fragment question / patch

2015-03-20 Thread Karim Fodil-Lemelin

On 2015-03-20 1:57 PM, Hans Petter Selasky wrote:

On 03/20/15 16:18, Karim Fodil-Lemelin wrote:

Hi,

While reading through a previous comment on this list about fragments
I've noticed that mbuf tags aren't being copied from the leading
fragment (header) to the subsequent fragment packets. In other words,
one would expect all fragments of a packet to carry the same
tags that were set on the original packet. I have built a simple test
where I use ipfw with ALTQ and send a large packet (bigger than the MTU) off
that BSD machine. I have observed that the leading fragment (m0) packet
goes through the right class, while the next fragments hit
the default class for unclassified packets.

Here is a patch that makes things work as expected (all fragments carry
the ALTQ tag):

diff --git a/freebsd/sys/netinet/ip_output.c b/freebsd/sys/netinet/ip_output.c
index d650949..7d8f041 100644
--- a/freebsd/sys/netinet/ip_output.c
+++ b/freebsd/sys/netinet/ip_output.c
@@ -1184,7 +1184,10 @@ smart_frag_failure:
 ipstat.ips_odropped++;
 goto done;
 }
-   m->m_flags |= (m0->m_flags & M_MCAST) | M_FRAG;
+
+   m->m_flags |= (m0->m_flags & M_COPYFLAGS) | M_FRAG;
+   m_tag_copy_chain(m, m0, M_NOWAIT);
+
 /*
  * In the first mbuf, leave room for the link header, then
  * copy the original IP header including options. The payload
diff --git a/freebsd/sys/sys/mbuf.h b/freebsd/sys/sys/mbuf.h
index 2efff38..6ad8439 100644
--- a/freebsd/sys/sys/mbuf.h



Hi,

I see your point about copying the tags. I'm not sure, however, that 
M_COPYFLAGS is correct, because it also copies M_RDONLY, which is not 
relevant for this case. Can you explain which flags need copying in 
addition to M_MCAST? Maybe we need to define these flags separately.


Thank you!

--HPS 

Hi,

Arguably, M_RDONLY is added anyway when m_copy() is called a bit later in 
that function. m_copym() does a shallow copy (through a call to 
mb_dupcl) and will set the RDONLY flag when doing so. So the fact that it was 
copied over via M_COPYFLAGS shouldn't be a problem in terms of 
functionality. Similar logic applies to M_VLANTAG, since it should 
never be set in ip_output() (that would be a severe layer violation). I guess 
M_COPYFLAGS is more or less safe, but not necessarily correct.


In terms of appropriate behavior (what's the real purpose of 
M_COPYFLAGS?), my initial patch is debatable. To answer your question, 
I would consider copying the remaining flags:


M_PKTHDR  => already in there through the m_gethdr() call
M_BCAST => no need to copy
M_MCAST  => already in there in ip_fragment()
M_PROTOFLAGS
M_SKIP_FIREWALL => for layer 2 fire-walling?

So something like?

-   m->m_flags |= (m0->m_flags & M_MCAST) | M_FRAG;
+   m->m_flags |= (m0->m_flags & (M_MCAST | M_PROTOFLAGS)) | M_FRAG;
+   m_tag_copy_chain(m, m0, M_NOWAIT);

Cheers!

Karim.



Re: Troubles with 'em' driver and UDP packets

2015-03-20 Thread Willem Jan Withagen
On 20/03/2015 10:42, Vaidas Damoševičius wrote:
> It's not a cabling problem :)
> 
> Another example, with -b and -i:
> 
> vd@v0s4:~ % iperf3 -u -c 1.2.3.4 -i4 -b1000m -P1
> Connecting to host 1.2.3.4, port 5201
> [  4] local 1.2.3.3 port 10672 connected to 1.2.3.4 port 5201
> [ ID] Interval   Transfer Bandwidth   Total Datagrams
> [  4]   0.00-4.00   sec   446 MBytes   935 Mbits/sec  1761605  
> [  4]   4.00-8.00   sec   457 MBytes   958 Mbits/sec  1809551  
> [  4]   8.00-10.00  sec   228 MBytes   958 Mbits/sec  900740  
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bandwidth   JitterLost/Total 
> Datagrams
> [  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec  770.668 ms  0/35 (0%)  
> [  4] Sent 35 datagrams
> 
> The result is totally different.
> 

>> On 20 Mar 2015, at 11:29, Mehmet Erol Sanliturk  
>> wrote:
>> I think you need to use a Gigabit crossover cable (Cat 5e or Cat 6).
>> A crossover cable is required when the connection is directly from computer to computer.
>>
>> Just a reminder.

To the best of my knowledge:
the standard for 1Gbit requires the media interface to do the crossover
automagically, thus removing the requirement for crossover cables.
And that also holds for computer <> computer connections.

--WjW



Netmap and the host stack

2015-03-20 Thread Clive Philbrick
I'm confused by the documentation I've found on host stack access using netmap. 
The documentation says:
"Packets generated by the host stack are extracted from the mbufs and stored in 
the slots of an input ring, similar to those used for traffic coming from the 
network. Packets destined to the host stack are queued by the netmap client 
into an output netmap ring, and from there encapsulated into mbufs and passed 
to the host stack as if they were coming from the corresponding netmap-enabled 
NIC."

Does this refer only to packets generated internally by the host stack, or can 
they be triggered by user-level code? 
Suppose I put an interface into netmap mode (e.g. eth1) with the NIOCREGIF flags 
indicating NETMAP_SW_RING. Is there any way to then open a socket, bind it to 
the IP address of that interface, and then try to issue a connect via that 
interface? Do I instead have to craft the entire connect packet (i.e. a TCP SYN) 
and send it via a netmap ring? 
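
To make the question concrete, here is roughly what I have in mind for the
host-ring side - just a sketch, "eth1" is only a placeholder and I have not
verified this against a real setup:

/*
 * Attach to the host (SW) rings of eth1 via the "^" suffix and print
 * whatever the host stack tries to transmit on that interface.
 */
#include <poll.h>
#include <stdio.h>

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

int
main(void)
{
	struct nm_desc *d;
	struct nm_pkthdr h;
	const unsigned char *buf;
	struct pollfd pfd;

	d = nm_open("netmap:eth1^", NULL, 0, NULL);	/* host rings only */
	if (d == NULL) {
		perror("nm_open");
		return (1);
	}

	pfd.fd = NETMAP_FD(d);
	pfd.events = POLLIN;

	for (;;) {
		if (poll(&pfd, 1, 1000) <= 0)
			continue;
		/* Packets the host stack queued for eth1 show up here. */
		while ((buf = nm_nextpkt(d, &h)) != NULL)
			printf("host stack emitted %u bytes (first byte 0x%02x)\n",
			    (unsigned)h.len, (unsigned)buf[0]);
	}

	/* not reached */
	nm_close(d);
	return (0);
}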



Re: Troubles with 'em' driver and UDP packets

2015-03-20 Thread Mehmet Erol Sanliturk
On Fri, Mar 20, 2015 at 3:50 PM, Willem Jan Withagen wrote:

> On 20/03/2015 10:42, Vaidas Damoševičius wrote:
> > It's not a cabling problem :)
> >
> > Another example, with -b and -i:
> >
> > vd@v0s4:~ % iperf3 -u -c 1.2.3.4 -i4 -b1000m -P1
> > Connecting to host 1.2.3.4, port 5201
> > [  4] local 1.2.3.3 port 10672 connected to 1.2.3.4 port 5201
> > [ ID] Interval   Transfer Bandwidth   Total Datagrams
> > [  4]   0.00-4.00   sec   446 MBytes   935 Mbits/sec  1761605
> > [  4]   4.00-8.00   sec   457 MBytes   958 Mbits/sec  1809551
> > [  4]   8.00-10.00  sec   228 MBytes   958 Mbits/sec  900740
> > - - - - - - - - - - - - - - - - - - - - - - - - -
> > [ ID] Interval   Transfer Bandwidth   Jitter
> Lost/Total Datagrams
> > [  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec  770.668 ms  0/35
> (0%)
> > [  4] Sent 35 datagrams
> >
> > The result is totally different.
> >
>
> >> On 20 Mar 2015, at 11:29, Mehmet Erol Sanliturk <
> m.e.sanlit...@gmail.com> wrote:
> >> I think you need to use a Gigabit crossover cable (Cat 5e or Cat 6).
> >> A crossover cable is required when the connection is directly from computer to computer.
> >>
> >> Just a reminder.
>
> To the best of my knowledge:
> the standard for 1Gbit requires the media interface to do the crossover
> automagically, thus removing the requirement for crossover cables.
> And that also holds for computer <> computer connections.
>
> --WjW
>
>
>


You are right.

In case of failures, this point may still be checked to see whether it is
affecting the communication or not.


http://en.wikipedia.org/wiki/Crossover_cable
http://en.wikipedia.org/wiki/Ethernet_crossover_cable
http://en.wikipedia.org/wiki/Medium_Dependent_Interface#Auto_MDI-X


In some computer shops, crossover cables and connectors are still sold as
separate products.
If there were no need for them, no one would produce, sell, or buy two
different products.