Increasing Rx/Tx queues for i40evf driver

2023-09-25 Thread Fred
Hi all,

There was an older thread on this
(http://mails.dpdk.org/archives/users/2018-May/003123.html), and the response
was to recompile the host PF driver.

Is there another method to do this now? Is a recompilation of the host
driver still needed or can we configure it some other way? The guest is
using the iavf PMD with DPDK 21.11, while the host is running the i40e
driver on an XL710 card.

Thanks.


[dpdk-dev] Latest git version (Compiling Bugs)

2014-03-25 Thread Fred Pedrisa
Hi, Guys.



I am using the latest version from git, and there are problems compiling the
sources under FreeBSD 9.2. Is this expected?



Sincerely,



Fred Pedrisa



[dpdk-dev] RES: Latest git version (Compiling Bugs)

2014-03-25 Thread Fred Pedrisa
I've switched back to 1.6.0-0r0 and it is working fine; the problem seems to
happen only with the new version.



From: Antonio Neto [mailto:netoftc at hotmail.com] 
Sent: Tuesday, March 25, 2014 02:11
To: Fr3DBr -
Subject: RE: [dpdk-dev] Latest git version (Compiling Bugs)



Hello, I'm having the same problem too, using FreeBSD 9.2



> From: fredhps10 at hotmail.com
> To: dev at dpdk.org
> Date: Tue, 25 Mar 2014 02:10:27 -0300
> Subject: [dpdk-dev] Latest git version (Compiling Bugs)
> 
> Hi, Guys.
> 
> 
> 
> I am using the latest version from git, and there are problems compiling the
> sources under FreeBSD 9.2. Is this expected?
> 
> 
> 
> Sincerely,
> 
> 
> 
> Fred Pedrisa
> 



[dpdk-dev] Choosing NIC Ports to be used

2014-03-25 Thread Fred Pedrisa
Hi,



I've added : hw.nic_uio.bdfs="3:0:0,3:0:1,4:0:0,4:0:1" to my
/boot/loader.conf in FreeBSD



However, once nic_uio is loaded, it takes all the ports for itself. How can
I solve this?



[dpdk-dev] FreeBSD and NICs

2014-03-25 Thread Fred Pedrisa
Hi,



I have these settings in /boot/loader.conf



hw.contigmem.num_buffers=2

hw.contigmem.buffer_size=1073741824

hw.nic_uio.bdfs="3:0:0,3:0:1,4:0:0,4:0:1"

contigmem_load="YES"

nic_uio_load="YES"



However, DPDK is taking all of the available NIC ports (8) in my system,
when I wanted it to use just 4 of them.



nic_uio0:  port 0xd020-0xd03f mem
0xf7e2-0xf7e3,0xf7c0-0xf7df,0xf7e44000-0xf7e47fff irq 18 at
device 0.0 on pci3

nic_uio1:  port 0xd000-0xd01f mem
0xf7e0-0xf7e1,0xf7a0-0xf7bf,0xf7e4-0xf7e43fff irq 19 at
device 0.1 on pci3

nic_uio2:  port 0xc020-0xc03f mem
0xf782-0xf783,0xf760-0xf77f,0xf7844000-0xf7847fff irq 16 at
device 0.0 on pci4

nic_uio3:  port 0xc000-0xc01f mem
0xf780-0xf781,0xf740-0xf75f,0xf784-0xf7843fff irq 17 at
device 0.1 on pci4

nic_uio4:  port 0xb020-0xb03f mem
0xf722-0xf723,0xf700-0xf71f,0xf7244000-0xf7247fff irq 18 at
device 0.0 on pci7

nic_uio5:  port 0xb000-0xb01f mem
0xf720-0xf721,0xf6e0-0xf6ff,0xf724-0xf7243fff irq 19 at
device 0.1 on pci7

nic_uio6:  port 0xa020-0xa03f mem
0xf6c2-0xf6c3,0xf6a0-0xf6bf,0xf6c44000-0xf6c47fff irq 16 at
device 0.0 on pci8

nic_uio7:  port 0xa000-0xa01f mem
0xf6c0-0xf6c1,0xf680-0xf69f,0xf6c4-0xf6c43fff irq 17 at
device 0.1 on pci8

dev.nic_uio.0.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.0.%driver: nic_uio

dev.nic_uio.0.%location: slot=0 function=0

dev.nic_uio.0.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.0.%parent: pci3

dev.nic_uio.1.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.1.%driver: nic_uio

dev.nic_uio.1.%location: slot=0 function=1

dev.nic_uio.1.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.1.%parent: pci3

dev.nic_uio.2.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.2.%driver: nic_uio

dev.nic_uio.2.%location: slot=0 function=0

dev.nic_uio.2.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.2.%parent: pci4

dev.nic_uio.3.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.3.%driver: nic_uio

dev.nic_uio.3.%location: slot=0 function=1

dev.nic_uio.3.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.3.%parent: pci4

dev.nic_uio.4.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.4.%driver: nic_uio

dev.nic_uio.4.%location: slot=0 function=0

dev.nic_uio.4.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.4.%parent: pci7

dev.nic_uio.5.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.5.%driver: nic_uio

dev.nic_uio.5.%location: slot=0 function=1

dev.nic_uio.5.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.5.%parent: pci7

dev.nic_uio.6.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.6.%driver: nic_uio

dev.nic_uio.6.%location: slot=0 function=0

dev.nic_uio.6.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.6.%parent: pci8

dev.nic_uio.7.%desc: Intel(R) DPDK PCI Device

dev.nic_uio.7.%driver: nic_uio

dev.nic_uio.7.%location: slot=0 function=1

dev.nic_uio.7.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086
subdevice=0x145a class=0x02

dev.nic_uio.7.%parent: pci8



What am I doing wrong in this case?



I noticed that if we run sysctl -a, the variable hw.nic_uio.bdfs is not
listed there. Is this a red flag?



Sincerely,



Fred



[dpdk-dev] hw.nic_uio.bdfs

2014-03-26 Thread Fred Pedrisa
Hi, guys.



This variable is not working as intended for FreeBSD :(



It does not detach nic_uio from the wanted ports :/





[dpdk-dev] RES: hw.nic_uio.bdfs

2014-03-26 Thread Fred Pedrisa
Hello,



You did not understand the purpose of that parameter; it is meant to "avoid"
having nic_uio claim the listed NICs, so they remain free to be used by the
system :)



Right now the code to handle it is wrong and I am trying to fix it myself.



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp] 
Sent: Wednesday, March 26, 2014 03:16
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



Hi,

I tried with Intel version 1.6.0 and FreeBSD 9.2-RELEASE on VMware Player.

Loading nic_uio by hand with kldload works fine, but kldunload nic_uio only
detaches the uio driver; it doesn't re-attach the kernel driver.

[oki@ ~]$ cat /boot/loader.conf
##
###  User settings  ##
##
hw.contigmem.num_buffers=64
hw.contigmem.buffer_size=2097152
hw.nic_uio.bdfs="2:5:0,2:6:0"
contigmem_load="YES"
#nic_uio_load="YES"
[oki@ ~]$ pciconf -l | egrep '(em|uio)'
em0 at pci0:2:1:0: class=0x02 card=0x075015ad chip=0x100f8086 rev=0x01
hdr=0x00
em1 at pci0:2:5:0: class=0x02 card=0x075015ad chip=0x100f8086 rev=0x01
hdr=0x00
em2 at pci0:2:6:0: class=0x02 card=0x075015ad chip=0x100f8086 rev=0x01
hdr=0x00
[oki@ ~]$ kenv hw.nic_uio.bdfs
2:5:0,2:6:0
[oki@ ~]$ sudo kldload nic_uio
Password:
[oki@ ~]$ pciconf -l | egrep '(em|uio)'
em0 at pci0:2:1:0: class=0x02 card=0x075015ad chip=0x100f8086 rev=0x01
hdr=0x00
nic_uio0 at pci0:2:5:0:class=0x02 card=0x075015ad chip=0x100f8086
rev=0x01 hdr=0x00
nic_uio1 at pci0:2:6:0:class=0x02 card=0x075015ad chip=0x100f8086
rev=0x01 hdr=0x00
[oki@ ~]$ sudo kldunload nic_uio
[oki@ ~]$ pciconf -l | egrep '(em|uio)'
em0 at pci0:2:1:0: class=0x02 card=0x075015ad chip=0x100f8086 rev=0x01
hdr=0x00
[oki@ ~]$




[dpdk-dev] RES: hw.nic_uio.bdfs

2014-03-26 Thread Fred Pedrisa
Hello,



By default nic_uio takes all the NICs for itself.

So in theory, you need an option to reserve some NIC ports for your system,
without DPDK taking them for itself.




From: Masaru Oki [mailto:m-oki at stratosphere.co.jp] 
Sent: Wednesday, March 26, 2014 03:43
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



avoid??? Do you want hw.nic_uio.avoid_bdfs?

The nic_uio behavior, as I guessed:
1. Detach the kernel driver from devices specified by hw.nic_uio.bdfs.

2. Attach the nic_uio driver to all NICs not yet attached.

But step 2 is not correct, I think.






[dpdk-dev] RES: RES: hw.nic_uio.bdfs

2014-03-26 Thread Fred Pedrisa
Hello,

Anyway, here is my proposal for this code:

static void
nic_uio_load(void)
{
    char *remaining;
    long bus = 0, device = 0, function = 0;
    char bdf_str[1024];
    int i, j, len;
    device_t dev;

    memset(bdf_str, 0, sizeof(bdf_str));
    TUNABLE_STR_FETCH("hw.nic_uio.bdfs", bdf_str, sizeof(bdf_str));
    remaining = bdf_str;
    len = strlen(remaining);

    /* Each entry is assumed to be a single-digit "b:d:f" triple. */
    for (i = 0; remaining && len >= 5 && i < len; i += 6) {
        if (remaining[i + 1] == ':' && remaining[i + 3] == ':') {
            bus = strtol(&remaining[i + 0], NULL, 0);
            device = strtol(&remaining[i + 2], NULL, 0);
            function = strtol(&remaining[i + 4], NULL, 0);

            dev = pci_find_bsf(bus, device, function);
            if (dev != NULL) {
                for (j = 0; j < NUM_DEVICES; j++) {
                    if (pci_get_vendor(dev) == devices[j].vend &&
                        pci_get_device(dev) == devices[j].dev)
                        device_detach(dev);
                }
            }
        }
    }
}
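One caveat with the loop above: its fixed six-character stride only handles single-digit bus/device/function values, so an entry like 10:0:1 would be misparsed. Below is a standalone sketch of a more tolerant parser using strtol's end pointer; the struct and function names here are hypothetical, for illustration only, and are not part of nic_uio.c:

```c
#include <stdlib.h>

struct bdf {
    long bus, device, function;
};

/* Parse one "bus:device:function" triple from s into *out.
 * Returns a pointer just past the triple (skipping a trailing
 * comma, if present), or NULL on malformed input. */
static const char *
parse_bdf(const char *s, struct bdf *out)
{
    char *end;

    out->bus = strtol(s, &end, 0);
    if (end == s || *end != ':')
        return NULL;
    s = end + 1;
    out->device = strtol(s, &end, 0);
    if (end == s || *end != ':')
        return NULL;
    s = end + 1;
    out->function = strtol(s, &end, 0);
    if (end == s)
        return NULL;
    return (*end == ',') ? end + 1 : end;
}
```

With something like this, the load loop would call parse_bdf repeatedly until it returns NULL or reaches the end of the string, instead of assuming a fixed stride.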

I think it looks better this way.

-Original Message-
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 03:50
To: 'Masaru Oki'
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: hw.nic_uio.bdfs

Hello,



By default nic_uio takes all the NICs for itself.

So in theory, you need an option to reserve some NIC ports for your system,
without DPDK taking them for itself.





[dpdk-dev] RES: hw.nic_uio.bdfs

2014-03-26 Thread Fred Pedrisa
Hello,



Yes, I am writing a fix for this too ;)



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp] 
Sent: Wednesday, March 26, 2014 04:08
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



> By default nic_uio takes all the NICs for itself


Yes.

I think nic_uio_probe should check hw.nic_uio.bdfs.





[dpdk-dev] RES: RES: hw.nic_uio.bdfs

2014-03-26 Thread Fred Pedrisa
Hello,

Here is my fix for the probe code:

static int
nic_uio_probe(device_t dev)
{
    int i, len;
    char *remaining;
    long bus = 0, device = 0, function = 0;

    remaining = bdf_str;
    len = strlen(remaining);

    /* Refuse to probe any device listed in hw.nic_uio.bdfs
     * (single-digit "b:d:f" entries only). */
    for (i = 0; remaining && len >= 5 && i < len; i += 6) {
        if (remaining[i + 1] == ':' && remaining[i + 3] == ':') {
            bus = strtol(&remaining[i + 0], NULL, 0);
            device = strtol(&remaining[i + 2], NULL, 0);
            function = strtol(&remaining[i + 4], NULL, 0);
            if (dev != NULL &&
                pci_get_bus(dev) == bus &&
                pci_get_slot(dev) == device &&
                pci_get_function(dev) == function) {
                printf("nic_uio: success blocking probe of : %ld:%ld:%ld!\n",
                    bus, device, function);
                return (ENXIO);
            }
        }
    }

    for (i = 0; i < NUM_DEVICES; i++)
        if (pci_get_vendor(dev) == devices[i].vend &&
            pci_get_device(dev) == devices[i].dev) {
            device_set_desc(dev, "Intel(R) DPDK PCI Device");
            return (BUS_PROBE_SPECIFIC);
        }

    return (ENXIO);
}

Now it is working as intended ;)

-Mensagem original-
De: dev [mailto:dev-bounces at dpdk.org] Em nome de Fred Pedrisa
Enviada em: quarta-feira, 26 de mar?o de 2014 04:16
Para: 'Masaru Oki'
Cc: dev at dpdk.org
Assunto: [dpdk-dev] RES: hw.nic_uio.bdfs

Hello,



Yes, I am writing a fix for this too ;)



De: Masaru Oki [mailto:m-oki at stratosphere.co.jp] Enviada em: quarta-feira,
26 de mar?o de 2014 04:08
Para: Fred Pedrisa
Cc: dev at dpdk.org
Assunto: Re: [dpdk-dev] hw.nic_uio.bdfs



> By default nic_uio takes all the NICs for itself


Yes.

I think nic_uio_probe should check hw.nic_uio.bdfs.






[dpdk-dev] RES: RES: RES: hw.nic_uio.bdfs

2014-03-26 Thread Fred Pedrisa
Oh, don't forget to declare:

static char bdf_str[1024];

at file scope in nic_uio.c, so that the other functions can check its
content :-)

And remove the local declaration from nic_uio_load.

-Mensagem original-
De: dev [mailto:dev-bounces at dpdk.org] Em nome de Fred Pedrisa
Enviada em: quarta-feira, 26 de mar?o de 2014 04:22
Para: 'Masaru Oki'
Cc: dev at dpdk.org
Assunto: [dpdk-dev] RES: RES: hw.nic_uio.bdfs

Hello,

Here is my fix for probe code :

static int
nic_uio_probe (device_t dev)
{
int i, len;
char *remaining;
long bus = 0, device = 0, function = 0;
remaining = bdf_str;
len = strlen(remaining);

for (i = 0; remaining && len >= 5 && i < len;i+=6) {
if ( remaining[i + 1] == ':' && remaining[i + 3] == ':' ) {
bus = strtol(&remaining[i + 0],NULL,0);
device = strtol(&remaining[i + 2],NULL,0);
function = strtol(&remaining[i + 4],NULL,0);
if (dev != NULL) {
if (pci_get_bus(dev) == bus &&
pci_get_slot(dev) == device && pci_get_function(dev) == function) {
printf("nic_uio: success blocking
probe of : %ld:%ld:%ld!\n", bus, device, function);
return (ENXIO);
}
}
}
}

for (i = 0; i < NUM_DEVICES; i++)
if (pci_get_vendor(dev) == devices[i].vend &&
pci_get_device(dev) == devices[i].dev) {

device_set_desc(dev, "Intel(R) DPDK PCI Device");
return (BUS_PROBE_SPECIFIC);
}

return (ENXIO);
}

Now it is working as intended ;)


[dpdk-dev] l2fwd high latency/delay

2014-03-26 Thread Fred Pedrisa
Hi,



I am testing l2fwd on FreeBSD and I am noticing a delay of around 0 to 3
seconds. What could be causing this behavior?



Fred



[dpdk-dev] RES: l2fwd high latency/delay

2014-03-26 Thread Fred Pedrisa
It is just a ping test, from one PC to another, using 2 ports as an L2
bridge.

64 bytes from 192.168.2.249: icmp_seq=967 ttl=128 time=1630 ms
64 bytes from 192.168.2.249: icmp_seq=968 ttl=128 time=5005 ms
64 bytes from 192.168.2.249: icmp_seq=969 ttl=128 time=4004 ms
64 bytes from 192.168.2.249: icmp_seq=970 ttl=128 time=3003 ms
64 bytes from 192.168.2.249: icmp_seq=971 ttl=128 time=3661 ms
64 bytes from 192.168.2.249: icmp_seq=972 ttl=128 time=2660 ms
64 bytes from 192.168.2.249: icmp_seq=973 ttl=128 time=1660 ms
64 bytes from 192.168.2.249: icmp_seq=974 ttl=128 time=3001 ms
64 bytes from 192.168.2.249: icmp_seq=975 ttl=128 time=2001 ms
64 bytes from 192.168.2.249: icmp_seq=976 ttl=128 time=1000 ms
64 bytes from 192.168.2.249: icmp_seq=977 ttl=128 time=0.713 ms
64 bytes from 192.168.2.249: icmp_seq=978 ttl=128 time=3000 ms
64 bytes from 192.168.2.249: icmp_seq=979 ttl=128 time=2000 ms
64 bytes from 192.168.2.249: icmp_seq=980 ttl=128 time=1000 ms
64 bytes from 192.168.2.249: icmp_seq=981 ttl=128 time=4003 ms
64 bytes from 192.168.2.249: icmp_seq=982 ttl=128 time=3001 ms
64 bytes from 192.168.2.249: icmp_seq=983 ttl=128 time=4654 ms
64 bytes from 192.168.2.249: icmp_seq=984 ttl=128 time=3654 ms
64 bytes from 192.168.2.249: icmp_seq=985 ttl=128 time=2654 ms

However, this is what happens :-(

-Mensagem original-
De: dev [mailto:dev-bounces at dpdk.org] Em nome de Fred Pedrisa
Enviada em: quarta-feira, 26 de mar?o de 2014 20:34
Para: dev at dpdk.org
Assunto: [dpdk-dev] l2fwd high latency/delay

Hi,



I am testing l2fwd on FreeBSD and I am noticing a delay of around 0 to 3
seconds. What could be causing this behavior?



Fred




[dpdk-dev] RES: RES: l2fwd high latency/delay

2014-03-26 Thread Fred Pedrisa
Hello,



I see, but even setting its value to 1 still doesn't help? :/



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp] 
Sent: Wednesday, March 26, 2014 23:18
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] RES: l2fwd high latency/delay



There is a problem in l2fwd_send_packet().

l2fwd and some other examples assume burst traffic:
l2fwd doesn't send packets while qconf->tx_mbufs[port].len < MAX_PKT_BURST.
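The buffering behavior described here can be reproduced outside DPDK. The sketch below is illustrative only; the tx_buffer struct and function name are made up and are not the l2fwd API. Packets accumulate until a full burst is ready, and a drain deadline is what bounds the worst-case latency:

```c
#include <stddef.h>

#define MAX_PKT_BURST 32

struct tx_buffer {
    int pkts[MAX_PKT_BURST];
    size_t len;
    long last_flush;    /* timestamp of the last flush */
};

/* Queue one packet; flush when the burst is full OR when 'drain'
 * time units have passed since the last flush. Returns the number
 * of packets flushed (0 if the packet just sat in the buffer). */
static size_t
tx_buffer_add(struct tx_buffer *b, int pkt, long now, long drain)
{
    size_t sent = 0;

    b->pkts[b->len++] = pkt;
    if (b->len == MAX_PKT_BURST || now - b->last_flush >= drain) {
        sent = b->len;      /* a real app would transmit the burst here */
        b->len = 0;
        b->last_flush = now;
    }
    return sent;
}
```

Without the time-based flush, a lone ICMP echo request waits in the buffer until enough traffic arrives to fill the burst, which matches the multi-second ping times shown in this thread.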







[dpdk-dev] RES: RES: RES: l2fwd high latency/delay

2014-03-26 Thread Fred Pedrisa
Hello,

The same problem also happens with testpmd:

64 bytes from 192.168.2.81: icmp_seq=1612 ttl=64 time=1002.400 ms
64 bytes from 192.168.2.81: icmp_seq=1613 ttl=64 time=2833.217 ms
64 bytes from 192.168.2.81: icmp_seq=1614 ttl=64 time=1832.221 ms
64 bytes from 192.168.2.81: icmp_seq=1615 ttl=64 time=4004.454 ms
64 bytes from 192.168.2.81: icmp_seq=1616 ttl=64 time=3003.457 ms
64 bytes from 192.168.2.81: icmp_seq=1617 ttl=64 time=2002.460 ms
64 bytes from 192.168.2.81: icmp_seq=1618 ttl=64 time=3526.880 ms
64 bytes from 192.168.2.81: icmp_seq=1619 ttl=64 time=2525.883 ms
64 bytes from 192.168.2.81: icmp_seq=1620 ttl=64 time=1524.885 ms
64 bytes from 192.168.2.81: icmp_seq=1621 ttl=64 time=2576.861 ms
64 bytes from 192.168.2.81: icmp_seq=1622 ttl=64 time=1575.864 ms
64 bytes from 192.168.2.81: icmp_seq=1623 ttl=64 time=574.866 ms

-Mensagem original-
De: dev [mailto:dev-bounces at dpdk.org] Em nome de Fred Pedrisa
Enviada em: quarta-feira, 26 de mar?o de 2014 23:27
Para: 'Masaru Oki'
Cc: dev at dpdk.org
Assunto: [dpdk-dev] RES: RES: l2fwd high latency/delay

Hello,



I see, but even setting its value to 1 still doesn't help? :/



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp] 
Sent: Wednesday, March 26, 2014 23:18
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] RES: l2fwd high latency/delay



There is a problem in l2fwd_send_packet().

l2fwd and some other examples assume burst traffic:
l2fwd doesn't send packets while qconf->tx_mbufs[port].len < MAX_PKT_BURST.











[dpdk-dev] RES: RES: RES: RES: l2fwd high latency/delay

2014-03-27 Thread Fred Pedrisa
Hello,

I've solved the problem.

It was related to the thresholds of the network card, which must be
adjusted following the manual, as explained inside the example code.

It is not related to the maximum burst :)

So now all is fine!
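For readers hitting the same symptom, the relevant knobs are the TX threshold fields in the example's port setup. The sketch below is in the style of the DPDK 1.6-era API; the values are the ones the sample apps of that era documented as defaults, not something stated in this thread:

```c
/* Sketch of the threshold tuning described above (DPDK 1.6-era API).
 * A non-zero wthresh lets the NIC defer descriptor write-back until
 * several packets accumulate, which shows up as multi-second latency
 * on low-rate (non-burst) traffic such as ping. */
static const struct rte_eth_txconf tx_conf = {
	.tx_thresh = {
		.pthresh = 36, /* TX prefetch threshold */
		.hthresh = 0,  /* TX host threshold */
		.wthresh = 0,  /* TX write-back threshold: keep 0 for low latency */
	},
	.tx_free_thresh = 0, /* 0 = use the PMD default */
	.tx_rs_thresh   = 0, /* 0 = use the PMD default */
};
```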

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 23:55
To: 'Masaru Oki'
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: RES: RES: l2fwd high latency/delay

Hello,

Also the same problem happens with : testpmd

64 bytes from 192.168.2.81: icmp_seq=1612 ttl=64 time=1002.400 ms
64 bytes from 192.168.2.81: icmp_seq=1613 ttl=64 time=2833.217 ms
64 bytes from 192.168.2.81: icmp_seq=1614 ttl=64 time=1832.221 ms
64 bytes from 192.168.2.81: icmp_seq=1615 ttl=64 time=4004.454 ms
64 bytes from 192.168.2.81: icmp_seq=1616 ttl=64 time=3003.457 ms
64 bytes from 192.168.2.81: icmp_seq=1617 ttl=64 time=2002.460 ms
64 bytes from 192.168.2.81: icmp_seq=1618 ttl=64 time=3526.880 ms
64 bytes from 192.168.2.81: icmp_seq=1619 ttl=64 time=2525.883 ms
64 bytes from 192.168.2.81: icmp_seq=1620 ttl=64 time=1524.885 ms
64 bytes from 192.168.2.81: icmp_seq=1621 ttl=64 time=2576.861 ms
64 bytes from 192.168.2.81: icmp_seq=1622 ttl=64 time=1575.864 ms
64 bytes from 192.168.2.81: icmp_seq=1623 ttl=64 time=574.866 ms

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 23:27
To: 'Masaru Oki'
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: RES: l2fwd high latency/delay

Hello,



I see, but even setting its value to 1 still doesn't help? :/



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp]
Sent: Wednesday, March 26, 2014 23:18
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] RES: l2fwd high latency/delay



There is a problem in l2fwd_send_packet().

l2fwd and some other examples assume burst traffic:
l2fwd doesn't send a packet while qconf->tx_mbufs[port].len < MAX_PKT_BURST.



2014-03-27 9:56 GMT+09:00 Fred Pedrisa :

It is just a ping test, from one PC to another, using 2 ports as a L2
Bridge.

64 bytes from 192.168.2.249: icmp_seq=967 ttl=128 time=1630 ms
64 bytes from 192.168.2.249: icmp_seq=968 ttl=128 time=5005 ms
64 bytes from 192.168.2.249: icmp_seq=969 ttl=128 time=4004 ms
64 bytes from 192.168.2.249: icmp_seq=970 ttl=128 time=3003 ms
64 bytes from 192.168.2.249: icmp_seq=971 ttl=128 time=3661 ms
64 bytes from 192.168.2.249: icmp_seq=972 ttl=128 time=2660 ms
64 bytes from 192.168.2.249: icmp_seq=973 ttl=128 time=1660 ms
64 bytes from 192.168.2.249: icmp_seq=974 ttl=128 time=3001 ms
64 bytes from 192.168.2.249: icmp_seq=975 ttl=128 time=2001 ms
64 bytes from 192.168.2.249: icmp_seq=976 ttl=128 time=1000 ms
64 bytes from 192.168.2.249: icmp_seq=977 ttl=128 time=0.713 ms
64 bytes from 192.168.2.249: icmp_seq=978 ttl=128 time=3000 ms
64 bytes from 192.168.2.249: icmp_seq=979 ttl=128 time=2000 ms
64 bytes from 192.168.2.249: icmp_seq=980 ttl=128 time=1000 ms
64 bytes from 192.168.2.249: icmp_seq=981 ttl=128 time=4003 ms
64 bytes from 192.168.2.249: icmp_seq=982 ttl=128 time=3001 ms
64 bytes from 192.168.2.249: icmp_seq=983 ttl=128 time=4654 ms
64 bytes from 192.168.2.249: icmp_seq=984 ttl=128 time=3654 ms
64 bytes from 192.168.2.249: icmp_seq=985 ttl=128 time=2654 ms

However, this is what happens :-(

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 20:34
To: dev at dpdk.org
Subject: [dpdk-dev] l2fwd high latency/delay


Hi,



I am testing l2fwd on FreeBSD and I am noticing a delay of around 0~3
seconds. What could be causing this behavior?



Fred









[dpdk-dev] RES: Attempting to get the DPDK to build for FreeBSD 9.2Release DPDK 1.6.0

2014-03-27 Thread Fred Pedrisa
Hello,

In my first attempt, I had to restart the server :D

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Randall Stewart
Sent: Thursday, March 27, 2014 08:26
To: dev at dpdk.org
Subject: Re: [dpdk-dev] Attempting to get the DPDK to build for FreeBSD
9.2Release DPDK 1.6.0

Ok

I have figured this one out ;-)

When you do

kldload ./contigmem.ko

It tries to malloc a chunk of memory (1 GB by default, it looks like), and
if a contiguous piece of memory is not available it fails, which propagates
back to kldload as "Exec format error".

Not very descriptive; it turns out you have to dig into /var/log/messages to
figure out what is really happening ;-)

R
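A plausible workaround (untested here; the sizes are illustrative, not from Randall's setup) is to request several smaller contiguous buffers instead of one large one, and to check the system log after a failed load:

```shell
# /boot/loader.conf -- ask for smaller contiguous chunks so the
# allocation can succeed on a memory-fragmented machine
hw.contigmem.num_buffers=4
hw.contigmem.buffer_size=268435456   # 4 x 256 MB instead of 1 x 1 GB

# After a failed kldload, the real reason is in the system log:
# tail /var/log/messages
```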


On Mar 27, 2014, at 7:05 AM, Randall Stewart  wrote:

> 
>> 
>> Hi all:
>> 
>> I have a stock 9.2Release FreeBSD system and am attempting to build 
>> the DPDK for it. I have followed the steps i.e:
>> 
>> 1) Gotten the ports in:
>> - dialog4ports
>> - gcc 4.8
>> - gmake
>> - coreutils
>> - libexecinfo
>> 
>> 2) Ran the make
>> - gmake install T=x86_64-default-bsdapp-gcc CC=gcc48
>> 
>> It completes nicely and I have a x86_64-default-bsdapp-gcc directory with
all the goodies.
>> But here is the issue:
>> 
>> bash-fastone: cd x86_64-default-bsdapp-gcc/kmod/
>> bash-fastone: su
>> Password:
>> root at fastone:/usr/randall/DPDK-1.6.0/x86_64-default-bsdapp-gcc/kmod # ls
>> contigmem.ko nic_uio.ko
>> root at fastone:/usr/randall/DPDK-1.6.0/x86_64-default-bsdapp-gcc/kmod # 
>> kldload ./contigmem.ko
>> kldload: can't load ./contigmem.ko: Exec format error 
>> root at fastone:/usr/randall/DPDK-1.6.0/x86_64-default-bsdapp-gcc/kmod #
>> 
>> 
>> 
>> Anyone ever seen this? Per chance am I missing something?
>> 
>> I guess my next step is to see if I can build the kernel modules
>> individually with a more BSDish setup to see if there is some sort of
>> interaction going on here between gcc and gcc48.. hmmm
>> 
>> Any pointers would be appreciated.
>> 
>> Thanks
>> 
>> R
> 
> --
> Randall Stewart
> 803-317-4952 (cell)
> 
> 

--
Randall Stewart
803-317-4952 (cell)




[dpdk-dev] RES: RES: RES: RES: hw.nic_uio.bdfs

2014-03-27 Thread Fred Pedrisa
Hello,

It just requires a small code change :), and it can work in the expected
way.

So you mean bdfs is meant to 'select' only the wanted devices, yes? May I
change my code proposal for you?

Sincerely,

Fred Pedrisa

-----Original Message-----
From: Carew, Alan [mailto:alan.carew at intel.com]
Sent: Thursday, March 27, 2014 17:12
To: Fred Pedrisa; Masaru Oki
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: RES: RES: hw.nic_uio.bdfs

Hi folks,

Just to clarify, the purpose of hw.nic_uio.bdfs is to remove NICs from
drivers not owned by nic_uio and add them to a pool of unbound devices.
As with the Linux equivalent driver(igb_uio), nic_uio will then attempt to
bind any unbound NICs and make them available for DPDK.

However, the Linux OS also has the ability to selectively bind/unbind
devices via sysfs, this is something we lack on FreeBSD.
I can see where it would be a problem when only a subset of unbound devices
is required for nic_uio and currently the only option would be to ensure the
original driver is loaded first.

The change below changes the purpose of hw.nic_uio.bdfs, i.e. blacklist
devices so that they are not used by nic_uio.
A better approach would be to make the variable an exclusive list of devices
to be used, this would then require all users to specify the exact devices
to be used.

Thanks,
Alan

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 04:22
To: 'Masaru Oki'
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: RES: hw.nic_uio.bdfs

Hello,

Here is my fix for probe code :

static int
nic_uio_probe(device_t dev)
{
	int i, len;
	char *remaining;
	long bus = 0, device = 0, function = 0;

	remaining = bdf_str;
	len = strlen(remaining);

	/* Parse hw.nic_uio.bdfs as "b:d:f,b:d:f,..." (single-digit
	 * fields) and block the probe for any device listed there. */
	for (i = 0; remaining && len >= 5 && i < len; i += 6) {
		if (remaining[i + 1] == ':' && remaining[i + 3] == ':') {
			bus = strtol(&remaining[i + 0], NULL, 0);
			device = strtol(&remaining[i + 2], NULL, 0);
			function = strtol(&remaining[i + 4], NULL, 0);
			if (dev != NULL &&
			    pci_get_bus(dev) == bus &&
			    pci_get_slot(dev) == device &&
			    pci_get_function(dev) == function) {
				printf("nic_uio: success blocking probe of: "
				    "%ld:%ld:%ld!\n", bus, device, function);
				return (ENXIO);
			}
		}
	}

	for (i = 0; i < NUM_DEVICES; i++)
		if (pci_get_vendor(dev) == devices[i].vend &&
		    pci_get_device(dev) == devices[i].dev) {
			device_set_desc(dev, "Intel(R) DPDK PCI Device");
			return (BUS_PROBE_SPECIFIC);
		}

	return (ENXIO);
}

Now it is working as intended ;)
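With this patch, hw.nic_uio.bdfs acts as a blacklist: ports listed there are skipped by nic_uio and stay with their kernel driver. For example (the device addresses below are illustrative):

```shell
# /boot/loader.conf -- keep ports 3:0:0 and 3:0:1 with the host stack;
# nic_uio still claims every other supported NIC for DPDK
hw.nic_uio.bdfs="3:0:0,3:0:1"
```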

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 04:16
To: 'Masaru Oki'
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: hw.nic_uio.bdfs

Hello,



Yes, I am writing a fix for this too ;)



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp]
Sent: Wednesday, March 26, 2014 04:08
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



> By default nic_uio takes all the NICs for itself


Yes.

I think nic_uio_probe should check hw.nic_uio.bdfs.





2014-03-26 15:49 GMT+09:00 Fred Pedrisa :

Hello,



By default nic_uio takes all the NICs for itself




So in theory, you needed an option to reserve some NIC ports for your system,
without DPDK taking them for itself




From: Masaru Oki [mailto:m-oki at stratosphere.co.jp]
Sent: Wednesday, March 26, 2014 03:43


To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



avoid??? Do you want hw.nic_uio.avoid_bdfs?


The nic_uio behavior, as I understood it:
1. detach the kernel driver from devices specified by hw.nic_uio.bdfs

2. attach the nic_uio driver to all NICs not yet attached.

but step 2 is not correct, I think.





2014-03-26 15:20 GMT+09:00 Fred Pedrisa :

Hello,



You did not understand the purpose of that parameter: it is meant to keep
nic_uio from claiming the listed NICs... so they are free to be used by the
system :)



Right now the code to handle it is wrong and I am trying to fix it myself.



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp]
Sent: Wednesday, March 26, 2014 03:16
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



Hi,

I tried with Intel version 1.6.0 and FreeBSD 9.2-RELEASE on VMware Player.

kldload nic_uio by hand, works fine.
But kldunload nic_uio only detach uio d

[dpdk-dev] RES: RES: RES: RES: hw.nic_uio.bdfs

2014-03-27 Thread Fred Pedrisa
Hi !

I've attached my contribution (the fixed source), changing the behavior
the way Alan wanted :)

It is working, and if you'd like to use it, that would be cool.

Sincerely,

Fred Pedrisa

-----Original Message-----
From: Fred Pedrisa [mailto:fredhps10 at hotmail.com]
Sent: Thursday, March 27, 2014 17:28
To: 'Carew, Alan'; 'Masaru Oki'
Cc: 'dev at dpdk.org'
Subject: RES: [dpdk-dev] RES: RES: RES: hw.nic_uio.bdfs

Hello,

It just requires a small code change :), and it can work in the expected
way.

So you mean bdfs is meant to 'select' only the wanted devices, yes? May I
change my code proposal for you?

Sincerely,

Fred Pedrisa

-----Original Message-----
From: Carew, Alan [mailto:alan.carew at intel.com]
Sent: Thursday, March 27, 2014 17:12
To: Fred Pedrisa; Masaru Oki
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: RES: RES: hw.nic_uio.bdfs

Hi folks,

Just to clarify, the purpose of hw.nic_uio.bdfs is to remove NICs from
drivers not owned by nic_uio and add them to a pool of unbound devices.
As with the Linux equivalent driver(igb_uio), nic_uio will then attempt to
bind any unbound NICs and make them available for DPDK.

However, the Linux OS also has the ability to selectively bind/unbind
devices via sysfs, this is something we lack on FreeBSD.
I can see where it would be a problem when only a subset of unbound devices
is required for nic_uio and currently the only option would be to ensure the
original driver is loaded first.

The change below changes the purpose of hw.nic_uio.bdfs, i.e. blacklist
devices so that they are not used by nic_uio.
A better approach would be to make the variable an exclusive list of devices
to be used, this would then require all users to specify the exact devices
to be used.

Thanks,
Alan

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 04:22
To: 'Masaru Oki'
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: RES: hw.nic_uio.bdfs

Hello,

Here is my fix for probe code :

static int
nic_uio_probe(device_t dev)
{
	int i, len;
	char *remaining;
	long bus = 0, device = 0, function = 0;

	remaining = bdf_str;
	len = strlen(remaining);

	/* Parse hw.nic_uio.bdfs as "b:d:f,b:d:f,..." (single-digit
	 * fields) and block the probe for any device listed there. */
	for (i = 0; remaining && len >= 5 && i < len; i += 6) {
		if (remaining[i + 1] == ':' && remaining[i + 3] == ':') {
			bus = strtol(&remaining[i + 0], NULL, 0);
			device = strtol(&remaining[i + 2], NULL, 0);
			function = strtol(&remaining[i + 4], NULL, 0);
			if (dev != NULL &&
			    pci_get_bus(dev) == bus &&
			    pci_get_slot(dev) == device &&
			    pci_get_function(dev) == function) {
				printf("nic_uio: success blocking probe of: "
				    "%ld:%ld:%ld!\n", bus, device, function);
				return (ENXIO);
			}
		}
	}

	for (i = 0; i < NUM_DEVICES; i++)
		if (pci_get_vendor(dev) == devices[i].vend &&
		    pci_get_device(dev) == devices[i].dev) {
			device_set_desc(dev, "Intel(R) DPDK PCI Device");
			return (BUS_PROBE_SPECIFIC);
		}

	return (ENXIO);
}

Now it is working as intended ;)

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Fred Pedrisa
Sent: Wednesday, March 26, 2014 04:16
To: 'Masaru Oki'
Cc: dev at dpdk.org
Subject: [dpdk-dev] RES: hw.nic_uio.bdfs

Hello,



Yes, I am writing a fix for this too ;)



From: Masaru Oki [mailto:m-oki at stratosphere.co.jp]
Sent: Wednesday, March 26, 2014 04:08
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



> By default nic_uio takes all the NICs for itself


Yes.

I think nic_uio_probe should check hw.nic_uio.bdfs.





2014-03-26 15:49 GMT+09:00 Fred Pedrisa :

Hello,



By default nic_uio takes all the NICs for itself




So in theory, you needed an option to reserve some NIC ports for your system,
without DPDK taking them for itself




From: Masaru Oki [mailto:m-oki at stratosphere.co.jp]
Sent: Wednesday, March 26, 2014 03:43


To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] hw.nic_uio.bdfs



avoid??? Do you want hw.nic_uio.avoid_bdfs?


The nic_uio behavior, as I understood it:
1. detach the kernel driver from devices specified by hw.nic_uio.bdfs

2. attach the nic_uio driver to all NICs not yet attached.

but step 2 is not correct, I think.





2014-03-26 15:20 GMT+09:00 Fred Pedrisa :

Hello,



You did not understand the purpose of that parameter: it is meant to keep
nic_uio from claiming the listed NICs... so they are free to be

[dpdk-dev] Core Performance

2014-03-30 Thread Fred Pedrisa
Hi, guys.



What is the expected performance per core using a 2650 (2.0 GHz)? In terms
of packet forwarding with an 82599?



-  Small 64-byte packets?

-  Large 1540-byte packets?



Sincerely,



Fred



[dpdk-dev] RES: Core Performance

2014-03-30 Thread Fred Pedrisa
Hello,

OK, but is the current DPDK code (1.6.0-r0) for FreeBSD achieving this
performance?

Sincerely,

Fred Pedrisa

-----Original Message-----
From: Jayakumar, Muthurajan [mailto:muthurajan.jayakumar at intel.com]
Sent: Sunday, March 30, 2014 19:27
To: Fred Pedrisa; dev at dpdk.org
Subject: RE: [dpdk-dev] Core Performance

Hi, 

http://www.intel.com/content/dam/www/public/us/en/documents/presentation/dpd
k-packet-processing-ia-overview-presentation.pdf

Foil #27 has the forwarding performance for an E5-2658 core.
Foil #7 has the problem statement indicating small packet size.

Thx

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Fred Pedrisa
Sent: Sunday, March 30, 2014 3:01 PM
To: dev at dpdk.org
Subject: [dpdk-dev] Core Performance

Hi, guys.



What is the expected performance per core using a 2650 (2.0 GHz)? In terms
of packet forwarding with an 82599?



-  Small 64-byte packets?

-  Large 1540-byte packets?



Sincerely,



Fred




[dpdk-dev] RES: L2FWD uses 'too much' CPU

2014-04-01 Thread Fred Pedrisa
Hello,



Ok. Can you help me with something else ?



Right now, at a rate of 1.3~1.4 Mpps I am seeing about 4~5% packet loss.
I wonder, if I use my application with a 10G NIC, will it be able to cope
with more than 1.4 Mpps? Seeing packet loss has made me a little worried.



Sincerely,



Fred



From: Vladimir Medvedkin [mailto:medvedkinv at gmail.com]
Sent: Tuesday, April 1, 2014 13:36
To: Fred Pedrisa
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] L2FWD uses 'too much' CPU



Hi,

One of the objectives of DPDK is to avoid interrupts, so the application (not
only L2FWD) polls the NIC infinitely. You can look at the Programmer's Guide,
section 24 "Power Management", and the "L3 Forwarding with Power Management"
sample application in the Sample Applications User Guide.

Regards,

Vladimir.



2014-04-01 9:24 GMT+04:00 Fred Pedrisa :

Hi,



Why does L2FWD saturate both cores by default? I mean, it seems it keeps
wasting cycles in the infinite loop placed there to check the queues.



What would be the way to improve this and make it more efficient?



Sincerely,



Fred





[dpdk-dev] IRC Channel, Come Guys ;)

2014-04-01 Thread Fred Pedrisa
Hey,



I've created an unofficial channel on the freenode IRC network, so for those
interested in joining there for better chat, here is the information:



Server : irc.freenode.net

Channel : ##dpdk (Yes, you must type two hash signs).



If you don't want to install an irc client, you might use :



http://webchat.freenode.net and join the ##dpdk channel :)



Sincerely,



Fred Pedrisa



[dpdk-dev] L2FWD uses 'too much' CPU

2014-04-01 Thread Fred Pedrisa
Hi,



Why does L2FWD saturate both cores by default? I mean, it seems it keeps
wasting cycles in the infinite loop placed there to check the queues.



What would be the way to improve this and make it more efficient?



Sincerely,



Fred