>> ... driver?
>> > >
>> > > > We release the driver itself under the BSD license, but to use it
>> > > > for commercial products, you may have to re-implement the
>> > > > separated GPL sources.
>> > >
>> > > The GPL sources are not really separated.
>> > > The DPDK libraries must be BSD-licensed.
>> > >
>> > > > The GPL-affected source code resides in the mlnx_uio/kernel
>> > > > directory.
>> > >
>> > > It seems that a large part of the GPL driver was also copied in
>> > > mlnx_uio/mlnx/.
>> > >
>> > > Given that you are dropping a huge GPL codebase (whose copyright you
>> > > don't own) into a BSD-licensed library, and that you didn't give your
>> > > real name in the Signed-off-by line, it is a NACK.
--
Sincerely yours, Pavel Odintsov
>>> ... Vladimir Medvedkin wrote:
>>> > In the case of a SYN flood you should take into account the return
>>> > SYN-ACK traffic, which generates PCIe DLLPs from NIC to host, so the
>>> > PCIe bandwidth is exhausted faster. And don't forget about DLLP ...
Yes, Bruce, we understand this. But we are working on processing huge SYN
attacks, and those packets are only 64 bytes :(
On Wed, Jul 1, 2015 at 3:59 PM, Bruce Richardson
wrote:
> On Wed, Jul 01, 2015 at 03:44:57PM +0300, Pavel Odintsov wrote:
>> Thanks for the answer, Vladimir! So we need to look for x...
> ..., so x8 transmits 8 bytes in 1 ns. One packet transmits in 20 ns. Thus
> in theory PCIe 3.0 x8 may transfer not more than 50 Mpps.
> Correct me if I'm wrong.
>
> Regards,
> Vladimir
>
>
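Vladimir's 50 Mpps figure can be sanity-checked with a back-of-envelope
calculation. The constants below are illustrative assumptions (PCIe 3.0 raw
rate plus a rough guess at per-packet TLP and descriptor overhead), not
measurements from this thread:

/* Rough sanity check of "~50 Mpps over PCIe 3.0 x8".
 * All constants are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    /* PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ~= 0.985 B/ns/lane */
    double bytes_per_ns_per_lane = 8.0 * (128.0 / 130.0) / 8.0;
    double link_bytes_per_ns = 8 /* lanes */ * bytes_per_ns_per_lane; /* ~7.9 */

    /* Assumed per-packet cost on the link: 64 B payload + ~24 B TLP overhead
     * + ~64 B of descriptor fetch/write-back traffic (rough guess). */
    double bytes_per_packet = 64.0 + 24.0 + 64.0;

    double ns_per_packet = bytes_per_packet / link_bytes_per_ns; /* ~19 ns */
    printf("~%.1f ns/packet -> ~%.1f Mpps\n",
           ns_per_packet, 1e3 / ns_per_packet);                  /* ~52 Mpps */
    return 0;
}

With these assumptions the result lands close to Vladimir's 20 ns / 50 Mpps
estimate.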
> 2015-06-29 18:41 GMT+03:00 Pavel Odintsov :
>>
>> Hello, Andrew!
>>
>> ...
>> Keunhong.
>>
>> 2015-06-29 15:59 GMT+09:00 Pavel Odintsov :
>>
>> > Hello!
>> >
>> > Lee, thank you so much for sharing your experience! What do you think
>> > about the 40GE version of the 82599?
>> >
>> > On Mon, Jun 29, 2015 at 2:35 AM,
> ... the only 10G NIC which supports line rate with
> minimum-sized packets (64 bytes).
> According to our internal tests, Mellanox's 40G NICs support even less than
> 30 Mpps.
> I think 40 Mpps is the hardware capacity.
>
> Keunhong.
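For context on what "line rate" means here, the theoretical maximum packet
rate for minimum-sized Ethernet frames is a standard calculation (not a
number from this thread): each 64-byte frame occupies 84 bytes on the wire
once preamble, SFD and the inter-frame gap are counted.

/* Theoretical line rate for 64 B Ethernet frames.
 * 64 B frame + 8 B preamble/SFD + 12 B inter-frame gap = 84 B on the wire. */
#include <stdio.h>

int main(void)
{
    const double wire_bytes = 64.0 + 8.0 + 12.0;   /* 84 B per frame */
    const double speeds_gbps[] = { 10.0, 40.0 };

    for (int i = 0; i < 2; i++) {
        double mpps = speeds_gbps[i] * 1e9 / (wire_bytes * 8.0) / 1e6;
        printf("%2.0f GbE: %.2f Mpps\n", speeds_gbps[i], mpps);
    }
    return 0;                       /* prints ~14.88 and ~59.52 Mpps */
}

So a 40G NIC that tops out around 30-40 Mpps is well below the ~59.5 Mpps
theoretical line rate for 64-byte frames, which matches Keunhong's
observation.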
>
>
>
> 2015-06-28 19:34 GMT+09:00 Pavel Odintsov :
0Mpps and could do more.
Could anybody help us with this issue? It looks like these NICs cannot
work at wire speed :(
Platform: Intel Xeon E5-2670 + XL710.
--
Sincerely yours, Pavel Odintsov
//rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature);
And everything compiled OK. But I thought C++ tests and compatibility
should be a must for DPDK.
On Tue, May 5, 2015 at 10:57 AM, Pavel Odintsov
wrote:
> Hello!
>
> Could anybody help me with this issue? :(
>
> In this file widely used enum forward declarations ...
Hello!
Could anybody help me with this issue? :(
This file makes wide use of enum forward declarations, which are completely
incompatible with C++ and need some rewrite.
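To illustrate the problem, here is a reduced sketch with hypothetical names
(not the actual DPDK header): gcc accepts a forward-declared enum in a
prototype as an extension, while g++ rejects it because C++ requires the
full enum definition before the type is used.

/* What the header effectively does today -- accepted by gcc, an error in g++:
 *
 *   enum my_cpu_flag_t;                                  (forward declaration)
 *   int my_cpu_get_flag_enabled(enum my_cpu_flag_t feature);
 *
 * One possible rewrite that both compilers accept: define the enum in full
 * before any prototype that uses it. */
enum my_cpu_flag_t {
    MY_CPU_FLAG_SSE,
    MY_CPU_FLAG_AVX
};

int my_cpu_get_flag_enabled(enum my_cpu_flag_t feature);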
On Wed, Apr 29, 2015 at 3:17 PM, Pavel Odintsov
wrote:
> Hello!
>
> I have a C++ application that compiles and works nicely. But ...
... this header file and C++?
--
Sincerely yours, Pavel Odintsov
> int eth_configure_ret = rte_eth_dev_configure(current_port, rx_queues,
>                                                tx_queues, &default_port_conf);
> according to
> http://dpdk.org/doc/api/rte__ethdev_8h.html#ac30d075b4b206c7122e200164ce69893
> the second argument is the number of RX queues
>
> Regards,
> Vladimir
>
> 2015-04-28 13:02 GMT+03
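Vladimir's point above is the usual cause of such failures: in
rte_eth_dev_configure() the RX queue count is the second argument and the
TX queue count the third. A minimal sketch of configuring 1 RX and 2 TX
queues against the DPDK 2.x-era API follows; the helper name, descriptor
counts, and the externally supplied default_port_conf and mempool are
assumptions, and error handling is reduced to early returns.

/* Sketch: configure a port with 1 RX queue and 2 TX queues (DPDK 2.x API).
 * Note the argument order: RX queue count first, then TX queue count. */
#include <rte_ethdev.h>

static int configure_port(uint8_t port_id, struct rte_mempool *mbuf_pool,
                          const struct rte_eth_conf *default_port_conf)
{
    const uint16_t rx_queues = 1, tx_queues = 2;

    int ret = rte_eth_dev_configure(port_id, rx_queues, tx_queues,
                                    default_port_conf);
    if (ret < 0)
        return ret;

    /* Every queue declared above still has to be set up individually. */
    ret = rte_eth_rx_queue_setup(port_id, 0, 128,
                                 rte_eth_dev_socket_id(port_id),
                                 NULL, mbuf_pool);
    if (ret < 0)
        return ret;

    for (uint16_t q = 0; q < tx_queues; q++) {
        ret = rte_eth_tx_queue_setup(port_id, q, 512,
                                     rte_eth_dev_socket_id(port_id), NULL);
        if (ret < 0)
            return ret;
    }

    return rte_eth_dev_start(port_id);
}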
Hello, Network Performance Gurus!
I have Debian Jessie with a 3.16 kernel and DPDK 2.0.0 with an ixgbe NIC,
and I wrote the following code:
https://gist.github.com/pavel-odintsov/e1f64de4d56c0ab1b37c
I am trying to allocate 2 queues for TX and only 1 queue for RX, and I
can't do it; I get an error (detailed ...
> ... for both l2fwd and my packet
> generator.
>
> Any ideas how to fix this? A 25% loss in throughput prevents me from
> upgrading to DPDK 2.0.0. I need the new lcore features and the 40 GBit
> driver updates, so I can't stay on 1.7.1 forever.
>
> Paul
>
>
> [1] https://github.com/emmericp/MoonGen
> [2] http://comments.gmane.org/gmane.comp.networking.dpdk.devel/5155
--
Sincerely yours, Pavel Odintsov