> Subject: Re: [dpdk-dev] [memnic PATCH 2/7] pmd: remove needless assignment
>
> 2014-09-11 07:47, Hiroshi Shimamoto:
> > Because these assignments are done in rte_pktmbuf_alloc(), get rid of them.
>
> Is it improving the performance?
I hadn't tried to test, because I don't think it can be noti
Hi,
> Subject: Re: [dpdk-dev] [memnic PATCH 1/7] guest: memnic-tester: PMD
> benchmark in guest
>
> Hi Hiroshi,
>
> 2014-09-11 07:46, Hiroshi Shimamoto:
> > master |<- put packets ->| |<- get packets ->|
> > slave | |<- rx packets ->|<- tx packets ->| |
> > |<-
1) All the calls to add entries succeed.
2) The key lookup works as expected.
3) The value (entry_data) that is returned is incorrect for every other
entry - the 1st entry_data on .f_action_hit is wrong, the 2nd entry_data on
.f_action_hit is correct, and so on.
I have initialized my L
I take the opportunity of this patchset to talk about commit formatting.
2014-09-11 19:47, Balazs Nemeth:
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are
Hey guys
Is it safe to add an entry to the rte_table_hash while the pipeline is being
run - for instance if I were to try and add an entry on a port reader action
when the packet enters the pipeline?
Thanks
Avik
2014-09-10 01:28, Zhang, Helin:
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Sergey Mironov
> > Hi! I have got an update for my "i212 problem". First of all, I found that
> > I have
> > made a mistake. My controller is named i211, not i212 :) Next, i211
> > controller is
> > controll
Dear list,
I'm experiencing problems allocating big chunks of memory with
rte_malloc_socket. Basically, it successfully allocates 6GB but returns
NULL when I try to allocate 8GB. I tried dpdk-1.5.1 and 1.7.1 and got
similar behavior. First machine I was trying this on had 29*1GB
hugepages on s
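When rte_malloc_socket() returns NULL for a large size, a first check is whether each NUMA node actually holds enough hugepages of the size the allocation needs. A diagnostic sketch, assuming the typical Linux sysfs layout (paths may differ by kernel):

```shell
# Diagnostic sketch: list overall hugepage state, then how many 1GB
# hugepages each NUMA node holds -- an 8GB per-socket allocation needs
# at least that many pages available on the requested socket.
grep -i huge /proc/meminfo
for node in /sys/devices/system/node/node*; do
    f="$node/hugepages/hugepages-1048576kB/nr_hugepages"
    [ -r "$f" ] && echo "$node: $(cat "$f") x 1GB hugepages"
done
true
```

If the pages are split across sockets, an allocation bound to one socket can fail even though the machine-wide total looks sufficient.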
This patch supports i40e in vmdq example.
1. The queue index is offset by the VMDQ queue base in rte_eth_rx_burst.
2. The pool index is offset by the VMDQ pool base when a MAC address is added to pools.
3. Added some error message prints.
Besides, due to some limitations in the PMD,
1. MAC addresses need to be pre-alloca
With i40e, the queue index of VMDQ pools doesn't always start from zero, and
the queues aren't all occupied by VMDQ. This information is retrieved through
rte_eth_dev_info_get, and used to initialise VMDQ.
Huawei Xie (1):
support i40e in vmdq example
examples/vmdq/main.c | 162
Updated the unit tests to cover both librte_power implementations as well as
the external API.
Signed-off-by: Alan Carew
---
app/test/Makefile | 3 +-
app/test/autotest_data.py | 26 ++
app/test/test_power.c | 445 +++---
app/test
librte_power now contains both rte_power_acpi_cpufreq and rte_power_kvm_vm
implementations.
Signed-off-by: Alan Carew
---
lib/librte_power/Makefile | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/librte_power/Makefile b/lib/librte_power/Makefile
index 6185812..d672a5a 1
Provides a command packet format for host and guest.
Signed-off-by: Alan Carew
---
lib/librte_power/channel_commands.h | 68 +
1 file changed, 68 insertions(+)
create mode 100644 lib/librte_power/channel_commands.h
diff --git a/lib/librte_power/channel_comma
Moved the current librte_power implementation to rte_power_acpi_cpufreq, with
renaming of functions only.
Added rte_power_kvm_vm implementation to support Power Management from a VM.
librte_power now hides the implementation based on the environment used.
A new call rte_power_set_env() can explicitly
Allows for the opening of Virtio-Serial devices on a VM, where a DPDK
application can send packets to the host-based monitor. The packet format is
specified in channel_commands.h
Each device appears as a serial device in path
/dev/virtio-ports/virtio.serial.port.. where each lcore
in a DPDK appl
Provides a small sample application (guest_vm_power_mgr) to run on a VM.
The application is run by providing a core mask (-c) and number of memory
channels (-n). The core mask corresponds to the number of lcore channels to
attempt to open. A maximum of 64 channels per VM is allowed. The channels must
Launches the CLI thread and the Monitor thread, and initialises
resources.
Requires a minimum of two lcores to run, additional cores specified by eal core
mask are not used.
Signed-off-by: Alan Carew
---
examples/vm_power_manager/Makefile | 57 +++
examples/vm_power_manager/main.c
A wrapper around librte_power (using ACPI cpufreq), providing locking around the
non-threadsafe library, allowing for frequency changes based on core masks and
core numbers from both the CLI thread and epoll monitor thread.
Signed-off-by: Alan Carew
---
examples/vm_power_manager/power_manager.c |
The CLI is used for administering the channel monitor and manager and
manually setting the CPU frequency on the host.
Supports the following commands:
add_vm [Multi-choice STRING]: add_vm|rm_vm , add a VM for subsequent
operations with the CLI or remove a previously added VM from the VM Power
The manager is responsible for adding communication channels to the Monitor
thread, tracking and reporting VM state, and employing the libvirt API for
synchronization with the KVM Hypervisor. The manager interacts with the
Hypervisor to discover the mapping of virtual CPUs (vCPUs) to the host
physical
Virtual Machine Power Management.
The following patches add two DPDK sample applications and an alternate
implementation of librte_power for use in virtualized environments.
The idea is to provide librte_power functionality from within a VM to address
the lack of MSRs to facilitate frequency chang
Hi,
2014-09-24 14:32, Ouyang, Changchun:
> This v4 patch removes the jumbo frame related code
> and Huawei will add it back in a separate patch,
I'd prefer a v5 which includes these changes.
I know this patchset is pending and reworked many times,
so I'll try to integrate v5 with top priority.
O
2014-09-11 07:52, Hiroshi Shimamoto:
> @@ -408,9 +408,9 @@ retry:
>
> rte_compiler_barrier();
> p->status = MEMNIC_PKT_ST_FILLED;
> -
> - rte_pktmbuf_free(tx_pkts[nr]);
> }
> + for (i = 0; i < nr; i++)
> + rte_pktmbuf_free(tx_pkts[i]);
2014-09-11 07:48, Hiroshi Shimamoto:
> x86 can keep store ordering with standard operations.
Are we sure it's always the case (including old 32-bit CPU)?
I would prefer to have a reference here. I know we already discussed
this kind of thing, but having a reference in the commit log could help
for fut
2014-09-11 07:47, Hiroshi Shimamoto:
> Do not touch pktmbuf directly.
>
> Instead of direct access, use rte_pktmbuf_pkt_len() and rte_pktmbuf_data_len()
> to access the property.
I guess this change is for compatibility with DPDK 1.8.
Does it have an impact on performance?
--
Thomas
2014-09-11 07:47, Hiroshi Shimamoto:
> Because these assignments are done in rte_pktmbuf_alloc(), get rid of them.
Is it improving the performance?
--
Thomas
Hi Hiroshi,
2014-09-11 07:46, Hiroshi Shimamoto:
> master |<- put packets ->| |<- get packets ->|
> slave | |<- rx packets ->|<- tx packets ->| |
> |<- set ->|
>
> Measuring how many sets occur in a certain period represents
> the MEMNIC PMD pe
On Wed, Sep 24, 2014 at 07:38:49PM +, Saha, Avik (AWS) wrote:
> Hey guys
> Is it safe to add an entry to the rte_table_hash while the pipeline is
> being run - for instance if I were to try and add an entry on a port reader
> action when the packet enters the pipeline?
>
> Thanks
> Avik
>
On Sep 24, 2014, at 10:20 AM, Thomas Monjalon
wrote:
> 2014-09-11 07:52, Hiroshi Shimamoto:
>> @@ -408,9 +408,9 @@ retry:
>>
>> rte_compiler_barrier();
>> p->status = MEMNIC_PKT_ST_FILLED;
>> -
>> - rte_pktmbuf_free(tx_pkts[nr]);
>> }
>> + for (i =
Hi Michal,
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Michal Jastrzebski
> Sent: Tuesday, September 23, 2014 4:02 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] [PATCH] Change alarm cancel function to thread-safe.
>
> It eliminates a race between thread
From: Reshma Pattan
A new sample app that shows the usage of the distributor library. This
app works as follows:
* An RX thread runs which pulls packets from each ethernet port in turn
and passes those packets to workers using a distributor component.
* The workers take the packets in turn, and
Hi,
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Huawei Xie
> Sent: Friday, September 12, 2014 6:55 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] [PATCH v4 0/5] lib/librte_vhost: user space vhost cuse
> driver library
>
> This set of patches transforms a
Hi Thomas,
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Thomas Monjalon
> Sent: Wednesday, September 24, 2014 5:32 PM
> To: Fu, JingguoX
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3] virtio: Support mergeable buffer in virtio
> pmd
>
> Hi Jin
On Thu, Sep 18, 2014 at 03:14:01PM -0400, Neil Horman wrote:
> On Thu, Sep 18, 2014 at 08:23:36PM +0200, Thomas Monjalon wrote:
> > Hi Neil,
> >
> > 2014-09-15 15:23, Neil Horman:
> > > The DPDK ABI develops and changes quickly, which makes it difficult for
> > > applications to keep up with the l
On Wed, Sep 24, 2014 at 01:10:32PM -0700, Malveeka Tewari wrote:
>
> There is already a rump-kernel based TCP/IP stack for DPDK
> https://github.com/rumpkernel/dpdk-rumptcpip/.
...
> But these solutions are again too heavy weight.
Try using this along with https://github.com/rumpkernel/rumprun-p
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Declan Doherty
> Sent: Tuesday, September 23, 2014 2:18 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] [PATCH v3 4/5] bond: lsc polling support
>
> Adds link status polling functionality to bonding device as w
Hi all,
I've been trying to run unmodified applications with the DPDK framework.
I used the KNI module and while it allowed me to run stock applications
with DPDK infrastructure, it was not optimized for performance.
There is already a rump-kernel based TCP/IP stack for DPDK
https://github.com/r
Hi,
How about the following patch for the next DPDK release?
Thanks,
Saori
2014-09-05 19:10 GMT+09:00 Saori USAMI :
> The pkt.in_port parameter in mbuf should be set with an input port id
> because DPDK apps may use it to know where each packet came from.
>
> Signed-off-by: Saori USAMI
> ---
>
This patch is related to the discussion from the mode 4 link bonding patch set.
Best regards
Michal
-----Original Message-----
From: Jastrzebski, MichalX K
Sent: Tuesday, September 23, 2014 5:02 PM
To: dev at dpdk.org
Cc: Jastrzebski, MichalX K; Wodkowski, PawelX
Subject: [PATCH] Change alarm cancel fun
Hi Jingguo,
2014-09-24 09:22, Fu, JingguoX:
> Tested-by: Jingguo Fu
>
> This patch includes 1 file and has been tested by Intel.
> Please see the following information:
>
> Host:
> Fedora 19 x86_64, Linux Kernel 3.9.0, GCC 4.8.2 Intel Xeon CPU E5-2680 v2 @
> 2.80GHz
> NIC: Intel Niantic
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Xie, Huawei
> Sent: Wednesday, September 24, 2014 6:58 PM
> To: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] examples/vmdq: support i40e in vmdq example
>
> This patch depends on "[dpdk-dev] [PATCH 0/6] i4
This patch depends on "[dpdk-dev] [PATCH 0/6] i40e VMDQ support"
> -----Original Message-----
> From: Xie, Huawei
> Sent: Wednesday, September 24, 2014 6:54 PM
> To: dev at dpdk.org
> Cc: Xie, Huawei
> Subject: [PATCH] examples/vmdq: support i40e in vmdq example
>
> This patch supports i40e in vm
I am not getting a useful core -
I just get the error message "PIPELINE: rte_pipeline_table_create: Table
creation failed" on the command line.
So I played around with the action_data_size and for some reason, the
application comes up fine if I specify it as sizeof(struct
rte_pipeline_table
Tested-by: Jingguo Fu
This patch includes 1 file and has been tested by Intel.
Please see the following information:
Host:
Fedora 19 x86_64, Linux Kernel 3.9.0, GCC 4.8.2 Intel Xeon CPU E5-2680 v2 @
2.80GHz
NIC: Intel Niantic 82599, Intel i350, Intel 82580 and Intel 82576
Guest:
Fedora 1
Tested-by: Jingguo Fu
This patch includes 1 file and has been tested by Intel.
Please see the following information:
Host:
Fedora 19 x86_64, Linux Kernel 3.9.0, GCC 4.8.2 Intel Xeon CPU E5-2680 v2 @
2.80GHz
NIC: Intel Niantic 82599, Intel i350, Intel 82580 and Intel 82576
Guest:
Fedora
Tested-by: Jingguo Fu
This patch includes 1 file and has been tested by Intel.
Please see the following information:
Host:
Fedora 19 x86_64, Linux Kernel 3.9.0, GCC 4.8.2 Intel Xeon CPU E5-2680 v2 @
2.80GHz
NIC: Intel Niantic 82599, Intel i350, Intel 82580 and Intel 82576
Guest:
Fedora 1
Tested-by: Xiaonan Zhang
This patch includes five files and has been tested by Intel.
Please see the following information:
Host:
Fedora 20 x86_64, Linux Kernel 3.11.10-301.fc20.x86_64, GCC 4.8.3 20140624
Intel Xeon CPU E5-2680 v2 @ 2.80GHz
NIC: Intel Niantic 82599, Intel i350, Intel 8258
> -----Original Message-----
> From: Neil Horman [mailto:nhorman at tuxdriver.com]
> Sent: Tuesday, September 23, 2014 6:03 PM
> To: Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 3/5] testpmd: Change rxfreet default to 32
>
> On Tue, Sep 23, 2014 at 12:08:15PM +0100,
On Wed, Sep 24, 2014 at 09:37:07AM +, Saha, Avik (AWS) wrote:
> I am not getting a useful core -
> I just get the error message "PIPELINE: rte_pipeline_table_create: Table
> creation failed" on the command line
>
> So I played around with the action_data_size and for some reason, the
> appl
On Wed, Sep 24, 2014 at 09:03:20AM +, Richardson, Bruce wrote:
> > -----Original Message-----
> > From: Neil Horman [mailto:nhorman at tuxdriver.com]
> > Sent: Tuesday, September 23, 2014 6:03 PM
> > To: Richardson, Bruce
> > Cc: dev at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2 3/5] testpm
Tested-by: Min Cao
I have tested this patch with Fortville. The flow director is tested with
2*40G, 1*40G and 4*10G NICs.
-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Jingjing Wu
Sent: Wednesday, August 27, 2014 10:14 AM
To: dev at dpdk.org
Subject: [dpdk-dev] [PA