In order to get a more accurate cntvct_el0 reading,
SW must invoke isb.
Fixes: ccad39ea0712 ("eal/arm: add cpu cycle operations for ARMv8")
Cc: sta...@dpdk.org
Reviewed-by: David Marchand
Reviewed-by: Jerin Jacob
Reviewed-by: Gavin Hu
Signed-off-by: Haifeng Lin
---
lib/librte_eal/common/in
> > +static inline void
> > +isb(void)
> > +{
> > + asm volatile("isb" : : : "memory");
> > +}
>
> NAK.
>
> Don't export badly named stuff like this.
>
Should I just use asm volatile("isb" : : : "memory") in rte_rdtsc_precise, or which file
should I use to define this macro?
> > +
> > +static inline voi
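For illustration, a minimal sketch of the change under discussion, assuming
rte_rdtsc() reads cntvct_el0 as in the existing arm64 header (this is a sketch,
not the merged patch):

static inline uint64_t
rte_rdtsc(void)
{
	uint64_t tsc;

	/* read the arm64 virtual counter */
	asm volatile("mrs %0, cntvct_el0" : "=r" (tsc));
	return tsc;
}

static inline uint64_t
rte_rdtsc_precise(void)
{
	/* isb flushes the pipeline so the counter read cannot be
	 * speculated ahead of earlier instructions */
	asm volatile("isb" : : : "memory");
	return rte_rdtsc();
}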
> -Original Message-
> From: Jerin Jacob [mailto:jerinjac...@gmail.com]
> Sent: Tuesday, March 10, 2020 6:47 PM
> To: Linhaifeng
> Cc: Gavin Hu ; dev@dpdk.org; tho...@monjalon.net;
> chenchanghu ; xudingke
> ; Lilijun (Jerry) ; Honnappa
> Nagarahalli ; Steve Capp
In order to get a more accurate cntvct_el0 reading,
SW must invoke isb and arch_counter_enforce_ordering.
Reference from the Linux kernel:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/arch_timer.h?h=v5.5#n220
Fixes: ccad39ea0712 ("eal/arm: add cpu cycle operations for ARMv8")
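For reference, the cited kernel header enforces ordering after the read by
faking a data dependency on the counter value; lightly adapted from the
referenced arch_timer.h:

#define arch_counter_enforce_ordering(val) do {			\
	u64 tmp, _val = (val);						\
									\
	/* eor of a value with itself is always 0, so the dummy	\
	 * load below depends on the counter value and later loads	\
	 * cannot be hoisted above the counter read */			\
	asm volatile(							\
	"	eor	%0, %1, %1\n"					\
	"	add	%0, sp, %0\n"					\
	"	ldr	xzr, [%0]"					\
	: "=r" (tmp) : "r" (_val));					\
} while (0)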
From 70acca49c2109ef07e59dd035c5b66d7987d Mon Sep 17 00:00:00 2001
From: Haifeng Lin
Date: Mon, 9 Mar 2020 16:49:10 +0800
Subject: [PATCH] eal/arm64: fix rdtsc precise version
In order to get a more accurate cntvct_el0 reading,
SW must invoke isb and arch_counter_enforce_ordering.
Referenc
In order to get a more accurate cntvct_el0 reading,
SW must invoke isb and arch_counter_enforce_ordering.
Reference from the Linux kernel:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/arch_timer.h?h=v5.5#n220
Fixes: ccad39ea0712 ("eal/arm: add cpu cycle operations for ARMv8")
In order to get a more accurate cntvct_el0 reading,
SW must invoke isb and arch_counter_enforce_ordering.
Reference from the Linux kernel:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/arch_timer.h?h=v5.5#n220
Signed-off-by: Haifeng Lin
---
.../commo
> -Original Message-
> From: Jerin Jacob [mailto:jerinjac...@gmail.com]
> Sent: Tuesday, March 10, 2020 5:03 PM
> To: Linhaifeng
> Cc: Gavin Hu ; dev@dpdk.org; tho...@monjalon.net;
> chenchanghu ; xudingke
> ; Lilijun (Jerry) ; Honnappa
> Nagarahalli ; Steve Capp
We should use isb rather than dsb to sync the system counter to cntvct_el0.
Reference from the Linux kernel:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/arch_timer.h?h=v5.5#n220
Signed-off-by: Haifeng Lin
---
.../common/include/arch/arm/rte_atomic_64.h
> -Original Message-
> From: Gavin Hu [mailto:gavin...@arm.com]
> Sent: Tuesday, March 10, 2020 3:11 PM
> To: Linhaifeng ; dev@dpdk.org;
> tho...@monjalon.net
> Cc: chenchanghu ; xudingke
> ; Lilijun (Jerry) ; Honnappa
> Nagarahalli ; Steve Capper
> ; nd
-Original Message-
From: David Marchand [mailto:david.march...@redhat.com]
Sent: March 9, 2020 17:19
To: Linhaifeng
Cc: dev@dpdk.org; tho...@monjalon.net; Lilijun (Jerry)
; chenchanghu ; xudingke
Subject: Re: [dpdk-dev] [PATCH] cycles: add isb before read cntvct_el0
On Mon, Mar 9, 2020 at 10:14 AM
-Original Message-
From: Jerin Jacob [mailto:jerinjac...@gmail.com]
Sent: March 9, 2020 23:43
To: Linhaifeng
Cc: dev@dpdk.org; tho...@monjalon.net; Lilijun (Jerry)
; chenchanghu ; xudingke
Subject: Re: [dpdk-dev] [PATCH] cycles: add isb before read cntvct_el0
On Mon, Mar 9, 2020 at 2:43 PM Linhaifeng
We should use isb rather than dsb to sync the system counter to cntvct_el0.
Signed-off-by: Linhaifeng
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal
We should use isb rather than dsb to sync the system counter to cntvct_el0.
Signed-off-by: Haifeng Lin
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal/comm
We should use isb rather than dsb to sync the system counter to cntvct_el0.
Signed-off-by: Haifeng Lin
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal/common/
We need isb rather than dsb to sync the system counter to cntvct_el0.
Signed-off-by: Haifeng Lin
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal/common/inc
Hi, Chas
Thank you.
I use it to send packets to the dedicated queue of the slaves.
Maybe I should not use it; I will think of another way.
-Original Message-
From: Chas Williams [mailto:3ch...@gmail.com]
Sent: November 30, 2018 11:27
To: Linhaifeng ; dev@dpdk.org
Cc: ch...@att.com
Subject: Re: [dpdk-dev] [PATCH
packet loss" problem?
-Original Message-
From: Kulasek, TomaszX [mailto:tomaszx.kula...@intel.com]
Sent: December 13, 2017 20:42
To: Linhaifeng ; Doherty, Declan
; dev@dpdk.org
Subject: RE: [dpdk-dev] [PATCH v3 3/4] net/bond: dedicated hw queues for LACP
control traffic
Hi,
> -Original Message-
Hi,
What is the purpose of this patch? To fix a problem or to improve performance?
On 2017/7/5 0:46, Declan Doherty wrote:
> From: Tomasz Kulasek
>
> Add support for hardware flow classification of LACP control plane
> traffic to be redirected to a dedicated receive queue on each slave which
> is not visible
On 2016/1/18 11:05, Zhihong Wang wrote:
> This patch set optimizes DPDK memcpy for AVX512 platforms, to make full
> utilization of hardware resources and deliver high performance.
>
> In current DPDK, memcpy holds a large proportion of execution time in
> libs like Vhost, especially for large packets,
On 2016/12/6 10:28, Yuanhan Liu wrote:
> On Thu, Dec 01, 2016 at 07:42:02PM +0800, Haifeng Lin wrote:
>> When reg_size < page_size the function read in
>> rte_mem_virt2phy would not return, because
>> host_user_addr is invalid.
>>
>> Signed-off-by: Haifeng Lin
>> ---
>> v2:
>> fix TYPO_SPELLING warni
On 2016/10/9 15:27, Yuanhan Liu wrote:
> +static void
> +add_guest_pages(struct virtio_net *dev, struct virtio_memory_region *reg,
> + uint64_t page_size)
> +{
> + uint64_t reg_size = reg->size;
> + uint64_t host_user_addr = reg->host_user_addr;
> + uint64_t guest_phys_addr = r
On 2016/10/9 15:27, Yuanhan Liu wrote:
> + dev->nr_guest_pages = 0;
> + if (!dev->guest_pages) {
> + dev->max_guest_pages = 8;
> + dev->guest_pages = malloc(dev->max_guest_pages *
> + sizeof(struct guest_page));
> + }
> +
From: Linhaifeng
We should not drop slow packets whose subtype is
not marker or LACP, because slow packets have other subtypes
like OAM, OSSP, user-defined, and so on.
Signed-off-by: Linhaifeng
---
drivers/net/bonding/rte_eth_bond_pmd.c | 14 +-
1 file changed, 13 insertions(+), 1
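As an illustration of the idea, a hedged sketch (not the actual patch; the
frame layout follows IEEE 802.3 slow protocols, and the constant and function
names here are hypothetical):

#define ETHER_TYPE_SLOW_PROTOCOL 0x8809 /* IEEE 802.3 slow protocols */
#define SLOW_SUBTYPE_LACP        0x01
#define SLOW_SUBTYPE_MARKER      0x02

/* The subtype is the first byte after the Ethernet header. */
static int
is_lacp_or_marker(const uint8_t *payload)
{
	return payload[0] == SLOW_SUBTYPE_LACP ||
	       payload[0] == SLOW_SUBTYPE_MARKER;
}

/* Only consume LACP/marker frames internally; hand every other
 * slow-protocol subtype (OAM, OSSP, user-defined, ...) up to the
 * application instead of dropping it. */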
Hi, all
Please ignore the patch whose title is "net/bonding: not handle vlan slow
packet";
I will send another one.
On 2016/11/1 20:32, linhaifeng wrote:
> On 2016/11/1 18:46, Ferruh Yigit wrote:
>> Hi Haifeng,
>>
>> On 10/31/2016 3:52 AM, linhaifeng wrote:
>>&g
On 2016/11/1 18:46, Ferruh Yigit wrote:
> Hi Haifeng,
>
> On 10/31/2016 3:52 AM, linhaifeng wrote:
>> From: Haifeng Lin
>>
>> if rx vlan offload is enabled we should not handle vlan slow
>> packets either.
>>
>> Signed-off-by: Haifeng Lin
>> ---
>&
From: Haifeng Lin
if rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/r
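A hedged sketch of the check being described (the offload flag and handler
names are assumptions drawn from DPDK mbuf conventions of that era, not quoted
from the patch):

/* If RX VLAN offload stripped a tag, the frame was VLAN-tagged on
 * the wire, so do not consume it as slow-protocol control traffic. */
if (hdr->ether_type == rte_cpu_to_be_16(0x8809) &&
    !(mbuf->ol_flags & PKT_RX_VLAN_STRIPPED))
	handle_slow_packet(internals, slave_id, mbuf); /* hypothetical */
else
	rx_pkts[num_rx_total++] = mbuf; /* deliver to application */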
From: Haifeng Lin
if rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte
if rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte_eth_bond_pmd.c
index 43334f7..6c74bba 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/r
From: ZengGanghui
if rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte
If rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte_eth_bond_pmd.
On 2016/10/10 16:03, Yuanhan Liu wrote:
> On Sun, Oct 09, 2016 at 06:46:44PM +0800, linhaifeng wrote:
>> On 2016/8/23 16:10, Yuanhan Liu wrote:
>>> The basic idea of Tx zero copy is, instead of copying data from the
>>> desc buf, here we let the mbuf reference the desc buf ad
On 2016/8/23 16:10, Yuanhan Liu wrote:
> The basic idea of Tx zero copy is, instead of copying data from the
> desc buf, here we let the mbuf reference the desc buf addr directly.
Is there a problem when pushing a VLAN tag into an mbuf which references the desc buf addr
directly?
We know if the guest uses virtio_net (k
On 2016/8/7 4:33, Jan Viktorin wrote:
> On Fri, 05 Aug 2016 09:51:06 +0200
> Thomas Monjalon wrote:
>
>> 2016-08-05 09:44, Thomas Monjalon:
>>> 2016-08-05 10:09, linhaifeng:
>>>> hi,thomas
>>>>
>>>> Could you change the name of fil
Hi, Thomas
Could you change the name of the file in the directory
app/test/test_pci_sysfs/bus/pci/devices/ ?
I think somebody like us also can't access the internet in Linux. Windows does
not support file names that
include ':'.
thanks
linhaifeng
On 2016/7/30 21:30, Wiles, Keith wrote:
>> On Jul 30, 2016, at 1:03 AM, linhaifeng wrote:
>>
>> hi
>>
>> I use 6 ports to send pkts in VM, but can only 4 ports work, how to enable
>> more ports to work?
>>
> In the help screen the command 'ppp [1-6]' is p
Hi
I use 6 ports to send packets in a VM, but only 4 ports work. How can I enable
more ports to work?
>
> + if (unlikely(alloc_err)) {
> + uint16_t i = entry_success;
> +
> + m->nb_segs = seg_num;
> + for (; i < free_entries; i++)
> + rte_pktmbuf_free(pkts[entry_success]); ->
> rte_pktmbuf_free(pkts[i]);
> + }
> +
> rte_comp
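The reviewer's suggestion, written out as a self-contained sketch (the
surrounding variables are as in the quoted hunk):

if (unlikely(alloc_err)) {
	uint16_t i;

	m->nb_segs = seg_num;
	/* free each mbuf dequeued so far, not the same slot repeatedly */
	for (i = entry_success; i < free_entries; i++)
		rte_pktmbuf_free(pkts[i]);
}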
On 2015/6/10 16:30, Luke Gorrie wrote:
> On 9 June 2015 at 10:46, Michael S. Tsirkin wrote:
>
>> By the way, similarly, host side must re-check avail idx after writing
>> used flags. I don't see where snabbswitch does it - is that a bug
>> in snabbswitch?
>
>
> Good question.
>
> Snabb Switc
On 2015/6/9 21:34, Xie, Huawei wrote:
> On 6/9/2015 4:47 PM, Michael S. Tsirkin wrote:
>> On Tue, Jun 09, 2015 at 03:04:02PM +0800, Linhaifeng wrote:
>>>
>>> On 2015/4/24 15:27, Luke Gorrie wrote:
>>>> On 24 April 2015 at 03:01, Linhaifeng wrote:
>&g
On 2015/4/24 15:27, Luke Gorrie wrote:
> On 24 April 2015 at 03:01, Linhaifeng wrote:
>
>> If we do not add the memory fence, what would happen? Packet loss or interrupt
>> loss? How to test it?
>>
>
> You should be able to test it like this:
>
> 1. Boot two
On 2014/11/14 17:08, Wang, Zhihong wrote:
> Hi all,
>
> I'd like to propose an update on DPDK memcpy optimization.
> Please see RFC below for details.
>
>
> Thanks
> John
>
> ---
>
> DPDK Memcpy Optimization
>
> 1. Introduction
> 2. Terminology
> 3. Mechanism
> 3.1 Architectural Insight
On 2015/5/13 9:18, Ravi Kerur wrote:
> If you can wait until Thursday I will probably send v3 patch which will
> have full memcmp support.
Ok, I'd like to test it:)
>
> In your program try with volatile pointer and see if it helps.
like "volatile uint8_t *src, *dst" ?
Hi, Ravi Kerur
On 2015/5/9 5:19, Ravi Kerur wrote:
> Preliminary results on Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz, Ubuntu
> 14.04 x86_64 shows comparisons using AVX/SSE instructions taking 1/3rd
> CPU ticks for 16, 32, 48 and 64 bytes comparison. In addition,
I have written a program to test rte_m
On 2015/4/23 0:33, Huawei Xie wrote:
> update of used->idx and read of avail->flags could be reordered.
> memory fence should be used to ensure the order, otherwise guest could see a
> stale used->idx value after it toggles the interrupt suppression flag.
>
> Signed-off-by: Huawei Xie
> ---
>
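A hedged sketch of the fence being discussed, using DPDK's rte_mb(); the
surrounding notify path is assumed, not quoted from the patch:

/* publish the new used index */
*(volatile uint16_t *)&vq->used->idx += count;

rte_mb(); /* order the used->idx store before the avail->flags load */

/* only now check whether the guest suppressed interrupts */
if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
	eventfd_write(vq->callfd, (eventfd_t)1);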
On 2015/4/14 4:25, Marc Sune wrote:
>
>
> On 10/04/15 07:53, Linhaifeng wrote:
>> Hi, all
>>
>> I'm trying to use valgrind to check for memory leaks with my dpdk application
>> but dpdk always fails to mmap hugepages.
>>
>> Without valgri
Hi, all
I'm trying to use valgrind to check for memory leaks with my dpdk application but
dpdk always fails to mmap hugepages.
Without valgrind it works well. How to run dpdk applications with valgrind? Is
there any other way to check for memory leaks
with dpdk applications?
On 2015/3/24 18:06, Xie, Huawei wrote:
> On 3/24/2015 3:44 PM, Linhaifeng wrote:
>>
>> On 2015/3/24 9:53, Xie, Huawei wrote:
>>> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>>>> -Original Messa
On 2015/3/26 15:58, Qiu, Michael wrote:
> On 3/26/2015 3:52 PM, Xie, Huawei wrote:
>> On 3/26/2015 3:05 PM, Qiu, Michael wrote:
>>> Function gpa_to_vva() could return zero, which will lead to
>>> a segmentation fault.
>>>
>>> This patch is to fix this issue.
>>>
>>> Signed-off-by: Michael Qiu
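A hedged sketch of the guard being proposed (the enclosing copy loop is
assumed, not shown in the excerpt):

uint64_t vva = gpa_to_vva(dev, guest_phys_addr);
if (unlikely(vva == 0)) {
	/* invalid guest physical address: bail out rather than
	 * dereference a NULL host virtual address */
	break;
}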
On 2015/3/24 18:06, Xie, Huawei wrote:
> On 3/24/2015 3:44 PM, Linhaifeng wrote:
>>
>> On 2015/3/24 9:53, Xie, Huawei wrote:
>>> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>>>> -Original Messa
On 2015/3/24 15:14, Xie, Huawei wrote:
> On 3/22/2015 8:08 PM, Ouyang, Changchun wrote:
>>
>>> -Original Message-
>>> From: linhaifeng [mailto:haifeng.lin at huawei.com]
>>> Sent: Saturday, March 21, 2015 9:47 AM
>>> To: dev at dpdk.org
>&
On 2015/3/24 9:53, Xie, Huawei wrote:
> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>
>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>
>>>> -----Original Message-
>>>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>>>> Sent: Mon
On 2015/3/23 20:54, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Monday, March 23, 2015 8:24 PM
>> To: dev at dpdk.org
>> Cc: Ouyang, Changchun; Xie, Huawei
>> Subject: Re: [dp
On 2015/3/21 16:07, linhaifeng wrote:
> From: Linhaifeng
>
> Same as rte_vhost_enqueue_burst we should cast used->idx
> to volatile before notifying the guest.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_vhost/vhost_rxtx.c | 2 +-
> 1 file changed, 1 inserti
cc changchun.ouyang at intel.com
cc huawei.xie at intel.com
On 2015/3/21 16:07, linhaifeng wrote:
> From: Linhaifeng
>
> Same as rte_vhost_enqueue_burst we should cast used->idx
> to volatile before notifying the guest.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_vho
cc changchun.ouyang at intel.com
cc huawei.xie at intel.com
On 2015/3/21 9:47, linhaifeng wrote:
> From: Linhaifeng
>
> When we fail to malloc a buffer from the mempool we just update last_used_idx but
> not used->idx, so after many times vhost thinks it has handled all packets
> but
From: Linhaifeng
Same as rte_vhost_enqueue_burst we should cast used->idx
to volatile before notifying the guest.
Signed-off-by: Linhaifeng
---
lib/librte_vhost/vhost_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vh
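The change amounts to a one-line volatile store before the notification; a
hedged sketch (vq is the vhost virtqueue, assumed from context):

/* force a real store to used->idx before notifying the guest,
 * so the compiler cannot defer or elide the update */
*(volatile uint16_t *)&vq->used->idx = vq->last_used_idx;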
Hi, Changchun & Xie
I have modified the patch with your suggestions. Please review.
Thank you.
On 2015/3/20 15:28, Ouyang, Changchun wrote:
>
>
>> -Original Message-
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Friday, March 20, 2015 2:36 PM
On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng
>>
>> If we fail to alloc an mbuf ring_size times, the rx_q may be empty and can't
>> receive any packets forever because nb_used is 0 forever.
> Agreed. In curr
From: Linhaifeng
When we fail to malloc a buffer from the mempool we just update last_used_idx but
not used->idx, so after many times vhost thinks it has handled all packets
but virtio_net thinks vhost has not handled all packets and will not
update avail->idx.
Signed-off-by: Linhaifeng
--
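A hedged sketch of the failure branch being fixed (simplified; the real
vhost_rxtx.c bookkeeping has more fields):

if (unlikely(m == NULL)) {
	/* allocation failed: still publish the descriptors consumed
	 * so far, otherwise the guest never sees progress and stops
	 * updating avail->idx */
	*(volatile uint16_t *)&vq->used->idx = vq->last_used_idx;
	break;
}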
On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng
>>
>> If we fail to alloc an mbuf ring_size times, the rx_q may be empty and can't
>> receive any packets forever because nb_used is 0 forever.
> Agreed. In curr
From: Linhaifeng
If we fail to alloc an mbuf ring_size times, the rx_q may be empty and can't
receive any packets forever because nb_used is 0 forever,
so we should try to refill when nb_used is 0. After someone else frees an mbuf
we can restart receiving packets.
Signed-off-by: Linhaifeng
---
Sorry for my wrong title. Please ignore it.
On 2015/3/20 17:10, linhaifeng wrote:
> From: Linhaifeng
>
> so we should try to refill when nb_used is 0. After someone else frees an mbuf
> we can restart receiving packets.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_pmd_
From: Linhaifeng
so we should try to refill when nb_used is 0. After someone else frees an mbuf
we can restart receiving packets.
Signed-off-by: Linhaifeng
---
lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c
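A hedged sketch of the refill path being proposed (virtqueue_refill() is a
hypothetical helper standing in for the driver's allocate-and-requeue logic):

nb_used = VIRTQUEUE_NUSED(rxvq);
if (nb_used == 0) {
	/* the ring may have drained because every previous mbuf
	 * allocation failed; try to refill now so reception can
	 * resume once mbufs are available again */
	virtqueue_refill(rxvq);
	return 0;
}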
From: Linhaifeng
When we fail to malloc a buffer from the mempool we just update last_used_idx but
not used->idx, so after many times vhost thinks it has handled all packets
but virtio_net thinks vhost has not handled all packets and will not
update avail->idx.
Signed-off-by: Linhaifeng
--
On 2015/3/20 11:54, linhaifeng wrote:
> From: Linhaifeng
>
> When we fail to malloc a buffer from the mempool we just update last_used_idx but
> not used->idx, so after many times vhost thinks it has handled all packets
> but virtio_net thinks vhost has not handled all packets and
From: Linhaifeng
When we fail to malloc a buffer from the mempool we just update last_used_idx but
not used->idx, so after many times vhost thinks it has handled all packets
but virtio_net thinks vhost has not handled all packets and will not
update avail->idx.
Signed-off-by: Linhaifeng
--
On 2015/2/12 17:28, Xie, Huawei wrote:
> On 2/12/2015 4:28 PM, Linhaifeng wrote:
>>
>> On 2015/2/12 13:07, Huawei Xie wrote:
>>> +
>>> + /* This is ugly */
>>> + mapped_size = memory.regions[idx].memory_size +
>>>
On 2015/2/12 13:07, Huawei Xie wrote:
> +
> + /* This is ugly */
> + mapped_size = memory.regions[idx].memory_size +
> + memory.regions[idx].mmap_offset;
> + mapped_address = (uint64_t)(uintptr_t)mmap(NULL,
> + mapped_siz
Hi, Xie
Does librte_vhost support openvswitch?
How do I attach the vhost_device_ctx to the port of openvswitch?
On 2015/1/26 11:20, Huawei Xie wrote:
> v2 changes:
> make fdset num field reflect the current number of fds vhost server manages
> allocate context for connected fd in vserver_new_vq_con
/lib/librte_vhost/vhost_user/virtio-net-user.c:104:
error: missing initializer
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/virtio-net-user.c:104:
error: (near initialization for 'tmp[0].mapped_address')
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at
vdev, struct rte_mbuf *m)
{
...
ret = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, &m, 1 /* you can't try to
fill with rx_count */);
..
}
>
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Q
/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir
/dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Qian
Hi,
I used l2fwd to test the ixgbe PMD's latency (packet length is 64 bytes) and
found an interesting thing: latency is about 22us when the tx bit rate is 4M
and latency is 103us when the tx bit rate is 5M.
Who can tell me why? Is it a bug?
Thank you very much!
--
Regards,
Haifeng
On 2015/2/4 9:38, Xu, Qian Q wrote:
> 4. Launch the VM1 and VM2 with virtio device, note: you need to use qemu
> version>2.1 to enable the vhost-user server's feature. Old qemu such as
> 1.5,1.6 didn't support it.
> Below is my VM1 startup command, for your reference, similar for VM2.
> /home/qem
On 2015/2/5 20:00, Damjan Marion (damarion) wrote:
> Hi,
>
> I have system with 2 NUMA nodes and 256G RAM total. I noticed that DPDK
> crashes in rte_eal_init()
> when number of available hugepages is around 4 or above.
> Everything works fine with lower values (i.e. 3).
>
> I also tri
On 2015/1/28 17:51, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Tuesday, January 27, 2015 3:57 PM
>> To: dev at dpdk.org; Michael S. Tsirkin
>> Cc: lilijun;
On 2015/1/27 17:37, Michael S. Tsirkin wrote:
> On Tue, Jan 27, 2015 at 03:57:13PM +0800, Linhaifeng wrote:
>> Hi, all
>>
>> I use vhost-user to send data to a VM; at first it works well but after
>> many hours the VM cannot receive data but can send data.
>>
return count;
}
thank you very much!
On 2015/1/27 15:57, Linhaifeng wrote:
> Hi, all
>
> I use vhost-user to send data to a VM; at first it works well but after many
> hours the VM cannot receive data but can send data.
>
> (gdb)p avail_idx
> $4 = 2668
> (gdb)p free_entr
On 2015/2/1 18:36, Tetsuya Mukawa wrote:
> This patch should be put on "lib/librte_vhost: vhost-user support"
> patch series written by Xie, Huawei.
>
> There are 2 type of vhost devices. One is cuse, the other is vhost-user.
> So far, one of them we can use. To use the other, DPDK is needed to b
to virtio2, the problem is
> that after 3 hours, virtio2 can't receive packets, but virtio1 is still
> sending packets, am I right? So mz is like a packet generator to send
> packets, right?
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk
: 748232 kB
>> Unevictable:3704 kB
>> Mlocked:3704 kB
>> SwapTotal: 16686076 kB
>> SwapFree: 16686076 kB
>> Dirty: 488 kB
>> Writeback: 0 kB
>> AnonPages:230800 kB
>> Mapped:
On 2015/1/30 19:40, zhangsha (A) wrote:
> Hi, all
>
> I am suffering from the following mmap failure when initializing the dpdk eal.
>
> Fri Jan 30 09:03:29 2015:EAL: Setting up memory...
> Fri Jan 30 09:03:34 2015:EAL: map_all_hugepages(): mmap failed: Cannot
> allocate memory
> Fri Jan 30 09:03
On 2015/1/26 11:20, Huawei Xie wrote:
> In virtnet_send_command:
>
> /* Caller should know better */
> BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ) ||
> (out + in > VIRTNET_SEND_COMMAND_SG_MAX));
>
> Signed-off-by: Huawei Xie
> ---
> lib/librte_vhost/vi
et generator to send
> packets, right?
Yes, you are right.
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Linhaifeng
> Sent: Thursday, January 29, 2015 9:51 PM
> To: Xie, Huawei; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PA
On 2015/1/30 0:48, Srinivasreddy R wrote:
> EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
> for that size
Maybe you haven't mounted hugetlbfs.
--
Regards,
Haifeng
On 2015/1/29 21:00, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Thursday, January 29, 2015 8:39 PM
>> To: Xie, Huawei; dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify
On 2015/1/29 18:39, Xie, Huawei wrote:
>> -if (count == 0)
>> +/* If there are no buffers we should notify the guest to fill.
>> + * This is needed when the guest uses the virtio_net driver (not pmd).
>> + */
>> +if (count == 0) {
>> +
From: Linhaifeng
If we find there is no buffer we should notify virtio_net to
fill buffers.
We use mz to send buffers from VM to VM, and found that the other VM
stops receiving data after many hours.
Signed-off-by: Linhaifeng
---
lib/librte_vhost/vhost_rxtx.c | 9 +++--
1 file changed, 7
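A hedged sketch of the proposed behavior, matching the quoted hunk above
(VRING_AVAIL_F_NO_INTERRUPT is the standard vring flag; the surrounding
function is assumed):

if (count == 0) {
	/* no avail buffers: a guest using the virtio_net kernel
	 * driver (not a PMD) may be waiting for an interrupt, so
	 * kick it to make it refill the ring */
	if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
		eventfd_write(vq->callfd, (eventfd_t)1);
	return 0;
}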
Hi, all
I use vhost-user to send data to a VM; at first it works well but after many
hours the VM cannot receive data but can send data.
(gdb)p avail_idx
$4 = 2668
(gdb)p free_entries
$5 = 0
(gdb)l
/* check that we have enough buffers */
if (unlikely(count > free_entries))
On 2014/12/19 2:07, ciara.loftus at intel.com wrote:
> From: Ciara Loftus
>
> This patch fixes the issue whereby when using userspace vhost ports
> in the context of vSwitching, the name provided to the hypervisor/QEMU
> of the vhost tap device needs to be exposed in the library, in order
Who
>>
>> Can you mmap the region if gpa is 0? When I run a VM with two numa nodes (qemu
>> will create two hugepage files) I found that mmap always fails for the
>> region
>> whose gpa is 0.
>>
>> BTW can we ensure the memory regions cover all the hugepage memory
>> of the VM?
>>
> We had disc
Hi, Xie
could you test vhost-user with the following numa node xml:
2097152
I can't receive data from the VM with the above xml.
On 2014/12/11 5:37, Huawei Xie wrote:
> This patchset refines vhost library to support both vhost-cuse and vhost-user.
>
>
> Huawei Xie (12):
> cr
On 2015/1/23 11:40, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Thursday, December 11, 2014 1:36 PM
>> To: Xie, Huawei; dev at dpdk.org
>> Cc: haifeng.lin at intel.com
>> Subjec
On 2015/1/22 23:21, Bruce Richardson wrote:
> This (size_c) is a run-time constant, not a compile-time constant. To trigger
> the
> memcpy optimizations inside the compiler, the size value must be constant at
> compile time.
Hi, Bruce
You are right. When using a compile-time constant, memcpy is fa
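To illustrate Bruce's point, a hedged example (not from the thread):

#include <string.h>

void copy_fixed(char *dst, const char *src)
{
	/* size known at compile time: compilers expand this inline,
	 * typically into a few vector or scalar loads/stores */
	memcpy(dst, src, 64);
}

void copy_variable(char *dst, const char *src, size_t n)
{
	/* size only known at run time: a generic copy is used, which
	 * is why the rte_memcpy macro tests __builtin_constant_p(n) */
	memcpy(dst, src, n);
}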
On 2015/1/22 19:34, Bruce Richardson wrote:
> On Thu, Jan 22, 2015 at 07:23:49PM +0900, Tetsuya Mukawa wrote:
>> On 2015/01/22 16:35, Matthew Hall wrote:
>>> On Thu, Jan 22, 2015 at 01:32:04PM +0800, Linhaifeng wrote:
>>>> Do you mean if call rte_memcpy before
On 2015/1/22 12:45, Matthew Hall wrote:
> One theory. Many DPDK functions crash if they are called before
> rte_eal_init()
> is called. So perhaps this could be a cause, since that won't have been
> called
> when working on a constant
Hi, Matthew
Thank you for your response.
Do you mean if
#define rte_memcpy(dst, src, n) \
((__builtin_constant_p(n)) ? \
memcpy((dst), (src), (n)) : \
rte_memcpy_func((dst), (src), (n)))
Why call memcpy when n is a constant?
Can I change them to the following code?
#define rte_memcpy(dst, sr
On 2014/12/12 1:13, Xie, Huawei wrote:
>>
>> Is only one vhost-user port supported?
>
> Do you mean vhost server by "port"?
> If that is the case, yes, now only one vhost server is supported for multiple
> virtio devices.
> As stated in the cover letter, we have requirement and plan for multiple
>