I have tested the VFIO driver against the IGB_UIO driver with l2fwd many times.
I find that the VFIO driver's performance is no better than IGB_UIO's.
Is something wrong with my test? My test is as follows:
1. bind two 82599 NICs to vfio-pci: ./tools/dpdk_nic_bind.py -b vfio-pci
03:00.0 03:00.1
2
Thank you very much.
My CPU is "Intel(R) Xeon(R) CPU E5620 @ 2.40GHz"
--
From: Vincent JARDIN [mailto:vincent.jardin at 6wind.com]
Sent: 2014-08-08 15:46
To: Linhaifeng
Cc: dev at dpdk.org; lixiao (H); Guofeng (E)
Subject: Re: [dpdk-dev] Is VFIO driver's per
On 2014/12/11 5:37, Huawei Xie wrote:
> vhost-user support
>
>
> Signed-off-by: Huawei Xie
> ---
> lib/librte_vhost/Makefile | 5 +-
> lib/librte_vhost/vhost-net.h | 4 +
> lib/librte_vhost/vhost_cuse/virtio-net-cdev.c | 9 +
> lib/librte_vhost/vhost
On 2014/12/12 1:13, Xie, Huawei wrote:
>>
>> Only support one vhost-user port ?
>
> Do you mean vhost server by "port"?
> If that is the case, yes, now only one vhost server is supported for multiple
> virtio devices.
> As stated in the cover letter, we have requirement and plan for multiple
>
On 2015/5/13 9:18, Ravi Kerur wrote:
> If you can wait until Thursday I will probably send v3 patch which will
> have full memcmp support.
OK, I'd like to test it :)
>
> In your program try with volatile pointer and see if it helps.
like "volatile uint8_t *src, *dst" ?
Will DPDK provide a vhost-user lib for the vhost-user backend of QEMU?
On 2014/9/12 18:55, Huawei Xie wrote:
> The build of vhost lib requires fuse development package. It is turned off by
> default so as not to break DPDK build.
>
> Signed-off-by: Huawei Xie
> Acked-by: Konstantin Ananyev
> Ac
When will it be published?
On 2014/8/26 19:05, Xie, Huawei wrote:
> Hi all:
> We are implementing qemu official vhost-user interface into DPDK vhost
> library, so there would be two coexisting implementations for user space
> vhost backend.
> Pro and cons in my mind:
> Existing solution:
> Pros: works
Hi, all
I'm trying to use valgrind to check for memory leaks in my DPDK application but
DPDK always fails to mmap hugepages.
Without valgrind it works well. How can I run DPDK applications under valgrind?
Is there any other way to check for memory leaks
with DPDK applications?
On 2015/4/14 4:25, Marc Sune wrote:
>
>
> On 10/04/15 07:53, Linhaifeng wrote:
>> Hi, all
>>
>> I'am trying to use valgrind to check memory leak with my dpdk application
>> but dpdk always failed to mmap hugepages.
>>
>> Without valgri
On 2015/4/23 0:33, Huawei Xie wrote:
> update of used->idx and read of avail->flags could be reordered.
> memory fence should be used to ensure the order, otherwise guest could see a
> stale used->idx value after it toggles the interrupt suppression flag.
>
> Signed-off-by: Huawei Xie
> ---
>
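A minimal sketch of the ordering bug this patch fixes, with simplified stand-ins for the librte_vhost vring types and rte_mb() modeled by __sync_synchronize():

#include <stdint.h>

/* Simplified vring structures; field names follow the virtio spec. */
struct vring_used  { uint16_t flags; uint16_t idx; };
struct vring_avail { uint16_t flags; uint16_t idx; };
#define VRING_AVAIL_F_NO_INTERRUPT 1

static inline int
publish_and_check_notify(struct vring_used *used, struct vring_avail *avail,
			 uint16_t new_used_idx)
{
	/* Publish the new used index to the guest. */
	*(volatile uint16_t *)&used->idx = new_used_idx;

	/* Full barrier (rte_mb() in DPDK): without it the load of
	 * avail->flags below may be reordered before the store above, so a
	 * guest that just toggled interrupt suppression could still see a
	 * stale used->idx. */
	__sync_synchronize();

	return !(avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
}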
On 2016/12/6 10:28, Yuanhan Liu wrote:
> On Thu, Dec 01, 2016 at 07:42:02PM +0800, Haifeng Lin wrote:
>> When reg_size < page_size the function read in
>> rte_mem_virt2phy would not return, because
>> host_user_addr is invalid.
>>
>> Signed-off-by: Haifeng Lin
>> ---
>> v2:
>> fix TYPO_SPELLING warni
If RX VLAN offload is enabled we should not handle VLAN slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte_eth_bond_pmd.
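For context on these bonding fixes, here is a hedged sketch of the kind of classification involved; the helper name and exact condition are illustrative, not the merged code. With RX VLAN offload enabled the stripped tag is reported in mbuf->vlan_tci, so a tagged frame should not be consumed as LACP slow-protocol traffic:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ether.h>

#define SLOW_SUBTYPE_LACP   1
#define SLOW_SUBTYPE_MARKER 2

/* Illustrative: treat a frame as slow-protocol (LACP/marker) traffic
 * only when it carries no VLAN tag; vlan_tci != 0 means the tag was
 * stripped by hardware and the frame must take the normal RX path. */
static inline int
is_lacp_packet(uint16_t ether_type_be, uint8_t subtype, uint16_t vlan_tci)
{
	return vlan_tci == 0 &&
	       ether_type_be == rte_cpu_to_be_16(ETHER_TYPE_SLOW) &&
	       (subtype == SLOW_SUBTYPE_LACP || subtype == SLOW_SUBTYPE_MARKER);
}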
From: ZengGanghui
If RX VLAN offload is enabled we should not handle VLAN slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte
If RX VLAN offload is enabled we should not handle VLAN slow
packets either.
Signed-off-by: Haifeng Lin
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte_eth_bond_pmd.c
index 43334f7..6c74bba 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/r
From: Haifeng Lin
If RX VLAN offload is enabled we should not handle VLAN slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte
From: Haifeng Lin
If RX VLAN offload is enabled we should not handle VLAN slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/r
>
> + if (unlikely(alloc_err)) {
> + uint16_t i = entry_success;
> +
> + m->nb_segs = seg_num;
> + for (; i < free_entries; i++)
> + rte_pktmbuf_free(pkts[entry_success]); ->
> rte_pktmbuf_free(pkts[i]);
> + }
> +
> rte_comp
to virtio2, the problem is
> that after 3 hours, virtio2 can't receive packets, but virtio1 is still
> sending packets, am I right? So mz is like a packet generator to send
> packets, right?
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk
On 2015/2/1 18:36, Tetsuya Mukawa wrote:
> This patch should be put on "lib/librte_vhost: vhost-user support"
> patch series written by Xie, Huawei.
>
> There are 2 type of vhost devices. One is cuse, the other is vhost-user.
> So far, one of them we can use. To use the other, DPDK is needed to b
Hi, Chas
Thank you.
I use it to send packets to the dedicated queue of the slaves.
Maybe I should not use it; I will think of another way.
-----Original Message-----
From: Chas Williams [mailto:3ch...@gmail.com]
Sent: 2018-11-30 11:27
To: Linhaifeng ; dev@dpdk.org
Cc: ch...@att.com
Subject: Re: [dpdk-dev] [PATCH
#define rte_memcpy(dst, src, n) \
((__builtin_constant_p(n)) ? \
memcpy((dst), (src), (n)) : \
rte_memcpy_func((dst), (src), (n)))
Why call memcpy when n is a compile-time constant?
Can I change it to the following code?
#define rte_memcpy(dst, sr
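The answer given later in the thread: when n is a compile-time constant the compiler expands memcpy() inline into a few optimal moves, so the library call never actually happens; rte_memcpy_func() is only worth calling for run-time sizes. A small sketch of the dispatch (the demo function is mine):

#include <stddef.h>
#include <string.h>

void *rte_memcpy_func(void *dst, const void *src, size_t n); /* DPDK routine */

#define rte_memcpy_sketch(dst, src, n)       \
	(__builtin_constant_p(n) ?           \
	 memcpy((dst), (src), (n)) :         \
	 rte_memcpy_func((dst), (src), (n)))

void demo(char *dst, const char *src, size_t runtime_n)
{
	/* Compile-time constant: the compiler inlines the copy. */
	rte_memcpy_sketch(dst, src, 16);

	/* Run-time size: dispatches to the hand-optimized routine. */
	rte_memcpy_sketch(dst, src, runtime_n);
}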
On 2015/1/22 12:45, Matthew Hall wrote:
> One theory. Many DPDK functions crash if they are called before
> rte_eal_init()
> is called. So perhaps this could be a cause, since that won't have been
> called
> when working on a constant
Hi, Matthew
Thank you for your response.
Do you mean if
On 2015/1/22 19:34, Bruce Richardson wrote:
> On Thu, Jan 22, 2015 at 07:23:49PM +0900, Tetsuya Mukawa wrote:
>> On 2015/01/22 16:35, Matthew Hall wrote:
>>> On Thu, Jan 22, 2015 at 01:32:04PM +0800, Linhaifeng wrote:
>>>> Do you mean if call rte_memcpy before
On 2015/1/22 23:21, Bruce Richardson wrote:
> This (size_c) is a run-time constant, not a compile-time constant. To trigger
> the
> memcpy optimizations inside the compiler, the size value must be constant at
> compile time.
Hi, Bruce
You are right. When using a compile-time constant, memcpy is fa
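To make the distinction concrete, a tiny illustration (my own, not from the thread):

#include <string.h>

void constants_demo(char *dst, const char *src, int flag)
{
	/* Compile-time constant: the compiler sees n == 64 and expands
	 * memcpy() into a handful of moves. */
	memcpy(dst, src, 64);

	/* A "const" run-time value: size_c never changes after this line,
	 * but its value is unknown at compile time, so the constant-size
	 * expansion cannot be applied. */
	const size_t size_c = flag ? 64 : 128;
	memcpy(dst, src, size_c);
}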
On 2015/1/23 11:40, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Thursday, December 11, 2014 1:36 PM
>> To: Xie, Huawei; dev at dpdk.org
>> Cc: haifeng.lin at intel.com
>> Subjec
Hi, Xie
Could you test vhost-user with the following NUMA node XML:
2097152
I can't receive data from the VM with the above XML.
On 2014/12/11 5:37, Huawei Xie wrote:
> This patchset refines vhost library to support both vhost-cuse and vhost-user.
>
>
> Huawei Xie (12):
> cr
>>
>> Can you mmap the region if gpa is 0? When i run VM with two numa node (qemu
>> will create two hugepage file) found that always failed to mmap with the
>> region
>> which gpa is 0.
>>
>> BTW can we ensure the memory regions cover with all the memory of hugepage
>> for VM?
>>
> We had disc
On 2014/12/19 2:07, ciara.loftus at intel.com wrote:
> From: Ciara Loftus
>
> This patch fixes the issue whereby when using userspace vhost ports
> in the context of vSwitching, the name provided to the hypervisor/QEMU
> of the vhost tap device needs to be exposed in the library, in order
Who
Hi, all
I use vhost-user to send data to a VM. At first it works well, but after many
hours the VM cannot receive data though it can still send data.
(gdb)p avail_idx
$4 = 2668
(gdb)p free_entries
$5 = 0
(gdb)l
/* check that we have enough buffers */
if (unlikely(count > free_entries))
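For readers of the gdb session above, a simplified sketch of how the enqueue path derives free_entries (types and names simplified from lib/librte_vhost/vhost_rxtx.c). If vhost stops advancing used->idx, the guest stops posting buffers and free_entries stays pinned at 0:

#include <stdint.h>

struct vring_avail { uint16_t flags; uint16_t idx; };

/* free_entries is the distance between the guest's avail index and the
 * last index vhost has consumed. */
static uint32_t
clamp_to_free_entries(const struct vring_avail *avail,
		      uint16_t last_used_idx, uint32_t count)
{
	uint16_t avail_idx = *(volatile const uint16_t *)&avail->idx;
	uint16_t free_entries = (uint16_t)(avail_idx - last_used_idx);

	/* check that we have enough buffers */
	if (count > free_entries)
		count = free_entries;
	return count;
}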
From: Linhaifeng
If we find there is no buffer we should notify virtio_net to
fill buffers.
We use mz to send packets from VM to VM and found that the other VM
stops receiving data after many hours.
Signed-off-by: Linhaifeng
---
lib/librte_vhost/vhost_rxtx.c | 9 +++--
1 file changed, 7
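A minimal sketch of the idea in this patch, assuming the backend kicks the guest through the virtqueue's callfd eventfd (names simplified):

#include <stdint.h>
#include <sys/eventfd.h>

#define VRING_AVAIL_F_NO_INTERRUPT 1

struct vring_avail { uint16_t flags; uint16_t idx; };

/* If the avail ring is empty (count == 0), notify the guest so a kernel
 * virtio_net driver, which sleeps unlike a polling PMD, wakes up and
 * posts fresh RX buffers. */
static void
notify_guest_if_no_buffers(uint32_t count, const struct vring_avail *avail,
			   int callfd)
{
	if (count == 0 && !(avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
		eventfd_write(callfd, (eventfd_t)1);
}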
On 2015/1/29 18:39, Xie, Huawei wrote:
>> -if (count == 0)
>> +/* If there is no buffers we should notify guest to fill.
>> +* This is need when guest use virtio_net driver(not pmd).
>> +*/
>> +if (count == 0) {
>> +
On 2015/1/29 21:00, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Thursday, January 29, 2015 8:39 PM
>> To: Xie, Huawei; dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify
On 2015/1/30 0:48, Srinivasreddy R wrote:
> EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
> for that size
Maybe you haven't mounted hugetlbfs.
--
Regards,
Haifeng
et generator to send
> packets, right?
Yes, you are right.
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Linhaifeng
> Sent: Thursday, January 29, 2015 9:51 PM
> To: Xie, Huawei; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PA
On 2015/1/26 11:20, Huawei Xie wrote:
> In virtnet_send_command:
>
> /* Caller should know better */
> BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ) ||
> (out + in > VIRTNET_SEND_COMMAND_SG_MAX));
>
> Signed-off-by: Huawei Xie
> ---
> lib/librte_vhost/vi
On 2015/1/30 19:40, zhangsha (A) wrote:
> Hi, all
>
> I am suffering from the mmap failure below when initializing the DPDK EAL.
>
> Fri Jan 30 09:03:29 2015:EAL: Setting up memory...
> Fri Jan 30 09:03:34 2015:EAL: map_all_hugepages(): mmap failed: Cannot
> allocate memory
> Fri Jan 30 09:03
: 748232 kB
>> Unevictable:3704 kB
>> Mlocked:3704 kB
>> SwapTotal: 16686076 kB
>> SwapFree: 16686076 kB
>> Dirty: 488 kB
>> Writeback: 0 kB
>> AnonPages:230800 kB
>> Mapped:
On 2016/1/18 11:05, Zhihong Wang wrote:
> This patch set optimizes DPDK memcpy for AVX512 platforms, to make full
> utilization of hardware resources and deliver high performance.
>
> In current DPDK, memcpy holds a large proportion of execution time in
> libs like Vhost, especially for large packets,
Hi,
What is the purpose of this patch? To fix a problem or to improve performance?
On 2017/7/5 0:46, Declan Doherty wrote:
> From: Tomasz Kulasek
>
> Add support for hardware flow classification of LACP control plane
> traffic to be redirect to a dedicated receive queue on each slave which
> is not visible
p loss" problem?
-Original Message-
From: Kulasek, TomaszX [mailto:tomaszx.kula...@intel.com]
Sent: 2017-12-13 20:42
To: Linhaifeng ; Doherty, Declan
; dev@dpdk.org
Subject: RE: [dpdk-dev] [PATCH v3 3/4] net/bond: dedicated hw queues for LACP
control traffic
Hi,
> -Original Message-
return count;
}
Thank you very much!
On 2015/1/27 15:57, Linhaifeng wrote:
> Hi,all
>
> I use vhost-user to send data to VM at first it cant work well but after many
> hours VM can not receive data but can send data.
>
> (gdb)p avail_idx
> $4 = 2668
> (gdb)p free_entr
On 2015/1/27 17:37, Michael S. Tsirkin wrote:
> On Tue, Jan 27, 2015 at 03:57:13PM +0800, Linhaifeng wrote:
>> Hi,all
>>
>> I use vhost-user to send data to VM at first it cant work well but after
>> many hours VM can not receive data but can send data.
>>
On 2015/1/28 17:51, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Tuesday, January 27, 2015 3:57 PM
>> To: dev at dpdk.org; Michael S. Tsirkin
>> Cc: lilijun;
On 2015/2/5 20:00, Damjan Marion (damarion) wrote:
> Hi,
>
> I have system with 2 NUMA nodes and 256G RAM total. I noticed that DPDK
> crashes in rte_eal_init()
> when number of available hugepages is around 4 or above.
> Everything works fine with lower values (i.e. 3).
>
> I also trie
On 2015/2/4 9:38, Xu, Qian Q wrote:
> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu
> version>2.1 to enable the vhost-user server's feature. Old qemu such as
> 1.5,1.6 didn't support it.
> Below is my VM1 startup command, for your reference, similar for VM2.
> /home/qem
Hi,
I used l2fwd to test the ixgbe PMD's latency (packet length is 64 bytes) and
found an interesting thing: latency is about 22us when the TX bit rate is 4M,
but 103us when the TX bit rate is 5M.
Who can tell me why? Is it a bug?
Thank you very much!
--
Regards,
Haifeng
/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir
/dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Qian
vdev, struct rte_mbuf *m)
{
...
ret = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, &m, 1 /* you can't try to
fill with rx_count */);
..
}
>
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Q
/lib/librte_vhost/vhost_user/virtio-net-user.c:104:
error: missing initializer
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/virtio-net-user.c:104:
error: (near initialization for 'tmp[0].mapped_address')
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at
Hi, Xie
Does librte_vhost support openvswitch?
How do we attach the vhost_device_ctx to a port of openvswitch?
On 2015/1/26 11:20, Huawei Xie wrote:
> v2 changes:
> make fdset num field reflect the current number of fds vhost server manages
> allocate context for connected fd in vserver_new_vq_con
On 2015/2/12 13:07, Huawei Xie wrote:
> +
> + /* This is ugly */
> + mapped_size = memory.regions[idx].memory_size +
> + memory.regions[idx].mmap_offset;
> + mapped_address = (uint64_t)(uintptr_t)mmap(NULL,
> + mapped_siz
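A sketch of what the quoted fragment computes, assuming (per the vhost-user protocol) that mmap_offset is the region's offset inside QEMU's backing hugepage file; alignment handling is omitted:

#include <stdint.h>
#include <sys/mman.h>

/* Map the whole span from file offset 0; the usable region starts
 * mmap_offset bytes into the mapping. */
static void *
map_region_sketch(int fd, uint64_t memory_size, uint64_t mmap_offset)
{
	uint64_t mapped_size = memory_size + mmap_offset;
	void *base = mmap(NULL, mapped_size, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);

	if (base == MAP_FAILED)
		return NULL;
	return (uint8_t *)base + mmap_offset;
}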
On 2015/2/12 17:28, Xie, Huawei wrote:
> On 2/12/2015 4:28 PM, Linhaifeng wrote:
>>
>> On 2015/2/12 13:07, Huawei Xie wrote:
>>> +
>>> + /* This is ugly */
>>> + mapped_size = memory.regions[idx].memory_size +
>>>
From: Linhaifeng
When we fail to malloc a buffer from the mempool we only update last_used_idx
but not used->idx, so after many failures vhost thinks it has handled all
packets while virtio_net thinks vhost has not, and it will never
update avail->idx.
Signed-off-by: Linhaifeng
--
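A minimal sketch of the proposed fix, with simplified types: used->idx must advance together with last_used_idx even for descriptors dropped because mbuf allocation failed, or the guest never sees its buffers consumed:

#include <stdint.h>

struct vring_used { uint16_t flags; uint16_t idx; };

static void
advance_used_idx(struct vring_used *used, uint16_t *last_used_idx,
		 uint16_t consumed)
{
	/* The consumed descriptors are gone either way, so publish the new
	 * index; otherwise the guest waits forever for used->idx to catch
	 * up and stops updating avail->idx. */
	*last_used_idx = (uint16_t)(*last_used_idx + consumed);
	*(volatile uint16_t *)&used->idx = *last_used_idx;
}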
On 2015/3/20 11:54, linhaifeng wrote:
> From: Linhaifeng
>
> When failed to malloc buffer from mempool we just update last_used_idx but
> not used->idx so after many times vhost thought have handle all packets
> but virtio_net thought vhost have not handle all packets and
From: Linhaifeng
so we should try to refill when nb_used is 0. After someone else frees an mbuf
we can resume receiving packets.
Signed-off-by: Linhaifeng
---
lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c
Sorry for my wrong title. Please ignore it.
On 2015/3/20 17:10, linhaifeng wrote:
> From: Linhaifeng
>
> so we should try to refill when nb_used is 0.After otherone free mbuf
> we can restart to receive packets.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_pmd_
From: Linhaifeng
If we fail to alloc an mbuf ring_size times, the rx_q may be empty and can
never receive packets because nb_used stays 0 forever.
So we should try to refill when nb_used is 0. After someone else frees an mbuf
we can resume receiving packets.
Signed-off-by: Linhaifeng
---
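A sketch of the RX-path change being described; the helpers are illustrative prototypes, not the real virtio PMD API:

#include <stdint.h>

struct rxq;                              /* stands in for struct virtqueue */
uint16_t rxq_used_count(struct rxq *q);  /* entries in the used ring */
void     rxq_try_refill(struct rxq *q);  /* retry failed mbuf allocations */
uint16_t rxq_dequeue(struct rxq *q, void **pkts, uint16_t n);

/* If every earlier mbuf allocation failed, nothing is posted in the
 * avail ring and nb_used stays 0 forever; refilling on the empty path
 * lets reception resume once mbufs are freed elsewhere. */
static uint16_t
recv_pkts_sketch(struct rxq *q, void **pkts, uint16_t n)
{
	uint16_t nb_used = rxq_used_count(q);

	if (nb_used == 0) {
		rxq_try_refill(q);
		return 0;
	}
	return rxq_dequeue(q, pkts, n);
}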
On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng
>>
>> If failed to alloc mbuf ring_size times the rx_q may be empty and can't
>> receive any packets forever because nb_used is 0 forever.
> Agreed. In curr
Hi, Changchun & Xie
I have modified the patch per your suggestions. Please review.
Thank you.
On 2015/3/20 15:28, Ouyang, Changchun wrote:
>
>
>> -Original Message-
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Friday, March 20, 2015 2:36 PM
From: Linhaifeng
As in rte_vhost_enqueue_burst, we should cast used->idx
to volatile before notifying the guest.
Signed-off-by: Linhaifeng
---
lib/librte_vhost/vhost_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vh
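The one-line change, sketched with simplified types: the volatile cast forces the store to used->idx to actually reach memory before the guest is kicked, matching what rte_vhost_enqueue_burst already does:

#include <stdint.h>

struct vring_used { uint16_t flags; uint16_t idx; };

static void
publish_used_idx(struct vring_used *used, uint16_t last_used_idx)
{
	/* Without the volatile cast the compiler may cache or defer this
	 * store; the guest reads used->idx right after the kick and must
	 * see the fresh value. */
	*(volatile uint16_t *)&used->idx = last_used_idx;
}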
cc changchun.ouyang at intel.com
cc huawei.xie at intel.com
On 2015/3/21 9:47, linhaifeng wrote:
> From: Linhaifeng
>
> When failed to malloc buffer from mempool we just update last_used_idx but
> not used->idx so after many times vhost thought have handle all packets
> but
cc changchun.ouyang at intel.com
cc huawei.xie at intel.com
On 2015/3/21 16:07, linhaifeng wrote:
> From: Linhaifeng
>
> Same as rte_vhost_enqueue_burst we should cast used->idx
> to volatile before notify guest.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_vho
On 2015/3/21 16:07, linhaifeng wrote:
> From: Linhaifeng
>
> Same as rte_vhost_enqueue_burst we should cast used->idx
> to volatile before notify guest.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_vhost/vhost_rxtx.c | 2 +-
> 1 file changed, 1 inserti
On 2015/3/23 20:54, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Monday, March 23, 2015 8:24 PM
>> To: dev at dpdk.org
>> Cc: Ouyang, Changchun; Xie, Huawei
>> Subject: Re: [dp
On 2015/3/24 9:53, Xie, Huawei wrote:
> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>
>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>
>>>> -----Original Message-
>>>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>>>> Sent: Mon
On 2015/3/24 15:14, Xie, Huawei wrote:
> On 3/22/2015 8:08 PM, Ouyang, Changchun wrote:
>>
>>> -Original Message-
>>> From: linhaifeng [mailto:haifeng.lin at huawei.com]
>>> Sent: Saturday, March 21, 2015 9:47 AM
>>> To: dev at dpdk.org
>&
On 2015/3/24 18:06, Xie, Huawei wrote:
> On 3/24/2015 3:44 PM, Linhaifeng wrote:
>>
>> On 2015/3/24 9:53, Xie, Huawei wrote:
>>> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>>>> -Original Messa
On 2015/3/26 15:58, Qiu, Michael wrote:
> On 3/26/2015 3:52 PM, Xie, Huawei wrote:
>> On 3/26/2015 3:05 PM, Qiu, Michael wrote:
>>> Function gpa_to_vva() could return zero, while this will lead
>>> a Segmentation fault.
>>>
>>> This patch is to fix this issue.
>>>
>>> Signed-off-by: Michael Qiu
On 2015/3/24 18:06, Xie, Huawei wrote:
> On 3/24/2015 3:44 PM, Linhaifeng wrote:
>>
>> On 2015/3/24 9:53, Xie, Huawei wrote:
>>> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>>>> -Original Messa
Hi, Ravi Kerur
On 2015/5/9 5:19, Ravi Kerur wrote:
> Preliminary results on Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz, Ubuntu
> 14.04 x86_64 shows comparisons using AVX/SSE instructions taking 1/3rd
> CPU ticks for 16, 32, 48 and 64 bytes comparison. In addition,
I have written a program to test rte_m
On 2014/11/12 5:37, Xie, Huawei wrote:
> Hi Tetsuya:
> There are two major technical issues in my mind for vhost-user implementation.
>
> 1) memory region map
> Vhost-user passes us file fd and offset for each memory region. Unfortunately
> the mmap offset is "very" wrong. I discovered this iss
On 2014/11/12 12:12, Tetsuya Mukawa wrote:
> Hi Xie,
>
> (2014/11/12 6:37), Xie, Huawei wrote:
>> Hi Tetsuya:
>> There are two major technical issues in my mind for vhost-user
>> implementation.
>>
>> 1) memory region map
>> Vhost-user passes us file fd and offset for each memory region.
>> Un
On 2014/11/14 9:28, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Wednesday, November 12, 2014 11:28 PM
>> To: Xie, Huawei; 'Tetsuya Mukawa'; dev at dpdk.org
>> Subjec
On 2014/11/14 10:30, Tetsuya Mukawa wrote:
> Hi Lin,
>
> (2014/11/13 15:30), Linhaifeng wrote:
>> On 2014/11/12 12:12, Tetsuya Mukawa wrote:
>>> Hi Xie,
>>>
>>> (2014/11/12 6:37), Xie, Huawei wrote:
>>>> Hi Tetsuya:
>>>&
On 2014/11/14 11:40, Tetsuya Mukawa wrote:
> Hi Lin,
>
> (2014/11/14 12:13), Linhaifeng wrote:
>>
>> size should be same as mmap and
>> guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
>>
>
> Thanks. It
On 2014/11/14 13:12, Tetsuya Mukawa wrote:
> ease try another value like 6000MB
I have tried the value 6000MB and munmap succeeds.
If you mmap with size "memory_size + memory_offset" you should also munmap with
that size.
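In code form, the rule stated above (a sketch; mmap_base is the address originally returned by mmap):

#include <stdint.h>
#include <sys/mman.h>

/* Unmap exactly the size that was mapped, measured from the mmap base,
 * not from the guest-visible (offset) address. */
static int
unmap_region_sketch(void *mmap_base, uint64_t memory_size,
		    uint64_t memory_offset)
{
	return munmap(mmap_base, memory_size + memory_offset);
}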
--
Regards,
Haifeng
Hi, all
When I compile my program with DPDK there is a warning from gcc. The message is
as follows. I don't know how to avoid it. Please help me.
/usr/include/dpdk-1.7.0/x86_64-native-linuxapp-gcc//include/rte_common.h:176:
warning: cast from function call of type 'uintptr_t' to non-matching type 'vo
On 2014/10/29 9:26, Choonho Son wrote:
> Hi,
>
> After terminating DPDK application, it does not release hugepages.
> Is there any reason for it or to-do item?
>
> Thanks,
> Choonho Son
>
>
I have written a patch to release hugepages but haven't sent it.
I will send this patch later.
--
Regard
Maybe somebody wants to free hugepages when the application exits,
so add this function for applications to release hugepages on exit.
Signed-off-by: linhaifeng
---
.../lib/librte_eal/common/include/rte_memory.h | 11 +
.../lib/librte_eal/linuxapp/eal/eal_memory.c | 27
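A hedged sketch of what such a function does; the rtemap_N file naming matches DPDK's default under the hugetlbfs mount, and error handling is omitted:

#include <stdio.h>
#include <unistd.h>

/* Unlink every hugepage file the process created; the kernel returns
 * each page to the free pool once its last mapping disappears. */
static void
hugepage_free_sketch(const char *huge_dir, unsigned int nb_pages)
{
	char path[512];
	unsigned int i;

	for (i = 0; i < nb_pages; i++) {
		snprintf(path, sizeof(path), "%s/rtemap_%u", huge_dir, i);
		unlink(path);
	}
}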
On 2014/10/29 11:44, Matthew Hall wrote:
> On Wed, Oct 29, 2014 at 03:27:58AM +, Qiu, Michael wrote:
>> I just saw one return path with value '0', and no any other place
>> return a negative value, so it is better to be designed as one
>> non-return function,
>>
>> +void
>> +rte_eal_hugepa
rte_eal_hugepage_free() unlinks all hugepages. If you want to free all
hugepages you must make sure that you have stopped using them, and you must
call this function before the process exits.
Signed-off-by: linhaifeng
---
.../lib/librte_eal/common/include/rte_memory.h | 11
.../lib
On 2014/10/29 14:14, Qiu, Michael wrote:
> On 10/29/2014 1:49 PM, linhaifeng wrote:
>> rte_eal_hugepage_free() is used for unlink all hugepages.If you want to
>> free all hugepages you must make sure that you have stop to use it,and you
>> must call this function before exit
On 2014/10/29 13:26, Qiu, Michael wrote:
> On 10/29/2014 11:46 AM, Matthew Hall wrote:
>> On Wed, Oct 29, 2014 at 03:27:58AM +, Qiu, Michael wrote:
>>> I just saw one return path with value '0', and no any other place
>>> return a negative value, so it is better to be designed as one
>>> non-r
On 2014/10/29 16:04, Qiu, Michael wrote:
> On 10/29/2014 2:41 PM, Linhaifeng wrote:
>>
>> On 2014/10/29 14:14, Qiu, Michael wrote:
>>> On 10/29/2014 1:49 PM, linhaifeng wrote:
>>>> rte_eal_hugepage_free() is used for unlink all hugepages.If you want to
>>>>
Hi,
I use 6 ports to send packets in a VM, but only 4 ports work. How can I enable
more ports to work?
On 2014/11/14 17:08, Wang, Zhihong wrote:
> Hi all,
>
> I'd like to propose an update on DPDK memcpy optimization.
> Please see RFC below for details.
>
>
> Thanks
> John
>
> ---
>
> DPDK Memcpy Optimization
>
> 1. Introduction
> 2. Terminology
> 3. Mechanism
> 3.1 Architectural Insight
On 2016/7/30 21:30, Wiles, Keith wrote:
>> On Jul 30, 2016, at 1:03 AM, linhaifeng wrote:
>>
>> hi
>>
>> I use 6 ports to send pkts in VM, but can only 4 ports work, how to enable
>> more ports to work?
>>
> In the help screen the command 'ppp [1-6]' is p
We need isb rather than dsb to sync the system counter to cntvct_el0.
Signed-off-by: Haifeng Lin
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal/common/inc
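The substance of the patch, sketched as the counter-read helper it modifies (arm64 only): isb serializes the instruction stream so cntvct_el0 cannot be read speculatively, which dsb, being a data barrier, does not guarantee:

#include <stdint.h>

static inline uint64_t
read_cntvct_el0(void)
{
	uint64_t cnt;

	/* Instruction barrier: order the counter read with preceding
	 * instructions so the timestamp is not taken early. */
	asm volatile("isb" : : : "memory");
	asm volatile("mrs %0, cntvct_el0" : "=r"(cnt));
	return cnt;
}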
We should use isb rather than dsb to sync system counter to cntvct_el0.
Signed-off-by: Haifeng Lin
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal/common/
We should use isb rather than dsb to sync system counter to cntvct_el0.
Signed-off-by: Haifeng Lin
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal/comm
We should use isb rather than dsb to sync system counter to cntvct_el0.
Signed-off-by: Linhaifeng
---
lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 3 +++
lib/librte_eal/common/include/arch/arm/rte_cycles_64.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_eal
-----Original Message-----
From: Jerin Jacob [mailto:jerinjac...@gmail.com]
Sent: 2020-03-09 23:43
To: Linhaifeng
Cc: dev@dpdk.org; tho...@monjalon.net; Lilijun (Jerry)
; chenchanghu ; xudingke
Subject: Re: [dpdk-dev] [PATCH] cycles: add isb before read cntvct_el0
On Mon, Mar 9, 2020 at 2:43 PM Linhaifeng
-----Original Message-----
From: David Marchand [mailto:david.march...@redhat.com]
Sent: 2020-03-09 17:19
To: Linhaifeng
Cc: dev@dpdk.org; tho...@monjalon.net; Lilijun (Jerry)
; chenchanghu ; xudingke
Subject: Re: [dpdk-dev] [PATCH] cycles: add isb before read cntvct_el0
On Mon, Mar 9, 2020 at 10:14 AM
> -Original Message-
> From: Gavin Hu [mailto:gavin...@arm.com]
> Sent: Tuesday, March 10, 2020 3:11 PM
> To: Linhaifeng ; dev@dpdk.org;
> tho...@monjalon.net
> Cc: chenchanghu ; xudingke
> ; Lilijun (Jerry) ; Honnappa
> Nagarahalli ; Steve Capper
> ; nd
&g