[dpdk-dev] share a table
Hi all,

I am using the DPDK multi-process example (client-server). I want the server to create a hash table and then share it with the client. Currently my create_hash_table function returns a pointer to the flow table (inside create_hash_table I allocate the table and its entries with rte_calloc). I store the table address in a memzone, and the client looks up the memzone to find the table address. But this method does not work: when I access the flow table, I get a segmentation fault. Does anybody know a good way to let the server create a hash table and share it with the client? Any suggestion will be appreciated.
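For reference, the pattern normally used for this looks roughly like the sketch below; FT_MZ_NAME, struct flow_table, publish_table and map_table are made-up names. Two things matter for it to work: the table itself must be allocated from DPDK hugepage memory (rte_malloc/rte_calloc or a memzone) so both processes map it, and any function pointers stored inside it (for example inside a struct rte_hash) are generally not usable across processes built from different binaries.

#include <rte_memzone.h>
#include <rte_lcore.h>

#define FT_MZ_NAME "my_flow_table_ptr"	/* hypothetical memzone name */

struct flow_table;			/* hypothetical flow-table type */

/* Primary (server): reserve a memzone that holds only the table pointer. */
static int
publish_table(struct flow_table *ft)
{
	const struct rte_memzone *mz;

	mz = rte_memzone_reserve(FT_MZ_NAME, sizeof(struct flow_table *),
			rte_socket_id(), 0);
	if (mz == NULL)
		return -1;
	*(struct flow_table **)mz->addr = ft;
	return 0;
}

/* Secondary (client): look the memzone up by name and read the pointer. */
static struct flow_table *
map_table(void)
{
	const struct rte_memzone *mz = rte_memzone_lookup(FT_MZ_NAME);

	return (mz == NULL) ? NULL : *(struct flow_table **)mz->addr;
}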
[dpdk-dev] [PATCH v9 0/5] Support VxLAN & NVGRE checksum off-load on X550
2016-03-10 10:42, Wenzhuo Lu:
> This patch set adds the VxLAN & NVGRE checksum off-load support.
> Both RX and TX checksum off-load can be used for VxLAN & NVGRE.
> The VxLAN port can also be set; this is implemented in this patch
> set as well.

I've removed the old function name and applied, thanks
[dpdk-dev] [PATCH v8 1/4] lib/ether: optimize struct rte_eth_tunnel_filter_conf
2016-03-10 11:05, Jingjing Wu:
> From: Xutao Sun
>
> Change the fields of outer_mac and inner_mac in struct
> rte_eth_tunnel_filter_conf from pointer to struct in order to
> keep the code's readability.

It breaks compilation of examples/tep_termination.

> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -6628,8 +6628,10 @@ cmd_tunnel_filter_parsed(void *parsed_result,
> 	struct rte_eth_tunnel_filter_conf tunnel_filter_conf;
> 	int ret = 0;
>
> -	tunnel_filter_conf.outer_mac = &res->outer_mac;
> -	tunnel_filter_conf.inner_mac = &res->inner_mac;
> +	rte_memcpy(&tunnel_filter_conf.outer_mac, &res->outer_mac,
> +		ETHER_ADDR_LEN);
> +	rte_memcpy(&tunnel_filter_conf.inner_mac, &res->inner_mac,
> +		ETHER_ADDR_LEN);

Please use ether_addr_copy().

> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -5839,10 +5839,10 @@ i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
> 	}
> 	pfilter = cld_filter;
>
> -	(void)rte_memcpy(&pfilter->outer_mac, tunnel_filter->outer_mac,
> -		sizeof(struct ether_addr));
> -	(void)rte_memcpy(&pfilter->inner_mac, tunnel_filter->inner_mac,
> -		sizeof(struct ether_addr));
> +	(void)rte_memcpy(&pfilter->outer_mac, &tunnel_filter->outer_mac,
> +		ETHER_ADDR_LEN);
> +	(void)rte_memcpy(&pfilter->inner_mac, &tunnel_filter->inner_mac,
> +		ETHER_ADDR_LEN);

As already commented in January, please stop this useless return cast.
There is a dedicated function to copy MAC addresses: ether_addr_copy()
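For reference, a minimal sketch of what the reviewer is asking for in the cmdline.c hunk, assuming the outer_mac/inner_mac fields are plain struct ether_addr after this patch (ether_addr_copy() takes the source first, then the destination):

	ether_addr_copy(&res->outer_mac, &tunnel_filter_conf.outer_mac);
	ether_addr_copy(&res->inner_mac, &tunnel_filter_conf.inner_mac);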
[dpdk-dev] [PATCH v8 0/4] This patch set adds tunnel filter support for IP in GRE on i40e.
2016-03-10 11:05, Jingjing Wu:
> Xutao Sun (4):
>   lib/ether: optimize the 'rte_eth_tunnel_filter_conf' structure
>   lib/ether: add IP in GRE type
>   driver/i40e: implement tunnel filter for IP in GRE
>   app/test-pmd: test tunnel filter for IP in GRE

I've done the changes for ether_addr_copy and fixed tep_termination.
Applied, thanks
[dpdk-dev] dpdk hash lookup function crashed (segment fault)
Hi all,

When I use the DPDK hash lookup function, I get a segmentation fault. Can anybody help me figure out why this happens? I will describe what I am trying to do, the related pieces of code, and my debug messages.

The problem occurs in the DPDK multi-process client/server example (dpdk-2.2.0/examples/multi_process/client_server_mp). My aim is that the server creates a hash table and then shares it with the client: the client writes the hash table and the server reads it. I am using the DPDK hash table. The server creates the hash table (the table and the entries array) and returns the table address; I use a memzone to pass the table address to the client. In the client, the second lookup gets a segmentation fault and the system crashes. I will put some related code here.

create hash table function:

struct onvm_ft*
onvm_ft_create(int cnt, int entry_size)
{
        struct rte_hash* hash;
        struct onvm_ft* ft;
        struct rte_hash_parameters ipv4_hash_params = {
                .name = NULL,
                .entries = cnt,
                .key_len = sizeof(struct onvm_ft_ipv4_5tuple),
                .hash_func = NULL,
                .hash_func_init_val = 0,
        };
        char s[64];

        /* create ipv4 hash table. use core number and cycle counter to get a unique name. */
        ipv4_hash_params.name = s;
        ipv4_hash_params.socket_id = rte_socket_id();
        snprintf(s, sizeof(s), "onvm_ft_%d-%"PRIu64, rte_lcore_id(), rte_get_tsc_cycles());
        hash = rte_hash_create(&ipv4_hash_params);
        if (hash == NULL) {
                return NULL;
        }
        ft = (struct onvm_ft*)rte_calloc("table", 1, sizeof(struct onvm_ft), 0);
        if (ft == NULL) {
                rte_hash_free(hash);
                return NULL;
        }
        ft->hash = hash;
        ft->cnt = cnt;
        ft->entry_size = entry_size;
        /* Create data array for storing values */
        ft->data = rte_calloc("entry", cnt, entry_size, 0);
        if (ft->data == NULL) {
                rte_hash_free(hash);
                rte_free(ft);
                return NULL;
        }
        return ft;
}

related structure:

struct onvm_ft {
        struct rte_hash* hash;
        char* data;
        int cnt;
        int entry_size;
};

On the server side, I call the create function and share the table with the client through a memzone. The related variables and code:

struct onvm_ft *sdn_ft;
struct onvm_ft **sdn_ft_p;
const struct rte_memzone *mz_ftp;

sdn_ft = onvm_ft_create(1024, sizeof(struct onvm_flow_entry));
if (sdn_ft == NULL) {
        rte_exit(EXIT_FAILURE, "Unable to create flow table\n");
}
mz_ftp = rte_memzone_reserve(MZ_FTP_INFO, sizeof(struct onvm_ft *),
                rte_socket_id(), NO_FLAGS);
if (mz_ftp == NULL) {
        rte_exit(EXIT_FAILURE, "Cannot reserve memory zone for flow table pointer\n");
}
memset(mz_ftp->addr, 0, sizeof(struct onvm_ft *));
sdn_ft_p = mz_ftp->addr;
*sdn_ft_p = sdn_ft;

On the client side:

struct onvm_ft *sdn_ft;

static void
map_flow_table(void)
{
        const struct rte_memzone *mz_ftp;
        struct onvm_ft **ftp;

        mz_ftp = rte_memzone_lookup(MZ_FTP_INFO);
        if (mz_ftp == NULL)
                rte_exit(EXIT_FAILURE, "Cannot get flow table pointer\n");
        ftp = mz_ftp->addr;
        sdn_ft = *ftp;
}

The following is my debug session. I set a breakpoint at the lookup line. To narrow down the problem, I send just one flow, so the first and second lookups see exactly the same packet. The first lookup works. I print out the parameters: inside the onvm_ft_lookup function, if there is a matching entry, its address is returned through flow_entry.
Breakpoint 1, datapath_handle_read (dp=0x78c0)
    at /home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/sdn.c:191
191             ret = onvm_ft_lookup(sdn_ft, fk, (char**)&flow_entry);
(gdb) print *sdn_ft
$1 = {hash = 0x7fff32cce740, data = 0x7fff32cb0480 "", cnt = 1024, entry_size = 56}
(gdb) print *fk
$2 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) s
onvm_ft_lookup (table=0x7fff32cbe4c0, key=0x7fff32b99d00, data=0x768d2b00)
    at /home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_table.c:151
151             softrss = onvm_softrss(key);
(gdb) n
152             printf("software rss %d\n", softrss);
(gdb)
software rss 403183624
154             tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
(gdb) print table->hash
$3 = (struct rte_hash *) 0x7fff32cce740
(gdb) print *key
$4 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 11798, proto = 17 '\021'}
(gdb) print softrss
$5 = 403183624
(gdb) c

After I hit c, it will do the second lookup, B
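From the trace, onvm_ft_lookup presumably has roughly the shape below; this is a reconstruction for following the gdb session, not the poster's actual code, and onvm_softrss plus the error handling are guesses:

static int
onvm_ft_lookup(struct onvm_ft *table, struct onvm_ft_ipv4_5tuple *key, char **data)
{
	int32_t tbl_index;
	uint32_t softrss;

	softrss = onvm_softrss(key);	/* software RSS hash over the 5-tuple */
	tbl_index = rte_hash_lookup_with_hash(table->hash, (const void *)key, softrss);
	if (tbl_index < 0)
		return tbl_index;	/* key not found or lookup error */

	/* entries live in the flat data array allocated with rte_calloc() */
	*data = &table->data[tbl_index * table->entry_size];
	return 0;
}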
[dpdk-dev] [PATCH v4 0/4] Add PCAP support to source and sink port
2016-03-11 17:08, Fan Zhang:
> This patchset adds a feature to the source and sink type ports in the
> librte_port library, and to examples/ip_pipeline. Originally, source/sink
> ports act as input and output of a NULL packet generator. This patchset
> enables them to read from and write to a specific PCAP file, to generate
> and dump packets.
[...]
> Fan Zhang (4):
>   lib/librte_port: add PCAP file support to source port
>   example/ip_pipeline: add PCAP file support
>   lib/librte_port: add packet dumping to PCAP file support in sink port
>   examples/ip_pipeline: add packets dumping to PCAP file support

Applied, thanks
[dpdk-dev] [PATCH v3] ip_pipeline: add load balancing function to pass-through pipeline
2016-03-10 15:29, Jasvinder Singh:
> The pass-through pipeline implementation is extended with a load balancing
> function. This function allows uniform distribution of the packets among
> its output ports. For packet distribution, any application-level logic can
> be applied. For instance, in this implementation, a hash value computed
> over specific header fields of the incoming packets is used to spread
> traffic uniformly among the output ports.

Applied, thanks
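As an illustration of the idea (not the actual ip_pipeline code), hash-based distribution typically reduces to something like the sketch below; the helper name is made up and rte_jhash is used purely as an example hash function:

#include <stdint.h>
#include <rte_jhash.h>

static inline uint32_t
select_out_port(const void *key_fields, uint32_t key_len, uint32_t n_ports_out)
{
	/* Hash the selected header fields, then map the hash uniformly onto
	 * the output ports. */
	uint32_t h = rte_jhash(key_fields, key_len, 0);

	return h % n_ports_out;
}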
[dpdk-dev] [PATCH] app/test-pmd: add support for zero rx and tx queues
> > Added testpmd support to validate zero nb_rxq/nb_txq
> > changes of librte_ether.
> >
> > Signed-off-by: Reshma Pattan
>
> Acked-by: Pablo de Lara

Applied, thanks
[dpdk-dev] [PATCH] pipeline: use unsigned constants for left shift operations
2016-03-10 15:49, Panu Matilainen:
> --- a/examples/ip_pipeline/pipeline/pipeline_routing.c
> +++ b/examples/ip_pipeline/pipeline/pipeline_routing.c
> @@ -319,7 +319,7 @@ app_pipeline_routing_add_route(struct app_params *app,
> 	if ((depth == 0) || (depth > 32))
> 		return -1;
>
> -	netmask = (~0) << (32 - depth);
> +	netmask = (~U0) << (32 - depth);

Typo: should be 0U.
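For context, with the typo fixed the line becomes netmask = (~0U) << (32 - depth). The point of the patch is that ~0 is the signed value -1, and gcc >= 6 warns about left-shifting a negative value, while the unsigned form is well defined. A minimal sketch, with depth assumed to be in [1, 32] as enforced by the check above:

static uint32_t
depth_to_netmask(uint32_t depth)
{
	/* ~0U is all-ones as an unsigned constant, so the shift is well
	 * defined and -Wshift-negative-value no longer triggers. */
	return (~0U) << (32 - depth);
}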
[dpdk-dev] [PATCH] pipeline: use unsigned constants for left shift operations
> > Tell the compiler to use unsigned constants for left shift ops,
> > otherwise building with gcc >= 6.0 fails due to multiple warnings like:
> > warning: left shift of negative value [-Wshift-negative-value]
> >
> > Signed-off-by: Panu Matilainen
>
> Acked-by: Cristian Dumitrescu

Applied with typo fixed, thanks
[dpdk-dev] [PATCH v3 0/3] mk: add DT_NEEDED entries for external library deps
2016-03-10 15:15, Panu Matilainen:
> Add hopefully all the remaining missing DT_NEEDED entries for external
> library dependencies on the libraries side: librte_vhost, librte_sched
> and librte_eal.
>
> Panu Matilainen (3):
>   mk: clear up libm and librt linkage confusion
>   mk: add DT_NEEDED entries for librte_vhost external dependencies
>   mk: add DT_NEEDED entries for librte_eal external dependencies

Applied, thanks
[dpdk-dev] [PATCH 0/3] sched: patches for 2.2
2016-03-08 07:49, Dumitrescu, Cristian:
> We are working on implementing an alternative solution based on 2x 64-bit
> multiplication, which is supported by CPUs and compilers for more than a
> decade now. The 32-bit solution proposed by Stephen requires truncation with
> some loss of precision, which can potentially lead to some corner cases which
> are difficult to predict, therefore I am not feeling 100% confident with it.
> The 32-bit arithmetic gave me a lot of headaches when developing QoS code,
> therefore I am very cautious of it.
>
> I am not sure we are able to finalize implementation and testing for release
> 16.4, therefore it would be fair to accept Stephen's solution for release
> 16.4 and consider the new safer 2x 64-bit multiplication solution which does
> not involve any loss of precision once it becomes available.
>
> Regarding Stephen's patches, I think there is a pending issue regarding the
> legal side of the Copyright, which is attributed to Intel, although Stephen's
> code is relicensed with BSD license by permission from the original code
> author (which also submitted the code to Linux kernel under GPL). This was
> already flagged. This is a legal issue and I do not feel comfortable with
> ack-ing this patch until the legal resolution on this is crystal clear.
>
> I also think the new files called rte_reciprocal.[hc] implement an algorithm
> that is very generic and totally independent of the QoS code, therefore it
> should be placed into a different folder that is globally visible to other
> libraries (librte_eal/common ?) just in case other usages for this algorithm
> are identified in the future. I suggest we break the patch into two separate
> patches submitted independently: one introducing the rte_reciprocal.[hc]
> algorithm to librte_eal/common and the second containing just the
> librte_sched changes, which are small. I am thinking ahead here: once we have
> the 2x64-bit multiplication solution in place, we should not have
> rte_reciprocal.[hc] hanging in librte_sched folder without being used here,
> while it might be used by other parts of DPDK.

Let's keep the improvement as-is to test it in the first release candidate.
We can move the code and/or fix the file header later.

Series applied, thanks.
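For background on the trade-off being discussed: reciprocal division replaces a hardware divide with a multiply by a pre-computed reciprocal. The sketch below only illustrates that idea and where the truncation error comes from; it is not the code from Stephen's patch or the actual rte_reciprocal implementation, the names are made up, and it assumes d >= 2 (d == 1 would overflow the 32-bit multiplier).

#include <stdint.h>

struct recip32 {
	uint32_t m;					/* ~= 2^32 / d */
};

static inline struct recip32
recip32_init(uint32_t d)
{
	struct recip32 r;

	r.m = (uint32_t)(((1ULL << 32) + d - 1) / d);	/* ceil(2^32 / d) */
	return r;
}

static inline uint32_t
recip32_div(uint32_t x, struct recip32 r)
{
	/* One 32x32->64 multiply plus a shift instead of a hardware divide;
	 * the result can be off by a small amount due to truncation, which is
	 * the precision concern raised above. */
	return (uint32_t)(((uint64_t)x * r.m) >> 32);
}

The 2x 64-bit multiplication alternative mentioned in the thread keeps enough intermediate precision to avoid this truncation error, per the description above.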
[dpdk-dev] [PATCH] eal:Change log output to DEBUG instead of INFO
2015-09-10 14:40, Keith Wiles:
> When log level is set to 7 (INFO) these messages are still displayed
> and should be set to DEBUG.
>
> Signed-off-by: Keith Wiles

Applied, thanks
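For context, this is the distinction the patch makes between the two levels; a minimal sketch using the standard RTE_LOG macro (the message texts and the nb_lcores parameter are made up, and DEBUG messages also have to be enabled at compile time to appear at all):

#include <rte_log.h>

static void
log_example(unsigned int nb_lcores)
{
	/* Printed whenever the runtime log level is INFO (7) or higher. */
	RTE_LOG(INFO, EAL, "detected %u lcores\n", nb_lcores);

	/* Printed only when the log level is raised to DEBUG (8). */
	RTE_LOG(DEBUG, EAL, "per-device probe details\n");
}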
[dpdk-dev] [PATCH 0/3] sched: patches for 2.2
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> Sent: Sunday, March 13, 2016 10:26 PM
> To: Dumitrescu, Cristian ; Stephen Hemminger
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/3] sched: patches for 2.2
>
> 2016-03-08 07:49, Dumitrescu, Cristian:
> > We are working on implementing an alternative solution based on 2x 64-bit multiplication, which is supported by CPUs and compilers for more than a decade now. The 32-bit solution proposed by Stephen requires truncation with some loss of precision, which can potentially lead to some corner cases which are difficult to predict, therefore I am not feeling 100% confident with it. The 32-bit arithmetic gave me a lot of headaches when developing QoS code, therefore I am very cautious of it.
> >
> > I am not sure we are able to finalize implementation and testing for release 16.4, therefore it would be fair to accept Stephen's solution for release 16.4 and consider the new safer 2x 64-bit multiplication solution which does not involve any loss of precision once it becomes available.
> >
> > Regarding Stephen's patches, I think there is a pending issue regarding the legal side of the Copyright, which is attributed to Intel, although Stephen's code is relicensed with BSD license by permission from the original code author (which also submitted the code to Linux kernel under GPL). This was already flagged. This is a legal issue and I do not feel comfortable with ack-ing this patch until the legal resolution on this is crystal clear.
> >
> > I also think the new files called rte_reciprocal.[hc] implement an algorithm that is very generic and totally independent of the QoS code, therefore it should be placed into a different folder that is globally visible to other libraries (librte_eal/common ?) just in case other usages for this algorithm are identified in the future. I suggest we break the patch into two separate patches submitted independently: one introducing the rte_reciprocal.[hc] algorithm to librte_eal/common and the second containing just the librte_sched changes, which are small. I am thinking ahead here: once we have the 2x64-bit multiplication solution in place, we should not have rte_reciprocal.[hc] hanging in librte_sched folder without being used here, while it might be used by other parts of DPDK.
>
> Let's keep the improvement as-is to test it in the first release candidate.
> We can move the code and/or fix the file header later.
>
> Series applied, thanks.

Hi Thomas,

I am OK with this, as long as Stephen commits to fix the copyright in the header file and move the rte_reciprocal.[hc] into a common area like librte_eal/common in time for the next release candidate.

Thanks,
Cristian
[dpdk-dev] [PATCH] hash: fix memcmp function pointer in multi-process environment
We found a problem in dpdk-2.2 when using it in a multi-process environment. Here is a brief description of how we are using the dpdk:

We have two processes, proc1 and proc2, using dpdk. proc1 and proc2 are two different compiled binaries. proc1 is started as the primary process and proc2 as the secondary process.

proc1:
Calls srcHash = rte_hash_create("src_hash_name") to create the rte_hash structure. As part of this, the API initializes the rte_hash structure and sets srcHash->rte_hash_cmp_eq to the address of memcmp() from the proc1 address space.

proc2:
Calls srcHash = rte_hash_find_existing("src_hash_name"). This function call returns the rte_hash created by proc1. This srcHash->rte_hash_cmp_eq still points to the address of memcmp() from the proc1 address space. Later proc2 calls rte_hash_lookup_with_hash(srcHash, (const void*) &key, key.sig). rte_hash_lookup_with_hash() invokes __rte_hash_lookup_with_hash(), which in turn calls h->rte_hash_cmp_eq(key, k->key, h->key_len). This leads to a crash, as h->rte_hash_cmp_eq is an address from the proc1 address space and is an invalid address in the proc2 address space.

We found, from the dpdk documentation, that:

"The use of function pointers between multiple processes running based of different compiled binaries is not supported, since the location of a given function in one process may be different to its location in a second. This prevents the librte_hash library from behaving properly as in a multi-threaded instance, since it uses a pointer to the hash function internally. To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup()."

We did follow the recommended steps by invoking rte_hash_lookup_with_hash(). There was no issue up to and including dpdk-2.0. Later releases started crashing because rte_hash_cmp_eq was introduced in dpdk-2.1.

We fixed it with the following patch and would like to submit the patch to dpdk.org. The patch is written such that anyone who wants to use dpdk in a multi-process environment where function pointers are not shared needs to define RTE_LIB_MP_NO_FUNC_PTR in their Makefile. Without defining this flag in the Makefile, it works as it does now.

Signed-off-by: Dhana Eadala
---
 lib/librte_hash/rte_cuckoo_hash.c | 28
 1 file changed, 28 insertions(+)

diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index 3e3167c..0946777 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -594,7 +594,11 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 				prim_bkt->signatures[i].alt == alt_hash) {
 			k = (struct rte_hash_key *) ((char *)keys +
 				prim_bkt->key_idx[i] * h->key_entry_size);
+#ifdef RTE_LIB_MP_NO_FUNC_PTR
+			if (memcmp(key, k->key, h->key_len) == 0) {
+#else
 			if (h->rte_hash_cmp_eq(key, k->key, h->key_len) == 0) {
+#endif
 				/* Enqueue index of free slot back in the ring. */
 				enqueue_slot_back(h, cached_free_slots, slot_id);
 				/* Update data */
@@ -614,7 +618,11 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 				sec_bkt->signatures[i].current == alt_hash) {
 			k = (struct rte_hash_key *) ((char *)keys +
 				sec_bkt->key_idx[i] * h->key_entry_size);
+#ifdef RTE_LIB_MP_NO_FUNC_PTR
+			if (memcmp(key, k->key, h->key_len) == 0) {
+#else
 			if (h->rte_hash_cmp_eq(key, k->key, h->key_len) == 0) {
+#endif
 				/* Enqueue index of free slot back in the ring. */
 				enqueue_slot_back(h, cached_free_slots, slot_id);
 				/* Update data */
@@ -725,7 +733,11 @@ __rte_hash_lookup_with_hash(const struct rte_hash *h, const void *key,
 				bkt->signatures[i].sig != NULL_SIGNATURE) {
 			k = (struct rte_hash_key *) ((char *)keys +
 				bkt->key_idx[i] * h->key_entry_size);
+#ifdef RTE_LIB_MP_NO_FUNC_PTR
+			if (memcmp (key, k->key, h->key_len) == 0) {
+#else
 			if (h->rte_hash_cmp_eq(key, k->key, h->key_len) == 0) {
+#endif
 				if (data != NULL)
 					*data = k->pdata;
 				/*
@@ -748,7 +760,11 @@ __rte_hash_lookup_with_hash(const struct rte_hash *h, const void *key,
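For completeness, the workaround quoted from the documentation (computing the hash outside the library) looks roughly like the sketch below; the key type and helper name are made up, and rte_jhash stands in for whatever hash function the application uses. Note that this only removes the dependency on the internal hash-function pointer: the key-compare pointer (rte_hash_cmp_eq) is still called inside the lookup, which is exactly the gap this patch closes with RTE_LIB_MP_NO_FUNC_PTR.

#include <rte_hash.h>
#include <rte_jhash.h>

/* Example key; any fixed-size key works the same way. */
struct my_flow_key {
	uint32_t src_ip;
	uint32_t dst_ip;
	uint16_t src_port;
	uint16_t dst_port;
};

static int32_t
mp_lookup(const struct rte_hash *h, const struct my_flow_key *key)
{
	/* Compute the signature by calling the hash function directly in this
	 * process, then use the *_with_hash() variant, so the hash-function
	 * pointer stored inside the shared rte_hash is never dereferenced. */
	hash_sig_t sig = rte_jhash(key, sizeof(*key), 0);

	return rte_hash_lookup_with_hash(h, key, sig);
}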