The trouble is:
a) how do we guarantee that the function in question is present in the
secondary process at all? It could be referenced only by name in the primary
process and omitted by the linker in the secondary as unused, for instance.
b) how do we find out the address of the function in the secondary process if
it is present?
c) updating the hash function pointer in the secondary process will overwrite
the value used by the primary process, thereby breaking the hash function
there.
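
To make (c) concrete, here is a minimal sketch -- my illustration, not code
from any of the mails below -- of what happens if a secondary process simply
reassigns the pointer, as suggested further down the thread. It assumes a DPDK
version where struct rte_hash is fully defined in rte_hash.h, a table named
"shared_hash" and rte_jhash as the desired hash function (all assumptions):

#include <rte_hash.h>
#include <rte_jhash.h>

static int
fixup_hash_in_secondary(void)
{
	/* the structure lives in hugepage shared memory, so this handle
	 * refers to the same object the primary process is using */
	struct rte_hash *h = rte_hash_find_existing("shared_hash");

	if (h == NULL)
		return -1;

	/* this stores the *secondary* process's address of rte_jhash into
	 * the shared structure; the next rte_hash_add()/rte_hash_lookup()
	 * done by the primary jumps through a pointer that is not a valid
	 * address in the primary's address space */
	h->hash_func = rte_jhash;
	return 0;
}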

This is a complicated problem. The only case for which we've solved this so
far is the physical NICs. What happened there is that we had to:
* split the rte_eth_dev structure in two, creating the shared
rte_eth_dev_data structure and the process-local rte_eth_dev structure
* force the secondary processes to redo the exact same driver loading (now
obsolete) and PCI probe/scan as the primary, thereby creating the matching
process-local rte_eth_dev structures in the secondary.
This works, but has its own limitations: the secondary processes have to be
run using the exact same parameters for PCI devices, the same -d parameters
to dynamically load drivers, the same -b parameters to blacklist ports, and
so on.
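
To make that split clearer, here is a grossly simplified sketch of the shape
of the solution -- illustrative types only, not the real rte_ethdev
definitions: everything that is address-space specific (the function
pointers) lives in a per-process structure, while the data every process must
see lives in a shared-memory structure that the per-process one points at.

#include <stdint.h>

struct rte_mbuf;	/* opaque here; real code includes rte_mbuf.h */

/* shared across processes (the role rte_eth_dev_data plays): data only,
 * no function pointers, allocated in a named shared-memory zone */
struct dev_shared_data {
	uint8_t port_id;
	void *rx_queues[8];
	void *tx_queues[8];
};

/* one instance per process (the role rte_eth_dev plays): the burst
 * function pointers are filled in locally when each process re-runs the
 * driver/PCI probe, so they always hold addresses valid in that process */
struct dev_local {
	uint16_t (*rx_burst)(void *rxq, struct rte_mbuf **pkts, uint16_t n);
	uint16_t (*tx_burst)(void *txq, struct rte_mbuf **pkts, uint16_t n);
	struct dev_shared_data *data;	/* points into the shared zone */
};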

For the hash table case, I think forcing secondary processes to use the
"with_hash" versions of the API is an acceptable workaround, given the
difficulty of making it work transparently in secondary processes. [The
workaround may even perform slightly better, as the calls to the hash
calculation are explicit and can be inlined by the compiler, saving the cost
of an indirect function call per packet.]
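
As a concrete (and purely hypothetical) example of that workaround --
assuming the table was created by the primary with rte_jhash as its hash
function, a 16-byte key and an init value of 0, none of which come from the
mails below -- each process computes the signature itself and passes it to
the "with_hash" calls, so the hash_func pointer stored in the shared
structure is never dereferenced. (In rte_hash.h the add call is, if I
remember the header correctly, spelled rte_hash_add_key_with_hash().)

#include <rte_hash.h>
#include <rte_jhash.h>

#define FLOW_KEY_LEN	16	/* must match the key_len the table was created with */
#define JHASH_INIT_VAL	0	/* must match hash_func_init_val */

static int32_t
add_flow(struct rte_hash *h, const void *key)
{
	/* explicit hash calculation -- can be inlined by the compiler */
	hash_sig_t sig = rte_jhash(key, FLOW_KEY_LEN, JHASH_INIT_VAL);

	return rte_hash_add_key_with_hash(h, key, sig);
}

static int32_t
lookup_flow(struct rte_hash *h, const void *key)
{
	hash_sig_t sig = rte_jhash(key, FLOW_KEY_LEN, JHASH_INIT_VAL);

	/* returns the entry's position, or negative if not found */
	return rte_hash_lookup_with_hash(h, key, sig);
}

This works identically in primary and secondary processes, with no
secondary-only special case beyond getting the handle via
rte_hash_find_existing().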

/Bruce

> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Helmut Sim
> Sent: Tuesday, June 10, 2014 10:24 PM
> To: Venkat Thummala
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] using hash table in a MP environment
> 
> Another simple way would be to assign the desired hash function to the
> hash_func field in the rte_hash structure returned by the
> rte_hash_find_existing() call during the secondary initialization phase.
> That way there is no difference between a primary and a secondary process.
> 
> Regards,
> 
> 
> 
> On Tue, Jun 10, 2014 at 3:25 PM, Venkat Thummala <
> venkat.thummala.1978 at gmail.com> wrote:
> 
> > Hi Shirley,
> >
> > Please refer to section 20.3 [Multi-Process Limitations] in the DPDK
> > Programmer's Guide.
> >
> > The use of function pointers between multiple processes running based on
> > different compiled binaries is not supported, since the location of a
> > given function in one process may be different from its location in a
> > second. This prevents the librte_hash library from behaving properly in a
> > multi-process instance as it does in a multi-threaded one, since it uses
> > a pointer to the hash function internally.
> > To work around this issue, it is recommended that multi-process
> > applications perform the hash calculations by directly calling the
> > hashing function from the code and then using the rte_hash_add_with_hash()/
> > rte_hash_lookup_with_hash() functions instead of the functions which do
> > the hashing internally, such as rte_hash_add()/rte_hash_lookup().
> >
> > Thanks
> > Venkat
> >
> >
> > On 10 June 2014 17:05, Neil Horman <nhorman at tuxdriver.com> wrote:
> >
> > > On Tue, Jun 10, 2014 at 11:02:03AM +0300, Uri Sidler wrote:
> > > > Hi,
> > > > I am currently using a hash table in a multi-process environment.
> > > > The master process creates the hash table, which is later used by
> > > > other secondary processes, but the secondary processes fail to use
> > > > the hash table since the hash function address actually points to a
> > > > different function. (This makes sense, since the address of the hash
> > > > function is in fact different in each process.)
> > > > How can I solve this issue?
> > > >
> > > > Thanks,
> > > > Shirley.
> > > >
> > >
> > > Use shared memory. See shmget.
> > >
> > > Neil
> > >
> > >
> >
