Hi Jerin,

>>>     /* Add elements back into the cache */
>>> -   for (index = 0; index < n; ++index, obj_table++)
>>> -           cache_objs[index] = *obj_table;
>>> +   rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
>>>  
>>>     cache->len += n;
>>>  
>>>
>>
>> I also checked in the get_bulk() function, which looks like that:
>>
>>      /* Now fill in the response ... */
>>      for (index = 0, len = cache->len - 1;
>>                      index < n;
>>                      ++index, len--, obj_table++)
>>              *obj_table = cache_objs[len];
>>
>> I think we could replace it by something like:
>>
>>      rte_memcpy(obj_table, &cache_objs[len - n], sizeof(void *) * n);
>>
>> The only difference is that it won't reverse the pointers in the
>> table, but I don't see any problem with that.
>>
>> What do you think?
> 
> In true sense, it will _not_ be LIFO. Not sure about cache usage implications
> on the specific use cases.

Today, the object pointers are reversed only in the get(). It means
that this code:

        rte_mempool_get_bulk(mp, table, 4);
        for (i = 0; i < 4; i++)
                printf("obj = %p\n", table[i]);
        rte_mempool_put_bulk(mp, table, 4);


        printf("-----\n");
        rte_mempool_get_bulk(mp, table, 4);
        for (i = 0; i < 4; i++)
                printf("obj = %p\n", table[i]);
        rte_mempool_put_bulk(mp, table, 4);

prints:

        addr1
        addr2
        addr3
        addr4
        -----
        addr4
        addr3
        addr2
        addr1

Which is quite strange.

I don't think it would be an issue to replace the loop with a
rte_memcpy(): it may increase the copy speed, and it would be
more coherent with the put().


Olivier
