> On Jul 3, 2017, at 7:06 AM, Nélio Laranjeiro <nelio.laranje...@6wind.com> 
> wrote:
> 
> On Fri, Jun 30, 2017 at 12:23:31PM -0700, Yongseok Koh wrote:
>> When searching for an LKEY, if the search key is the mempool pointer, the
>> second cacheline has to be accessed, and whether a buffer is indirect must
>> even be checked on every search. Using the buffer address as the search
>> key instead reduces the cycles taken. Caching the last hit entry is
>> beneficial as well.
>> 
>> Signed-off-by: Yongseok Koh <ys...@mellanox.com>
>> ---
>> drivers/net/mlx5/mlx5_mr.c   | 17 ++++++++++++++---
>> drivers/net/mlx5/mlx5_rxtx.c | 39 +++++++++++++++++++++------------------
>> drivers/net/mlx5/mlx5_rxtx.h |  4 +++-
>> drivers/net/mlx5/mlx5_txq.c  |  3 +--
>> 4 files changed, 39 insertions(+), 24 deletions(-)
>> 
>> diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
>> index 0a3638460..287335179 100644
>> --- a/drivers/net/mlx5/mlx5_mr.c
>> +++ b/drivers/net/mlx5/mlx5_mr.c
>> @@ -265,18 +266,28 @@ txq_mp2mr_iter(struct rte_mempool *mp, void *arg)
>>      struct txq_mp2mr_mbuf_check_data data = {
>>              .ret = 0,
>>      };
>> +    uintptr_t start;
>> +    uintptr_t end;
>>      unsigned int i;
>> 
>>      /* Register mempool only if the first element looks like a mbuf. */
>>      if (rte_mempool_obj_iter(mp, txq_mp2mr_mbuf_check, &data) == 0 ||
>>                      data.ret == -1)
>>              return;
>> +    if (mlx5_check_mempool(mp, &start, &end) != 0) {
>> +            ERROR("mempool %p: not virtually contiguous",
>> +                  (void *)mp);
>> +            return;
>> +    }
>>      for (i = 0; (i != RTE_DIM(txq_ctrl->txq.mp2mr)); ++i) {
>> -            if (unlikely(txq_ctrl->txq.mp2mr[i].mp == NULL)) {
>> +            struct ibv_mr *mr = txq_ctrl->txq.mp2mr[i].mr;
>> +
>> +            if (unlikely(mr == NULL)) {
>>                      /* Unknown MP, add a new MR for it. */
>>                      break;
>>              }
>> -            if (txq_ctrl->txq.mp2mr[i].mp == mp)
>> +            if (start >= (uintptr_t)mr->addr &&
>> +                end <= (uintptr_t)mr->addr + mr->length)
>>                      return;
>>      }
>>      txq_mp2mr_reg(&txq_ctrl->txq, mp, i);
> 
> if (start >= (uintptr_t)mr->addr &&
>     end <= (uintptr_t)mr->addr + mr->length)
> 
> Is this expected to have a memory region bigger than the memory pool
> space?  I mean I was expecting to see strict equality in the addresses.
In mlx5_mp2mr(), the start and end of a memory region are rounded down and
up, respectively, to align them to the hugepage size, so the registered
region can be larger than the mempool it covers.

struct ibv_mr *
mlx5_mp2mr(struct ibv_pd *pd, struct rte_mempool *mp)
{
[...]
        /* Round start and end to page boundary if found in memory segments. */
        for (i = 0; (i < RTE_MAX_MEMSEG) && (ms[i].addr != NULL); ++i) {
                uintptr_t addr = (uintptr_t)ms[i].addr;
                size_t len = ms[i].len;
                unsigned int align = ms[i].hugepage_sz;

                if ((start > addr) && (start < addr + len))
                        start = RTE_ALIGN_FLOOR(start, align);
                if ((end > addr) && (end < addr + len))
                        end = RTE_ALIGN_CEIL(end, align);
        }

Thanks,
Yongseok
