On 16-Apr-18 2:24 PM, Andrew Rybchenko wrote:
The size of the memory chunk required to populate mempool objects depends
on how the objects are stored in memory. Different mempool drivers may
have different requirements, and a new operation lets a driver calculate
the memory size in accordance with its own requirements and advertise its
requirements on minimum memory chunk size and alignment in a generic way.
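
For context, the operation takes the object count and page shift and
returns the required memory size, filling in min_chunk_size and align as
output-only parameters. Below is a minimal sketch of what a
driver-specific implementation might look like; the prototype follows
the usage later in this patch, while the driver name and body are
purely illustrative:

        /* Assumes <rte_mempool.h>; "my_driver" is hypothetical. */
        ssize_t
        my_driver_calc_mem_size(const struct rte_mempool *mp,
                        uint32_t obj_num, uint32_t pg_shift,
                        size_t *min_chunk_size, size_t *align)
        {
                /* Illustrative driver storing objects back to back;
                 * header/elt/trailer sizes come from the mempool.
                 */
                size_t total_elt_sz = mp->header_size + mp->elt_size +
                                      mp->trailer_size;
                ssize_t mem_size = rte_mempool_xmem_size(obj_num,
                                total_elt_sz, pg_shift, mp->flags);

                /* Output only: smallest usable chunk and its alignment. */
                *min_chunk_size = total_elt_sz;
                *align = RTE_CACHE_LINE_SIZE;

                return mem_size;
        }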

Bump ABI version since the patch breaks it.

Suggested-by: Olivier Matz <olivier.m...@6wind.com>
Signed-off-by: Andrew Rybchenko <arybche...@solarflare.com>
---
v3 -> v4:
  - rebased on top of memory rework
  - dropped previous Acks since the rebase is not trivial
  - check size calculation failure in rte_mempool_populate_anon() and
    rte_mempool_memchunk_anon_free()

v2 -> v3:
  - none

v1 -> v2:
  - clarify min_chunk_size meaning
  - rebase on top of patch series which fixes library version in meson
    build

RFCv2 -> v1:
  - move default calc_mem_size callback to rte_mempool_ops_default.c
  - add ABI changes to release notes
  - name default callback consistently: rte_mempool_op_<callback>_default()
  - bump ABI version since it is the first patch which breaks ABI
  - describe default callback behaviour in detail (see the sketch after
    this list)
  - avoid introduction of internal function to cope with deprecation
    (keep it to deprecation patch)
  - move cache-line or page boundary chunk alignment to default callback
  - highlight that min_chunk_size and align parameters are output only
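
(For illustration, roughly what the default callback named above might
look like, going by the behaviour described in these notes; the
authoritative version is the one added in rte_mempool_ops_default.c:)

        ssize_t
        rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
                        uint32_t obj_num, uint32_t pg_shift,
                        size_t *min_chunk_size, size_t *align)
        {
                size_t total_elt_sz;
                size_t mem_size;

                total_elt_sz = mp->header_size + mp->elt_size +
                               mp->trailer_size;

                /* Size as if objects were packed page by page. */
                mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz,
                                                 pg_shift, mp->flags);

                /* A chunk must hold at least one object or one page. */
                *min_chunk_size = RTE_MAX((size_t)1 << pg_shift,
                                          total_elt_sz);

                /* Cache-line or page boundary alignment, whichever is
                 * bigger (moved here from the generic code).
                 */
                *align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE,
                                 (size_t)1 << pg_shift);

                return mem_size;
        }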

  doc/guides/rel_notes/deprecation.rst         |   3 +-
  doc/guides/rel_notes/release_18_05.rst       |   8 +-
  lib/librte_mempool/Makefile                  |   3 +-
  lib/librte_mempool/meson.build               |   5 +-
  lib/librte_mempool/rte_mempool.c             | 114 +++++++++++++++------------
  lib/librte_mempool/rte_mempool.h             |  86 +++++++++++++++++++-
  lib/librte_mempool/rte_mempool_ops.c         |  18 +++++
  lib/librte_mempool/rte_mempool_ops_default.c |  38 +++++++++
  lib/librte_mempool/rte_mempool_version.map   |   7 ++
  9 files changed, 225 insertions(+), 57 deletions(-)
  create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

<...>

-       total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
        for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+               size_t min_chunk_size;
                unsigned int flags;
+
                if (try_contig || no_pageshift)
-                       size = rte_mempool_xmem_size(n, total_elt_sz, 0,
-                               mp->flags);
+                       mem_size = rte_mempool_ops_calc_mem_size(mp, n,
+                                       0, &min_chunk_size, &align);
                else
-                       size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-                               mp->flags);
+                       mem_size = rte_mempool_ops_calc_mem_size(mp, n,
+                                       pg_shift, &min_chunk_size, &align);
+
+               if (mem_size < 0) {
+                       ret = mem_size;
+                       goto fail;
+               }

                ret = snprintf(mz_name, sizeof(mz_name),
                        RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -692,27 +678,31 @@ rte_mempool_populate_default(struct rte_mempool *mp)
                if (try_contig)
                        flags |= RTE_MEMZONE_IOVA_CONTIG;
-               mz = rte_memzone_reserve_aligned(mz_name, size, mp->socket_id,
-                               flags, align);
+               mz = rte_memzone_reserve_aligned(mz_name, mem_size,
+                               mp->socket_id, flags, align);
-               /* if we were trying to allocate contiguous memory, adjust
-                * memzone size and page size to fit smaller page sizes, and
-                * try again.
+               /* if we were trying to allocate contiguous memory, failed and
+                * minimum required contiguous chunk fits minimum page, adjust
+                * memzone size to the page size, and try again.
                 */
-               if (mz == NULL && try_contig) {
+               if (mz == NULL && try_contig && min_chunk_size <= pg_sz) {

This is a bit pessimistic. There may not have been enough IOVA-contiguous memory to reserve `mem_size`, but there may still have been enough contiguous memory to try and reserve `min_chunk_size` of contiguous memory, if it is bigger than the minimum page size. This is the *minimum* page size, so bigger pages may be available; ideally, when (min_chunk_size >= pg_sz) && (min_chunk_size < mem_size), we might have tried to allocate some IOVA-contiguous memory and succeeded.
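
Something like the following, purely as an illustration of the idea
(hypothetical, not part of the patch; a real version would also have to
recompute how many objects fit into the smaller reservation):

        if (mz == NULL && try_contig && min_chunk_size >= pg_sz &&
                        min_chunk_size < (size_t)mem_size) {
                /* The full mem_size could not be reserved as
                 * IOVA-contiguous, but a single min_chunk_size chunk
                 * might still fit, e.g. on a bigger page, before giving
                 * up on contiguous memory entirely.
                 */
                mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
                                mp->socket_id, flags, align);
        }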

However, that's not a huge issue, so...

Acked-by: Anatoly Burakov <anatoly.bura...@intel.com>

                        try_contig = false;
                        flags &= ~RTE_MEMZONE_IOVA_CONTIG;
-                       align = pg_sz;
-                       size = rte_mempool_xmem_size(n, total_elt_sz,
-                               pg_shift, mp->flags);
-               mz = rte_memzone_reserve_aligned(mz_name, size,
+                       mem_size = rte_mempool_ops_calc_mem_size(mp, n,
+                                       pg_shift, &min_chunk_size, &align);
+                       if (mem_size < 0) {
+                               ret = mem_size;
+                               goto fail;
+                       }
+
+                       mz = rte_memzone_reserve_aligned(mz_name, mem_size,
                                mp->socket_id, flags, align);
                }

<...>
--
Thanks,
Anatoly
