> -----Original Message-----
> From: Jerin Jacob <jerinjac...@gmail.com>
> Sent: Wednesday, May 24, 2023 4:00 PM
> To: Yan, Zhirun <zhirun....@intel.com>
> Cc: dev@dpdk.org; jer...@marvell.com; kirankum...@marvell.com;
> ndabilpu...@marvell.com; step...@networkplumber.org;
> pbhagavat...@marvell.com; Liang, Cunming <cunming.li...@intel.com>; Wang,
> Haiyue <haiyue.w...@intel.com>
> Subject: Re: [PATCH v6 09/15] graph: introduce stream moving cross cores
> 
> On Tue, May 9, 2023 at 11:35 AM Zhirun Yan <zhirun....@intel.com> wrote:
> >
> > This patch introduces the key functions that allow a worker thread to
> > enqueue and move streams of objects to the next nodes over different
> > cores.
> 
> different cores-> different cores for mcore dispatch model.
> 
Got it. Thanks.
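
For reference, a rough application-side sketch of what this enables (illustrative only: "app_node_a"/"app_node_b" are made-up node names, the lcore ids are arbitrary, and the calls assume the signature used in this series):

    /* After selecting the dispatch worker model with
     * rte_graph_worker_model_set(), bind nodes to different lcores;
     * streams flowing between the two nodes are then moved across
     * cores via the per-graph work queue added by this series. */
    rte_graph_model_dispatch_lcore_affinity_set("app_node_a", 1);
    rte_graph_model_dispatch_lcore_affinity_set("app_node_b", 2);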

> 
> >
> > Signed-off-by: Haiyue Wang <haiyue.w...@intel.com>
> > Signed-off-by: Cunming Liang <cunming.li...@intel.com>
> > Signed-off-by: Zhirun Yan <zhirun....@intel.com>
> > ---
> >  lib/graph/graph.c                    |   6 +-
> >  lib/graph/graph_private.h            |  30 +++++
> >  lib/graph/meson.build                |   2 +-
> >  lib/graph/rte_graph.h                |  15 ++-
> >  lib/graph/rte_graph_model_dispatch.c | 157 +++++++++++++++++++++++++++
> >  lib/graph/rte_graph_model_dispatch.h |  37 +++++++
> >  lib/graph/version.map                |   2 +
> >  7 files changed, 244 insertions(+), 5 deletions(-)
> >
> > diff --git a/lib/graph/graph.c b/lib/graph/graph.c
> > index e809aa55b0..f555844d8f 100644
> > --- a/lib/graph/graph.c
> > +++ b/lib/graph/graph.c
> > @@ -495,7 +495,7 @@ clone_name(struct graph *graph, struct graph *parent_graph, const char *name)
> >  }
> >
> >  static rte_graph_t
> > -graph_clone(struct graph *parent_graph, const char *name)
> > +graph_clone(struct graph *parent_graph, const char *name, struct rte_graph_param *prm)
> >  {
> >         struct graph_node *graph_node;
> >         struct graph *graph;
> > @@ -566,14 +566,14 @@ graph_clone(struct graph *parent_graph, const char *name)
> >  }
> >
> > --- a/lib/graph/rte_graph.h
> > +++ b/lib/graph/rte_graph.h
> > @@ -169,6 +169,17 @@ struct rte_graph_param {
> >         bool pcap_enable; /**< Pcap enable. */
> >         uint64_t num_pkt_to_capture; /**< Number of packets to capture. */
> >         char *pcap_filename; /**< Filename in which packets to be captured.*/
> > +
> > +       RTE_STD_C11
> > +       union {
> > +               struct {
> > +                       uint64_t rsvd[8];
> > +               } rtc;
> > +               struct {
> > +                       uint32_t wq_size_max;
> > +                       uint32_t mp_capacity;
> 
> Add Doxygen comments for all of these, please.
> 
> > +               } dispatch;
> > +       };
> >  };
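
A sketch of what the Doxygen comments on the new fields could look like (meanings as intended by this patch; final wording may differ in v7):

    RTE_STD_C11
    union {
            struct {
                    uint64_t rsvd[8]; /**< Reserved for the rtc model. */
            } rtc;
            struct {
                    uint32_t wq_size_max; /**< Maximum size of the graph work queue (dispatch model). */
                    uint32_t mp_capacity; /**< Capacity of the mempool backing the work queue (dispatch model). */
            } dispatch;
    };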
> >
> >  /**
> > @@ -260,12 +271,14 @@ int rte_graph_destroy(rte_graph_t id);
> >   *   Name of the new graph. The library prepends the parent graph name to the
> >   * user-specified name. The final graph name will be,
> >   * "parent graph name" + "-" + name.
> > + * @param prm
> > + *   Graph parameter, includes model-specific parameters in this graph.
> >   *
> >   * @return
> >   *   Valid graph id on success, RTE_GRAPH_ID_INVALID otherwise.
> >   */
> >  __rte_experimental
> > -rte_graph_t rte_graph_clone(rte_graph_t id, const char *name);
> > +rte_graph_t rte_graph_clone(rte_graph_t id, const char *name, struct rte_graph_param *prm);
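
For illustration, a minimal usage sketch of the extended prototype ("parent_id" is a hypothetical graph id and the numbers are arbitrary):

    struct rte_graph_param prm = {0};
    rte_graph_t cloned;

    /* Dispatch-model parameters introduced by this series. */
    prm.dispatch.wq_size_max = 64;
    prm.dispatch.mp_capacity = 1024;

    cloned = rte_graph_clone(parent_id, "worker-1", &prm);
    if (cloned == RTE_GRAPH_ID_INVALID)
            /* handle the error */;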
> >
> >  /**
> > +void
> > +__rte_graph_sched_wq_process(struct rte_graph *graph)
> > +{
> > +       struct graph_sched_wq_node *wq_node;
> > +       struct rte_mempool *mp = graph->mp;
> > +       struct rte_ring *wq = graph->wq;
> > +       uint16_t idx, free_space;
> > +       struct rte_node *node;
> > +       unsigned int i, n;
> > +       struct graph_sched_wq_node *wq_nodes[32];
> 
> Use RTE_GRAPH_BURST_SIZE instead of 32 if it has anything to do with the
> burst size; else ignore.

No, wq_nodes[32] is just temporary space used to consume the queued tasks.

I will add a macro WQ_SIZE to define 32.
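
Roughly, the v7 change would look like this (sketch; WQ_SIZE only bounds how many queued wq nodes one call drains from the ring and is unrelated to RTE_GRAPH_BURST_SIZE):

    #define WQ_SIZE 32

            struct graph_sched_wq_node *wq_nodes[WQ_SIZE];

            n = rte_ring_sc_dequeue_burst_elem(wq, wq_nodes, sizeof(wq_nodes[0]),
                                               RTE_DIM(wq_nodes), NULL);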

> 
> 
> > +
> > +       n = rte_ring_sc_dequeue_burst_elem(wq, wq_nodes, sizeof(wq_nodes[0]),
> > +                                          RTE_DIM(wq_nodes), NULL);
> > +       if (n == 0)
> > +               return;
> > +
> > +       for (i = 0; i < n; i++) {
> > +               wq_node = wq_nodes[i];
> > +               node = RTE_PTR_ADD(graph, wq_node->node_off);
> > +               RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
> > +               idx = node->idx;
> > +               free_space = node->size - idx;
> > +
> > +               if (unlikely(free_space < wq_node->nb_objs))
> > +                       __rte_node_stream_alloc_size(graph, node,
> > +                                                    node->size + wq_node->nb_objs);
> > +
> > +               memmove(&node->objs[idx], wq_node->objs, wq_node->nb_objs * sizeof(void *));
> > +               node->idx = idx + wq_node->nb_objs;
> > +
> > +               __rte_node_process(graph, node);
> > +
> > +               wq_node->nb_objs = 0;
> > +               node->idx = 0;
> > +       }
> > +
> > +       rte_mempool_put_bulk(mp, (void **)wq_nodes, n);
> > +}
> > +
> > +/**
> > + * @internal
> 
> For both internal functions, you can add a Doxygen @note stating that they
> must not be used directly.

Yes, I will add a note here.
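
Something along these lines (exact wording to be finalized in v7):

    /**
     * @internal
     *
     * Process all nodes (streams) in the graph's work queue.
     *
     * @note Called internally by the dispatch model worker loop; applications
     *       must not call this function directly.
     *
     * @param graph
     *   Pointer to the graph object.
     */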

> 
> > + *
> > + * Process all nodes (streams) in the graph's work queue.
> > + *
> > + * @param graph
> > + *   Pointer to the graph object.
> > + */
> > +__rte_experimental
> > +void __rte_graph_sched_wq_process(struct rte_graph *graph);
> > +
> >  /**
> >   * Set lcore affinity with the node.
> >   *
> > diff --git a/lib/graph/version.map b/lib/graph/version.map
> > index aaa86f66ed..d511133f39 100644
> > --- a/lib/graph/version.map
> > +++ b/lib/graph/version.map
> > @@ -48,6 +48,8 @@ EXPERIMENTAL {
> >
> >         rte_graph_worker_model_set;
> >         rte_graph_worker_model_get;
> > +       __rte_graph_sched_wq_process;
> > +       __rte_graph_sched_node_enqueue;
> 
> Please add the _mcore_dispatch_ namespace.

Yes.
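
The renamed symbols would be along the lines of (exact names to be settled in v7):

            __rte_graph_mcore_dispatch_sched_wq_process;
            __rte_graph_mcore_dispatch_sched_node_enqueue;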

> 
> 
> 
> >
> >         rte_graph_model_dispatch_lcore_affinity_set;
> >
> > --
> > 2.37.2
> >
