> -----Original Message-----
> From: Thomas Monjalon <tho...@monjalon.net>
> Sent: 16 March 2023 23:18
> To: Srikanth Yalavarthi <syalavar...@marvell.com>
> Cc: dev@dpdk.org; Shivah Shankar Shankar Narayan Rao
> <sshankarn...@marvell.com>; Jerin Jacob Kollanukkaran
> <jer...@marvell.com>; Anup Prabhu <apra...@marvell.com>; Prince Takkar
> <ptak...@marvell.com>; Parijat Shukla <pshu...@marvell.com>
> Subject: [EXT] Re: [PATCH v6 09/12] app/mldev: enable support for inference
> batches
> 
> 11/03/2023 16:09, Srikanth Yalavarthi:
> > @@ -528,8 +533,8 @@ ml_request_initialize(struct rte_mempool *mp, void *opaque, void *obj, unsigned
> >     req->niters = 0;
> >
> >     /* quantize data */
> > -   rte_ml_io_quantize(t->cmn.opt->dev_id, t->model[t->fid].id,
> > -                      t->model[t->fid].info.batch_size, t->model[t->fid].input, req->input);
> > +   rte_ml_io_quantize(t->cmn.opt->dev_id, t->model[t->fid].id, t->model[t->fid].nb_batches,
> > +                      t->model[t->fid].input, req->input);
> >  }
> >
> >  int
> > @@ -547,7 +552,7 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, uint16_t
> >     int ret;
> >
> >     /* get input buffer size */
> > -   ret = rte_ml_io_input_size_get(opt->dev_id, t->model[fid].id, t->model[fid].info.batch_size,
> > +   ret = rte_ml_io_input_size_get(opt->dev_id, t->model[fid].id, t->model[fid].nb_batches,
> >                                    &t->model[fid].inp_qsize, &t->model[fid].inp_dsize);
> >     if (ret != 0) {
> >             ml_err("Failed to get input size, model : %s\n", opt->filelist[fid].model);
> > @@ -555,9 +560,8 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, uint16_t
> >     }
> >
> >     /* get output buffer size */
> > -   ret = rte_ml_io_output_size_get(opt->dev_id, t->model[fid].id,
> > -                                   t->model[fid].info.batch_size, &t->model[fid].out_qsize,
> > -                                   &t->model[fid].out_dsize);
> > +   ret = rte_ml_io_output_size_get(opt->dev_id, t->model[fid].id, t->model[fid].nb_batches,
> > +                                   &t->model[fid].out_qsize, &t->model[fid].out_dsize);
> >     if (ret != 0) {
> >             ml_err("Failed to get input size, model : %s\n", opt->filelist[fid].model);
> >             return ret;
> > @@ -702,7 +706,7 @@ ml_request_finish(struct rte_mempool *mp, void *opaque, void *obj, unsigned int
> >             return;
> >
> >     t->nb_used++;
> > -   rte_ml_io_dequantize(t->cmn.opt->dev_id, model->id, t->model[req->fid].info.batch_size,
> > +   rte_ml_io_dequantize(t->cmn.opt->dev_id, model->id, t->model[req->fid].nb_batches,
> >                          req->output, model->output);
> 
> These changes look unrelated to the topic of the patch.
> You should probably fix this where those lines are first added.

The changes are related to the patch. Initially, the number of batches run per 
inference was set to the model's default batch_size value, which is reported 
to the user through rte_ml_model_info_get.

This patch adds support for specifying the number of batches to be run per 
inference. Hence, the default batch_size is replaced with the nb_batches value 
specified by the user.
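
To illustrate the resulting flow, here is a minimal, hypothetical sketch (not 
the actual app/mldev code) of how a caller would size and quantize an input 
buffer for a chosen batch count. It assumes the rte_mldev calls as they appear 
in the diff above (rte_ml_io_input_size_get and rte_ml_io_quantize taking a 
batch count) plus rte_ml_model_info_get for the model's default batch_size; 
the helper name, the exact parameter types and the "0 means model default" 
convention are illustrative assumptions.

#include <stdint.h>
#include <rte_mldev.h>

/*
 * Hypothetical helper: size and quantize one inference input.
 * nb_batches is the user-requested batch count; 0 falls back to the
 * model default (an illustrative convention, not part of the API).
 */
static int
prepare_input(int16_t dev_id, uint16_t model_id, uint16_t nb_batches,
              void *dbuffer, void *qbuffer)
{
        struct rte_ml_model_info info;
        uint64_t qsize, dsize;
        int ret;

        /* Model metadata, including its default batch_size. */
        ret = rte_ml_model_info_get(dev_id, model_id, &info);
        if (ret != 0)
                return ret;

        if (nb_batches == 0)
                nb_batches = (uint16_t)info.batch_size;

        /* Quantized/dequantized input sizes scale with the batch count. */
        ret = rte_ml_io_input_size_get(dev_id, model_id, nb_batches, &qsize, &dsize);
        if (ret != 0)
                return ret;

        /* qbuffer is assumed to be at least qsize bytes. */
        return rte_ml_io_quantize(dev_id, model_id, nb_batches, dbuffer, qbuffer);
}

When the user does not specify a batch count, this degenerates to the 
pre-patch behaviour of using the batch_size reported by rte_ml_model_info_get; 
otherwise the user-supplied nb_batches is used for buffer sizing, quantize and 
dequantize, which is what the hunks above change in the test application.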
