> On Tue, Oct 3, 2023 at 7:43 PM Bruce Richardson
> <bruce.richard...@intel.com> wrote:
> >
> > On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> > > On Tue, Oct 3, 2023 at 3:17 PM <pbhagavat...@marvell.com> wrote:
> > > >
> > > > From: Pavan Nikhilesh <pbhagavat...@marvell.com>
> > > >
> > > > A collection of event queues linked to an event port can be associated
> > > > with a unique identifier called a link profile. Multiple such profiles
> > > > can be configured, based on the event device capability, using the
> > > > function `rte_event_port_profile_links_set`, which takes arguments
> > > > similar to `rte_event_port_link` in addition to the profile identifier.
> > > >
> > > > The maximum number of link profiles supported by an event device is
> > > > advertised through the structure member
> > >
> > > ...
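
For readers following the thread, here is a minimal usage sketch of the
profile-links API described in the cover letter above. It is not part of the
series; it assumes the call takes the rte_event_port_link() arguments plus a
trailing profile_id and returns the number of links established, and the
helper name, queue ids and priority are purely illustrative.

/* Sketch only: link queue 0 under profile 0 and queue 1 under profile 1 on
 * the given port, guarded by the advertised max_profiles_per_port.
 */
#include <errno.h>
#include <rte_eventdev.h>

static int
setup_two_link_profiles(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event_dev_info info;
        uint8_t q0 = 0, q1 = 1;
        uint8_t prio = RTE_EVENT_DEV_PRIORITY_NORMAL;

        if (rte_event_dev_info_get(dev_id, &info) < 0)
                return -1;
        if (info.max_profiles_per_port < 2)
                return -ENOTSUP; /* device cannot hold two link profiles */

        /* Each call populates one profile with its own set of links. */
        if (rte_event_port_profile_links_set(dev_id, port_id, &q0, &prio, 1, 0) != 1)
                return -1;
        if (rte_event_port_profile_links_set(dev_id, port_id, &q1, &prio, 1, 1) != 1)
                return -1;

        /* The datapath can later flip between the two link sets, e.g. with
         * rte_event_port_profile_switch(dev_id, port_id, 1).
         */
        return 0;
}
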
> > >
> > > >
> > > > v6 Changes:
> > >
> > > Series applied to dpdk-next-net-eventdev/for-main with the following
> > > changes. Thanks
> > >
> >
> > I'm doing some investigation work on the software eventdev using
> > eventdev_pipeline, and following these patches the eventdev_pipeline
> > sample no longer works for me. The error message is shown below:
> >
> >     Config:
> >         ports: 2
> >         workers: 22
> >         packets: 33554432
> >         Queue-prio: 0
> >         qid0 type: ordered
> >         Cores available: 48
> >         Cores used: 24
> >         Eventdev 0: event_sw
> >     Stages:
> >         Stage 0, Type Ordered   Priority = 128
> >
> >   EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
> >   Error setting up port 0
> >
> > Parameters used when running the app:
> >   -l 24-47 --in-memory --vdev=event_sw0 -- \
> >         -r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000  -c 64 -W 500
> 
> 
> The max_profiles_per_port = 1 default added below is getting overridden in
> [1]: the driver's dev_infos_get callback assigns a complete
> rte_event_dev_info structure over dev_info, zeroing the field again. I was
> advised to take this path to avoid driver changes, but it looks like we
> cannot rely on the common code. @Pavan Nikhilesh, could you change back to
> your old version (where every driver is changed to add
> max_profiles_per_port = 1)? I will squash it.
> 
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 60509c6efb..5ee8bd665b 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
>  		return -EINVAL;
> 
>  	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> +	dev_info->max_profiles_per_port = 1;


This should be fixed with the following patch; @Bruce Richardson, could you
please verify?
https://patchwork.dpdk.org/project/dpdk/patch/20231003152535.10177-1-pbhagavat...@marvell.com/
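
One way the common code could keep the default regardless of what a driver
does in its info callback is to apply it after the callback returns. A rough
sketch of the relevant lines inside rte_event_dev_info_get(), assuming the
ops callback is named dev_infos_get; this is not necessarily what the linked
patch does:

	memset(dev_info, 0, sizeof(struct rte_event_dev_info));

	if (*dev->dev_ops->dev_infos_get == NULL)
		return -ENOTSUP;
	(*dev->dev_ops->dev_infos_get)(dev, dev_info);

	/* Apply the default after the driver callback so a driver that
	 * assigns a whole struct (as in [1] below) cannot zero it.
	 */
	if (dev_info->max_profiles_per_port == 0)
		dev_info->max_profiles_per_port = 1;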

> 
> [1]
> static void
> sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
> {
>         RTE_SET_USED(dev);
> 
>         static const struct rte_event_dev_info evdev_sw_info = {
>                         .driver_name = SW_PMD_NAME,
>                         .max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
>                         .max_event_queue_flows = SW_QID_NUM_FIDS,
>                         .max_event_queue_priority_levels = SW_Q_PRIORITY_MAX,
>                         .max_event_priority_levels = SW_IQS_MAX,
>                         .max_event_ports = SW_PORTS_MAX,
>                         .max_event_port_dequeue_depth = MAX_SW_CONS_Q_DEPTH,
>                         .max_event_port_enqueue_depth = MAX_SW_PROD_Q_DEPTH,
>                         .max_num_events = SW_INFLIGHT_EVENTS_TOTAL,
>                         .event_dev_cap = (
>                                 RTE_EVENT_DEV_CAP_QUEUE_QOS |
>                                 RTE_EVENT_DEV_CAP_BURST_MODE |
>                                 RTE_EVENT_DEV_CAP_EVENT_QOS |
>                                 RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
>                                 RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
>                                 RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>                                 RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>                                 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
>                                 RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
>         };
> 
>         *info = evdev_sw_info;
> }
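
For reference, the per-driver alternative requested above would amount to
each PMD advertising the field explicitly in its info struct. An illustrative
sketch for event_sw follows; it is not the applied change, and the remaining
fields stay as quoted in [1]:

static const struct rte_event_dev_info evdev_sw_info = {
		.driver_name = SW_PMD_NAME,
		/* Advertise the single link profile this PMD supports so the
		 * common-code default is not relied upon.
		 */
		.max_profiles_per_port = 1,
		.max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
		/* ... remaining fields unchanged from [1] ... */
};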
> 
> 
> >
> > Regards,
> > /Bruce
