> -----Original Message-----
> From: Haifei Luo <haif...@nvidia.com>
> Sent: Thursday, April 15, 2021 14:19
> To: dev@dpdk.org
> Cc: Ori Kam <or...@nvidia.com>; Slava Ovsiienko <viachesl...@nvidia.com>;
> Raslan Darawsheh <rasl...@nvidia.com>; Xueming(Steven) Li
> <xuemi...@nvidia.com>; Haifei Luo <haif...@nvidia.com>; Matan Azrad
> <ma...@nvidia.com>; Shahaf Shuler <shah...@nvidia.com>
> Subject: [PATCH v3 2/2] net/mlx5: add mlx5 APIs for single flow dump
> feature
>
> Modify the mlx5_flow_dev_dump API to support dumping a single flow.
> Modify mlx5_socket accordingly, since an extra argument, flow_ptr, is added.
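
[Not part of the patch, just an illustrative caller-side sketch. It assumes
the updated rte_flow_dev_dump() prototype introduced by patch 1/2 of this
series, where a NULL flow pointer keeps the previous "dump all flows"
behavior; the helper name and file handling below are my own assumptions.]

/* Sketch only: dump a single flow, or all flows when flow == NULL. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_flow.h>

static int
dump_flows(uint16_t port_id, struct rte_flow *flow, const char *path)
{
	struct rte_flow_error error;
	FILE *file = fopen(path, "w");
	int ret;

	if (file == NULL)
		return -errno;
	/* flow == NULL dumps every flow created on the port. */
	ret = rte_flow_dev_dump(port_id, flow, file, &error);
	fclose(file);
	return ret;
}
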
>
> The data structure sent to the DPDK application from the utility that
> triggers the flow dumps must be packed, and its endianness must be
> specified. The native host endianness can be used, since the whole
> exchange happens within the same host (the file handle is shared via
> sendmsg ancillary data; a remote approach is not applicable, as no
> inter-host communication takes place).
>
> The message structure to dump one/all flow(s):
> struct mlx5_flow_dump_req {
> 	uint32_t port_id;
> 	uint64_t flow_ptr;
> } __rte_packed;
>
> If flow_ptr is 0, all flows for the specified port will be dumped.
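
[Again not from the patch: a minimal sketch of how the dump utility side
could build this request and hand over the output file descriptor as
SCM_RIGHTS ancillary data on an already-connected PMD Unix socket. The
function name and the way the socket descriptor is obtained are assumptions.]

/* Sketch only: send mlx5_flow_dump_req plus the output fd to the PMD. */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <rte_common.h>

struct mlx5_flow_dump_req {
	uint32_t port_id;
	uint64_t flow_ptr; /* 0 = dump all flows on the port */
} __rte_packed;

static int
send_dump_req(int sock, int out_fd, uint32_t port_id, uint64_t flow_ptr)
{
	/* Native endianness is fine: both ends run on the same host. */
	struct mlx5_flow_dump_req req = {
		.port_id = port_id,
		.flow_ptr = flow_ptr,
	};
	struct iovec iov = { .iov_base = &req, .iov_len = sizeof(req) };
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct msghdr msg;
	struct cmsghdr *cmsg;

	memset(&msg, 0, sizeof(msg));
	memset(cbuf, 0, sizeof(cbuf));
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	/* The PMD writes the dump into this descriptor. */
	memcpy(CMSG_DATA(cmsg), &out_fd, sizeof(int));
	return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}
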
>
> Signed-off-by: Haifei Luo <haif...@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>