> -----Original Message-----
> From: Vamsi Krishna Attunuru <vattun...@marvell.com>
> Sent: Thursday, June 1, 2023 8:14 AM
> To: Jerin Jacob <jerinjac...@gmail.com>
> Cc: Yan, Zhirun <zhirun....@intel.com>; dev@dpdk.org;
> tho...@monjalon.net; Jerin Jacob Kollanukkaran <jer...@marvell.com>;
> Nithin Kumar Dabilpuram <ndabilpu...@marvell.com>; Liang, Cunming
> <cunming.li...@intel.com>; Wang, Haiyue <haiyue.w...@intel.com>; Sunil
> Kumar Kori <sk...@marvell.com>
> Subject: RE: [EXT] Re: [PATCH v2 4/4] app: add testgraph application
> 
> 
> 
> > -----Original Message-----
> > From: Jerin Jacob <jerinjac...@gmail.com>
> > Sent: Tuesday, May 30, 2023 1:05 PM
> > To: Vamsi Krishna Attunuru <vattun...@marvell.com>
> > Cc: Yan, Zhirun <zhirun....@intel.com>; dev@dpdk.org;
> > tho...@monjalon.net; Jerin Jacob Kollanukkaran <jer...@marvell.com>;
> > Nithin Kumar Dabilpuram <ndabilpu...@marvell.com>; Liang, Cunming
> > <cunming.li...@intel.com>; Wang, Haiyue <haiyue.w...@intel.com>;
> Sunil
> > Kumar Kori <sk...@marvell.com>
> > Subject: [EXT] Re: [PATCH v2 4/4] app: add testgraph application
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > On Mon, May 22, 2023 at 12:37 PM Vamsi Krishna Attunuru
> > <vattun...@marvell.com> wrote:
> >
> > > > > +static int
> > > > > +link_graph_nodes(uint64_t valid_nodes, uint32_t lcore_id) {
> > > > > +   int ret = 0;
> > > > > +
> > > > > +   num_patterns = 0;
> > > > > +
> > > > > +   if (valid_nodes == (TEST_GRAPH_ETHDEV_TX_NODE |
> >
> >
> > I think if we need to extend the C code to add each new use case, it
> > will not scale.
> > IMO, we should look at a more runtime- and file-based interface.
> > Something like
> > https://github.com/DPDK/dpdk/blob/main/examples/ip_pipeline/examples/l2fwd.cli
> > In a nutshell,
> > 1) File based interface to kick-start the valid use case enablement
> > 2) Less logic in C code; everything should be driven from the config
> > file
> > 3) Allow runtime changes. examples/ip_pipeline provides a telnet
> > interface for updates. A similar concept can be followed.
> >
> > I think we should push the app to the next release, not this release.
> > Sorry for reviewing late.
> Sure, I agree with your new proposal. I will drop this patch for this release.

As discussed, we are proposing a command-based interface for running different
use cases, with a configuration file as input per use case. Please review and
let us know your comments; meanwhile, we have started implementing v1 with the
below specification.

Graph application interface file
Configure use cases
===================
This section describes which use cases are to be configured, the model to be
used, and the coremask on which to run the graph. The following syntax is
exposed to configure a given use case.

Syntax:

graph <usecases> [usecase specific configuration] <model> [model specific configuration]

Where:

usecases: A comma-separated list of the requested use cases. Example values:
    l3fwd
    ipsec
usecase specific configuration: Use case specific parameters, such as:
    burst size (bsz)
    timeout (tmo)
    coremask
model: The model used for dequeuing packets. Example models:
    run to completion (rtc)
    multi core dispatch (mcd)
Global specific configuration
=============================
This section consists of the device-specific configuration needed to make a
DPDK port usable, such as the number of Rx/Tx queues, MTU, mempool, etc. It
also contains the global network table configuration required for each use
case, such as assigning an IP address to a device and adding ARP entries for a
given IP. The hardware offloads to be used by the use case are also specified
under this configuration. The graph for the use case is created at the end of
this configuration.

Syntax:

mempool <mempool_name> size <num> buffers <num> cache <val> <cpuid>

Where:

mempool_name  : Name of the mempool, used for further pool operations.
size <num>    : Size of each element in the mempool
buffers <num> : Number of elements in the mempool
cache <val>   : Size of the per-core object cache
<cpuid>       : Socket id
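
For example, the l3fwd use case at the end of this mail creates a
32000-buffer pool on socket 0 (note it spells the socket field as
"cpu <socket_id>"):

```
mempool mempool0 size 2046 buffers 32000 cache 256 cpu 0
```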


Syntax:

ethdev <dbdf> rxq <num> txq <num> <mempool_name> promiscuous <on/off>
ethdev <dbdf> tx_offload <bitmask>
ethdev <dbdf> rx_offload <bitmask>
ethdev <dbdf> promiscuous <on/off>
ethdev <dbdf> mtu <size>

Where:

dbdf         : PCI id of the device in DBDF format, or vdev name for non-PCI
devices.
rxq          : Number of Rx queues on the device
txq          : Number of Tx queues on the device
mempool_name : Mempool to be attached to the Rx queues.
rx_offload   : Offloads enabled on ingress; a bitmask built from
RTE_ETH_RX_OFFLOAD_* flags.
tx_offload   : Offloads enabled on egress; a bitmask built from
RTE_ETH_TX_OFFLOAD_* flags.
promiscuous  : Toggle promiscuous mode
mtu          : MTU size
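
For example, combining the forms above (the first line is taken from the
l3fwd example below; the MTU value here is only illustrative):

```
ethdev 0002:02:00.0 rxq 1 txq 1 mempool0 promiscuous off
ethdev 0002:02:00.0 mtu 1500
```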


Syntax:

neigh add ipv4 <ip> <mac>

Where:

ip  : IPv4/IPv6 address for which the MAC address is to be added.
mac : MAC address to be configured for the given IP.
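
For example, taken from the l3fwd use case at the end of this mail:

```
neigh add ipv4 10.0.2.2 52:20:DA:4F:68:70
```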


Syntax:

ip4 addr add <ip> netmask <mask> <dbdf>

Where:

ip   : IPv4 address to be assigned to the device.
mask : Subnet mask.
dbdf : PCI id of the device in DBDF format, or vdev name for non-PCI devices.
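
Note that the l3fwd example at the end of this mail uses a CIDR-style
shorthand instead of an explicit netmask:

```
ip4 addr add 10.0.2.1/24 0002:02:00.0
```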


Syntax:

ip6 addr add <ip> netmask <mask> <dbdf>

Where:

ip   : IPv6 address to be assigned to the device.
mask : Subnet mask.
dbdf : PCI id of the device in DBDF format, or vdev name for non-PCI devices.
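
A hypothetical example, using the IPv6 documentation prefix (the mask here is
assumed to be an IPv6 prefix length; the exact format is to be finalized):

```
ip6 addr add 2001:db8::1 netmask 64 0002:02:00.0
```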


Node specific configuration
==================
This section consists of configuration used by the nodes in the graph.
Depending on the use case, some of this configuration can be modified on the
fly: for the l3fwd use case, route entries can be added or deleted while the
application is running, unlike the other configuration. The following
configurations are exposed:

Syntax:

route add ipv4 <node_name> <ip> netmask <mask> via <gateway>

Where:

node_name : Name of the node where the route is to be added. Currently the
only supported node is ip4_lookup.
ip        : IPv4 address to be added to the route table.
mask      : Subnet mask.
gateway   : Gateway IP used to redirect the packet to the next hop.
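
For example, installing a route on the ip4_lookup node (addresses taken from
the l3fwd example at the end of this mail):

```
route add ipv4 ip4_lookup 10.0.2.0 netmask 255.255.255.0 via 10.0.2.1
```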


Syntax:

route add ipv6 <node_name> <ip> netmask <mask> via <gateway>

Where:

node_name : Name of the node where the route is to be added. Currently the
only supported node is ip6_lookup.
ip        : IPv6 address to be added to the route table.
mask      : Subnet mask.
gateway   : Gateway IP used to redirect the packet to the next hop.
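
A hypothetical IPv6 example, using documentation-prefix addresses (the mask
format for IPv6 is assumed to be a prefix length; exact format is to be
finalized):

```
route add ipv6 ip6_lookup 2001:db8:1:: netmask 64 via 2001:db8::2
```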


Syntax:

map <node_name> port <dbdf> queue <rxq> core <core_id>

Where:

node_name : Name of the node that will receive packets as per the above
mapping. Currently the only supported node is ethdev_rx.
dbdf      : PCI id of the device in DBDF format, or vdev name for non-PCI
devices.
rxq       : Rx queue id to be mapped.
core_id   : Core on which the <node_name> instance will run.
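
For example, steering queue 0 of a port to the ethdev_rx instance running on
core 6, as in the l3fwd example at the end of this mail:

```
map ethdev_rx port 0002:02:00.0 queue 0 core 6
```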
Run use case
============
The command under this section runs the application and starts receiving and
transmitting packets using graph walk.

Syntax:

graph start



Note:
    <> : Mandatory fields
    [] : Optional fields
    ;  : Comment marker; text following a semicolon is ignored by all use
cases.


Example use case: l3fwd
================
;Configure usecase
graph l3fwd default 0xff

;global specific configuration
mempool mempool0 size 2046 buffers 32000 cache 256 cpu 0
ethdev 0002:02:00.0 rxq 1 txq 1 mempool0 promiscuous off
ethdev 0002:03:00.0 rxq 1 txq 1 mempool0 promiscuous off 
ip4 addr add 10.0.2.1/24 0002:02:00.0
ip4 addr add 10.0.3.1/24 0002:03:00.0
neigh add ipv4 10.0.2.2 52:20:DA:4F:68:70
neigh add ipv4 10.0.2.5 62:20:DA:4F:68:70
neigh add ipv4 10.0.3.2 72:20:DA:4F:68:70
neigh add ipv4 10.0.3.5 82:20:DA:4F:68:70

;node specific configuration
route add ipv4 ip4_lookup 10.0.2.0 netmask 255.255.255.0 via 10.0.2.1
route add ipv4 ip4_lookup 10.0.3.0 netmask 255.255.255.0 via 10.0.3.1


map ethdev_rx port 0002:02:00.0 queue 0 core 6
map ethdev_rx port 0002:03:00.0 queue 0 core 5

;Run usecase
graph start
