On 2/20/2019 11:52 AM, Jakub Grajciar wrote:
> Memory interface (memif), provides high performance
> packet transfer over shared memory.
> 
> Signed-off-by: Jakub Grajciar <jgraj...@cisco.com>
> ---
>  MAINTAINERS                                 |    6 +
>  config/common_base                          |    5 +
>  config/common_linuxapp                      |    1 +
>  doc/guides/nics/features/memif.ini          |   14 +
>  doc/guides/nics/index.rst                   |    1 +
>  doc/guides/nics/memif.rst                   |  194 ++++
>  drivers/net/Makefile                        |    1 +
>  drivers/net/memif/Makefile                  |   28 +
>  drivers/net/memif/memif.h                   |  178 +++
>  drivers/net/memif/memif_socket.c            | 1092 ++++++++++++++++++
>  drivers/net/memif/memif_socket.h            |  104 ++
>  drivers/net/memif/meson.build               |   13 +
>  drivers/net/memif/rte_eth_memif.c           | 1124 +++++++++++++++++++
>  drivers/net/memif/rte_eth_memif.h           |  203 ++++
>  drivers/net/memif/rte_pmd_memif_version.map |    4 +
>  drivers/net/meson.build                     |    1 +
>  mk/rte.app.mk                               |    1 +
Can you please update the release notes to document the new PMD?
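Just to be clear what I mean, something along these lines (assuming 19.05 is the target release, so doc/guides/rel_notes/release_19_05.rst under "New Features"; exact wording is up to you):

    * **Added memif PMD.**

      Added memory interface (memif) PMD, which provides high performance
      packet transfer over shared memory.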

<...>

> 
> requires patch: http://patchwork.dpdk.org/patch/49009/

Thanks for highlighting this dependency. Can you please elaborate on the
relation between the interrupt patch and the memif PMD?

<...>

> +Example: testpmd and testpmd
> +----------------------------
> +In this example we run two instances of testpmd application and transmit packets over memif.
> +
> +First create master interface::
> +
> +    #./testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=master -- -i
> +
> +Now create slave interace (master must be already running so the slave will connect)::

s/interace/interface/

> +
> +    #./testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif -- -i
> +
> +Set forwarding mode in one of the instances to 'rx only' and the other to 'tx only'::
> +
> +    testpmd> set fwd rxonly
> +    testpmd> start
> +
> +    testpmd> set fwd txonly
> +    testpmd> start

Is the shared memory used as a ring, with a single producer and single consumer?
Is there a way to use the memif PMD for both sending and receiving packets?

It is possible to create two ring PMDs in a single testpmd application and loop
packets between them continuously via the tx_first parameter [1]; can the same
be done via memif?

[1]
./build/app/testpmd -w 0:0.0 --vdev net_ring0 --vdev net_ring1 -- -i
testpmd> start tx_first
testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################
  RX-packets: 1365649088 RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 1365649120 TX-errors: 0          TX-bytes:  0
.....
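
To be concrete, I was thinking of something like the following (untested
sketch, assuming a master and a slave memif vdev can coexist in a single
process and connect to each other over the default socket; the vdev names
and parameters are only illustrative):

./testpmd -l 0-3 --vdev=net_memif0,role=master --vdev=net_memif1 -- -i
testpmd> start tx_first
testpmd> show port stats all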


<...>

> +
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring

Is rte_ring library used?
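If it is not actually referenced (e.g. 'grep -r rte_ring drivers/net/memif/'
finds nothing), I would expect the link line can be trimmed to:

LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool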

<...>

> @@ -0,0 +1,4 @@
> +EXPERIMENTAL {
> +
> +        local: *;
> +};

Please use the release version name instead of "EXPERIMENTAL", which is
reserved for experimental APIs.
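
That is, something along these lines (assuming the target release is 19.05):

DPDK_19.05 {

        local: *;
};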
