I don't think you can change the maximum number of queues from the client
side. However, if you create the memif master with X queues, your client
should be able to create between 1 and X queues.
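
For illustration, here is a rough sketch of what requesting several queues at
connect time could look like on the client (slave) side with libmemif. This is
only a sketch based on the memif_conn_args_t / memif_create() API as I
remember it -- field names, the socket-setup call and the defaults may differ
in your libmemif version, and the interface name and queue count below are
just placeholders:

    #include <string.h>
    #include <libmemif.h>

    /* Sketch: connect as slave and request `num_queues` rings in each
     * direction. `sock` must already have been created beforehand
     * (e.g. with memif_create_socket(), whose exact arguments differ
     * between libmemif versions). */
    static int
    connect_slave (memif_conn_handle_t *conn, memif_socket_handle_t sock,
                   uint8_t num_queues,
                   memif_connection_update_t *on_connect,
                   memif_connection_update_t *on_disconnect,
                   memif_interrupt_t *on_interrupt)
    {
      memif_conn_args_t args;
      memset (&args, 0, sizeof (args));

      args.socket = sock;
      args.interface_id = 0;
      args.is_master = 0;              /* slave / client role */
      args.num_s2m_rings = num_queues; /* slave-to-master (TX) rings */
      args.num_m2s_rings = num_queues; /* master-to-slave (RX) rings */
      args.buffer_size = 2048;
      args.log2_ring_size = 10;        /* 1024 descriptors per ring */
      strncpy ((char *) args.interface_name, "memif0",
               sizeof (args.interface_name) - 1);

      /* Per the note above, the master should already have been created
       * with at least num_queues queues. */
      return memif_create (conn, &args, on_connect, on_disconnect,
                           on_interrupt, NULL);
    }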
In other words, create your memif master with enough queues beforehand, and
then you can decide how many of them you actually want to use in the client.
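
And since the usual reason for multiple queues is one queue per worker thread,
a per-thread RX loop could look roughly like this (again just a sketch: the
burst size is arbitrary, error handling is minimal, and you should check
memif_rx_burst()/memif_refill_queue() against the libmemif.h you build with):

    #include <libmemif.h>

    #define BURST 32

    /* Sketch: every worker thread shares the same connection but polls
     * only its own queue id (qid == worker index). */
    static void
    rx_poll_loop (memif_conn_handle_t conn, uint16_t qid, volatile int *stop)
    {
      memif_buffer_t bufs[BURST];
      uint16_t rx = 0;

      while (!*stop)
        {
          /* Receive up to BURST packets from this thread's queue only. */
          int err = memif_rx_burst (conn, qid, bufs, BURST, &rx);
          if (err != MEMIF_ERR_SUCCESS && err != MEMIF_ERR_NOBUF)
            break;

          for (uint16_t i = 0; i < rx; i++)
            {
              /* ... process bufs[i].data / bufs[i].len here ... */
            }

          /* Hand the descriptors back to the ring so the peer can
           * reuse them. */
          memif_refill_queue (conn, qid, rx, 0);
        }
    }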

Best
ben

> -----Original Message-----
> From: Catalin Iordache <catalinn.iorda...@gmail.com>
> Sent: Thursday, April 7, 2022 19:28
> To: Benoit Ganne (bganne) <bga...@cisco.com>
> Cc: Dave Wallace <dwallac...@gmail.com>; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] In regards of the memif library from the vpp Git
> repository
> 
> Hi Ben,
> 
> Thanks for getting back to me. Can you tell me if there is a way to create
> multiple queues from the client side? I do not have control over what the
> master does in my case.
> 
> Best regards,
> Catalin
> 
> 
> 
>       On 5 Apr 2022, at 12:38, Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> 
>       To me the usual way of using memif with multiple threads should be
> to use a single memif socket per interface but 1 queue per thread.
> 
>       Best
>       Ben
> 
> 
> 
>               -----Original Message-----
>               From: Catalin Iordache <catalinn.iorda...@gmail.com>
>               Sent: Sunday, April 3, 2022 15:27
>               To: Benoit Ganne (bganne) <bga...@cisco.com>
>               Cc: Dave Wallace <dwallac...@gmail.com>
>               Subject: Re: [vpp-dev] In regards of the memif library from the vpp Git repository
> 
>               Hi Benoit, Dave,
> 
>               I am coming back to you with further questions about multi-threading
>               usage of memif. Reading over the documentation:
>               https://github.com/FDio/vpp/blob/master/extras/libmemif/docs/gettingstarted_doc.rst#multi-threading
>               it states that multiple client threads can have their own memif socket
>               and interface, all pointing to the same UNIX socket.
> 
>               My approach is to have a map of sockets and memif connections, with
>               each entry dedicated to one thread. However, when I am trying to
>               create a new memif connection to a UNIX socket that already has one,
>               I get the `Already connected` error message from the library.
> 
>               Here is the master interface:
> 
>       {"level":"warn","ts":1648991883.3501909,"logger":"DPDK","msg":"rte_pmd_memif_probe(): Failed to register mp action callback: Operation not supported"}
> 
>       {"level":"info","ts":1648991883.350239,"logger":"eal","msg":"vdev initialized","name":"net_memifW000000000015G","args":"role=server,bsize=9000,rsize=12,socket=/run/ndn/ndnc-memif-5574-1648991883347378250.sock,socket-abstract=no,mac=F2:6D:65:6D:69:66,id=0","socket":"any"}
> 
>       {"level":"info","ts":1648991883.3502984,"logger":"ethport","msg":"port opened","port":5,"rxImpl":"RxMemif"}
> 
>       {"level":"info","ts":1648991883.3507469,"logger":"ethdev","msg":"ethdev started","id":5,"name":"net_memifW000000000015G","driver":"net_memif","mtu":9000,"rxq":1,"txq":1,"promisc":false}
> 
>       {"level":"info","ts":1648991883.3507726,"logger":"iface","msg":"adding RxGroup to RxLoop","rxl-ptr":6463904128,"rxl-lc":6,"rxg-ptr":6457272640,"rxg":"EthRxFlow(face=26399,port=5,queue=0)"}
> 
>       {"level":"info","ts":1648991883.3507807,"logger":"iface","msg":"adding face to TxLoop","txl-ptr":6388813760,"txl-lc":5,"face":26399}
> 
>       {"level":"info","ts":1648991883.350785,"logger":"ethport","msg":"face started","port":5,"id":26399}
> 
>       {"level":"info","ts":1648991883.3507934,"logger":"iface","msg":"face created","id":26399,"socket":0,"mtu":9000,"locator":{"dataroom":9000,"id":0,"ringCapacity":4096,"role":"server","rxQueueSize":64,"scheme":"memif","socketName":"/run/ndn/ndnc-memif-5574-1648991883347378250.sock","txQueueSize":64}}
> 
>               on which, from what I understand, only one thread can process it,
>               while on the client side I get the following output:
> 
>               04-03 13:18:03.351 5574 5574 I createFace mutation done. id=K7N5THI3NRNU50KI7N65M6M6N0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_add_epoll_fd:260: fd 4 added to epoll
>               04-03 13:18:03.351 5574 5574 I Create interface 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_add_epoll_fd:260: fd 5 added to epoll
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 5
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 2 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1254: RING: 0x7fd8885f2000 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1271: RING: 0x7fd888602080 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1300: RING: 0x7fd8885f2000 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1322: RING: 0x7fd888602080 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_send_from_queue:73: Message type 3 sent
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 5
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 1 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_send_from_queue:73: Message type 4 sent
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 5
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 1 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_send_from_queue:73: Message type 4 sent
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 5
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 1 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_send_from_queue:73: Message type 5 sent
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 5
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 1 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_send_from_queue:73: Message type 5 sent
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 5
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 1 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_send_from_queue:73: Message type 6 sent
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 5
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 7 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_add_epoll_fd:260: fd 9 added to epoll
>               04-03 13:18:10.352 5574 5574 I memif details. app name: ndncft-client
>               04-03 13:18:10.352 5574 5574 I memif details. interface name:
>               04-03 13:18:10.352 5574 5574 I memif details. id: 0
>               04-03 13:18:10.352 5574 5574 I memif details. secret: (null)
>               04-03 13:18:10.352 5574 5574 I memif details. role: slave
>               04-03 13:18:10.352 5574 5574 I memif details. mode: ethernet
>               04-03 13:18:10.352 5574 5574 I memif details. socket path: /run/ndn/ndnc-memif-5574-1648991883347378250.sock
>               04-03 13:18:10.352 5574 5574 I memif details. rx_queue(0) queue id: 0
>               04-03 13:18:10.352 5574 5574 I memif details. rx_queue(0) ring size: 4096
>               04-03 13:18:10.352 5574 5574 I memif details. rx_queue(0) buffer size: 16384
>               04-03 13:18:10.352 5574 5574 I memif details. tx_queue(0) queue id: 0
>               04-03 13:18:10.352 5574 5574 I memif details. tx_queue(0) ring size: 4096
>               04-03 13:18:10.352 5574 5574 I memif details. tx_queue(0) buffer size: 16384
>               04-03 13:18:10.352 5574 5574 I memif details. link: up
> 
>               FIRST INTERFACE IS CREATED ^
> 
> 
> 
> 
> 
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_add_epoll_fd:260: fd 11 added to epoll
>               04-03 13:18:10.352 5574 5574 I Create interface 1
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_add_epoll_fd:260: fd 12 added to epoll
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 12
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 2 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1254: RING: 0x7fd8805d1000 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1271: RING: 0x7fd8805e1080 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1300: RING: 0x7fd8805d1000 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_init_queues:1322: RING: 0x7fd8805e1080 I: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_send_from_queue:73: Message type 3 sent
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:588: recvmsg fd 12
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_receive_and_parse:618: Message type 8 received
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/socket.c:memif_msg_parse_disconnect:562: disconnect received: Already connected, mode: 0
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_del_epoll_fd:300: fd 12 removed from epoll
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_del_epoll_fd:297: epoll_ctl: No such file or directory fd 15
>       MEMIF_DEBUG:/root/vpp/extras/libmemif/src/main.c:memif_del_epoll_fd:297: epoll_ctl: No such file or directory fd 16
> 
>               On the second one that I want to create, I get the error above.
> 
>               Do you have any suggestions as to what I might be doing wrong?
> 
>               Kind regards,
>               Catalin Iordache
> 
> 
>               On 25 Mar 2022, at 15:53, Catalin Iordache <catalinn.iorda...@gmail.com> wrote:
> 
>               Email resent since there was an error with forwarding to the
>               mailing list as well. Thanks!
> 
> 
> 
>               On 23 Mar 2022, at 20:03, Catalin Iordache <catalinn.iorda...@gmail.com> wrote:
> 
>               Hi Benoit, Dave,
> 
>               Thanks for getting back to me and for sharing the gerrit link.
>               I’ve looked over the code and it already helped me to check that
>               everything I am doing on my side is good enough.
>               It would be extremely useful for me if someone would add a
>               multi-threaded example and documentation in that merge request as well.
> 
>               Thanks for helping me so far.
> 
>               Best regards,
>               Catalin
> 
> 
> 
>               On 22 Mar 2022, at 20:22, Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> 
>               That's great news, thanks Dave!
> 
>               Best
>               ben
> 
> 
> 
>               -----Original Message-----
>               From: Dave Wallace <dwallac...@gmail.com>
>               Sent: Tuesday, March 22, 2022 19:17
>               To: Benoit Ganne (bganne) <bga...@cisco.com>; Catalin Iordache <catalinn.iorda...@gmail.com>
>               Cc: vpp-dev@lists.fd.io
>               Subject: Re: [vpp-dev] In regards of the memif library from the vpp Git repository
> 
>               Funny you should mention 30573 -- yesterday I noticed this was in
>               limbo & broken. I am in the process of fixing the bugs in this
>               documentation so that it can be merged.
> 
>               While I'm at it, I will be updating the CI verify job to upload the
>               doc verify results to an AWS S3 bucket with a 7 day retention policy
>               to enhance the documentation review process.
> 
>               Thanks,
>               -daw-
> 
> 
>               On 3/22/22 5:21 AM, Benoit Ganne (bganne) via lists.fd.io wrote:
> 
> 
>               Hi Catalin,
> 
>               CC'ed vpp-dev, but examples and doc can be found here
>               https://gerrit.fd.io/r/c/vpp/+/30573
>               It would be good to have this patch merged I guess but it looks
>               like there are some issues to fix in the doc.
> 
>               Best
>               Ben
> 
> 
>               From: Catalin Iordache <catalinn.iorda...@gmail.com>
>               Date: Mar 20, 2022 15:36
>               Subject: In regards of the memif library from the vpp Git repository
>               To: benoit.ga...@gmail.com
>               Cc:
> 
> 
> 
>               Hello,
> 
>               I noticed that the latest version of the memif library is lacking
>               documentation and the things that are in there seem a bit outdated.
> 
>               I am mostly interested in the multi-threaded support in memif, which
>               in older versions seemed to be offered by the library but now seems
>               to have been removed.
> 
>               Do you have some documentation and code examples on how memif should
>               be handled by multiple threads concurrently?
> 
>               Best regards,
>               Catalin
> 
> 
> 
> 
> 
> 
>               
> 
