Hi Murthy,

Inline.
> On Mar 13, 2020, at 4:54 AM, Satya Murthy <satyamurthy1...@gmail.com> wrote:
>
> Hi Florin,
>
> Thank you very much for the inputs.
> These are very difficult to understand unless we go through the code in detail.
> Today, the whole day, I was trying to follow your instructions and get this working by looking at the code as well.

I'd recommend starting here: https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf

> However, I am not fully successful.
> Before going further, I would like to get my understanding clear, so that it will form the basis for my debugging.
>
> Here are a couple of questions:
>
> 1)
> ==
> The message queue used between vpp and a vcl worker can do both mutex/condvar and eventfd notifications. The former is the default but you can switch to eventfds by adding to vcl.conf "use-mq-eventfd". You can then use vppcom_worker_mqs_epfd to retrieve a vcl worker's epoll fd (it's an epoll fd for historic reasons) which you should be able to nest into your own linux epoll fd.
> ==
> The message queues between VPP and VCL can be present either in "shared memory" or in "memfd segments".
> For eventfd to work, the queues need to be present in the "memfd segments".
> Is this correct understanding?

FC: VCL workers and vpp use message queues to exchange io/ctrl messages and fifos to exchange data. Both are allocated in shared memory segments, which can be of two flavors: POSIX shm segments (shm_open) or memfd based (memfd exchanged over a unix socket). For the latter to work, we use the binary api's socket transport to exchange the fds. If configured to use eventfds for message queue signaling, vpp must exchange those eventfds with the vcl workers. For that, we also use the binary api's socket. That should explain why the binary api's socket transport is needed. Note also that this is just a configuration change from the vcl consumer's perspective; it should not otherwise affect the app.

> 2)
> ==
> Note that you'll also need to force memfd segments for vpp's message queues, i.e., session { evt_qs_memfd_seg }, and use the socket transport for binary api, i.e., in vpp's startup.conf add "socksvr { /path/to/api.sock }" and in vcl.conf "api-socket-name /path/to/api.sock".
> ==
> I didn't understand the reason for moving the binary api to sockets.
> Is this because shm/memfd won't be used at the same time?

FC: I hope it's clear now why you need to move to the binary api's socket transport.

> 3)
> In a nutshell:
>
> VCL-APP <---- message queues in memfd segments (signalled using eventfd) ----> VPP
> VCL-APP <---- binary api via unix domain sockets ----> VPP
>
> We will have two api clients with this model. One is a shared memory client and the other is a socket client.

FC: If you mean to point out that there are two channels from vcl to vpp, that's correct. The first is the one described above, but note that the message queues are not bidirectional. VPP has another set of message queues that the apps use to enqueue notifications towards vpp. As for the second channel, the binary api, it's used to 1) attach vcl to the session layer, 2) exchange fds (memfds and eventfds) over its socket and 3) sometimes exchange configuration. But again, apart from configuration changes, this should be completely transparent to vcl consumers.

For reference, a configuration sketch and a small example of nesting the vcl worker's mq epoll fd into a linux epoll fd are included at the end of this message.

Regards,
Florin

> Is my understanding correct?
> --
> Thanks & Regards,
> Murthy
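
Putting the options mentioned in this thread together, a minimal configuration sketch might look like the following. /path/to/api.sock is a placeholder and has to match on both sides; wrapping the vcl options in a vcl { } section is assumed here, and any other options a deployment needs are omitted.

vcl.conf:

    vcl {
      use-mq-eventfd
      api-socket-name /path/to/api.sock
    }

vpp startup.conf (fragments, alongside the rest of the configuration):

    session { evt_qs_memfd_seg }
    socksvr { /path/to/api.sock }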
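
And a minimal sketch of nesting the vcl worker's message-queue epoll fd (from vppcom_worker_mqs_epfd) into an application-owned linux epoll fd, assuming the configuration above. The app name and array sizes are arbitrary; error handling and the vcl session/epoll setup the app would normally do are omitted, so the event-draining step is only indicated in a comment.

    #include <sys/epoll.h>
    #include <vcl/vppcom.h>

    int
    main (void)
    {
      struct epoll_event ev, evts[16];
      int app_epfd, vcl_mq_epfd, i, n;

      /* Attach to vpp's session layer; the app name is arbitrary. */
      if (vppcom_app_create ("nested-epoll-app") != 0)
        return 1;

      app_epfd = epoll_create1 (0);            /* the app's own linux epoll fd */
      vcl_mq_epfd = vppcom_worker_mqs_epfd (); /* the vcl worker's mq epoll fd */

      ev.events = EPOLLIN;
      ev.data.fd = vcl_mq_epfd;
      epoll_ctl (app_epfd, EPOLL_CTL_ADD, vcl_mq_epfd, &ev);

      while (1)
        {
          n = epoll_wait (app_epfd, evts, 16, -1 /* block */);
          for (i = 0; i < n; i++)
            {
              if (evts[i].data.fd == vcl_mq_epfd)
                {
                  /* The vcl worker's message queue(s) signaled. Drain vcl
                     events here, e.g. with a non-blocking vppcom_epoll_wait
                     on a vcl epoll session created earlier (not shown). */
                }
              else
                {
                  /* A regular linux fd owned by the app is ready. */
                }
            }
        }
      return 0;
    }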