In my eyes, this e-mail thread contains two largely independent questions. The first question is how we can make stats available over a "wire". A remote user asks over the wire; the request is seen on the VPP side of the wire as a byte sequence. Some (currently missing) VPP code parses that request, looks into the shared memory, and serializes the requested stats into a response (a sequence of bytes), put on the wire for the user to read.
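The parse/look-up/serialize flow just described can be sketched with a toy Unix-domain-socket server. Everything here is invented for illustration (the newline-delimited JSON framing, the stats paths, the in-memory dict standing in for the stats segment); it is not VPP's actual stats protocol:

```python
# Toy sketch of "stats over a wire": a UDS server parses a request
# (a stats path), looks it up in a fake "shared memory" dict, and
# writes a serialized response back.  All names and the framing are
# invented for illustration; this is not VPP's stats protocol.
import json
import os
import socket
import tempfile
import threading

FAKE_STATS = {"/sys/node/clocks": 12345, "/if/rx-packets": [10, 20]}

def serve_once(path):
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.makefile().readline().strip()       # parse the request
    reply = {"name": request, "value": FAKE_STATS.get(request)}
    conn.sendall((json.dumps(reply) + "\n").encode())  # serialize response
    conn.close()
    srv.close()

def poll_stat(path, name):
    """Explicit polling: one request, one reply, per call."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(path)
    cli.sendall((name + "\n").encode())
    reply = json.loads(cli.makefile().readline())
    cli.close()
    return reply["value"]

sock_path = os.path.join(tempfile.mkdtemp(), "stats.sock")
t = threading.Thread(target=serve_once, args=(sock_path,))
t.start()
value = poll_stat(sock_path, "/sys/node/clocks")
t.join()
print(value)  # 12345
```

A notification-based design would instead push updates unsolicited over the same socket; the explicit-polling shape above is the one CSIT wants.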
Ssh + vpp_get_stats acts as a wire, but the format is human (not machine) friendly, similarly to how the CLI is less machine friendly than the binary API. I expect the first machine-friendly "stats over wire" implementation to use a Unix domain socket as the wire, simply because VPP already has a dispatcher there for handling binary messages of a prescribed type. A UDS also has lower overhead than other wires, especially if VPP is in a container. A sub-question is whether to support explicit polling, subscribing for notifications, or both. In CSIT we want explicit polling.

The second question is which wire is best for stats transport (and whether we should add support for more wire types).

> There's the (naive) prometheus example in the repo, vpp_get_stats,
> there is a Telegraf plugin, a simple gNMI/gRPC plugin.

I suspect the Prometheus example is for publish/subscribe, and vpp_get_stats is for explicit polling. Not sure about the others.

Vratko.

From: Paul Vinciguerra <pvi...@vinciconsulting.com>
Sent: Thursday, January 9, 2020 11:39 PM
To: Ole Troan <otr...@employees.org>
Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) <vrpo...@cisco.com>; Christian Hopps <cho...@chopps.org>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] python api over tcp?

My thought was to add a daemon that listened on TLS and wrapped the shm transport/libapiclient; that daemon would stay coresident with VPP, and a remote system would then connect to it with the papi client via TLS.

On Jan 9, 2020, at 1:38 PM, Ole Troan <otr...@employees.org> wrote:

Hi Paul,

On 9 Jan 2020, at 19:10, Paul Vinciguerra <pvi...@vinciconsulting.com> wrote:

Sounds like a little scope creep going on there. If you provide protobuf3-encoded API messages to/from VPP, I'll add a gRPC listener option to vpp_papi; in the interim, I'd be glad to add a TLS-wrapped listener if there is interest.

Were you thinking of adding a simple TLS/TCP-to-UDS proxy?
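Such a TCP-to-UDS forwarder could be sketched as below. This is a minimal plaintext sketch: a toy echo server stands in for VPP's API socket, the paths are invented, and a real deployment would wrap the TCP listener in `ssl.SSLContext` (which is exactly the security question raised here):

```python
# Sketch of a TCP-to-UDS proxy: accept one TCP connection and shuttle
# bytes to/from a Unix domain socket.  A toy echo server stands in for
# VPP's API socket; no TLS or auth (a real version would wrap the TCP
# listener with ssl.SSLContext).  All paths/names are illustrative.
import os
import socket
import tempfile
import threading

uds_path = os.path.join(tempfile.mkdtemp(), "api.sock")
uds_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds_srv.bind(uds_path)
uds_srv.listen(1)

def uds_echo(srv):
    # Stand-in for the VPP API socket: echo one message back.
    conn, _ = srv.accept()
    conn.sendall(conn.recv(4096))
    conn.close()
    srv.close()

threading.Thread(target=uds_echo, args=(uds_srv,)).start()

def proxy_once(tcp_srv, backend_path):
    # Forward one request and one reply between TCP client and UDS.
    conn, _ = tcp_srv.accept()
    backend = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    backend.connect(backend_path)
    backend.sendall(conn.recv(4096))   # client -> backend
    conn.sendall(backend.recv(4096))   # backend -> client
    backend.close()
    conn.close()

tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)
port = tcp_srv.getsockname()[1]
threading.Thread(target=proxy_once, args=(tcp_srv, uds_path)).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"show_version_request")
echoed = cli.recv(4096)
cli.close()
print(echoed)  # b'show_version_request'
```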
Must be something off the shelf that can be used. Netcat? Then you want to amend vpp_papi to be able to sit at the other end of that? What did you plan to do with security?

The gRPC idea isn't too dissimilar. Just pass VPP API messages encapsulated in gRPC: one request, one reply, and an event message.

Cheers,
Ole

On Thu, Jan 9, 2020 at 12:00 PM Ole Troan <otr...@employees.org> wrote:

On 9 Jan 2020, at 16:50, Paul Vinciguerra <pvi...@vinciconsulting.com> wrote:

Is there any objection to adding a tls listener and an instance to the stats client to vpp_papi?

Use grpc as transport?

Cheers,
Ole

On Jan 9, 2020, at 6:45 AM, Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io <vrpolak=cisco....@lists.fd.io> wrote:

CSIT uses the VPP API via a socket (tunneled over SSH) for most interactions. We also read stats for just one purpose (I think): reading runtime stats (/sys/node). The way we do that is historical and convoluted; for the result, see the INFO line at [1]. Looking at the result, the appropriate API way would be to send some _dump message and process the _details responses, one per node name.

Vratko.

[1] https://logs.fd.io/production/vex-yul-rot-jenkins-1/csit-vpp-perf-verify-master-2n-clx/58/archives/log.html.gz#s1-s1-s1-s1-s1-t1-k2-k9-k1-k1-k4-k1

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Christian Hopps
Sent: Thursday, January 9, 2020 12:05 PM
To: Ole Troan <otr...@employees.org>
Cc: Christian Hopps <cho...@chopps.org>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] python api over tcp?
> On Jan 9, 2020, at 5:44 AM, Ole Troan <otr...@employees.org> wrote:
>
> Christian,
>
>>> For exporting data out of the stats segment, I believe there are already
>>> quite a few solutions. There's the (naive) prometheus example in the repo,
>>> vpp_get_stats, there is a Telegraf plugin, a simple gNMI/gRPC plugin.
>>
>> Right, I've used vpp_get_stats and may run that with ssh and awk. I guess it
>> just seems odd on first encountering this that the CLI provided the data,
>> but the binary API didn't. I suppose the view is that exposing the stats
>> segment in shared memory *is* the binary API. :)
>
> What certainly would make sense to do is to put a wrapper on top of
> vpp_stats.py that gives you a higher-level API for accessing the stats,
> e.g. a get_interface_counters(). The stat segment also contains the
> name to interface index mapping (/if/names).
> Want to have a go?

I'm actually going to use vpp_get_stats (run remotely using ssh) for now. I'm using vpp_papi on a single testing server (so it connects to each VPP's /run/vpp/api.sock over ssh-forwarded sockets), so it doesn't have access to their shared memory segments.

> I am also exploring putting much more information into the stat segment,
> essentially making it into an operational data store (RFC 8342). Don't hold
> your breath. But any help appreciated.

I will be looking at doing some YANG models later this year, so if the timing aligns.. :)

Thanks,
Chris.

> Cheers,
> Ole
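The wrapper Ole suggests might look roughly like the sketch below. It is written against an assumed dict-like stats object (the real `vpp_stats.VPPStats` interface may differ), with a stub standing in for a connected instance; the counter path and interface names are invented:

```python
# Sketch of a higher-level wrapper over the stats segment, per Ole's
# suggestion: pair the /if/names list with a per-interface counter so
# callers get {name: value} instead of raw index-keyed arrays.
# "stats" is assumed dict-like ({path: value}); the real vpp_stats
# interface may differ, so treat this as illustrative only.

def get_interface_counters(stats, counter="/if/rx-packets"):
    names = stats["/if/names"]   # index -> interface name
    values = stats[counter]      # index -> counter value, same order
    return dict(zip(names, values))

# Stub standing in for a connected VPPStats instance:
fake_stats = {
    "/if/names": ["local0", "GigabitEthernet0/8/0"],
    "/if/rx-packets": [0, 1500],
}
counters = get_interface_counters(fake_stats)
print(counters)  # {'local0': 0, 'GigabitEthernet0/8/0': 1500}
```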