On 08/17/2018 07:45 AM, Cédric Jeanneret wrote:
On 08/17/2018 12:25 AM, Steve Baker wrote:
On 15/08/18 21:32, Cédric Jeanneret wrote:
Dear Community,
As you may know, a move toward Podman as a replacement for Docker is starting.
One of the issues with Podman is the lack of a daemon - more precisely, the
lack of a socket through which you can send commands and get machine-formatted
output (like JSON or YAML or...).
To work around that, Podman has added support for varlink¹, using systemd's
"socket activation" feature.
On my side, I would like to push forward the integration of varlink into
TripleO-deployed containers, especially since it would allow the following:
# a proper interface with Paunch (via a python binding)
I'm not sure this would be desirable. If we're going to do all container
management via a socket, I think we'd be better supported by using CRI-O.
One of the advantages I see in podman is being able to manage services
with systemd again.
Using the socket wouldn't prevent "per service" systemd units; varlink
would just provide another way to manage the containers.
It's NOT like the docker daemon - it won't manage the containers on
startup, for example. It's just an API endpoint, without any "automated
powers".
See it as an interesting complement to the CLI, allowing applications to
access container data easily from a computer-oriented language like python3.
# a way to manage containers from within specific containers (think
"healthcheck", "monitoring") by mounting the socket as a shared volume
(see the sketch after this list)
# a way to get container statistics (think "metrics")
# a way, if needed, to get an ansible module able to talk to podman
(JSON is always better than plain text)
# a way to secure access to Podman management (we have to define how
varlink talks to Podman, maybe providing a dedicated socket with
dedicated rights so that we can have dedicated users for specific tasks)
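To make the "from within a container" and "metrics" items above concrete,
here is a rough sketch. The bind-mount path, the method names and the reply
field names are all assumptions to check against the io.podman interface
description (e.g. with `varlink help`):

  # Run the monitoring container with the socket shared, e.g.:
  #   podman run -v /run/podman/io.podman:/run/podman/io.podman metrics-image
  import json
  import socket

  def varlink_call(method, parameters=None):
      # Same minimal varlink round-trip as in the earlier sketch.
      with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
          sock.connect("/run/podman/io.podman")
          sock.sendall(json.dumps(
              {"method": method, "parameters": parameters or {}}
          ).encode() + b"\0")
          reply = b""
          while not reply.endswith(b"\0"):
              reply += sock.recv(8192)
          return json.loads(reply[:-1])

  # varlink replies wrap the payload in "parameters"; the "containers"
  # and "id" field names are assumptions, as is GetContainerStats.
  reply = varlink_call("io.podman.ListContainers")
  for ctr in reply.get("parameters", {}).get("containers", []):
      stats = varlink_call("io.podman.GetContainerStats", {"name": ctr["id"]})
      print(json.dumps(stats.get("parameters", stats)))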
Some of these cases might prove useful, but I do wonder whether plain
podman calls would be just as simple, without the complexity of another
host-level service to manage. We can still do podman operations inside
containers by bind-mounting in the container state.
I wouldn't mount the container state as-is, mainly for security reasons.
I'd rather have the varlink abstraction than the plain `podman' CLI - in
addition, it is far, far easier for applications to get proper JSON than
some random plain text, even if `podman' does seem to have a "--format"
option. I really dislike calling things through "subprocess" when there is
a nice API interface - maybe that's just me ;).
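For comparison, the subprocess route being discussed would look roughly
like this - podman's "--format json" at least spares us from scraping
plain text:

  import json
  import subprocess

  # The CLI fallback: shell out to podman and parse its JSON output
  # (requires python >= 3.7 for capture_output).
  out = subprocess.run(
      ["podman", "ps", "--all", "--format", "json"],
      check=True, capture_output=True, text=True,
  ).stdout
  for ctr in json.loads(out):
      print(ctr)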
In addition, the state is apparently managed in a sqlite DB - concurrent
access to that DB isn't really a good idea; we really don't want
corruption, do we?
IIRC sqlite handles concurrent access, it just does it slowly.
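A quick stdlib illustration of that behaviour (the file and table names
are made up): a second writer doesn't corrupt the file, it just waits on
the lock and eventually errors out:

  import sqlite3

  # Two independent connections to one DB file, like two podman processes.
  a = sqlite3.connect("state.db", timeout=1.0)
  b = sqlite3.connect("state.db", timeout=1.0)
  a.execute("CREATE TABLE IF NOT EXISTS containers (id TEXT PRIMARY KEY)")

  a.execute("BEGIN IMMEDIATE")      # first writer takes the write lock
  a.execute("INSERT OR REPLACE INTO containers VALUES ('abc123')")
  try:
      b.execute("BEGIN IMMEDIATE")  # second writer blocks for `timeout`,
  except sqlite3.OperationalError as err:
      print(err)                    # ...then raises "database is locked"
  a.commit()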
That said, I have some questions:
° Do any of you have experience with the varlink and podman interface?
° What do you think of this proposed integration?
° Do any of you have concerns about this possible addition?
I do worry a bit that this is advocating for a solution before we really
understand the problems. The biggest unknown for me is what we do about
healthchecks. Maybe varlink is part of the solution here, or maybe it's a
systemd timer which executes the healthcheck and restarts the service
when required.
Maybe. My main question is: would it be worth comparing both solutions?
The healthchecks are clearly docker-specific; no interface exists atm in
libpod for that, so we have to mimic it as best we can.
Maybe the healthchecks' place is in systemd, and varlink would be used
only for external monitoring and metrics. That would also be a nice avenue
to explore (a rough sketch of that option follows).
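Something along these lines, perhaps - a script that a (hypothetical)
systemd timer unit could run periodically; the container name, unit name
and healthcheck command below are all placeholders:

  #!/usr/bin/env python3
  # Sketch: run a container's healthcheck and let systemd restart the
  # per-service unit on failure. All names below are hypothetical.
  import subprocess
  import sys

  CONTAINER = "haproxy"                     # placeholder container name
  SERVICE = "tripleo_haproxy.service"       # placeholder systemd unit
  HEALTHCHECK = ["/openstack/healthcheck"]  # placeholder check command

  # podman exec propagates the exit code of the command it runs.
  check = subprocess.run(["podman", "exec", CONTAINER] + HEALTHCHECK)
  if check.returncode != 0:
      subprocess.run(["systemctl", "restart", SERVICE])
      sys.exit(1)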
I would not focus on only one of the possibilities I've listed. There
are probably even more possibilities I didn't see - once we get a proper
socket, anything is possible, the good and the bad ;).
Thank you for your feedback and ideas.
Have a great day (or evening, or whatever suits the time you're reading
this ;))!
C.
¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev