Hello!

I am working on shipping logs from our Flink/Kubernetes infrastructure to
our external corporate ElasticSearch cluster. I have a few ideas to explore
and wondered if anyone has feedback or experience to share.

Ideas I am exploring right now:
1) Add a K8s ConfigMap containing an updated log4j config that writes
directly to a Logstash deployment inside K8s, which translates and forwards
to the corporate ES cluster (rough sketch below the list).
     Pros: Simple, captures both Flink and app logs, no local disk space used
     Cons: Possible app downtime if Logstash crashes

2) Add a K8s ConfigMap that updates the log4j config to write to a shared
folder on the node, then run a second pod on the machine with FileBeat to
read the file and forward it to the corporate ES cluster (sketch below the
list).
     Pros: Simple, captures both Flink and app logs
     Cons: Uses local node disk space; need to make sure it gets cleaned up

3) Use a K8s mechanism to forward all of the pod logs to a Logstash
deployment inside K8s that forwards to the corporate ES cluster (sketch
below the list).
     Pros: Very generic solution; all of our K8s pods log the same way
     Cons: Need a mechanism to split the logs into proper indexes per app
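
For 1), roughly what I'm picturing, a minimal untested sketch assuming
log4j 1.x and a "logstash" Service in a "logging" namespace (all names
and ports are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-log4j
data:
  log4j.properties: |
    log4j.rootLogger=INFO, logstash
    # SocketAppender ships serialized LoggingEvents, so the Logstash
    # side would use its log4j input to decode them
    log4j.appender.logstash=org.apache.log4j.net.SocketAppender
    log4j.appender.logstash.RemoteHost=logstash.logging.svc.cluster.local
    log4j.appender.logstash.Port=4560
    log4j.appender.logstash.ReconnectionDelay=10000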
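
For 2), the FileBeat side could be a ConfigMap like this, mounted into the
second pod along with the shared log folder (paths and the ES host are
placeholders, untested):

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/flink/*.log   # the shared folder, mounted read-only
    output.elasticsearch:
      hosts: ["es.corp.example.com:9200"]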
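
For 3), the index-splitting con might be handled in the Logstash pipeline
itself, e.g. keying the index off a Kubernetes label that the log forwarder
attaches (the field names here are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
data:
  logstash.conf: |
    input {
      beats { port => 5044 }
    }
    output {
      elasticsearch {
        hosts => ["es.corp.example.com:9200"]
        # one index per app, assuming the shipper adds kubernetes metadata
        index => "%{[kubernetes][labels][app]}-%{+YYYY.MM.dd}"
      }
    }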

Thoughts?
-Steve
