Hi all,

I agree with Konstantin; this feels like a problem that shouldn't be solved
via Apache Flink but via the logging ecosystem itself.

Best regards,

Martijn

On Tue, 11 Jan 2022 at 13:11, Konstantin Knauf <kna...@apache.org> wrote:

> I've now read over the discussion on the ticket, and I am personally not in
> favor of adding this functionality to Flink via the REST API or Web UI. I
> believe that changing the logging configuration via the existing
> configuration files (log4j or logback) is good enough to justify not
> increasing the scope of Flink in that direction. As you specifically
> mention YARN: doesn't Cloudera's Hadoop platform, for example, offer means
> to manage the configuration files for all worker nodes from a central
> configuration management system? Overall, it feels like we are trying to
> solve a problem in Apache Flink that is already solved in its ecosystem,
> which would increase the scope of the project without adding core value.
> I also expect that over time the exposed logging configuration options
> would become more and more complex.
>
> I am curious to hear what others think.
>
> On Tue, Jan 11, 2022 at 10:34 AM Chesnay Schepler <ches...@apache.org>
> wrote:
>
> > Reloading the config from the filesystem is already enabled by default;
> > that was one of the things that made us switch to Log4j 2.
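> >
> > For reference, this reload is driven by Log4j 2's monitorInterval
> > setting; a minimal properties-style sketch (the 30-second interval is
> > just an example):
> >
> > # Re-check this file every 30 seconds and reload it if it changed.
> > monitorInterval = 30
> >
> > rootLogger.level = INFO
> > rootLogger.appenderRef.console.ref = ConsoleAppender
> >
> > appender.console.name = ConsoleAppender
> > appender.console.type = CONSOLE
> > appender.console.layout.type = PatternLayout
> > appender.console.layout.pattern = %d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n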
> >
> > The core point of contention w.r.t. this topic is whether having the
> > admin ssh into the machine is too inconvenient.
> >
> > Personally I still think that the current capabilities are
> > sufficient, and I do not want us to rely on internals of the logging
> > backends in production code.
> >
> > On 10/01/2022 17:26, Konstantin Knauf wrote:
> > > Thank you for starting the discussion. Being able to change the logging
> > > level at runtime is very valuable in my experience.
> > >
> > > Instead of introducing our own API (and eventually even persistence),
> > > could we just periodically reload the log4j or logback configuration
> > > from the environment/filesystem? I only quickly googled the topic, and
> > > [1,2] suggest that this might be possible?
> > >
> > > [1] https://stackoverflow.com/a/16216956/6422562?
> > > [2] https://logback.qos.ch/manual/configuration.html#autoScan
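> > >
> > > For [2], enabling scanning in logback looks roughly like this (a
> > > minimal logback.xml sketch; the scan period is arbitrary):
> > >
> > > <configuration scan="true" scanPeriod="30 seconds">
> > >   <!-- Re-check logback.xml every 30 seconds and reload on change. -->
> > >   <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
> > >     <encoder>
> > >       <pattern>%d{HH:mm:ss.SSS} %-5level %logger{60} - %msg%n</pattern>
> > >     </encoder>
> > >   </appender>
> > >   <root level="INFO">
> > >     <appender-ref ref="console"/>
> > >   </root>
> > > </configuration>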
> > >
> > >
> > >
> > >
> > >
> > > On Mon, Jan 10, 2022 at 5:10 PM Wenhao Ji <predator....@gmail.com>
> > > wrote:
> > >
> > >> Hi everyone,
> > >>
> > >> Hope you enjoyed the Holiday Season.
> > >>
> > >> I would like to start the discussion on the improvement proposal
> > >> FLIP-210 [1], which aims to provide a way to change log levels at
> > >> runtime to simplify the detection of issues and bugs, as reported in
> > >> the ticket FLINK-16478 [2].
> > >> First of all, thanks to Xingxing Di and xiaodao for their previous
> > >> effort; the FLIP I drafted is largely influenced by their designs
> > >> [3][4]. Although we have reached some agreements in the Jira comments
> > >> about the scope of this feature, we still have the following questions
> > >> to discuss in this thread.
> > >>
> > >> ## Question 1
> > >>
> > >>> Creating a custom DSL and implementing it for several logging
> > >>> backends sounds like quite a maintenance burden. Extensions to the
> > >>> DSL, and supported backends, could become quite an effort. (by
> > >>> Chesnay Schepler)
> > >>
> > >> I tried to design the API to stay away from the details of the
> > >> logging backend implementations, but slf4j itself does not expose any
> > >> API for changing the log level of a logger. So what I did is
> > >> introduce another layer of abstraction on top of slf4j / log4j /
> > >> logback, so that we do not depend on the logging provider's API
> > >> directly. This will also make it easy to adopt other logging
> > >> providers later. Please see the "Logging Abstraction" section.
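> > >>
> > >> To make this concrete, here is a minimal sketch of the idea (all
> > >> names are illustrative, not the final API proposed in the FLIP):
> > >>
> > >> // Backend-agnostic interface; Flink would pick an implementation
> > >> // based on the logging backend found on the classpath.
> > >> public interface LoggingBackend {
> > >>     void setLogLevel(String loggerName, String level);
> > >> }
> > >>
> > >> // A possible Log4j 2 implementation using its Configurator API.
> > >> public class Log4j2LoggingBackend implements LoggingBackend {
> > >>     @Override
> > >>     public void setLogLevel(String loggerName, String level) {
> > >>         org.apache.logging.log4j.core.config.Configurator.setLevel(
> > >>                 loggerName,
> > >>                 org.apache.logging.log4j.Level.valueOf(level));
> > >>     }
> > >> }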
> > >>
> > >> ## Question 2
> > >>
> > >>> Do we know whether other systems support this kind of feature? If
> > >>> yes, how do they solve it for different logging backends? (by Till
> > >>> Rohrmann)
> > >>
> > >> I investigated several Java frameworks, including Spark, Storm, and
> > >> Spring Boot. Here is what I found.
> > >> Spark and Storm depend on the log4j implementation directly, which
> > >> means they do not support any other slf4j implementation at all; they
> > >> simply call the log4j API. (See SparkContext.scala#L381 [5] and
> > >> Utils.scala#L2443 [6] in Spark, and LogConfigManager.java#L144 [7] in
> > >> Storm.) This is quite different from what Flink provides.
> > >> However, Spring Boot has implemented what we are interested in. Like
> > >> Flink, Spring Boot supports several slf4j implementations; users are
> > >> not limited to log4j and can choose a different logging framework by
> > >> importing the corresponding dependencies. Spring then decides which
> > >> one is active by scanning its classpath and context. (See
> > >> LoggingSystem.java#L164 [8] and LoggersEndpoint.java#L99 [9].)
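> > >>
> > >> For illustration, the Spring Boot call looks roughly like this (a
> > >> sketch based on my reading of [8] and [9]; the logger name is just an
> > >> example):
> > >>
> > >> import org.springframework.boot.logging.LogLevel;
> > >> import org.springframework.boot.logging.LoggingSystem;
> > >>
> > >> public class LogLevelDemo {
> > >>     public static void main(String[] args) {
> > >>         // Spring Boot detects the active backend (Logback, Log4j 2,
> > >>         // java.util.logging) from the classpath and exposes one API
> > >>         // to change a logger's level at runtime.
> > >>         LoggingSystem system = LoggingSystem.get(
> > >>                 Thread.currentThread().getContextClassLoader());
> > >>         system.setLogLevel("org.apache.flink", LogLevel.DEBUG);
> > >>     }
> > >> }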
> > >>
> > >> ## Question 3
> > >>
> > >> Besides the questions raised in the Jira comments, I also found
> > >> another point that has not been discussed. Considering this feature
> > >> as an MVP, do we need to introduce a HighAvailabilityService to store
> > >> the log settings so that they can be synced to newly joined task
> > >> managers and to job manager followers for consistency? This issue is
> > >> covered in the "Limitations" section of the FLIP.
> > >>
> > >> Finally, thank you for taking the time to join this discussion and
> > >> review this FLIP. I would appreciate any comments or suggestions you
> > >> may have.
> > >>
> > >>
> > >> [1]: https://cwiki.apache.org/confluence/display/FLINK/FLIP-210%3A+Change+logging+level+dynamically+at+runtime
> > >> [2]: https://issues.apache.org/jira/browse/FLINK-16478
> > >> [3]: https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY
> > >> [4]: https://docs.google.com/document/d/19AyuTHeERP6JKmtHYnCdBw29LnZpRkbTS7K12q4OfbA
> > >> [5]: https://github.com/apache/spark/blob/11596b3b17b5e0f54e104cd49b1397c33c34719d/core/src/main/scala/org/apache/spark/SparkContext.scala#L381
> > >> [6]: https://github.com/apache/spark/blob/11596b3b17b5e0f54e104cd49b1397c33c34719d/core/src/main/scala/org/apache/spark/util/Utils.scala#L2433
> > >> [7]: https://github.com/apache/storm/blob/3f96c249cbc17ce062491bfbb39d484e241ab168/storm-client/src/jvm/org/apache/storm/daemon/worker/LogConfigManager.java#L144
> > >> [8]: https://github.com/spring-projects/spring-boot/blob/main/spring-boot-project/spring-boot/src/main/java/org/springframework/boot/logging/LoggingSystem.java#L164
> > >> [9]: https://github.com/spring-projects/spring-boot/blob/main/spring-boot-project/spring-boot-actuator/src/main/java/org/springframework/boot/actuate/logging/LoggersEndpoint.java#L99
> > >>
> > >> Thanks,
> > >> Wenhao
> > >>
> > >
> >
> >
>
> --
>
> Konstantin Knauf
>
> https://twitter.com/snntrable
>
> https://github.com/knaufk
>
