ctubbsii commented on PR #5019: URL: https://github.com/apache/accumulo/pull/5019#issuecomment-2448345613
> Using MDC (Mapped Diagnostic Context) might make this easier

I don't think it would make it easier. slf4j, log4j 1.2, and log4j2 each have their own thread-local context concept, and a converter is required to switch between them. It really introduces a lot of complexity: relying on more of the slf4j API rather than treating it as a simple logging facade, adding extra configuration (and possibly code) to convert between the implementations, and modifying the logging templates to include the map context.

It also forces developers to think about thread-local views of the context every time they want to include that context in a log message, and is likely to lead to a lot of programmer errors around logging (like not realizing, when refactoring code that logs, that a log message originating from a class executed in an executor won't inherit the caller's context).

It'd be much easier to just avoid MDC and log4j's `ThreadContext` for logging, and limit our use of slf4j to a very simple logging facade for the basic log-level methods: `logger.info()`, `logger.error()`, etc. If we want to log additional contextual details, it's much easier to maintain if we just pass around the contextual data we want to log and include it directly in the message string.

MDC/`ThreadContext` is really best used for general-purpose context information you want to include in every log message, but that isn't directly related to what is being logged (like the client IP address and username associated with a request, included in a server-side error message from the handler: not directly related to the error, but perhaps useful for knowing who triggered it or who is impacted by it). I don't think that's quite the use case we have here.
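The executor pitfall above can be shown with a minimal, stdlib-only sketch: MDC implementations are backed by a `ThreadLocal`, and a task submitted to an `ExecutorService` runs on a pool thread that never sees the value the caller set. The class name and context string here are hypothetical, chosen just for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalPitfall {
    // MDC is backed by a ThreadLocal, so a plain ThreadLocal shows the same behavior.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    // Sets context on the caller thread, then asks a pool thread what it sees.
    static String contextSeenByWorker() throws Exception {
        CONTEXT.set("client=10.0.0.1 user=alice"); // set on the caller thread only
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // The worker thread has its own (empty) ThreadLocal slot: this returns null.
            return pool.submit(() -> CONTEXT.get()).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("worker sees: " + contextSeenByWorker()); // null
        System.out.println("caller sees: " + CONTEXT.get());
    }
}
```

A refactor that moves logging code into a task like this silently drops the context, which is exactly the class of bug that passing the contextual data explicitly in the message string avoids.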
