SLF4J / Logback is a fine logging system: it avoids the work of building a 
log line when the statement isn't enabled (rather than building it and 
filtering it out afterwards), and it also offers an optional asynchronous 
appender. Even without that appender, it can handle the load perfectly well 
when configured correctly.
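For example, with clojure.tools.logging on top of SLF4J / Logback (a minimal 
sketch; expensive-report is a made-up stand-in for a costly message 
computation), the arguments are only evaluated when the level is enabled:

    (require '[clojure.tools.logging :as log])

    ;; stand-in for an expensive message computation
    (defn expensive-report []
      (apply str (interpose ", " (range 1000))))

    ;; The tools.logging macros check whether DEBUG is enabled before
    ;; evaluating their arguments, so with the root level at INFO the
    ;; call to expensive-report never runs.
    (log/debug "cache stats:" (expensive-report))

And the optional asynchronous appender is just a wrapper around an existing 
appender in logback.xml (sketch; a FILE appender is assumed to be defined 
elsewhere in the config):

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
      <appender-ref ref="FILE" />
    </appender>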

However, when run through the Tanuki Java executable/service wrapper for 
Windows, any log lines written to stdout are picked up by the wrapper and 
written not only to the intended logfiles, but also to the console and the 
Tanuki logs. This splitting / redirection creates a blocking bottleneck that 
reduced log throughput by a factor of 1,000 to 10,000 in our case. No amount 
of buffering could alleviate that.
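If the wrapper has to capture stdout at all, the fan-out can at least be 
limited in wrapper.conf (a sketch; treat the property names and values as 
assumptions to verify against the Tanuki documentation for your version):

    # don't echo the JVM's stdout to the console window
    wrapper.console.loglevel=NONE
    # keep the wrapper's own logfile at its default level
    wrapper.logfile.loglevel=INFO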

That's why I suggested checking performance both with and without the 
wrapper, to see if there is a difference there.
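A crude way to compare (a sketch, not a proper benchmark; run it once as a 
plain jar and once under the wrapper):

    (require '[clojure.tools.logging :as log])

    ;; Rough throughput check: time how long it takes to emit 100k lines.
    ;; Under the wrapper bottleneck described above this can come out
    ;; orders of magnitude slower than when run as a plain jar.
    (time
      (dotimes [i 100000]
        (log/info "throughput test line" i)))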

In the last case I hinted at, a developer config ended up in a production 
environment, so logs no longer went through their dedicated appenders to the 
logfiles but through stdout, and therefore through the wrapper bottleneck. 
This caused end-user actions that would normally be sub-second to take up to 
10-100 seconds.
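For reference, the difference is essentially this in logback.xml (a 
simplified sketch; the file name and pattern are made up): the developer 
config logs to the console, and thus to stdout and through the wrapper, while 
the production config should write straight to a file.

    <!-- developer config: goes to stdout, and under Tanuki through the wrapper -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
      <encoder><pattern>%d %-5level %logger - %msg%n</pattern></encoder>
    </appender>

    <!-- production config: writes straight to the logfile, bypassing stdout -->
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
      <file>app.log</file>
      <encoder><pattern>%d %-5level %logger - %msg%n</pattern></encoder>
    </appender>

    <root level="INFO">
      <!-- switch this ref between STDOUT and FILE -->
      <appender-ref ref="FILE" />
    </root>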

On Wednesday, July 8, 2015 at 12:37:36 PM UTC+2, Fluid Dynamics wrote:
>
> On Wednesday, July 8, 2015 at 5:41:47 AM UTC-4, Niels van Klaveren wrote:
>>
>> Colin, be aware that running through service wrappers can introduce a lot 
>> of overhead / blocking on logging due to stdout redirects.
>> Our application had to switch to SLF4J / Logback, since logging with JUL 
>> or Log4J, or to console / stdout, combined with an incorrectly configured 
>> wrapper could lead to 100 ms pauses caused by log blocking.
>> Even then, as I all too recently noticed, a simple configuration error 
>> could bring these problems back even when using SLF4J / Logback.
>>
>> That's something you might look into when there's no problem running as a 
>> jar, but the problem occurs solely when run through a service wrapper.
>>
>
> What about putting some type of async construct between the generation of 
> log events and the actual logging?
>
> If thorough logging trumps throughput, maybe an async channel with a large 
> buffer size. If logging activity is heavy enough, backpressure will 
> eventually reach the producer and slow it down.
>
> If throughput trumps thorough logging, send-off log messages to an agent 
> that does something like (swap! queue-atom #(conj (take queue-max %1) msg)) 
> and calls .notify on a latch. Elsewhere a thread polls the queue, consumes 
> the last item from it, outputs it to the log if it's nonempty and .waits 
> the latch otherwise, and loops. Now log items pile up, up to queue-max, if 
> the producer overruns the consumer, but instead of backpressure you get 
> older items from the queue being dropped on the floor if the log-writer 
> can't keep up with the main system's generation of log notifications. 
> (Consumer now looks something like (loop [] (swap! queue-atom #(do (reset! 
> holder (last %)) (butlast %))) (if @holder (do (log @holder) (reset! 
> holder nil)) (.wait latch)) (recur)).)
>
>
