[ https://issues.apache.org/jira/browse/KAFKA-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14142913#comment-14142913 ]
Abhishek Sharma commented on KAFKA-1590:
----------------------------------------

[~guozhang] and [~nehanarkhede] - Do we need daily rolling, or rolling based on file size, in the appender class? Apart from rolling, is there any other specific functionality you are looking for?

I have used the writeUTF method of DataOutputStream to write the data in binary format. writeUTF uses modified UTF-8 encoding, which differs slightly from standard UTF-8, but any editor capable of reading UTF-8 can read the output. I am using the Gedit editor and it works fine for me.

> Binarize trace level request logging along with debug level text logging
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-1590
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1590
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Guozhang Wang
>            Assignee: Abhishek Sharma
>              Labels: newbie
>             Fix For: 0.9.0
>
> With trace level logging, the request handling logs can grow very fast depending on client behavior (e.g. a consumer with 0 maxWait, which keeps sending fetch requests). We previously changed this to debug level, which only provides a summary of the requests, omitting request details. However, this does not work perfectly, since summaries are not sufficient for troubleshooting, and turning on trace level once an issue appears is too late.
>
> The proposed solution is to default to debug level text logging, while printing trace level logging in binary format at the same time. The generated binary files can then be compressed / rolled. When needed, we can decompress / parse the trace logs back into text.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
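For context on the writeUTF approach mentioned in the comment above, here is a minimal, self-contained sketch of the round trip: appending trace records to a binary file with DataOutputStream.writeUTF and reading them back with DataInputStream.readUTF. The file name and record strings are made up for illustration; this is not the actual KAFKA-1590 patch or appender.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class BinaryTraceLogDemo {

        public static void main(String[] args) throws IOException {
            String logFile = "request-trace.bin"; // hypothetical file name

            // Write: writeUTF encodes each string as a 2-byte unsigned length
            // followed by the bytes in modified UTF-8 (each record is thus
            // limited to 65535 encoded bytes).
            try (DataOutputStream out =
                     new DataOutputStream(new FileOutputStream(logFile, true))) {
                out.writeUTF("FetchRequest: clientId=consumer-1, maxWait=0");
                out.writeUTF("ProduceRequest: clientId=producer-1, acks=1");
            }

            // Read: readUTF reverses the encoding; an EOFException signals
            // that the end of the file has been reached.
            try (DataInputStream in =
                     new DataInputStream(new FileInputStream(logFile))) {
                while (true) {
                    try {
                        System.out.println(in.readUTF());
                    } catch (EOFException eof) {
                        break;
                    }
                }
            }
        }
    }

Note that modified UTF-8 only differs from standard UTF-8 in how it encodes the null character and supplementary characters; for plain ASCII log text the payload bytes are identical, which is why a UTF-8 editor such as Gedit can read the records (apart from the 2-byte length prefixes).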