Re: Larger Size Error Message

2016-03-29 Thread Fang Wong
After upgrading to Kafka 0.9.0 from 0.8.2, I didn't see the larger size INFO message any more. I will keep watching this for some time.
Thanks,
Fang

On Fri, Mar 18, 2016 at 12:04 AM, Manikumar Reddy wrote:
> DumpLogSegments tool is used to dump partition data logs (not application
> logs). >

Re: Larger Size Error Message

2016-03-20 Thread Fang Wong
Hi Guozhang,
The problem is that server "10.225.36.226" is not one of my Kafka clients; nslookup shows it is another internal server. My servers are like 10.224.146.6 #, and I can't even log in to that server. All of my messages are at most a few KB. Is it possible anybody

Re: Larger Size Error Message

2016-03-19 Thread Guozhang Wang
Fang,
You can use kafka.tools.DumpLogSegments to scan and view the logs, but you need the right deserializers to interpret the content.
Guozhang

On Wed, Mar 16, 2016 at 4:03 PM, Fang Wong wrote:
> Thanks Guozhang!
> We are in the process of upgrading to 0.9.0.0. We will look into using >

Re: Larger Size Error Message

2016-03-19 Thread Guozhang Wang
Before 0.9, anyone who knows your server host/port can send produce requests to you unless you have a hardware LB or firewall. The recent 0.9 release added security features to Kafka, including encryption / authentication / authorization. For your case, I would suggest you upgrade
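For illustration, locking a 0.9 broker down along the lines Guozhang describes involves an SSL listener plus the ACL authorizer. A minimal server.properties sketch (property names from the 0.9 security feature; hostnames, paths, and passwords are placeholders, not from this thread):

```properties
# Expose an SSL endpoint instead of (or alongside) PLAINTEXT
listeners=SSL://broker1.example.com:9093
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit

# Enable ACL-based authorization so unknown hosts cannot produce
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
```

With this in place, a stray internal server like 10.225.36.226 would be rejected at the authentication/authorization layer rather than only failing on request-size checks.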

Re: Larger Size Error Message

2016-03-19 Thread Manikumar Reddy
DumpLogSegments tool is used to dump partition data logs (not application logs).
Usage: ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/TEST-TOPIC-0/.log
Use --key-decoder-class, --value-decoder-class options to pass deserializers.

On Fri, Mar 18,

Re: Larger Size Error Message

2016-03-18 Thread Fang Wong
Thanks Guozhang!
We are in the process of upgrading to 0.9.0.0. We will look into using ACLs. Is there a way to see what the request is on the Kafka server? In my case the request is a byte[]. Is there a way to turn on Kafka logging to see the request on the Kafka server side?
Thanks,
Fang

On Wed
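One way to see requests on the broker side is to raise the broker's request logger to TRACE in its log4j configuration, which logs each request's summary as it is handled. A sketch, assuming the stock config/log4j.properties shipped with Kafka (the requestAppender name comes from that file):

```properties
# config/log4j.properties on the broker
log4j.logger.kafka.request.logger=TRACE, requestAppender
log4j.additivity.kafka.request.logger=false
```

Note this is separate from DumpLogSegments: the request logger shows traffic as it arrives, while DumpLogSegments reads data already written to partition segment files.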

Re: Larger Size Error Message

2016-03-18 Thread Fang Wong
Thanks Guozhang:
I put server.log in the command line and got the following error:
-bash-4.1$ ./kafka-run-class.sh kafka.tools.DumpLogSegments --files /home/kafka/logs/server.log
Dumping /home/sfdc/logs/liveAgent/kafka/logs/server.log
Exception in thread "main" java.lang.NumberFormatException: F

Re: Larger Size Error Message

2016-03-15 Thread Guozhang Wang
Fang,
From the logs you showed above there is a single produce request with a very large request size:
"[2016-03-14 06:43:03,579] INFO Closing socket connection to /10.225.36.226 due to invalid request: Request of length *808124929* is not valid, it is larger than the maximum size of 104857600 bytes
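The bogus length is consistent with a non-Kafka client hitting the broker port: Kafka frames every request with a 4-byte big-endian size prefix, so whatever four bytes a foreign protocol sends first get decoded as the request length. A small sketch (the four example bytes are hypothetical, chosen only because they happen to decode to the value in the log line):

```python
import struct

# Kafka reads a 4-byte big-endian length prefix from each connection.
# If a non-Kafka client writes arbitrary bytes, the broker interprets
# the first four of them as the request size.
stray_bytes = b"\x30\x2b\x02\x01"   # hypothetical first bytes of foreign traffic
claimed_length = struct.unpack(">i", stray_bytes)[0]

print(claimed_length)               # 808124929, matching the log line
print(claimed_length > 104857600)   # True: exceeds the limit, so it is rejected
```

This would explain why the "request" looks enormous even though all of Fang's real messages are a few KB: the number never described a real Kafka request at all.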

Re: Larger Size Error Message

2016-03-14 Thread Fang Wong
After changing log level from INFO to TRACE, here is kafka server.log:
[2016-03-14 06:43:03,568] TRACE 156 bytes written. (kafka.network.BoundedByteBufferSend)
[2016-03-14 06:43:03,575] TRACE 68 bytes read. (kafka.network.BoundedByteBufferReceive)
[2016-03-14 06:43:03,575] TRACE [ReplicaFetcherT

Re: Larger Size Error Message

2016-03-08 Thread Guozhang Wang
I cannot think of an encoding or partial message issue off the top of my head (I browsed through the 0.8.2.2 tickets; none of them seems related either).
Guozhang

On Tue, Mar 8, 2016 at 11:45 AM, Fang Wong wrote:
> Thanks Guozhang!
>
> No I don't have a way to reproduce this issue. It randomly happens, I

Re: Larger Size Error Message

2016-03-08 Thread Fang Wong
Thanks Guozhang!
No, I don't have a way to reproduce this issue; it randomly happens. I am changing the log level from INFO to TRACE to see if I can capture the exact message that was sent when this happens. Could it also be an encoding issue or something partial-message related?
Thanks,
Fang

On Mon, Mar 7,

Re: Larger Size Error Message

2016-03-07 Thread Guozhang Wang
John,
There is no specific JIRA for this change as it is only implemented in the new Java producer:
https://issues.apache.org/jira/browse/KAFKA-1239
Related classes are RecordAccumulator and MemoryRecords:
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clie

Re: Larger Size Error Message

2016-03-07 Thread Fang Wong
No, we don't have compression turned on, and the batch size is the default: 16384. But the message size is very small; even with that batch size, it is impossible to exceed the size limit.
Thanks,
Fang

On Sun, Mar 6, 2016 at 6:09 PM, John Dennison wrote:
> Guozhang,
>
> Do you know the ticket for
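Fang's reasoning can be spot-checked with arithmetic: with the default batch.size of 16384 bytes and no compression, a single full batch sits several orders of magnitude below the 104857600-byte limit in the broker log. A rough sketch (the overhead constant is illustrative, not Kafka's exact framing):

```python
BATCH_SIZE = 16384         # producer default batch.size, in bytes
MAX_REQUEST = 104857600    # limit quoted in the broker log, in bytes
OVERHEAD = 1024            # illustrative allowance for headers/framing

# Even a completely full batch plus generous overhead is nowhere near
# the limit, so a legitimate client with these settings could not have
# produced an 808124929-byte request.
print(BATCH_SIZE + OVERHEAD < MAX_REQUEST)   # True
print(MAX_REQUEST // BATCH_SIZE)             # 6400: how many full batches would fit
```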

Re: Larger Size Error Message

2016-03-06 Thread John Dennison
Guozhang,
Do you know the ticket for changing the "batching criterion from #.messages to bytes"? I am unable to find it. I am working on porting a similar change to pykafka.
John

On Sat, Mar 5, 2016 at 4:29 PM, Guozhang Wang wrote:
> Hello,
>
> Did you have compression turned on and batching

Re: Larger Size Error Message

2016-03-05 Thread Guozhang Wang
Hello,
Did you have compression turned on and batching (in terms of #.messages)? In that case the whole compressed message set is treated as a single message on the broker and hence could possibly exceed the limit. In newer versions we have changed the batching criterion from #.messages to bytes,
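The difference between the two batching criteria can be sketched: batching by message count leaves a batch's byte size unbounded (so a message set can blow past a broker limit), while batching by bytes caps it up front. A toy model, not Kafka's actual code (function names and limits are made up for illustration):

```python
def batch_by_count(messages, max_count=200):
    """Old criterion: close a batch after N messages, regardless of bytes."""
    batches, batch = [], []
    for m in messages:
        batch.append(m)
        if len(batch) == max_count:
            batches.append(batch)
            batch = []
    if batch:
        batches.append(batch)
    return batches

def batch_by_bytes(messages, max_bytes=16384):
    """New criterion: close a batch before it would exceed a byte budget."""
    batches, batch, size = [], [], 0
    for m in messages:
        if batch and size + len(m) > max_bytes:
            batches.append(batch)
            batch, size = [], 0
        batch.append(m)
        size += len(m)
    if batch:
        batches.append(batch)
    return batches

msgs = [b"x" * 1000] * 200                 # 200 messages of 1 KB each
by_count = batch_by_count(msgs)            # one 200,000-byte batch
by_bytes = batch_by_bytes(msgs)            # every batch stays under the budget
print(len(by_count))                                        # 1
print(max(sum(len(m) for m in b) for b in by_bytes))        # 16000
```

Under count-based batching the single batch is 200,000 bytes, so with a broker-side limit smaller than that it would be rejected as one oversized "message"; byte-based batching keeps every batch under the configured budget.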