I haven't gone back to check the code, but it feels like every size field
that's given could be used as a sanity check before proceeding.
In the case I ran into, 200k made no sense because there wasn't 200k
worth of data to write, but the system still flushed the message out to
disk, presumably due to using an ear
>> Am I correct in believing that the broker doesn't sanity check the message
size field against the received data?
I'm not sure there is a good way for the broker to do that. The only time
the broker knows that a message was corrupted is when it is unable to
read the header of the next message.
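To make that concrete, here is a minimal Java sketch (not the broker's actual code, just a made-up illustration of length-prefixed framing): the reader has to trust each size field to locate the next entry, so a bad size only shows up once the bytes it lands on no longer look like a header.

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Sketch of walking a length-prefixed log: 4-byte size, then 'size' bytes.
    public class LogWalkSketch {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                long offset = 0;
                while (true) {
                    int size;
                    try {
                        size = in.readInt();       // trust the 4-byte size field
                    } catch (EOFException end) {
                        break;                     // clean end of the file
                    }
                    if (size < 0 || size > 10 * 1024 * 1024) {
                        // Only here does anything look wrong, and by now we may
                        // already be past the entry that carried the bad size.
                        System.err.println("unreadable header at offset " + offset);
                        break;
                    }
                    long skipped = in.skip(size);  // jump to where the next header should be
                    if (skipped < size) {
                        System.err.println("truncated entry at offset " + offset);
                        break;
                    }
                    offset += 4 + size;
                }
                System.out.println("walked " + offset + " bytes");
            }
        }
    }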
Neha, thanks for the tip. Useful util!
My problem was simple -- I missed one of the size field changes in the
producer, which led to a completely wrong size field.
Am I correct in believing that the broker doesn't sanity check the message
size field against the received data? In this case, the
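For illustration, here is a generic length-prefixed framing sketch in Java (not the actual Kafka wire format, and the names are made up) showing how this class of bug happens: the size computation has to be kept in lockstep with the message layout, and a stale computation happily writes a size field that no longer matches the bytes that follow.

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    // Generic length-prefixed framing sketch; NOT the actual Kafka wire format.
    public class FramingSketch {
        static byte[] frame(byte[] payload) throws IOException {
            CRC32 crc = new CRC32();
            crc.update(payload);

            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);

            int headerBytes = 1 + 4;                  // magic byte + 4-byte checksum
            int size = headerBytes + payload.length;  // must track the real layout
            // A stale line like "int size = 4 + payload.length;" still compiles,
            // but the size field no longer matches the bytes that follow it, and
            // the reader lands mid-payload when it seeks the next entry.

            out.writeInt(size);                       // 4-byte size field
            out.writeByte(1);                         // magic
            out.writeInt((int) crc.getValue());       // checksum of the payload
            out.write(payload);
            out.flush();
            return buf.toByteArray();
        }

        public static void main(String[] args) throws IOException {
            System.out.println(frame("hello".getBytes()).length + " framed bytes");
        }
    }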
Ben,
I would try to run DumpLogSegments to check whether the server's data
was corrupted by a bug in the producer.
Thanks,
Neha
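For reference, DumpLogSegments ships with Kafka and is run through the regular class runner. The exact flags vary by Kafka version, and the log path below is only a placeholder, but the invocation looks roughly like:

    bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /path/to/kafka-logs/mytopic-0/00000000000000000000.log

It dumps per-message offsets and sizes from the on-disk segment, so a bogus size field written by the producer should stand out quickly.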
On Fri, Dec 7, 2012 at 7:17 AM, ben fleis wrote:
> I was testing my own code, and using the console consumer against my
> seemingly-working producer code. Since the
I was testing my own code, and using the console consumer against my
seemingly-working producer code. Since the last update, the console
consumer crashes. I am going to try to track it down in the debugger and
will come back with a patch if found.
Command line:
KAFKA_OPTS="-Xmx512M -server
-Dlog
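For context, the console consumer of that era is typically launched along these lines (the ZooKeeper address and topic here are placeholders, not the actual settings used above):

    bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytopic --from-beginning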