This should happen when there is a backlog of data larger than the
fetch size the consumer is using.
Also, just to be clear: this is something the client implementation
needs to handle, not something the user of the client needs to
handle.
-Jay
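The handling Jay describes can be sketched in a few lines. This is a hypothetical Python example, not taken from any of the clients in this thread: it walks a Kafka 0.8 MessageSet buffer (each entry is an 8-byte offset, a 4-byte message size, then the message bytes) and silently drops a truncated trailing message, which the consumer would then re-fetch starting from that offset.

```python
import struct

def parse_message_set(buf: bytes):
    """Parse a Kafka 0.8 MessageSet, dropping a trailing partial message.

    Each entry is: offset (int64, big-endian) + message_size (int32)
    + message bytes. A fetch response may truncate the final entry,
    so any entry that does not fit in the remaining bytes is skipped.
    """
    messages = []
    pos = 0
    # Need at least the 12-byte entry header to read offset and size.
    while pos + 12 <= len(buf):
        offset, size = struct.unpack_from(">qi", buf, pos)
        if pos + 12 + size > len(buf):
            # Trailing partial message: drop it; the next fetch
            # should start from `offset` to pick it up whole.
            break
        messages.append((offset, buf[pos + 12 : pos + 12 + size]))
        pos += 12 + size
    return messages
```

The key design point is that a partial tail is expected, not an error: the parser simply stops at the first entry that doesn't fit and lets the next fetch request resume from that offset.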
On Thu, Jun 27, 2013 at 12:29 PM, Vadim Keylis wrote:
Jay is correct: it manifests itself when the size threshold condition is
violated.
For the Erlang client, this is fixed in 0.4.6 of mps:
github.com/milindparikh/mps
Regards
Milind
On Jun 27, 2013 3:30 PM, "Bob Potter" wrote:
Vadim,
I don't know under exactly what conditions it happens but the behavior in
this test seems to reliably reproduce it:
https://github.com/v-a/poseidon/commit/d0ac928e0967e1eaf5b92b403103c4f0dc8fd7f7
-Bob
On 27 June 2013 14:29, Vadim Keylis wrote:
Jay, I assume this problem exists in the consumer. How can this problem
be triggered so I could test my high-level consumer?
Thanks
On Jun 26, 2013, at 9:21 AM, Jay Kreps wrote:
Yeah, that is true. I thought I documented that, but looking at the
protocol docs, it looks like I didn't.
I agree this is kind of a pain in the ass. It was an important
optimization in 0.7 because we didn't know where the message
boundaries were, but in 0.8 we have a fast way to compute message
boundaries.
Howdy,
I'm developing a client for Kafka 0.8. It looks like a fetch response will
sometimes end with a partial message. I understand why this might be the
case, but it was unexpected and, as far as I can tell, undocumented.
Is my understanding correct, or am I missing something?
-Bob