Agree that the docs can be better. Perhaps you want to open a JIRA (at
issues.apache.org) with this suggestion?
On Wed, Dec 10, 2014 at 4:03 PM, Solon Gordon wrote:
I see, thank you for the explanation. You might consider being more
explicit about this in your documentation. We didn't realize we needed to
take the (partitions * fetch size) calculation into account when choosing
partition counts for our topics, so this is a bit of a rude surprise.
Ah, found where we actually size the request as partitions * fetch size.
Thanks for the correction, Jay, and sorry for the mix-up, Solon.
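In other words, the total request a replica fetcher builds can be roughly
replica.fetch.max.bytes * (number of partitions being fetched), which would
explain how a 10MB per-partition setting turns into a ~2GB allocation.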
On Wed, Dec 10, 2014 at 10:41 AM, Jay Kreps wrote:
Hey Solon,
The 10MB size is per-partition. The rationale for this is that the fetch
size per-partition is effectively a max message size. However, with so many
partitions on one machine, this will lead to a very large fetch size. We
don't do a great job of scheduling these to stay under a memory bound.
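To make that concrete, here is a back-of-envelope sketch (illustrative
numbers only, not actual broker code; plug in your own partition count and
heap size):

    public class ReplicaFetchEstimate {
        public static void main(String[] args) {
            // Hypothetical values -- substitute the real ones for your cluster.
            long replicaFetchMaxBytes = 10L * 1024 * 1024;   // replica.fetch.max.bytes = 10MB
            long partitionsOnThisBroker = 200;               // partitions this broker replicates
            long heapBytes = 8L * 1024 * 1024 * 1024;        // e.g. an 8GB broker heap

            // Worst case, a round of replica fetches tries to buffer
            // (fetch size) * (number of partitions) bytes at once.
            long worstCaseBytes = replicaFetchMaxBytes * partitionsOnThisBroker; // ~2GB here

            System.out.printf("worst-case fetch buffering: %,d MB against a %,d MB heap%n",
                    worstCaseBytes >> 20, heapBytes >> 20);
        }
    }

So the thing to watch is that replica.fetch.max.bytes times the number of
partitions a broker replicates stays well under its heap.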
If you have replica.fetch.max.bytes set to 10MB, I would not expect a
2GB allocation in BoundedByteBufferReceive when doing a fetch.
Sorry, out of ideas on why this happens...
On Wed, Dec 10, 2014 at 8:41 AM, Solon Gordon wrote:
Thanks for your help. We do have replica.fetch.max.bytes set to 10MB to
allow larger messages, so perhaps that's related. But should that really be
big enough to cause OOMs on an 8GB heap? Are there other broker settings we
can tune to avoid this issue?
On Wed, Dec 10, 2014 at 11:05 AM, Gwen Shapira wrote:
There is a parameter called replica.fetch.max.bytes that controls the
size of the message buffer a broker will attempt to consume at once.
It defaults to 1MB, and has to be at least message.max.bytes (so at
least one message can be sent).
If you try to support really large messages and increase this value,
the broker will need correspondingly more memory for its fetches.
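For example, if you want to allow messages of up to 10MB, that means
something like message.max.bytes=10485760 and replica.fetch.max.bytes of at
least 10485760 as well.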
I just wanted to bump this issue to see if anyone has thoughts. Based on
the error message it seems like the broker is attempting to consume nearly
2GB of data in a single fetch. Is this expected behavior?
Please let us know if more details would be helpful or if it would be
better for us to file a JIRA.
Hi,
We were recently trying to replace a broker instance and were getting an
OutOfMemoryException when the new node was coming up. The issue happened
during the log replication phase. We were able to circumvent this issue by
copying over all of the logs to the new node before starting it.
Details