Jun Rao created KAFKA-6834:
-------------------------------

Summary: log cleaner should handle the case when the size of a message set is larger than the max message size
Key: KAFKA-6834
URL: https://issues.apache.org/jira/browse/KAFKA-6834
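[The issue description is truncated in this archive. As a rough, hypothetical sketch of the problem the summary names (illustrative names and sizes, not the actual LogCleaner code): a message set already on disk can be larger than the configured max message size, for example if the config was lowered after the set was written, so the cleaner must grow its buffer to the set's size rather than fail:

import java.nio.ByteBuffer

object CleanerBufferSketch {
  // Hypothetical guard: a batch on disk may exceed the configured max message
  // size (e.g. the config was lowered after it was written), so grow the
  // buffer to the batch size instead of failing or looping.
  def ensureCapacity(buffer: ByteBuffer, batchSize: Int, maxMessageSize: Int): ByteBuffer =
    if (batchSize <= buffer.capacity) buffer
    else ByteBuffer.allocate(math.max(batchSize, maxMessageSize))

  def main(args: Array[String]): Unit = {
    val buf = ByteBuffer.allocate(1024)
    println(ensureCapacity(buf, 4096, 2048).capacity) // prints 4096
  }
}]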
Pull request at:
https://github.com/apache/kafka/pull/1758

> LogCleaner should grow read/write buffer to max message size for the topic
> ---------------------------------------------------------------------------
>
> Key: KAFKA-4019
> URL: https://issues.apache.org/jira/browse/KAFKA-4019
[ https://issues.apache.org/jira/browse/KAFKA-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma resolved KAFKA-1756.
--------------------------------
Resolution: Not A Problem

> never allow the replica fetch size to be less than the max message size
As of KIP-74, the first message in the first non-empty partition of the fetch is returned even if it's larger than the fetch limits set by the consumer or a replica.
> never allow the replica fetch size to be less than the max message size
> ------------------------------------------------------------------------
>
> Key: KAFKA-1756
> URL: https://issues.apache.org/jira/browse/KAFKA-1756
> Project: Kafka
> Issue Type: Bug
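[For context on the KIP-74 behavior described in the resolution above, a minimal consumer sketch (broker address, group, topic, and sizes are placeholders): since KIP-74 these limits are soft, so an oversized first message in the first non-empty partition is still returned and the consumer keeps making progress.

import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

object FetchLimitsSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder
    props.put("group.id", "sketch")
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    // Post-KIP-74 these are soft limits: a single message larger than either
    // value is still returned (alone), so consumption never stalls on it.
    props.put("fetch.max.bytes", "1048576")
    props.put("max.partition.fetch.bytes", "262144")
    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("rawlog")) // placeholder topic
    consumer.close()
  }
}]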
[ https://issues.apache.org/jira/browse/KAFKA-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma updated KAFKA-4019:
-------------------------------
Priority: Critical (was: Major)

> LogCleaner should grow read/write buffer to max message size for the topic
Grant Henke created KAFKA-4203:
-------------------------------

Summary: Java producer default max message size does not align with broker default
Key: KAFKA-4203
URL: https://issues.apache.org/jira/browse/KAFKA-4203
Project: Kafka
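[For reference, the mismatch this issue describes: the Java producer defaults max.request.size to 1 MiB (1048576 bytes), while the broker defaults message.max.bytes to 1000012 bytes, so a record a default-configured producer accepts can still be rejected by a default-configured broker. A small sketch of the gap:

object DefaultSizeMismatch {
  val producerMaxRequestSize = 1048576 // producer default max.request.size (1 MiB)
  val brokerMessageMaxBytes  = 1000012 // broker default message.max.bytes

  def main(args: Array[String]): Unit = {
    // Records in this range pass the producer-side check but fail broker-side.
    println(s"gap: ${producerMaxRequestSize - brokerMessageMaxBytes} bytes")
  }
}]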
[ https://issues.apache.org/jira/browse/KAFKA-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma updated KAFKA-4019:
-------------------------------
Fix Version/s: 0.10.1.0

> LogCleaner should grow read/write buffer to max message size for the topic
> Components: core
> Affects Versions: 0.10.0.0
> Reporter: Jun Rao
> Assignee: Rajini Sivaram
>
> Currently, the LogCleaner.growBuffers() only grows the buffer up to the
> default max message size. However, since the max message size can be
> customized at the topic level, the cleaner should grow its buffers up to
> the topic's max message size.
Pull request:
https://github.com/apache/kafka/pull/1758

KAFKA-4019: Update log cleaner to handle max message size of topics
Grow read and write buffers of cleaner up to the maximum message size of
the log being cleaned if the topic has a larger max message size than the
default broker config.
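[A minimal sketch of the idea behind this fix, with illustrative names rather than the actual cleaner code: grow the read/write buffers (doubling, capped) up to the larger of the broker default and the topic's own max message size.

import java.nio.ByteBuffer

class BufferGrower(defaultMaxMessageSize: Int) {
  private var readBuffer  = ByteBuffer.allocate(64 * 1024)
  private var writeBuffer = ByteBuffer.allocate(64 * 1024)

  // Called when a read did not fit: double the buffers, capped at the larger
  // of the broker default and the topic's max message size.
  def growBuffers(topicMaxMessageSize: Int): Unit = {
    val limit = math.max(defaultMaxMessageSize, topicMaxMessageSize)
    if (readBuffer.capacity < limit) {
      val newSize = math.min(readBuffer.capacity * 2L, limit.toLong).toInt
      readBuffer  = ByteBuffer.allocate(newSize)
      writeBuffer = ByteBuffer.allocate(newSize)
    }
  }
}

object BufferGrowerDemo {
  def main(args: Array[String]): Unit = {
    val g = new BufferGrower(1000012)
    g.growBuffers(5242880) // topic allows 5 MiB, so buffers may grow past the default
  }
}]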
[ https://issues.apache.org/jira/browse/KAFKA-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajini Sivaram reassigned KAFKA-4019:
-------------------------------------
Assignee: Rajini Sivaram

> LogCleaner should grow read/write buffer to max message size for the topic
Jun Rao created KAFKA-4019:
---------------------------

Summary: LogCleaner should grow read/write buffer to max message size for the topic
Key: KAFKA-4019
URL: https://issues.apache.org/jira/browse/KAFKA-4019
Project: Kafka
> Affects Versions: 0.8.0
> Reporter: Jun Rao
> Assignee: Joel Koshy
> Priority: Blocker
> Labels: p4
> Attachments: KAFKA-598-v1.patch, KAFKA-598-v2.patch, KAFKA-598-v3.patch
>
> Currently, a consumer has to set fetch size to be at least as large as the
> maximum message size, otherwise a larger message can never be fetched and
> the consumer cannot make progress.
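[The description is cut off above; the coupling it refers to is that a fixed fetch size must be at least the largest message, or an oversized message can never be returned whole and the consumer stalls behind it. A toy illustration (not Kafka code):

object FetchCoupling {
  // With a fixed fetch size, a message larger than the buffer is never
  // returned whole, and everything behind it is stuck.
  def fetch(messageSizes: List[Int], fetchSize: Int): List[Int] =
    messageSizes.takeWhile(_ <= fetchSize)

  def main(args: Array[String]): Unit = {
    println(fetch(List(100, 200, 5000, 100), fetchSize = 1024)) // List(100, 200)
  }
}]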
Does 0.10.0.0 allow easier detection of the "message larger than fetch size" situation, and using a different fetch size for different requests programmatically?

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
Can't we just use one? If not, in the program we can always use the one which is the greater of the two. It seems we can solve that issue relatively easily.

> never allow the replica fetch size to be less than the max message size
...global configuration management (where we store broker configs in ZK), https://issues.apache.org/jira/browse/KAFKA-1786, so once that is done it should make this easier to implement.

> never allow the replica fetch size to be less than the max message size
As [~gwenshap] suggested, if storing server configs in ZooKeeper is too much work, how about opening up a Kafka API that returns broker configs, which the topic command can then use?

> never allow the replica fetch size to be less than the max message size
Currently only maxMessageSize will make use of it. Also, it's probably too big a change for 0.8.2.

> never allow the replica fetch size to be less than the max message size
> ------------------------------------------------------------------------
>
> Key: KAFKA-1756
...brokers to persist their configs in their respective ZK nodes, so we can validate topic configs against broker configs?

> never allow the replica fetch size to be less than the max message size
> ------------------------------------------------------------------------
>
...level config. My point is that we probably shouldn't make max.message.size a topic-level config given the implications for downstream consumers such as MirrorMaker and the replica fetchers.

> never allow the replica fetch size to be less than the max message size
Being able to change it without bouncing the entire cluster is the main benefit of the topic-level config.

> never allow the replica fetch size to be less than the max message size
> ------------------------------------------------------------------------
>
> Key: KAFKA-1756
>
ytes a topic level config.
This makes replica fetchers as well as tools like MirrorMaker harder to
configure since they have to be aware of all per topic level values. I am not
sure if there is a strong use case to customize a different max message size
per topic.
> never allow the replic
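[A hedged sketch of the invariant this issue asks for (function and names are hypothetical, not a Kafka API): flag any topic whose max message size exceeds the replica fetch size, since such messages could never be replicated.

object ReplicaFetchValidation {
  // A replica fetcher whose fetch size is below some topic's max message size
  // cannot replicate messages of that size, risking under-replication.
  def validate(replicaFetchMaxBytes: Int, topicMaxMessageBytes: Map[String, Int]): Seq[String] =
    topicMaxMessageBytes.collect {
      case (topic, maxBytes) if maxBytes > replicaFetchMaxBytes =>
        s"topic '$topic': max.message.bytes=$maxBytes > replica.fetch.max.bytes=$replicaFetchMaxBytes"
    }.toSeq

  def main(args: Array[String]): Unit =
    validate(1048576, Map("rawlog" -> 5242880, "clicks" -> 1000012)).foreach(println)
}]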
Joe Stein created KAFKA-1756:
-----------------------------

Summary: never allow the replica fetch size to be less than the max message size
Key: KAFKA-1756
URL: https://issues.apache.org/jira/browse/KAFKA-1756
Project: Kafka
You need to change max.message.size on the brokers.

Thanks,
Jun

On Fri, May 16, 2014 at 11:02 AM, Bhavesh Mistry wrote:
> Hi Kafka Dev Group,
>
> We are using Kafka version 0.8 and I am getting the following exception:
>
> WARN warn, Produce request with correlation id 1617 failed due to
> [rawlog,19]: kafka.common.MessageSizeTooLargeException
From the documentation, the correct config is message.max.bytes.

Regards,
--
Lucas Zago
48 9617 6763
Hi Kafka Dev Group,

We are using Kafka version 0.8 and I am getting the following exception:

WARN warn, Produce request with correlation id 1617 failed due to
[rawlog,19]: kafka.common.MessageSizeTooLargeException
WARN warn, Produce request with correlation id 1819 failed due to
[rawlog,24]: kafka.common.MessageSizeTooLargeException
max.message.bytes
This is the largest message size Kafka will allow to be appended to this topic.
Note that if you increase this size you must also increase your consumers'
fetch size so they can fetch messages this large.

When the message length is larger than max.message.bytes, an exception may be thrown.
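[In concrete terms, a sketch using the modern Admin API (topic name, taken from the log lines above, and sizes are placeholders): raise the topic's max.message.bytes, and pair it with an at-least-as-large consumer fetch limit (fetch.message.max.bytes on the 0.8-era consumer discussed in this thread; max.partition.fetch.bytes today).

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{Admin, AlterConfigOp, ConfigEntry}
import org.apache.kafka.common.config.ConfigResource

object RaiseTopicMaxMessageBytes {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder
    val admin = Admin.create(props)
    // Raise the topic limit to 5 MiB; consumers must allow at least as many
    // bytes per partition fetch to be able to read such messages.
    val resource = new ConfigResource(ConfigResource.Type.TOPIC, "rawlog")
    val op = new AlterConfigOp(new ConfigEntry("max.message.bytes", "5242880"),
                               AlterConfigOp.OpType.SET)
    val ops: java.util.Collection[AlterConfigOp] = Collections.singletonList(op)
    admin.incrementalAlterConfigs(Collections.singletonMap(resource, ops)).all().get()
    admin.close()
  }
}]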
Hi Kafka Team,

Is there any message size limitation on the producer side? If there is, what happens to the message: does it get truncated, or is it lost?

Thanks,
Bhavesh
[ https://issues.apache.org/jira/browse/KAFKA-298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao resolved KAFKA-298.
---------------------------
Resolution: Won't Fix

> Go Client support max message size
[ https://issues.apache.org/jira/browse/KAFKA-298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao closed KAFKA-298.
-------------------------

> Go Client support max message size
> ----------------------------------
>
> Key: KAFKA-298
The Go clients are no longer being maintained in the main project.
Please close. Won't Fix.

> Go Client support max message size
> ----------------------------------
>
> Key: KAFKA-298
> URL: https://issues.apache.org/jira/browse/KAFKA-298
[ https://issues.apache.org/jira/browse/KAFKA-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Neha Narkhede updated KAFKA-598:
--------------------------------
Labels: p4 (was: )

> decouple fetch size from max message size
I haven't had time to look at this lately, but if people are okay with the above, then I can revisit one of the earlier revisions of the patches.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
...0.8?

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
> Issue Type: Bug
> Components: core
...use a larger fetch size.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 0.8.0
...the fetcher manager is currently managing, and just return the size of the set.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
The Scala doc doesn't say HashMap.size() is linear in the number of entries. If this is just a limitation in Scala 2.8, I suggest that we just pay the overhead of traversing the map for now. Typically, the number of fetcher threads is small.

> decouple fetch size from max message size
I chose that over a method only to avoid a traversal (for each request) to determine the count.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
[ https://issues.apache.org/jira/browse/KAFKA-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao updated KAFKA-598:
--------------------------
Priority: Blocker (was: Major)

> decouple fetch size from max message size
...default MaxQueuedChunks 1.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
> Issue Type: Bug
...applied to trunk as well (unless we decide it is not a "blocker" that should go only into trunk). After this, we can add the additional error code in the FetchResponse if people are okay with the overall approach.
...so that a partition that has infinite data to read can't starve out other partitions.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
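[A toy sketch of the rotation idea above (not the actual fetcher code): start each fetch round at the partition after the last one served, so a partition with unbounded data cannot permanently push the others out.

object FetchRotation {
  // Rotate the partition order each round so one partition with endless data
  // cannot monopolize the fetch budget.
  def nextOrder(partitions: Vector[Int], lastServed: Int): Vector[Int] = {
    val start = (partitions.indexOf(lastServed) + 1) % partitions.length
    partitions.drop(start) ++ partitions.take(start)
  }

  def main(args: Array[String]): Unit =
    println(nextOrder(Vector(0, 1, 2, 3), lastServed = 1)) // Vector(2, 3, 0, 1)
}]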
A third option: keep the fetch size as is, and issue pipelined fetch requests to build up and complete the incomplete partition, one at a time. What do people think?

> decouple fetch size from max message size
> -----------------------------------------
>
...resizing it as the number of topic/partitions changes.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
...case (though there are definitely I/O benefits).
Let's figure this out and then I will do a more detailed review of the patch.

> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
...mocking it.
BTW, for the ReplicaFetchTest change to make sense I could have it expect to
"fail" with a smaller upper fetch size, and then repeat with a higher upper
fetch size, but that would add to the test duration - and it's not mocked out.