"From what I understand, there's currently no way to prevent this type of
shuffling of partitions from worker to worker while the consumers are under
maintenance. I'm also not sure if this an issue I don't need to worry
about."
If you don't want or need automated rebalancing or partition reassignment
amongst clients, then you could always have each worker/client subscribe
directly to individual partitions using consumer.assign() rather than
consumer.subscribe(). That way, when client 1 is restarted, the data in its
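A minimal sketch of manual assignment (broker address and topic name are
made up; offsets are handled manually here since there is no group):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignedWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("enable.auto.commit", "false"); // no group, so commit yourself
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // assign() pins this client to fixed partitions; there is no group
        // coordination, so nothing rebalances when another worker restarts.
        consumer.assign(Arrays.asList(
            new TopicPartition("my-topic", 0),
            new TopicPartition("my-topic", 1)));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                    record.partition(), record.offset(), record.value());
            }
        }
    }
}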
What I mean by "flapping" in this context is unnecessary rebalancing
happening. The example I would give is what a Hadoop Datanode would do in
case of a shutdown. By default, it will wait 10 minutes before replicating
the blocks owned by the Datanode so routine maintenance wouldn't cause
unnecessar
On Fri, Jan 6, 2017 at 3:57 AM, Mike Gould wrote:
> Hi
>
> I'm trying to configure log compaction + deletion as per KIP-71 in kafka
> 0.10.1 but so far haven't had any luck. My tests show more than 50%
> duplicate keys when reading from the beginning even several minutes after
> all the events were sent.
It would return the earlier one, offset 0.
-Ewen
On Thu, Jan 5, 2017 at 10:15 PM, Vignesh wrote:
> Thanks. I didn't realize ListOffsetRequestV1 is only available in 0.10.1
> (which has KIP-33, the time index).
> When the timestamp is set by the user (CreateTime) and is not always
> increasing, would this
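For reference, a minimal sketch of a timestamp lookup via offsetsForTimes()
(added in 0.10.1 alongside ListOffsetRequest v1; topic name and timestamp
are made up). It returns, per partition, the earliest offset whose message
timestamp is >= the target:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetLookup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            long target = 1483660800000L; // epoch millis to search for
            Map<TopicPartition, OffsetAndTimestamp> found =
                consumer.offsetsForTimes(Collections.singletonMap(tp, target));
            OffsetAndTimestamp oat = found.get(tp);
            if (oat != null) {
                System.out.printf("offset=%d ts=%d%n", oat.offset(), oat.timestamp());
            }
        }
    }
}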
Yeah, you'd set the key.converter and/or value.converter in your connector
config.
-Ewen
On Thu, Jan 5, 2017 at 9:50 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:
> Thanks!
> So I just override the conf while doing the API call? It’d be great to
> have this documented somewhere on
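As a sketch, a per-connector override submitted through the Connect REST
API could look like this (connector name, class, and paths are made up):

{
  "name": "file-source-json",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "my-topic",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false"
  }
}

These per-connector settings take precedence over the key.converter /
value.converter defaults set in the worker properties.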
The Kafka brokers have a maximum message size limit; this is a protection
measure that avoids sending monster messages to Kafka.
You have two options:
1. On the brokers, increase message.max.bytes (the producer-side equivalent
is max.request.size); the default is about 1 MB. Raising it to 5 or even
10 MB is normally not an issue. Java applications can happily
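A sketch of the related settings for a 5 MB cap (values are illustrative,
and all sides have to agree or the smallest limit wins):

# broker (server.properties)
message.max.bytes=5242880
replica.fetch.max.bytes=5242880   # must be >= message.max.bytes

# producer
max.request.size=5242880

# consumer
max.partition.fetch.bytes=5242880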
You can use the partition reassignment tool to move larger partitions from
the full node over to the more lightly used nodes.
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-Selectivelymovingsomepartitionstoabroker
There are also some open source and commercial tools
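As a sketch, moving one partition with the stock tool looks roughly like
this (topic, partition, and broker ids are made up):

# reassignment.json
{"version":1,"partitions":[
  {"topic":"my-topic","partition":0,"replicas":[2,3]}]}

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --execute

# check progress later
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --verify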
Hello, I have a 3-node cluster on identical server setups. We've noticed
that one of the Kafka brokers writes a lot more data than the other two.
Recently, that broker completely filled our data partition while the other
two brokers were still at, for example, 30% capacity. Is there a
flume-kafka-sink fails to send messages to kafka-0.9
06 Jan 2017 13:13:36,595 ERROR
[SinkRunner-PollingRunner-DefaultSinkProcessor]
(org.apache.flume.SinkRunner$PollingRunner.run:158) - Unable to deliver
event. Exception follows
org.apache.flume.EventDeliveryException: Failed to publish events
at org.ap
It works perfectly with retries > 0.
Thanks a lot, James.
Best regards
On Thu, Jan 5, 2017 at 10:51 PM, James Cheng wrote:
>
> > On Jan 5, 2017, at 8:23 AM, Hoang Bao Thien
> wrote:
> >
> > Yes, the problem is from producer configuration. And James Cheng has told
> > me how to fix it.
> > However
Hi
I'm trying to configure log compaction + deletion as per KIP-71 in kafka
0.10.1 but so far haven't had any luck. My tests show more than 50%
duplicate keys when reading from the beginning even several minutes after
all the events were sent.
The documentation in section 3.1 doesn't seem very clear
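For reference, a sketch of enabling combined compaction + deletion on a
topic (topic name and values are made up). Note the cleaner never touches
the active segment, and only runs once the dirty ratio is exceeded, so
duplicates can legitimately linger for a while:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config cleanup.policy=[compact,delete]

# optional: make compaction kick in sooner (illustrative values)
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config min.cleanable.dirty.ratio=0.1,segment.ms=60000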
Thanks Joel. I'll fix up the pics to make their nomenclature consistent.
B
On Fri, Jan 6, 2017 at 2:39 AM Joel Koshy wrote:
> (adding the dev list back - as it seems to have gotten dropped earlier in
> this thread)
>
> On Thu, Jan 5, 2017 at 6:36 PM, Joel Koshy wrote:
>
> > +1
> >
> > This i