Yu Yang,
There does exist a broker-side config named 'controller.socket.timeout.ms'.
Decreasing it to a reasonably smaller value might help, but please use it
with caution.
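For example, the setting goes in the broker's server.properties; the value
below is only illustrative (the default is 30000 ms, i.e. 30 seconds):

    # server.properties -- illustrative value only, change with caution
    controller.socket.timeout.ms=10000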
From: Yu Yang
Sent: January 25, 2018 15:42
To: users@kafka.apache.org
Subject: kafka controller
I have a scenario: let's say that, due to GC or any other issue, my consumer
takes longer than max.poll.interval.ms to process data. What is the
alternative for preventing the consumer from being marked dead and kicked out
of the consumer group?
Though the consumer has not died and session.timeout.ms is b
Hi everyone,
I'm trying to understand the best practice for defining the partition key. I
have defined some topics that are related to entities in the Cassandra data
model; the relationship is one-to-one, one entity - one topic, because
I need to ensure proper ordering of the events. I have
You may not be surprised that after further investigation it turns out this
was related to some logic in my topology.
On Wed, Jan 24, 2018 at 5:43 PM, Dmitry Minkovsky
wrote:
> Hi Guozhang,
>
> Here it is:
>
> topology.stream(MAILBOX_OPERATION_REQUESTS,
> Consumed.with(byteStringSerde, mailboxOp
> one entity - one topic, because I need to ensure proper ordering of the
events.
This is a great insight. I discovered that keeping entity-related things
on one topic is much easier than splitting entity-related things onto
multiple topics. If you have one topic, replaying that topic is
> I know that could be a best practice use the partition key of Cassandra
(e.g Customer ID) as a partition key in kafka
Yeah, the Kafka producer will hash that key with murmur2, so all entities
coming out of Cassandra with the same partition key will end up on the same
Kafka partition. Then you can
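For illustration, a minimal producer sketch along those lines (the topic
name, customer id and payload below are made up for the example):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    Producer<String, String> producer = new KafkaProducer<>(props);
    // Use the Cassandra partition key (e.g. the customer id) as the Kafka
    // message key: the default partitioner hashes the key, so every event for
    // the same customer lands on the same partition and keeps its relative order.
    producer.send(new ProducerRecord<>("customer-events", "customer-42", "{\"type\":\"update\"}"));
    producer.close();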
Yes, I'm capturing different events from the same entity/resource (create,
update and delete); for that reason I've chosen that option. However, my
question is whether I can improve my solution if I want to use Kafka as a
datastore, including the partition key of Cassandra for each entity as the
partition key o
Thanks for the reply, Xi! The default value of 'controller.socket.timeout.ms'
is 30000, that is, 30 seconds. What we have observed was that the controller
would not assign another replica as the leader, even if it failed to send
updated topic metadata information to the problematic broker for >30
m
Hello Dmitry,
What does your distributeMailboxOperation in the flatMap do? Would it
possibly generate multiple records for the follow-up aggregation for each
input?
Guozhang
On Thu, Jan 25, 2018 at 6:54 AM, Dmitry Minkovsky
wrote:
> You may not be surprised that after further investigation
I think newer versions have better ways of doing this. In 0.10.2, because
poll() ensures liveness, you can disable auto commits and use consumer
pause() so that you can keep calling poll() without fetching records (and
without exceeding max.poll.interval.ms), so those partitions are not assigned
to other consumers, and you can also handle ConsumerR
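A rough sketch of that pause()/keep-polling pattern with the 0.10.2 Java
consumer (the topic, group id and the long-running handler are hypothetical):

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "slow-processing-group");
    props.put("enable.auto.commit", "false");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("my-topic"));
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> work = null;

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        if (!records.isEmpty()) {
            // Hand the slow processing off to another thread and pause all
            // assigned partitions, so the poll() calls below return nothing
            // but still keep the consumer alive in the group.
            consumer.pause(consumer.assignment());
            work = executor.submit(() -> processSlowly(records)); // hypothetical handler
        }
        if (work != null && work.isDone()) {
            consumer.commitSync();
            consumer.resume(consumer.paused());
            work = null;
        }
    }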
Hello Guozhang:
Actually, you are right. I implemented a custom partitioner to distribute
messages evenly among all partitions and started to see all consumers
working!
Thanks a lot!
Best
Gustavo
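For anyone curious, a minimal sketch of what such a custom partitioner can
look like (the class name and round-robin logic are illustrative, not
Gustavo's actual code):

    import java.util.Map;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;

    public class RoundRobinPartitioner implements Partitioner {
        private final AtomicInteger counter = new AtomicInteger(0);

        @Override
        public void configure(Map<String, ?> configs) { }

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            // Ignore the key and spread records evenly across all partitions.
            int numPartitions = cluster.partitionsForTopic(topic).size();
            return Math.floorMod(counter.getAndIncrement(), numPartitions);
        }

        @Override
        public void close() { }
    }

    // Wired into the producer with something like:
    //   props.put("partitioner.class", "com.example.RoundRobinPartitioner");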
2018-01-24 19:40 GMT-02:00 Guozhang Wang :
> Hello Gustavo,
>
> How did you check that the second
Hi Guozhang,
I am sorry to have bothered you, but I figured out the problem and it was
related to logic in my topology. Please disregard the question.
Thank you!
Dmitry
On Thu, Jan 25, 2018 at 2:55 PM, Guozhang Wang wrote:
> Hello Dmitry,
>
> What does your distributeMailboxOperation in the fl
Hello everyone,
I have a question about how reassignments work.
When I issue a reassignment for similar topic-partitions, the throughput
between the reassignments is very different even though all settings are
similar. There is a huge difference in when the partitions finish their
reassignment.
You may want to check the replication factor of the __consumer_offsets topic.
By default, it is 1. It should be increased to 3 in your case.
Regards,
Chintan
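For a quick check, something along these lines (a sketch assuming a 0.10.x
cluster with ZooKeeper on localhost:2181; the broker ids in the plan are only
examples):

    # Describe the offsets topic to see its current replication factor
    bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets

    # To raise it to 3, feed a reassignment plan that lists three replicas per
    # partition (one entry for each of the topic's partitions, abbreviated here):
    #   {"version":1,"partitions":[
    #     {"topic":"__consumer_offsets","partition":0,"replicas":[1,2,3]},
    #     ... ]}
    bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
        --reassignment-json-file increase-offsets-rf.json --execute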
On 25-Jan-2018 12:24 PM, "Siva A" wrote:
> The Kafka version I am using is 0.10.0.1
>
> On Thu, Jan 25, 2018 at 12:23 PM, Siva A wrote:
>
> > Hi All,
Yes, it was the issue. Fixed yesterday. Thanks for your update.
On Jan 26, 2018 11:27 AM, "chintan mavawala" wrote:
> You may want to check the replication factor of the __consumer_offsets
> topic. By default, it is 1. It should be increased to 3 in your case.
>
> Regards,
> Chintan
>
> On 25-Jan-2018 12: