Hi all,
Is anyone having issues with the NFS silly rename for the Kafka Streams state?
It seems most of the issues in Jira were resolved since 0.10, but I'm currently
using 1.0.0 and still seeing them. (KAFKA-4392, KAFKA-5070)
06:17:42,741 ERROR 31 [StateDirectory] stream-thread, Failed to lock
implementation?
Regards,
Brilly
-----Original Message-----
From: TSANG, Brilly [mailto:brilly.ts...@hk.daiwacm.com]
Sent: Wednesday, February 14, 2018 11:01 AM
To: users@kafka.apache.org
Subject: RE: Kafka Stream tuning.
Hey Damian and folks,
I've also tried 1000 and 500 and the performance state
Kafka Stream tuning.
Hi Brilly,
My initial guess is that it is the overhead of committing. Commit is
synchronous and you have the commit interval set to 50ms. Perhaps try
increasing it.
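For anyone reading along, commit.interval.ms is the Streams property in
question; a minimal config sketch (1000 ms is only an example value, tune
for your workload):

```
# Kafka Streams properties sketch: raise the commit interval
# from the current 50 ms to a coarser value, e.g. 1 second
commit.interval.ms=1000
```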
Thanks,
Damian
On Tue, 13 Feb 2018 at 07:49 TSANG, Brilly wrote:
Hi kafka users,
I created a filtering stream with the Processor API; the input topic has an
input rate of ~5 records per millisecond. The filtering function on average
takes 0.05 milliseconds to complete, which in the ideal case would translate to
(1/0.05) 20 records per millisecond. However, when
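A quick back-of-envelope check of the arithmetic above (pure illustration,
no Kafka dependencies):

```java
// 0.05 ms of work per record caps a single processing thread at
// 1 / 0.05 = 20 records per millisecond, matching the figure above.
public class ThroughputCheck {
    public static void main(String[] args) {
        double perRecordMs = 0.05;                 // avg filter latency
        double idealRecordsPerMs = 1.0 / perRecordMs;
        System.out.println(idealRecordsPerMs);     // prints 20.0
    }
}
```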
Just make sure metrics-core-x.x.x.jar is on your classpath. That jar should
be in your /libs.
I am using kafka_2.11-1.0.0, so I don't have the exact version number of
metrics-core for you.
Regards,
Brilly
-----Original Message-----
From: ? ?? [mailto:wangchunc...@outlook.com]
Sent: Thursday,
If you are using dynamic assignment (consumer.subscribe), you can try this
in your code:
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// "this" must implement ConsumerRebalanceListener
consumer.subscribe(Collections.singletonList("your_topic"), this);
consumer.poll(0); // just so you are connected and will have TopicPartition
dynam
Hi Rotem,
I'm not 100% sure, but you can try setting listeners in
\config\server.properties and see if that helps.
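For reference, a sketch of what that could look like (the host and port below
are placeholders, adjust for your environment):

```
# config\server.properties sketch (placeholder host/port)
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://your.broker.host:9092
```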
Regards,
Brilly
-----Original Message-----
From: Rotem Jacobi [mailto:rot...@radcom.com]
Sent: Monday, January 22, 2018 11:16 PM
To: users@kafka.apache.org
Subject: can't feed remote bro
Hi folks,
I'm working with a topic that has many messages. Kafka scales horizontally.
As a result, when the messages spread out to multiple processes, only one will
work with a specific key. The rest are not related and should stop processing.
Is there a way from the client API to hash the part
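The question is cut off here, but if the goal is to compute which partition a
key maps to so only the owning process acts on it, the idea is hash mod
partition count. A minimal sketch; note it uses String.hashCode() purely for
illustration, whereas Kafka's DefaultPartitioner uses murmur2 over the
serialized key bytes, so the results will not match the producer's real
assignment:

```java
// Illustrative sketch only: every process can derive the same
// partition index for a given key, so only the process that owns
// that partition needs to act on the record.
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // mask the sign bit so the modulo result is non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("order-42", 6));
    }
}
```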