Does broker 100 keep acting as the controller afterwards? What you observe
is possible and should be transient, since "unsubscribeChildChanges" on
ZkClient and the listener-firing procedure are executed on different threads
and are not strictly synchronized. But if you continuously see broker
100's
The default partition assignment strategy is the RangePartitioner. Note it
is per-topic, so if you use the default partitioner then in your case the
160 partitions of each topic will be assigned to the first 160 consumer
instances, each getting two partitions, one from each topic. So the
con
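The per-topic range logic described above can be sketched in plain Java. This is a simplification for illustration, not Kafka's actual assignor code, and the group size of 200 is an assumption (the thread only implies there are more consumers than the 160 partitions):

```java
import java.util.*;

public class RangeAssignSketch {

    // Simplified per-topic range assignment: sorted consumers each take a
    // contiguous chunk of the topic's partitions; when there are more
    // consumers than partitions, the tail consumers get nothing.
    static Map<Integer, List<String>> assign(int numConsumers, String topic, int numPartitions) {
        Map<Integer, List<String>> out = new HashMap<>();
        int per = numPartitions / numConsumers;   // 0 when consumers > partitions
        int extra = numPartitions % numConsumers; // first `extra` consumers get one more
        int p = 0;
        for (int c = 0; c < numConsumers; c++) {
            int count = per + (c < extra ? 1 : 0);
            List<String> parts = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                parts.add(topic + "-" + p++);
            }
            out.put(c, parts);
        }
        return out;
    }

    public static void main(String[] args) {
        int consumers = 200; // assumed group size, larger than the partition count
        Map<Integer, List<String>> t1 = assign(consumers, "topicA", 160);
        Map<Integer, List<String>> t2 = assign(consumers, "topicB", 160);
        // Because the strategy is applied per topic, consumers 0..159 each end
        // up with one partition from each topic (two total), while 160..199 idle.
        System.out.println(t1.get(0) + " " + t2.get(0)); // prints "[topicA-0] [topicB-0]"
        System.out.println(t1.get(160));                 // prints "[]"
    }
}
```

This makes the imbalance concrete: with more consumers than partitions per topic, range assignment always leaves the tail of the (sorted) group idle, which is why a round-robin strategy is the usual suggestion when a more even spread is needed.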
Java consumer. 0.9.1
Thanks
Achintya
-Original Message-
From: Guozhang Wang [mailto:wangg...@gmail.com]
Sent: Thursday, November 24, 2016 8:28 PM
To: users@kafka.apache.org
Subject: Re: Kafka consumers are not equally distributed
Which version of Kafka are you using with your consumer?
for anyone that runs into this. turns out i also had to set:
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.kerberos.service.name=kafka
On Thu, Nov 24, 2016 at 8:54 PM, Koert Kuipers wrote:
> i have a secure kafka 0.10.1 cluster using SASL_PLAINTEXT
>
> the kafka servers seem fine, and
i have a secure kafka 0.10.1 cluster using SASL_PLAINTEXT
the kafka servers seem fine, and i can start console-consumer and
console-producer and i see the message i type in the producer pop up in the
consumer. no problems so far.
for example to start console-producer:
$ kinit
$ export KAFKA_OPTS=
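The truncated KAFKA_OPTS export above typically points the JVM at a JAAS configuration file for Kerberos; a minimal sketch under that assumption (the file path, broker hostname, and properties file name here are all illustrative, not from the thread):

```shell
# hypothetical JAAS file location; adjust to your environment
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"

# client-sasl.properties (illustrative) would carry the client-side settings:
#   security.protocol=SASL_PLAINTEXT
#   sasl.kerberos.service.name=kafka
bin/kafka-console-producer.sh --broker-list broker1:9092 --topic test \
  --producer.config client-sasl.properties
```

Note this configures the console producer itself; the fix quoted earlier in the thread (the `producer.security.protocol` / `producer.sasl.kerberos.service.name` keys) applies to tools such as MirrorMaker that prefix embedded-producer settings.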
Which version of Kafka are you using with your consumer? Is it the Scala
or the Java consumer?
Guozhang
On Wed, Nov 23, 2016 at 6:38 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:
> No, that is not the reason. Initially all the partitions were assigned the
> messages and those
Hi Damian,
It processes correctly when using KStreamTestDriver.
Best,
Hamid
Sure, I guess the topic is auto-created the first time I start the topology
and the second time it's there already. We could create the topics up front
ourselves, or even use an admin call from inside the code.
That said, as a user, I think it would be great to have a function in the
Kafka Str
Hi Zach,
There is a rumour that today, Thursday, is a holiday?
In server.properties, how are you configuring your server?
Specifically, what are these attributes?
num.network.threads=
num.io.threads=
socket.send.buffer.bytes=
socket.receive.buffer.bytes=
socket.request.max.bytes=
num.pa
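For reference, the values shipped in the stock server.properties of this era are approximately the following (check the file in your own distribution; these are not Zach's actual settings):

```properties
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=1
```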
Mikael,
When you use `through(..)`, topics are not created by Kafka Streams. You
need to create them yourself before you run the application.
Thanks,
Damian
On Thu, 24 Nov 2016 at 11:27 Mikael Högqvist wrote:
> Yes, the naming is not an issue.
>
> I've tested this with the topology described ear
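Creating the `through(..)` topic up front, as Damian suggests, can be done with the topics CLI before starting the application; a sketch assuming a local ZooKeeper, with illustrative topic name and counts:

```shell
# Create the intermediate topic referenced by through() before the app starts.
# Topic name, partition count, and replication factor are examples only.
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --topic my-through-topic --partitions 4 --replication-factor 1
```

The partition count of the intermediate topic generally needs to match what the upstream stream expects, so it is worth choosing it deliberately rather than relying on auto-creation defaults.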
Hi Hamid,
Out of interest - what are the results if you use KStreamTestDriver?
Thanks,
Damian
On Thu, 24 Nov 2016 at 12:05 Hamidreza Afzali <
hamidreza.afz...@hivestreaming.com> wrote:
> The map() returns non-null keys and values and produces the following
> stream:
>
> [KSTREAM-MAP-01]
Anybody?!? This is very disconcerting!
From: Zac Harvey
Sent: Wednesday, November 23, 2016 5:07:45 AM
To: users@kafka.apache.org
Subject: Messages intermittently get lost
I am playing around with Kafka and have a simple setup:
* 1-node Kafka (Ubuntu) server
*
The map() returns non-null keys and values and produces the following stream:
[KSTREAM-MAP-01]: A , 1
[KSTREAM-MAP-01]: A , 2
[KSTREAM-MAP-01]: B , 3
The issue arises when the combination of map() and groupByKey().count() is used
with ProcessorTopologyTestDriver.
I have
Yes, the naming is not an issue.
I've tested this with the topology described earlier. Every time I start
the topology with a call to .through() that references a topic that does
not exist, I get an exception from the UncaughtExceptionHandler:
Uncaught exception org.apache.kafka.streams.errors.St
Hi,
Thanks Sachin for your great effort. 😃
The last settings you suggested actually did the job for me. I still do not
understand why my previous settings did not work; however, I learned a lot
about log cleaning.
Thank you so much for your quick replies and time.
Cheers,
Regards,
Eranga Hesha