Hi Rajib,
We can't see the arguments you're passing to the consumer, and the error
message indicates the consumer can't find the cluster.
Thanks,
Liam Clarke-Hutchinson
On Fri, 8 May 2020, 3:04 pm Rajib Deb, wrote:
> I wanted to check if anyone has faced this issue
>
> Thanks
> Rajib
>
> From: Rajib
Thanks! That helps a lot.
On Thu, May 7, 2020 at 8:10 PM Chris Toomey wrote:
> Right -- offset storage is an optional feature with Kafka, you can always
> choose to not use it and instead manage offsets yourself.
>
>
> On Thu, May 7, 2020 at 8:07 PM Boyuan Zhang wrote:
>
> > Thanks for the pointer!
Right -- offset storage is an optional feature with Kafka, you can always
choose to not use it and instead manage offsets yourself.
On Thu, May 7, 2020 at 8:07 PM Boyuan Zhang wrote:
> Thanks for the pointer! Does that mean I don't need to commit the offset
> when managing partitions and offsets manually?
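Managing offsets yourself can be as simple as persisting the next offset to read, per partition, somewhere durable. A minimal sketch in Python using only the standard library; the file path and the "topic:partition" key format are illustrative choices, not any Kafka convention:

```python
import json
import os

OFFSET_FILE = "offsets.json"  # illustrative location

def save_offsets(offsets, path=OFFSET_FILE):
    """Atomically persist a {"topic:partition": next_offset} map."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(offsets, f)
    os.replace(tmp, path)  # atomic rename: readers never see a partial file

def load_offsets(path=OFFSET_FILE):
    """Return the previously saved offsets, or an empty map on first run."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```

On startup you would load the map and seek() each manually assigned partition to its stored offset; after processing (or periodically) you write the map back, and Kafka's own offset storage simply goes unused.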
You really have to decide what behavior it is you want when one of your
consumers gets "stuck". If you don't like the way the group protocol
dynamically manages topic partition assignments or can't figure out an
appropriate set of configuration settings that achieve your goal, you can
always elect
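For reference, the liveness-related consumer settings usually in play here are the following; a sketch, with values matching the common client defaults of this period rather than recommendations:

```properties
# How long the group coordinator waits for a heartbeat before
# declaring the consumer dead and triggering a rebalance.
session.timeout.ms=10000
# How often the consumer's background thread sends heartbeats.
heartbeat.interval.ms=3000
# Maximum gap between poll() calls before the consumer is evicted
# from the group; raise this if processing a batch is slow.
max.poll.interval.ms=300000
```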
Thanks for the pointer! Does that mean I don't need to commit the offset
when managing partitions and offsets manually?
On Thu, May 7, 2020 at 8:02 PM Chris Toomey wrote:
> If you choose to manually assign topic partitions, then you won't be using
> the group protocol to dynamically manage partition assignments
I wanted to check if anyone has faced this issue
Thanks
Rajib
From: Rajib Deb
Sent: Sunday, May 3, 2020 9:51 AM
To: users@kafka.apache.org
Subject: Kafka - FindCoordinator error
Hi
I have written a Python consumer using the confluent-kafka package. After a
few hours of running, the consumer is dying w
If you choose to manually assign topic partitions, then you won't be using
the group protocol to dynamically manage partition assignments and thus
don't have a need to poll or heartbeat at any interval. See "Manual
Partition Assignment" in
https://kafka.apache.org/24/javadoc/org/apache/kafka/client
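As an illustration with the confluent-kafka Python client used earlier in this digest: the broker address, topic, and group id below are placeholders, and `next_positions` is a hypothetical helper name, but `assign()` and `TopicPartition` are the client's real manual-assignment API.

```python
# Sketch: manual partition assignment with the confluent-kafka client.

def next_positions(last_read):
    """Given {partition: last offset read}, return {partition: offset to resume at}."""
    return {p: off + 1 for p, off in last_read.items()}

def consume_from(topic, last_read, bootstrap="localhost:9092"):
    # Imported inside the function so next_positions() stays importable
    # without the confluent-kafka package installed.
    from confluent_kafka import Consumer, TopicPartition

    consumer = Consumer({
        "bootstrap.servers": bootstrap,
        "group.id": "manual-assign-demo",  # still required by the client config
        "enable.auto.commit": False,       # offsets are managed by the caller
    })
    # assign() bypasses the group protocol entirely: no dynamic rebalances,
    # so there is no rebalance-driven requirement to poll within an interval.
    consumer.assign([TopicPartition(topic, p, off)
                     for p, off in next_positions(last_read).items()])
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        yield msg
```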
So looking at the code of InsertField, it looks like there isn't an obvious
way, unless there's some way of chaining SMTs to achieve it.
The question then is: is it worth adding it to the InsertField SMT? The change
looks reasonably straightforward, and I'm happy to do a PR if it fits with
the aims of
Hi all,
I've been double checking the docs, and before I write a custom transform,
am I correct that there's no supported way in the InsertField SMT to insert
the key as a field of the value?
Cheers,
Liam Clarke-Hutchinson
Hi team,
I'm building an application which uses Kafka Consumer APIs to read messages
from topics. I plan to manually assign TopicPartitions to my consumer and
seek a certain offset before starting to read. I'll also materialize the
last read offset and reuse it when creating the consumer later.
W
Thanks John... I have to finish the work in a few days and need something
quick, so I'm looking for something ready-made. I will take a look at Jackson
JSON. By the way, what is the ByteArraySerializer? As the name suggests, it
is for byte arrays, so it won't work for a Java ArrayList, right?
On Thu, May 7, 2020 at
Hi Pushkar,
If you’re not too concerned about compactness, I think Jackson JSON
serialization is the easiest way to serialize complex types.
There’s also a KIP in progress to add a list serde. You might take a look at
that proposal for ideas to write your own.
Thanks,
John
On Thu, May 7, 20
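John's suggestion is Java-side (Jackson); the same idea in Python, which the original consumer in this digest uses, needs only the standard library's json module. The class names below are illustrative, shaped like Kafka's Serializer/Deserializer pair:

```python
import json

class JsonListSerializer:
    """Serialize a Python list to UTF-8 JSON bytes for use as a message value."""
    def serialize(self, value):
        return json.dumps(value).encode("utf-8")

class JsonListDeserializer:
    """Inverse of JsonListSerializer."""
    def deserialize(self, data):
        return json.loads(data.decode("utf-8"))
```

JSON is not compact, as John notes, but it round-trips lists (including nested structures) without any schema machinery.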
Hi Pushkar,
To answer your question about tuning the global store latency, I think the
biggest impact thing you can do is to configure the consumer that loads the
data for global stores. You can pass configs specifically to the global
consumer with the prefix: “global.consumer.”
Regarding the
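Concretely, any consumer setting can be scoped to the global store consumer by prepending that prefix in the Streams configuration; for example (the values are illustrative):

```properties
# Only the consumer that loads/restores global stores sees these;
# the main consumer keeps its own settings.
global.consumer.max.poll.records=1000
global.consumer.fetch.max.bytes=52428800
```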
Hey Henry, this was done with MM1 at LinkedIn at one point, but it requires
support for shallow iteration in KafkaConsumer, which was removed from
Apache Kafka a long time ago. Given recent talk of breaking changes in
Kafka 3.0, this might be a good time to revisit this.
Ryanne
On Thu, May 7, 20
I won't say it's a good idea to use Java-serialized classes for messages, but
you should use a ByteArraySerializer if you want to do such things.
On Thu, May 7, 2020 at 2:32 PM, Pushkar Deole wrote:
> Hi All,
>
> I have a requirement to store a record with key as java String and value as
> java's ArrayList
Hi All,
I have a requirement to store a record in a Kafka topic with the key as a
Java String and the value as a Java ArrayList. Kafka provides a
StringSerializer and StringDeserializer by default, but how can I get a
serializer for a Java ArrayList? Do I need to write my own? Can someone share if some
If you don't want to send the schema each time then serialise your data
using Avro (or Protobuf), and then the schema is held in the Schema
Registry. See https://www.youtube.com/watch?v=b-3qN_tlYR4&t=981s
If you want to update a record instead of inserting one, you can use the upsert
mode. See https://www
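In the Confluent JDBC sink connector, upsert mode is a configuration choice; a sketch of the relevant settings (the key field name is illustrative):

```properties
# Update-or-insert, keyed on the Kafka record key
insert.mode=upsert
pk.mode=record_key
pk.fields=id
```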
Hi Vishnu,
I wrote an implementation of org.apache.kafka.connect.storage.Converter,
included it in the KC worker classpath (then set it with the property
value.converter) to provide the schema that the JDBC sink needs.
That approach may work for 1).
For 2) KC can use upsert if your DB supports it
I saw this feature mentioned in the cloudera blog post:
https://blog.cloudera.com/a-look-inside-kafka-mirrormaker-2/
High Throughput Identity Mirroring
The batch does not need to be decompressed, deserialized, reserialized, and
recompressed if nothing has to be changed. Identity mirroring can ha
To help you understand my case in more detail: the error I see constantly is
the consumer losing its heartbeat, and hence apparently the group gets
rebalanced, based on the log I can see from the Kafka side:
[GroupCoordinator 11]: Member
consumer-3-f46e14b4-5998-4083-b7ec-bed4e3f374eb in group foo has fai