Hi Williams,
Thank you for the quick reply.
I am new to Kafka and stream processing, and I want to implement the solution through Docker. It is expected to contain Zookeeper, Kafka, Kafka Connect, and Cassandra.
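The compose file I am starting from looks roughly like this (a minimal sketch; the image tags, ports, and single-broker replication settings are my assumptions, not from the example):

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:4.1.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:4.1.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      # replication factor 1 is only suitable for a single-broker dev setup
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  connect:
    image: confluentinc/cp-kafka-connect:4.1.0
    depends_on: [kafka]
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: connect-cluster
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      # the Cassandra connector jar needs to be on the plugin path
      CONNECT_PLUGIN_PATH: /usr/share/java,/etc/kafka-connect/jars
  cassandra:
    image: cassandra:3.11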
I tried the example in "Getting started with the Kafka Connect Cassandra Source", but I am getting the below error while starting Kafka and am unable to proceed further.
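For reference, the source connector configuration from that example looks roughly like this (a sketch; the connector class and connect.cassandra.* property names are how I understand the stream-reactor docs and may differ by connector version, and the keyspace/table/topic names are just placeholders):

name=cassandra-source-orders
connector.class=com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector
tasks.max=1
connect.cassandra.contact.points=cassandra
connect.cassandra.port=9042
connect.cassandra.key.space=demo
# KCQL: publish rows from table "orders" to topic "orders-topic",
# tracking new rows by the "created" timestamp column
connect.cassandra.kcql=INSERT INTO orders-topic SELECT * FROM orders PK created INCREMENTALMODE=TIMESTAMP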
Thanks and regards
Hi,
I found the reason: the latest schema-registry module version isn't compatible with our internal Kafka version (0.11), hence the error message above.
I am investigating whether the previous version, 4.1.0-SNAPSHOT, would work with our internal version, as per this change -
https://github.com/confluent
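In case it helps anyone else, the build I am attempting looks like this (a sketch; the 4.1.x branch name is an assumption):

# build the previous schema-registry line into the local Maven repo
git clone https://github.com/confluentinc/schema-registry.git
cd schema-registry
git checkout 4.1.x
mvn clean install -DskipTests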
What's in docker? Kafka? kafka-connect? Did you try setting it up outside of a container first?
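If Connect did come up, you can sanity-check the worker through its REST API (assuming the default port 8083), e.g.:

# list installed connector plugins and running connectors
curl http://localhost:8083/connector-plugins
curl http://localhost:8083/connectors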
-B
On Thu, May 3, 2018 at 2:22 PM, Jagannath Bilgi wrote:
> Hi Team,
> I am trying to load data from Cassandra to Kafka using kafka-connect. I have
> tried results from a Google search but am unable to complete the setup successfully.
Hi Team,
I am trying to load data from Cassandra to Kafka using kafka-connect. I have tried results from a Google search but am unable to complete the setup successfully.
Could you please help me resolve this?
Note: I am trying to deploy using Docker.
Thanks and regards
Jagannath S Bilgi
Hi,
I'm trying to build kafka-connect-hdfs separately by following this FAQ -
https://github.com/confluentinc/kafka-connect-hdfs/wiki/FAQ
While compiling schema-registry, I get the following error:
[INFO] ------------------------------------------------------------------------
[ERROR] COMPILATION ERROR :
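For anyone else hitting this, the build order I am using is roughly the following (a sketch based on my reading of that FAQ; the exact set of repos and the branch to check out depend on your target version):

# build the upstream Confluent modules first, then the connector itself
for repo in common rest-utils schema-registry kafka-connect-hdfs; do
  git clone https://github.com/confluentinc/$repo.git
  (cd "$repo" && mvn clean install -DskipTests)
done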
Kafka Streams itself is backward compatible with 0.10.2.1 brokers.
However, the embedded cluster you are using is not part of the public API
and the 0.10.2.1 embedded cluster might have a different API than the
1.1 embedded cluster. Thus, you would need to rewrite your tests.
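For example, in 1.1 the integration tests typically bring the cluster up like this (a sketch; as said, this is internal API, so names and signatures can change between versions):

// Kafka 1.1 integration-test style usage of the internal embedded cluster
import org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster;
import org.junit.ClassRule;

public class MyStreamsIntegrationTest {
    // single-broker embedded cluster, started once for the whole test class
    @ClassRule
    public static final EmbeddedKafkaCluster CLUSTER = new EmbeddedKafkaCluster(1);

    // tests configure their clients via CLUSTER.bootstrapServers()
}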
-Matthias
On 4/21/18 10:3
Hi Johnathan.
Yes I decreased the retention on all topics simultaneously. I realized my
mistake later when I saw the cluster overloaded :)
I wasn't 100% sure so I looked it up, but it looks to me like log.cleaner.threads and log.cleaner.io.max.bytes.per.second only apply when a topic is using log compaction (cleanup.policy=compact), not plain time-based retention.
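For reference, these are the broker-side settings I had looked up (the values are just examples):

# only affects topics with cleanup.policy=compact
log.cleaner.threads=1
# cap cleaner I/O at ~10 MB/s, summed across all cleaner threads
log.cleaner.io.max.bytes.per.second=10485760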