// Best to do your generic type declaration at class level and refer to it later:
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
java.util.HashMap<String, String> hashMap = new java.util.HashMap<>();
hashMap.put("topic1","topi
Good to see that you've almost got it figured out. Is there data in
Cassandra? Did you check that? I have never used the Cassandra connector,
so I don't know whether you've set it up correctly. You'll have to start by
checking at the source for data. That's all I can help with at this point.
Sorry
-B
Thanks Matthias.
> On May 9, 2018, at 10:57 AM, Matthias J. Sax wrote:
>
> You might want to look into Kafka Streams. In particular KTable and
> Interactive Queries (IQ).
>
> A `put` would be a write to the table source topic, while a `get` can be
> implemented via IQ.
>
> For subscribe to par
Hello,
Based on the Quick start on Kafka site, I was trying to use the
kafka-consumer-groups command line script
PS C:\kafka_2.11-1.1.0\bin\windows> .\kafka-consumer-groups.bat
>>> --new-consumer --bootstrap-server localhost:9092 --list
>>> The [new-consumer] option is deprecated and will be remo
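As that warning says, the `--new-consumer` flag is deprecated; in recent releases passing `--bootstrap-server` alone already selects the new consumer. A sketch of the equivalent invocation, keeping the same localhost:9092 assumption as above:

```shell
# --new-consumer is no longer needed; --bootstrap-server implies it
.\kafka-consumer-groups.bat --bootstrap-server localhost:9092 --list
```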
Can you give us more information:
- the release of Spark and Kafka you're using
- anything interesting from the broker / client logs - feel free to use
pastebin to pass a snippet if needed
Thanks
On Wed, May 9, 2018 at 3:42 AM, Pena Quijada Alexander <
a.penaquij...@reply.it> wrote:
> Hi all,
>
> We're faci
You might want to look into Kafka Streams. In particular KTable and
Interactive Queries (IQ).
A `put` would be a write to the table source topic, while a `get` can be
implemented via IQ.
To subscribe to a particular key, you would consume the whole source
topic and filter for the key you are inter
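The materialization idea behind a KTable can be illustrated with a plain-Java sketch (class and method names here are made up for illustration): a table view is just a changelog replayed with last-write-per-key-wins semantics, where a null value acts as a delete (tombstone), which is also what log compaction preserves.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChangelogView {
    // Replays a changelog of (key, value) records into a key/value view:
    // the latest record per key wins; a null value deletes the key (tombstone).
    static Map<String, String> materialize(List<Map.Entry<String, String>> changelog) {
        Map<String, String> view = new HashMap<>();
        for (Map.Entry<String, String> record : changelog) {
            if (record.getValue() == null) {
                view.remove(record.getKey());                  // tombstone: drop the key
            } else {
                view.put(record.getKey(), record.getValue());  // upsert: a "put"
            }
        }
        return view;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> log = List.of(
                new SimpleEntry<>("k1", "v1"),
                new SimpleEntry<>("k2", "v1"),
                new SimpleEntry<>("k1", "v2"),   // overwrites k1
                new SimpleEntry<>("k2", null));  // deletes k2
        System.out.println(materialize(log));    // prints {k1=v2}
    }
}
```

In Kafka Streams itself the `put` is a produce to the table's source topic and the `get` goes through an interactive-query store lookup, but the replay semantics are the same.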
Hi,
Assuming I got your question right - in a 3-node setup, that's a "cluster
down" scenario if one of your brokers goes down.
The rule of thumb in distributed computing is that ceil(N/2)-1 total failures
are allowed, where N is your node count.
So what you are testing for will probably require 2 more nodes.
Regards,
On
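The rule of thumb above can be sketched numerically; note that ceil(N/2)-1 is the same as the usual majority-quorum bound floor((N-1)/2). The class and method names below are made up for illustration:

```java
public class QuorumMath {
    // Maximum node failures an N-node majority quorum can tolerate:
    // ceil(N/2) - 1, which equals (n - 1) / 2 under integer division.
    static int maxTolerableFailures(int n) {
        return (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(maxTolerableFailures(3)); // prints 1: a 3-node setup survives one failure
        System.out.println(maxTolerableFailures(5)); // prints 2: tolerating 2 failures needs 5 nodes
    }
}
```

This matches the advice in the message: to survive the failures being tested for, roughly two more nodes would be needed.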
Hard to say. Might be a Spark issue though...
On 5/9/18 3:42 AM, Pena Quijada Alexander wrote:
> Hi all,
>
> We're facing some problems with our Spark Streaming jobs; since yesterday we
> have been getting the following error in our logs when the jobs fail:
>
> java.lang.AssertionError: assertion fail
The tests have passed and my changes are covered by existing tests written
for LogSegmentTest. I would be grateful if someone could confirm the same.
On the Windows platform, some log/segment/index tests will always fail because
of the file lock/unlock issue, but they all pass on Linux. Also, the build
Hi,
I have a 3-node cluster with a single-replica topic with three partitions. I
am not using a 3-way replicated topic because my brokers use a distributed
backend.
Node1: broker1 -> log.dir=/nfsexport/broker1 (say partition1 owner)
Node2: broker2 -> log.dir=/nfsexport/broker2 (say partition2 owner)
N
A patch could be rejected if:
1) there is no test case to prove the feature works
2) the patch causes a failure in an existing test case
3) the patch errors in an existing test case
4) the patch works in only 1 version of an OS (and doesn't work in other
versions)
5) implementing the patch will place a d
We would like to use Kafka as a key/value store. We need put, get, and
subscribe for a particular “key”.
Any pointers how to do this?
Thanks
Sudhir
Hello,
I suspect the answer is no, but I'm curious if there is any way to either
change a running cluster's zookeeper chroot or perform a "merge" of two
clusters such that their individual workloads can be distributed across the
combined set of brokers.
Thanks!
Luke
Hello,
This issue has been outstanding for a while and has been impacting us at both
development and deployment time. We have had to manually build the kafka core
jar and use it to work with Windows for over a year. The auto log/index
cleanup feature is very important for us on Windows because it helps us
avoi
Hi all,
I would like to apply log compaction configuration to every topic in my
kafka cluster, as default properties. These configuration properties are:
- cleanup.policy
- delete.retention.ms
- segment.ms
- min.cleanable.dirty.ratio
I have tried to place them in the server.properties
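For what it's worth, topic-level properties generally have broker-level defaults in server.properties under different, log.-prefixed names rather than the topic-level names themselves. A sketch of the likely equivalents (values shown are placeholders; the exact property names are worth double-checking against the broker configuration docs for your version):

```properties
# Broker-wide defaults for the topic-level properties listed above:
log.cleanup.policy=compact                     # default for cleanup.policy
log.cleaner.delete.retention.ms=86400000       # default for delete.retention.ms
log.roll.ms=604800000                          # default for segment.ms
log.cleaner.min.cleanable.dirty.ratio=0.5      # default for min.cleanable.dirty.ratio
```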
Hi,
yes, your case is the exception. In usual deployments kafka has to be
there 100% of the time.
So as the name "rolling restart" suggests, you usually upgrade / do
maintenance on boxes (a few at a time) depending on how your
topics are laid out across brokers.
On 09.05.2018 12:13, M. Manna
Hi all,
We're facing some problems with our Spark Streaming jobs; since yesterday we
have been getting the following error in our logs when the jobs fail:
java.lang.AssertionError: assertion failed: Beginning offset 562747 is after
the ending offset 562743 for topic elk-topic partition 0.
Any help ab
Thanks Jan. We have a 9-broker/ZooKeeper setup in production, and during
monthly maintenance we need to shut it down gracefully (or reasonably) to
do our work.
Are you saying that it's okay not to shut down the entire cluster?
Also, will this hold true even when we are trying to do rolling upgrade to
Hi,
this is expected. A graceful shutdown means the broker only shuts
down when it is not the leader of any partition.
Therefore you should not be able to gracefully shut down your entire
cluster.
Hope that helps
Best Jan
On 09.05.2018 12:02, M. Manna wrote:
Hello,
I have follo
Hello,
I have followed the graceful shutdown process by using the following (in
addition to the default controlled.shutdown.enable)
controlled.shutdown.max.retries=10
controlled.shutdown.retry.backoff.ms=3000
I am always having issues where not all the brokers shut down
gracefully. And it's a
Thank you, Williams.
I tried a different way and was able to add the Cassandra connector.
However, I am unable to fetch data in the console. Below are the details.
{"name": "packs2","config" : { "tasks.max": "1", "connector.class" :
"com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector",
Hi
We were on Kafka 0.10.2.1. While upgrading to 1.1, we bring down all 3
kafka brokers and make the change in the config file as shown below, which
is recommended in http://kafka.apache.org/11/documentation.html#upgrade, and
restart the brokers:
*inter.broker.protocol.version=1.1*
*log.message.
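For comparison, the upgrade documentation also describes a rolling variant that avoids a full-cluster shutdown: the inter-broker protocol stays pinned to the old version while the binaries are rolled, then is bumped in a second rolling bounce. A sketch (version strings taken from the messages above; check them against the upgrade notes for your exact release):

```properties
# Pass 1: roll each broker onto the 1.1 binaries with the protocol pinned:
inter.broker.protocol.version=0.10.2

# Pass 2: once every broker runs 1.1, bump and do a second rolling bounce:
inter.broker.protocol.version=1.1
```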