We ran through this a few months ago; here is a list of things and tools
I'd recommend:
- Install Burrow. It monitors your consumers and makes sure they are not
lagging behind; it also covers other corner cases that can get tricky with
the offset checker. We query Burrow (it has an API) and then g
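For reference, here is a minimal sketch of polling Burrow over HTTP from Java. The host, port, cluster, and group names are placeholders, and the endpoint path follows Burrow's v2 HTTP API, which may differ in your Burrow version:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BurrowCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host/cluster/group; Burrow's consumer status endpoint
        // returns JSON with an overall status (e.g. OK/WARN/ERR) to alert on.
        URL url = new URL("http://burrow-host:8000/v2/kafka/local/consumer/my-group/status");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        System.out.println(body);
    }
}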
Hi Buck,
What are your settings for:
- acks
- request.timeout.ms
- timeout.ms
- min.insync.replicas (on the broker)
Thanks,
Alex
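For context, a hedged sketch of producer settings aimed at surviving a single-broker failure with replication factor 2, assuming the 0.9 Java producer; broker addresses are placeholders. Note that with a replication factor of 2, setting min.insync.replicas=2 on the broker makes acks=all writes fail as soon as one broker dies, so min.insync.replicas=1 is the usual choice for that setup:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class ProducerSettings {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder hosts
        props.put("acks", "all");                // wait for all in-sync replicas
        props.put("retries", 5);                 // ride out leader re-election
        props.put("request.timeout.ms", 30000);  // client-side request timeout
        props.put("timeout.ms", 30000);          // server-side ack timeout (0.8.2/0.9 producer)
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}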
On Fri, Dec 18, 2015 at 1:23 PM, Buck Tandyco
wrote:
> I'm stress testing my Kafka setup. I have a producer that is working just
> fine and then I kill
Mark, what database are you using?
If you are using MySQL...
There is a not-yet-finished Kafka MySQL Connector at
https://github.com/wushujames/kafka-mysql-connector. It tails the MySQL binlog,
and so will handle the situation you describe.
But, as I mentioned, I haven't finished it yet.
If
I'm stress testing my Kafka setup. I have a producer that is working just fine
and then I kill off one of the two brokers that I have running with replication
factor of 2. I'm able to keep receiving from my consumer thread but my
producer generates this exception: "kafka.common.FailedToSendMessageException"
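The 0.8 producer raises FailedToSendMessageException only after its internal retries are exhausted, so one hedged mitigation is to raise the retry settings and require acks from all replicas. Class names below are from the old Scala producer API; broker addresses and topic are placeholders:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ResilientOldProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholders
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "-1");    // wait for all in-sync replicas
        props.put("message.send.max.retries", "10"); // default is 3
        props.put("retry.backoff.ms", "500");        // give leader election time to finish
        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<>("test-topic", "hello")); // placeholder topic
        } catch (kafka.common.FailedToSendMessageException e) {
            // thrown only after all retries failed, e.g. no leader was available
        } finally {
            producer.close();
        }
    }
}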
Hi,
Why don't your consumers instead subscribe to a single topic used to broadcast
to all of them? That way your consumers and producer will be much simpler.
Cheers,
Jens
–
Sent from Mailbox
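A rough sketch of the consumer side of that suggestion, assuming the 0.9 Java consumer; the topic name and bootstrap address are placeholders. The key point is that each consumer uses its own group.id, so every one of them receives every message on the broadcast topic:

import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BroadcastListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // Unique group per consumer: each group gets its own copy of the stream.
        props.put("group.id", "listener-" + UUID.randomUUID());
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("broadcast")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}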
On Fri, Dec 18, 2015 at 4:16 PM, Abel . wrote:
> Hi,
> I have this scenario where I need
Hi,
I have this scenario where I need to send a message to multiple topics. I
create a single KafkaProducer, prepare the payload and then I call the send
method of the producer for each topic with the corresponding ProducerRecord
for the topic and the fixed message. However, I have noticed that thi
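For reference, a minimal sketch of that pattern with the 0.9 Java producer; topic names and the broker address are placeholders. send() is asynchronous, so the loop itself is cheap, and a callback plus flush() surfaces any per-topic failures:

import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MultiTopicSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        List<String> topics = Arrays.asList("topic-a", "topic-b", "topic-c"); // placeholders
        String payload = "the same message for every topic";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (String topic : topics) {
                // Asynchronous send; the callback reports failures per topic.
                producer.send(new ProducerRecord<>(topic, payload), (metadata, exception) -> {
                    if (exception != null) {
                        System.err.println("send failed: " + exception);
                    }
                });
            }
            producer.flush(); // block until all queued sends complete
        }
    }
}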
I have a couple of questions on how to monitor MirrorMaker using
ConsumerOffsetChecker.
1. When viewing a topic I see multiple rows. Is each row for one partition?
2. I am looking to write a Nagios plugin to alert if MirrorMaker is running but
isn't keeping up. For example there is a network conn
Ewen,
Thanks for the reply. We'll proceed while keeping all of your points in
mind. I looked around for a more focused forum for the jdbc connector
before posting here but didn't come across the confluent-platform group.
I'll direct any more questions about the jdbc connector there. I'll also
c
Thank you very much Gwen
-----Original Message-----
From: Gwen Shapira [mailto:g...@confluent.io]
Sent: Thursday, December 17, 2015 3:45 PM
To: users@kafka.apache.org
Subject: Re: Local Storage
Hi,
Kafka *is* a data store. It writes data to files on the OS file system. One
directory per partition
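To illustrate: assuming log.dirs=/var/kafka-logs and a topic named my-topic, the on-disk layout looks roughly like this (names are placeholders):

/var/kafka-logs/
  my-topic-0/                    <- one directory per topic-partition
    00000000000000000000.log    <- log segment holding the messages
    00000000000000000000.index  <- offset index for that segment
  my-topic-1/
    ...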
Yes, that’s right. It’s just work for no real gain :)
-Todd
On Fri, Dec 18, 2015 at 9:38 AM, Marko Bonaći
wrote:
> Hmm, I guess you're right, Todd :)
> Just to confirm, you meant that, while you're changing the exported file it
> might happen that one of the segment files becomes eligible for cle
If you don't like messing w/ ZK directly, another alternative is to
manually seek to offset 0 on all relevant topic-partitions (via
OffsetCommitRequest or your favorite client api) and change the
auto-offset-reset policy on your consumer to earliest/smallest. Bonus is
that this should also work for
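A rough sketch of that approach with the 0.9 Java consumer; topic, group, and host are placeholders, and note that seekToBeginning takes varargs in 0.9 but a Collection in later client versions:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class RewindToZero {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "group-name");              // the group to rewind
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) { // placeholder topic
                partitions.add(new TopicPartition(p.topic(), p.partition()));
            }
            consumer.assign(partitions);
            consumer.seekToBeginning(partitions.toArray(new TopicPartition[0]));
            for (TopicPartition tp : partitions) {
                consumer.position(tp); // force the seek to resolve before committing
            }
            consumer.commitSync();     // persist the beginning as the committed offset
        }
    }
}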
Hmm, I guess you're right, Todd :)
Just to confirm, you meant that, while you're changing the exported file it
might happen that one of the segment files becomes eligible for cleanup by
retention, which would then make the imported offsets out of range?
Marko Bonaći
Monitoring | Alerting | Anomaly D
That works if you want to set to an arbitrary offset, Marko. However in the
case the OP described, wanting to reset to smallest, it is better to just
delete the consumer group and start the consumer with auto.offset.reset set
to smallest. The reason is that while you can pull the current smallest
o
You can also do this:
1. stop consumers
2. export offsets from ZK
3. make changes to the exported file
4. import offsets to ZK
5. start consumers
e.g.
bin/kafka-run-class.sh kafka.tools.ExportZkOffsets --group group-name
--output-file /tmp/zk-offsets --zkconnect localhost:2181
bin/kafka-run-class.sh kafka.tools.ImportZkOffsets --input-file /tmp/zk-offsets
--zkconnect localhost:2181
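For reference, the exported file pairs a ZooKeeper offset path with an offset value on each line, so step 3 usually means editing the number after the colon. The exact path layout below is from memory and may differ between versions:

/consumers/group-name/offsets/topic-name/0:0
/consumers/group-name/offsets/topic-name/1:0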
Hi,
I noticed that a consumer in the new consumer API supports setting the
offset for a partition to the beginning. I assume doing so would also update
the offset in ZooKeeper eventually.
Cheers,
Jens
On Friday, December 18, 2015, Akhilesh Pathodia
wrote:
> Hi,
>
> I want to reset the kafka offset
The way to reset to smallest is to stop the consumer, delete the consumer
group from Zookeeper, and then restart with the property set to smallest.
Once your consumer has recreated the group and committed offsets, you can
change the auto.offset.reset property back to largest (if that is your
preference).
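A hedged sketch of the deletion step using the plain ZooKeeper Java client; ZKUtil.deleteRecursive is available in ZooKeeper 3.4.6+, the connect string and group name are placeholders, and the consumer must be stopped first:

import org.apache.zookeeper.ZKUtil;
import org.apache.zookeeper.ZooKeeper;

public class DeleteConsumerGroup {
    public static void main(String[] args) throws Exception {
        // The old ZooKeeper-based consumer keeps its state under /consumers/<group>.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, event -> { });
        try {
            ZKUtil.deleteRecursive(zk, "/consumers/group-name"); // placeholder group
        } finally {
            zk.close();
        }
    }
}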
Hi,
I want to reset the Kafka offset in ZooKeeper so that the consumer will
start reading messages from the first offset. I am using Flume as a consumer to
Kafka. I have set the Kafka property kafka.auto.offset.reset to "smallest",
but it does not reset the offset in ZooKeeper and that's why Flume wil
And in doing so I've answered my own question (I think!) - I don't
believe the topic has been created on that cluster yet...
On 18 December 2015 at 10:56, Damian Guy wrote:
> I was just trying to get it to generate the JSON for reassignment and the
> output was empty, i.e.,
>
> offsets.json
> ===
I was just trying to get it to generate the JSON for reassignment and the
output was empty, i.e.,
offsets.json
=====
{
  "topics": [
    {"topic": "__consumer_offsets"}
  ],
  "version": 1
}
bin/kafka-reassign-partitions.sh --zookeeper blah
--topics-to-move-json-file ~/offsets.json
Hi All,
I'm using a 2-node cluster (with the 3rd ZooKeeper instance running on one of
these machines).
For some reason, the data is not being replicated to the other Kafka
process.
Kafka Version : kafka_2.10-0.8.2.1
# ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic topic1
Topic:topic1