We have two applications that consume all messages from one Kafka cluster.
We found that the MessagesPerSec metric started to diverge after some time.
One of them matches the MessagesInPerSec metric from the Kafka broker,
while the other is lower than the broker metric and appears to have some
mess…
Hi,
I am using the high-level consumer, and every 10 seconds I see that the
consumerInstance exits the while loop below:
ConsumerIterator it = stream.iterator();
CustomMessage customMessage;
while (it.hasNext()) {
    customMessage = deSerializeObject(it.next().message());
    // ... process customMessage ...
}
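One plausible cause, purely an assumption since the consumer config isn't
shown: if consumer.timeout.ms is set to 10000, hasNext() throws a
ConsumerTimeoutException after 10 seconds with no messages, which ends the
loop. A minimal sketch of handling that case:

    // Sketch, assuming consumer.timeout.ms=10000 in the consumer properties;
    // hasNext() then throws after 10s of inactivity instead of blocking forever.
    try {
        while (it.hasNext()) {
            customMessage = deSerializeObject(it.next().message());
            // ... process customMessage ...
        }
    } catch (kafka.consumer.ConsumerTimeoutException e) {
        // no message arrived within consumer.timeout.ms; retry or shut down
    }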
Hello,
We have a number of different scenarios within our company that we are
considering Kafka for.
There is one case in particular that has caused debate. The relevant
characteristics are:
- Very high throughput - 1000's of messages/second.
- Very bursty traffic with unpredictable su…
OK, it seems you had a controller migration some time ago, and the old
controller (broker 0) did not de-register its listeners even though its
controller modules, like the "partition state machine", had already been
shut down. You can try to verify this through the active-controller metrics.
If that is the ca…
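For reference, one way to read the active-controller metric is over JMX; a
minimal sketch, assuming the broker exposes JMX on localhost:9999 (the port
is an assumption):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ActiveControllerCheck {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                // should be 1 on exactly one broker in a healthy cluster, 0 on the others
                Object value = conn.getAttribute(
                    new ObjectName(
                        "kafka.controller:type=KafkaController,name=ActiveControllerCount"),
                    "Value");
                System.out.println("ActiveControllerCount = " + value);
            }
        }
    }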
Hello all,
This is a very interesting discussion. I’ve been thinking of a similar use case
for Kafka over the last few days.
The usual data workflow with Kafka is most likely something like this:
- ingest with Kafka
- process with Storm / Samza / whathaveyou
- put some processed data back on Kafk…
If I recall correctly, setting log.retention.ms and log.retention.bytes to
-1 disables both.
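As a concrete illustration, a hedged server.properties sketch; double-check
the exact semantics against the docs for your broker version:

    # disable time-based retention (no age limit on log segments)
    log.retention.ms=-1
    # disable size-based retention (no size limit per partition)
    log.retention.bytes=-1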
On Fri, Jul 10, 2015 at 1:55 PM, Daniel Schierbeck <
daniel.schierb...@gmail.com> wrote:
>
> > On 10. jul. 2015, at 15.16, Shayne S wrote:
> >
> > There are two ways you can configure your topics, log co…
The Kafka documentation here (
http://kafka.apache.org/081/documentation.html#topic-config) mentions the
following as an example:
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic
--partitions 1 --replication-factor 1 --config max.message.bytes=64000
--config flush.messages=1
> On 10. jul. 2015, at 15.16, Shayne S wrote:
>
> There are two ways you can configure your topics, log compaction and with
> no cleaning. The choice depends on your use case. Are the records uniquely
> identifiable and will they receive updates? Then log compaction is the way
> to go. If they a…
Is there a definitive way to check whether a Kafka broker is up and ready to
accept connections? Can I check some endpoint, etc.? This could also be used
when rolling restarts are done. Currently I look at logs or run some smoke
tests by publishing/consuming some messages. Just wanted to reach out to
the…
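As far as I know there is no built-in health endpoint in this version of
Kafka; a crude sketch that only verifies the broker port accepts TCP
connections (host and port are assumptions, and this says nothing about
partition leadership or ISR state):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BrokerPortCheck {
        static boolean isReachable(String host, int port, int timeoutMs) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), timeoutMs);
                return true;
            } catch (Exception e) {
                return false;
            }
        }

        public static void main(String[] args) {
            System.out.println(isReachable("localhost", 9092, 1000));
        }
    }

A complementary check is whether the broker has registered itself under
/brokers/ids in ZooKeeper.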
OK, in that case then I'm thinking that you ran into issues that were a
natural result of the Zookeeper ensemble having very high CPU usage.
Unfortunate, but this would not be an unexpected situation when your ZK
ensemble is having significant problems.
-Todd
On Fri, Jul 10, 2015 at 10:21 AM, Ch
Yes, there were messages in the controller logs such as
DEBUG [OfflinePartitionLeaderSelector]: No broker in ISR is alive for
[topic1,2]. Pick the leader from the alive assigned replicas:
(kafka.controller.OfflinePartitionLeaderSelector)
ERROR [Partition state machine on Controller 0]: Error whil…
Hi Simon,
The API will be available in the next release, which is planned in a
month.
In the meantime you could start trying it out from trunk if you want.
Guozhang
On Fri, Jul 10, 2015 at 1:24 AM, Simon Cooper <
simon.coo...@featurespace.co.uk> wrote:
> I'm updating the kafka APIs we use t
Krish,
If you only add a new broker (for example, broker 3) to your cluster
without doing anything else, that broker will not automatically get any
topic-partitions migrated to it, so I suspect at least some admin tools
were executed.
The log exceptions you showed in the previous emails…
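For context, partition movement has to be triggered explicitly, e.g. with
the reassignment tool; a hedged sketch (the JSON file describing the new
replica assignment is an assumption):

    bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file reassign.json --execute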
Hi everyone,
The same errors can be seen when using embedded Kafka and embedded ZooKeeper
in unit tests. They're absolutely normal. As long as you see a successful
connection, it's all good!
Kind regards,
Radek Gruchalski
ra...@gruchalski.com
Hi Jeff,
I haven't tried this out, but I am planning to. Just a quick question: we
have TestHarness in Kafka that brings up Kafka and ZooKeeper and also tears
them down. Have you tried using it?
Thanks,
Mayuresh
On Fri, Jul 10, 2015 at 10:09 AM, Jeff Gong wrote:
> To follow up and provide a
Todd, the Kafka problems started when one of three ZooKeeper nodes was
restarted.
On Thu, Jul 9, 2015 at 12:10 PM, Todd Palino wrote:
> Did you hit the problems in the Kafka brokers and consumers during the
> Zookeeper problem, or after you had already cleared it?
>
> For us, we decided to skip
To follow up and provide a little more context on my second bullet point:
when I run any command for the first time on the command line that requires
connecting to this code-instantiated ZK server, I get this specific error:
> bin/kafka-topics.sh --list --zookeeper localhost:2181
java.net.ConnectException…
So we think we have a process to fix this issue via ZooKeeper. If anyone has
any thoughts, please let me know.
First, get the “state” from a good partition, to get the correct epochs:
In /usr/local/zookeeper/zkCli.sh
[zk: localhost:2181(CONNECTED) 4] get /brokers/topics/topic1/partitions/6/state
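For reference, that state znode should return a small JSON blob along these
lines (values here are placeholders; controller_epoch and leader_epoch are
the epochs of interest):

    {"controller_epoch":8,"leader":1,"version":1,"leader_epoch":12,"isr":[1,2]}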
There are two ways you can configure your topics, log compaction and with
no cleaning. The choice depends on your use case. Are the records uniquely
identifiable and will they receive updates? Then log compaction is the way
to go. If they are truly read only, you can go without log compaction.
We…
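For reference, a hedged sketch of enabling compaction at topic creation,
using the same topic-config syntax shown earlier in this digest (the topic
name and ZooKeeper address are placeholders):

    bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic \
      --partitions 1 --replication-factor 1 --config cleanup.policy=compact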
I don't want to endorse this use of Kafka, but assuming you can give your
messages unique identifiers, I believe using log compaction will keep all
unique messages forever. You can read about how consumer offsets stored in
Kafka are managed using a compacted topic here:
http://kafka.apache.org/docum…
As I mentioned earlier, this feature has not yet been released, but our
pull request has been approved by Quantifind. Here it is:
https://github.com/quantifind/KafkaOffsetMonitor/pull/58/files
If you would like to use it now, you would have to build the Offset Monitor
jar yourself using code from for…
Adam,
I tried configuring KafkaOffsetMonitor and it is working fine.
But how can we integrate it with Graphite to add alerting? Can you please
explain in detail, and if you have any docs, can you please provide them?
Thanks & Regards,
-Anandh Kumar
On Fri, Jul 10, 2015 at 12:58 PM, Adam Dubiel wrote:
> We are u
I'd like to use Kafka as a persistent store – sort of as an alternative to
HDFS. The idea is that I'd load the data into various other systems in
order to solve specific needs such as full-text search, analytics, indexing
by various attributes, etc. I'd like to keep a single source of truth,
howeve…
I'm updating the Kafka APIs we use to the new standalone ones, but it looks
like the new consumer isn't ready yet (the code has lots of placeholders etc.),
and there's only the producer in the Javadoc at
http://kafka.apache.org/082/javadoc/index.html. Is there an ETA on when the new
consumer…
We are using kafka offset monitor (http://quantifind.com/KafkaOffsetMonitor/),
which we recently integrated with Graphite to add alerting and better
graphing - it should be accessible in the newest version, not yet released. It
works only with ZK offsets though.
2015-07-10 9:24 GMT+02:00 Anandh Kumar
Thanks Rahul for your reply
On Fri, Jul 10, 2015 at 11:11 AM, Rahul Jain wrote:
> Burrow works only if you are storing the offsets in a Kafka topic, not
> ZooKeeper. You can also take a look at Kafka Web Console (it has a memory
> leak bug, but a patch is available).
> On 10 Jul 2015 09:34, "Jian