Hi,
Topic:part_1_repl_3_3 PartitionCount:1 ReplicationFactor:3 Configs:
Topic: part_1_repl_3_3 Partition: 0 Leader: 3 Replicas: 3,4,5 Isr: 4,3,5
I see that the replicas are 3,4,5 but the ISR is 4,3,5.
I have a doubt:
When the leader is 3, can the ISR be 4, 3, 5?
Does the ISR got to ha
Hi all,
I'm playing around with the kafka high level java api.
If I have multiple consumers in a group, consuming the same topic with a
single partition, only one consumer will receive messages, as is expected.
When shutting down the consumer, another consumer will automatically
consume the messa
Created https://issues.apache.org/jira/browse/KAFKA-2551
On Mon, Sep 14, 2015 at 7:22 PM, Guozhang Wang wrote:
> Yes you are right. Could you file a JIRA to edit the documents?
>
> Guozhang
>
> On Fri, Sep 11, 2015 at 4:41 PM, Stevo Slavić wrote:
>
> > That sentence is in both
> > https://svn.a
Hi,
I've been trying out the new consumer and have noticed that I get duplicate
messages when I stop the consumer and then restart (different processes,
same consumer group).
I consume all of the messages on the topic and commit the offsets for each
partition and stop the consumer. On the next ru
Hi all,
I have a cluster with 3 brokers. I've created a topic "test" with 3
partitions and replication factor 3. I produced 2 messages to
"test-2". Then I checked JMX metrices (LogEndOffset) which showed
2 for "test-2".
Now I deleted "test", the logs related to "test" are deleted in both
The first replica in the assigned replica list is the preferred replica, but it is not required
to be the leader at all times. If you execute a preferred leader election,
or enable auto.leader.rebalance.enable, then replica 4 will become the
leader again.
More can be read here:
- http://kafka.apache.org/documentat
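The rule described above can be stated compactly (a sketch only, with invented class and method names, not broker code): the preferred leader is the first broker in the assigned replica list, and a preferred-leader election moves leadership back to it, provided it is currently in the ISR.

```java
import java.util.List;

// Sketch of preferred-leader election: the preferred leader is the
// first broker in the assigned replica list. An election moves
// leadership back to it only if it is currently in the ISR;
// otherwise the current leader stays.
public class PreferredLeader {
    static int electPreferred(List<Integer> replicas, List<Integer> isr, int currentLeader) {
        int preferred = replicas.get(0);
        return isr.contains(preferred) ? preferred : currentLeader;
    }
}
```

For example, with replicas [4, 1, 2] and an ISR containing 4, an election hands leadership back to broker 4 even if broker 1 currently leads.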
I turned off compression and still get duplicates, but only 1 from each
topic.
Should the initial fetch offset for a partition be committed offset +1 ?
Thanks,
Damian
On 15 September 2015 at 14:07, Damian Guy wrote:
> Hi,
>
> I've been trying out the new consumer and have noticed that I get
> d
Hello Damian,
Yes, there's a +1 difference. See related discussion
http://mail-archives.apache.org/mod_mbox/kafka-users/201507.mbox/%3CCAOeJiJh2SMzVn23JsoWiNk3sfsw82Jr_-kRLcNRd-oZ7pR1yWg%40mail.gmail.com%3E
Kind regards,
Stevo Slavic.
On Tue, Sep 15, 2015 at 3:56 PM, Damian Guy wrote:
> I turn
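The +1 convention Stevo points to can be illustrated without the Kafka client at all (class and method names below are invented for the sketch): the committed offset stores the position of the *next* record to fetch, i.e. last processed offset + 1, so a restarted consumer resumes exactly at the committed offset without re-reading the last record.

```java
// Illustration of Kafka's offset-commit convention, in plain Java.
// The committed offset is the position of the NEXT record to fetch.
public class OffsetConvention {
    // Offset to commit after processing the record at `lastProcessed`.
    static long offsetToCommit(long lastProcessed) {
        return lastProcessed + 1;
    }

    // Where a restarted consumer resumes: exactly the committed offset.
    static long resumePosition(long committedOffset) {
        return committedOffset;
    }

    public static void main(String[] args) {
        long lastProcessed = 41;
        long committed = offsetToCommit(lastProcessed);
        // A new consumer in the same group resumes at offset 42,
        // so record 41 is not delivered again.
        System.out.println(resumePosition(committed)); // prints 42
    }
}
```

Committing the last processed offset itself (without the +1) is what produces exactly one duplicate per partition on restart.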
Hi All,
Below is my partition information for the topic **xx_json_topic**. This is a
Kafka cluster with three nodes.
All nodes up :
Topic: xx_json_topic  PartitionCount: 4  ReplicationFactor: 2  Configs:
Topic: xx_json_topic  Partition: 0  Leader: 1  Repli
Hello,
We have a set of processing jobs (in Samza) using key-compacted Kafka logs as a
durable Key-Value store. Recently, after some network troubles that resulted
in various parts of the infrastructure rebooting, we discovered that a key that
we expected to be "alive" was compacted out of the
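For readers unfamiliar with compaction semantics, here is a minimal sketch of last-value-per-key retention (plain Java, class and method names invented for illustration): the cleaner keeps only the latest value per key, and a null value is a tombstone that eventually removes the key entirely, subject to `delete.retention.ms` on a real broker.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of log-compaction semantics: keep only the latest
// value per key; a null value (tombstone) marks the key for removal.
// Real brokers delay tombstone removal by delete.retention.ms.
public class CompactionSketch {
    static Map<String, String> compact(String[][] log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] record : log) {
            String key = record[0], value = record[1];
            if (value == null) {
                latest.remove(key); // tombstone: the key disappears
            } else {
                latest.put(key, value); // newer value replaces older
            }
        }
        return latest;
    }
}
```

A key can therefore vanish from a compacted log either because a tombstone was written for it, or because the retention clock on an old tombstone expired during an outage.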
Moving the export statement to kafka-server-start.sh fixed the issue. I was
able to start kafka with JMX monitoring and run kafka-topics.sh.
Thanks Lance.
On Mon, Sep 14, 2015 at 6:43 PM, Lance Laursen
wrote:
> This is not a bug. The java process spawned by kafka-topics.sh is trying to
> bind
I put an answer to this on Stack Overflow. Basically, that's not how RF
works for Kafka. It's not a guarantee, it's just how the partitions are
created, and how it is reported when something is down (under-replicated
partitions). While there is an option to do auto leader rebalancing,
there's no eq
Yep,
It looks like this was only communicated originally to the dev list (and
not the users list), so it wasn't obvious to all!
Thanks,
Jason
On Mon, Sep 14, 2015 at 12:43 AM, Stevo Slavić wrote:
> Hello Jason,
>
> Maybe this answers your question:
>
> http://mail-archives.apache.org/mod_mbox
Hi,
We run a 10 node cluster in production with 5 zk nodes.
The cluster is operating fine without issues with other topics (we have
close to 10 topics).
However, when we try to create a new topic, it doesn't go through
successfully. I have tried it a couple of times with the same result.
Topic shows as create
Hi Apache Kafka,
I'm evaluating Apache Kafka for one of the projects I'm working on. I have
used ActiveMQ in the past, which makes using Kafka pretty straightforward.
One thing that I do not understand is the need for Zookeeper.
I understand what Zookeeper is, but I fail to understand what purpose
Hi there,
We had error logs for three messages that failed to produce to Kafka during
last week. All three failed on the same day, within a one-hour range. We checked
Kafka logs (server.log and statechange.log) but found no abnormal behaviors.
The exception is :
kafka.common.FailedToSendMes
I'd be interested to see:
https://issues.apache.org/jira/browse/KAFKA-2434 (has patch available, we
will be using 'old' consumer for some time)
https://issues.apache.org/jira/browse/KAFKA-2125 (seems rather serious,
unclear if no longer relevant with new code?)
https://issues.apache.org/jira/bro
Zookeeper is a distributed coordination service. Kafka uses Zookeeper for
various things like leader election, storing consumer-partition offsets etc.
More information on each service is available at
http://kafka.apache.org/documentation.html and https://zookeeper.apache.org/
I highly recommend re
Hi everyone,
Since Kafka doesn't have dead-letter queue support built in, I'm looking for
advice and best approaches to handle bad messages, or cases when the system is
going crazy: once you receive an exception, it basically means you're blocking
the whole kafka-stream from consuming other message
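One common pattern (sketched below with invented names; this is not a Kafka Streams API) is to catch per-message failures and route the raw message to a separate dead-letter sink, so one bad record does not block the rest of the stream:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the dead-letter pattern: wrap per-message processing and,
// on failure, route the message to a "DLQ" sink instead of letting
// the exception stop consumption. In a real setup the sink would be
// a produce() to a dedicated dead-letter topic.
public class DeadLetterSketch {
    static List<String> deadLetters = new ArrayList<>();

    static void process(String message, Consumer<String> handler) {
        try {
            handler.accept(message);
        } catch (RuntimeException e) {
            deadLetters.add(message); // park the bad record for later inspection
        }
    }
}
```

The dead-letter topic can then be reprocessed offline once the bug or bad payload is understood.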
So, this is expected behavior on the producer when it's unable to
communicate with the Kafka broker that is the leader for the partition the
message is being sent to.
First, if the design of your app allows, try to migrate to the new
producer API released in 0.8.1. It is fully asynchronous, and provides
callbacks
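The callback style of the newer producer looks roughly like this (a sketch using CompletableFuture as a stand-in; the real KafkaProducer send/callback API differs in its types, and the names here are invented):

```java
import java.util.concurrent.CompletableFuture;

// Sketch of asynchronous send-with-callback, the style the newer
// producer API uses. CompletableFuture stands in for the producer's
// future; this is NOT the actual KafkaProducer API.
public class AsyncSendSketch {
    static CompletableFuture<Long> send(String message) {
        // Pretend the broker acknowledged the record at offset 7.
        return CompletableFuture.supplyAsync(() -> 7L);
    }

    public static void main(String[] args) {
        send("hello").whenComplete((offset, error) -> {
            if (error != null) {
                System.err.println("send failed: " + error);
            } else {
                System.out.println("acked at offset " + offset);
            }
        }).join(); // block only so the demo completes before exit
    }
}
```

The point of the pattern: the caller is never blocked waiting for the broker, and failures surface in the callback rather than as a synchronous exception.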
Hi, I think there is no difference between shutting down a consumer and
killing a consumer.
For the whole system, it only means a consumer has left for some reason,
and the reason does not matter.
So if you kill a consumer, some consumer in the same consumer group should
take over and consume messages.
Cor
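The takeover described above can be sketched as a toy assignment rule (illustration only; the sorted-first rule is invented and is not Kafka's actual partition assignment algorithm): with a single partition, exactly one live group member owns it, and removing that member hands the partition to a survivor.

```java
import java.util.List;

// Sketch of group rebalancing with one partition: only one consumer
// in the group owns the partition at a time. When that consumer
// leaves (clean shutdown or kill, the group cannot tell the
// difference), the partition is reassigned to a surviving member.
public class RebalanceSketch {
    // Toy rule: the lexicographically first live consumer owns partition 0.
    static String ownerOfPartition0(List<String> liveConsumers) {
        return liveConsumers.stream().sorted().findFirst().orElse(null);
    }
}
```

With members {c1, c2}, c1 owns the partition; kill c1 and a rebalance gives it to c2, which resumes from the last committed offset.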
As a cluster, each node in the cluster should know the others to function
properly. A Cassandra cluster (I don't know ActiveMQ's mechanism), as an
example, has its own
protocol to communicate with the other nodes to know their condition.
For Kafka, each node stays independent; they use zookeepe