I have started multiple consumers with some time delay. Even after a long
period of time, the later-joined consumers do not get any partitions
assigned. Only one consumer is loaded with all the partitions. I don't see
any configuration parameter to change this behavior.
Did anyone face a similar issue?
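For context, a minimal sketch of how the 0.8.x high-level consumer is
normally set up so that partitions rebalance across instances; the group id,
topic name, and ZooKeeper address here are placeholders, not the poster's
actual values:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class RebalanceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder address
        // Every instance must use the same group.id; consumers in different
        // groups each receive all partitions instead of sharing them.
        props.put("group.id", "my-group");
        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream per instance; ZooKeeper-driven rebalancing spreads the
        // topic's partitions across all live consumers in the group.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("my-topic", 1); // placeholder topic
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            consumer.createMessageStreams(topicCountMap);
    }
}

If the delayed consumers were started with different group ids, one consumer
holding every partition is the expected outcome.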
I am using the following code to help Kafka stream listener threads exit
the blocking call of hasNext() on the ConsumerIterator. But the threads
never exit when they receive the allDone() signal. I am not sure whether I
am making a mistake. Please let me know if this is the right approach.
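The code itself is missing from this snippet, so here is a minimal sketch of
one common pattern for this (not the original code): it assumes
consumer.timeout.ms is set in the consumer properties so that hasNext()
throws ConsumerTimeoutException instead of blocking forever, and the allDone
flag below stands in for the poster's own allDone() signal:

import java.util.concurrent.atomic.AtomicBoolean;

import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;

public class ListenerThread implements Runnable {
    private final KafkaStream<byte[], byte[]> stream;
    private final AtomicBoolean allDone = new AtomicBoolean(false); // stand-in for allDone()

    public ListenerThread(KafkaStream<byte[], byte[]> stream) {
        this.stream = stream;
    }

    public void signalAllDone() { // hypothetical name for the shutdown signal
        allDone.set(true);
    }

    public void run() {
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (!allDone.get()) {
            try {
                // Blocks at most consumer.timeout.ms (e.g. "1000"), then throws.
                if (it.hasNext()) {
                    byte[] message = it.next().message();
                    // ... process message ...
                }
            } catch (ConsumerTimeoutException e) {
                // No message within the timeout; loop back and re-check allDone.
            }
        }
    }
}

Calling ConsumerConnector.shutdown() from another thread is the other common
way out: it causes the blocked hasNext() to return so the thread can exit.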
I am just wondering: if I have the number of replicas set to 3 and
min.insync.replicas > 1, does a read require a read quorum, or does the
leader always serve the read request?
Thanks & Regards,
2 PM, Ewen Cheslack-Postava
wrote:
> It has already been released, including a minor revision to fix some
> critical bugs. The latest release is 0.8.2.1. The downloads page has links
> and release notes: http://kafka.apache.org/downloads.html
>
> On Wed, Apr 29, 2015 at 10:22 PM
I see a lot of interesting features in the Kafka 0.8.2 beta. I am just
wondering when it will be released. Is there a timeline for that?
Thanks & Regards,
is whether you are using the same group id all the
> time?
>
> Jiangjie (Becket) Qin
>
> On 4/29/15, 3:17 PM, "Gomathivinayagam Muthuvinayagam"
> wrote:
>
> >I am using Kafka 0.8.2 and I am using Kafka-based storage for offsets.
> >Whenever I restart a consumer
I am using Kafka 0.8.2 and I am using Kafka-based storage for offsets.
Whenever I restart a consumer (high-level consumer API), it does not consume
messages that were posted while the consumer was down.
I am using the following consumer properties:
Properties props = new Properties();
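The snippet is cut off here; a plausible sketch of the kind of configuration
being described, assuming the 0.8.2 high-level consumer with Kafka-based
offset storage (the address and group id are placeholders):

import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class OffsetStorageSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // must stay stable across restarts
        props.put("offsets.storage", "kafka");            // store offsets in Kafka, not ZooKeeper
        props.put("dual.commit.enabled", "false");        // commit to Kafka only
        // With no committed offset for the group, "largest" skips anything
        // produced while the consumer was down; "smallest" replays from the
        // beginning instead.
        props.put("auto.offset.reset", "smallest");
        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    }
}

A group id that changes between restarts, or auto.offset.reset=largest with
no committed offset, are the usual reasons messages posted during downtime
get skipped.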
the client should read messages from.
>
> Use the high-level consumer if you don't care about offsets.
>
> Offsets are per partition and stored in ZooKeeper.
>
> regards
>
> On Tue, Apr 28, 2015 at 4:38 AM, Gomathivinayagam Muthuvinayagam <
> sankarm...@g
I am trying to set up a cluster where messages are never lost once they are
published. Say I have 3 brokers, and I also configure the number of replicas
to be 3; if I tolerate at most 1 failure, I can achieve the above
requirement. But when I post a message, how do I prevent Kafka from
accepting it when it cannot be fully replicated?
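A minimal sketch of the usual durability settings for this goal, assuming
the new Java producer shipped with 0.8.2 and a topic created with
min.insync.replicas=2; the broker address and topic name are placeholders:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DurableProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // acks=-1 ("all" in later releases): the leader acknowledges only
        // after the full in-sync replica set has the message. Together with
        // min.insync.replicas=2 on the topic, the broker rejects a produce
        // request when too few replicas are in sync, instead of accepting a
        // write that could be lost.
        props.put("acks", "-1");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<byte[], byte[]> producer =
            new KafkaProducer<byte[], byte[]>(props);
        producer.send(new ProducerRecord<byte[], byte[]>("my-topic", "hello".getBytes()));
        producer.close();
    }
}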
I have just posted the following question on Stack Overflow. Could anyone
answer it?
I would like to use the Kafka high-level consumer API and at the same time
disable auto commit of offsets. I tried to achieve this through the
following steps.
1) auto.commit.enable=false
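A minimal sketch of how auto.commit.enable=false is usually paired with a
manual commit on the 0.8.x high-level consumer; connection details are
placeholders:

import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("auto.commit.enable", "false");         // step 1: disable auto commit
        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // ... consume messages from the streams ...

        // Commit the current offsets of every partition this connector owns,
        // at whatever point the application considers the messages processed.
        consumer.commitOffsets();
    }
}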
I am trying to commit offset requests from a background thread, and so far I
am able to commit them. I am using the high-level consumer API.
So if I just use the high-level consumer API, with auto commit disabled and
Kafka as the storage for offsets, will the high-level consumer API
automatically use the committed offsets when it restarts?
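A minimal sketch of a background commit of this sort, assuming the 0.8.x
high-level consumer with Kafka offset storage and a scheduled executor
standing in for the poster's background thread:

import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class BackgroundCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("auto.commit.enable", "false");
        props.put("offsets.storage", "kafka");            // commit offsets to Kafka
        final ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Commit offsets every 10 seconds from a background thread.
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                consumer.commitOffsets();
            }
        }, 10, 10, TimeUnit.SECONDS);

        // ... consume from the streams on other threads ...
    }
}

On restart, a high-level consumer in the same group resumes from the last
offset committed for that group, so the committed offsets are picked up
automatically.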