Hi Zakee,
>message.send.max.retries is 1
Regards,
Madhukar
On Tue, Apr 28, 2015 at 6:17 PM, Madhukar Bharti wrote:
> Hi Zakee,
>
> Thanks for your reply.
>
> >message.send.max.retries
> 3
>
> >retry.backoff.ms
> 100
>
> >topic.metadata.refresh.interval.ms
> 600*1000
>
> These are my properties.
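For reference, a minimal sketch of how these settings might be wired into
the 0.8.x producer; the broker list, topic, key, and payload below are
placeholders, not taken from the original mail:

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.util.Properties;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder brokers
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("message.send.max.retries", "3");
props.put("retry.backoff.ms", "100");
props.put("topic.metadata.refresh.interval.ms", "600000"); // 600 * 1000 ms
Producer<String, String> producer =
    new Producer<String, String>(new ProducerConfig(props));
producer.send(new KeyedMessage<String, String>("my-topic", "key", "value"));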
Hi,
I was wondering what options there are for horizontally scaling Kafka
consumers. Basically, if I have 100 partitions and 10 consumers, and want
to temporarily scale up to 50 consumers, what options do I have?
So far I've thought of simply tracking consumer membership somehow
(either throu
If the 100 partitions are all for the same topic, you can have up to 100
consumers working as part of a single consumer group for that topic.
You cannot have more consumers than there are partitions within a given
consumer group.
On 29 April 2015 at 08:41, Nimi Wariboko Jr wrote:
Please correct me if I'm wrong, but I think it is really not a hard
constraint that one cannot have more consumers (from the same group) than
partitions on a single topic - all the surplus consumers will not be
assigned any partition to consume, but they can be there, and as soon as
one active consumer from the same group goes away, a previously idle
consumer can take over its partitions.
You're right Stevo, I should re-phrase to say that there can be no more
_active_ consumers than there are partitions (within a single consumer
group).
I'm guessing that's what Nimi is alluding to, but perhaps he can
elaborate on whether he's using consumer groups and/or whether the 100
partitions are all on the same topic.
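To make the active/idle distinction concrete, here is a rough sketch with
the 0.8.x high-level consumer (the ZK address, group id, and topic name are
made up). Asking for more streams than there are partitions is legal; the
surplus streams simply stay idle until a rebalance hands them partitions:

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import java.util.*;

Properties props = new Properties();
props.put("zookeeper.connect", "zk1:2181"); // placeholder
props.put("group.id", "my-group");
ConsumerConnector connector =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put("my-topic", 8); // 8 streams on a 4-partition topic: 4 stay idle
Map<String, List<KafkaStream<byte[], byte[]>>> streams =
    connector.createMessageStreams(topicCountMap);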
Hi All,
I am trying to get a multi-threaded high-level (HL) consumer working
against a 2-broker Kafka cluster with a 4-partition, 2-replica topic.
The consumer code is set to run with 4 threads, one for each partition.
The producer code uses the default partitioner and loops indefinitely,
feeding events into the topic.
The example.shutdown() call in ConsumerGroupExample closes all consumer
connections to Kafka. If you remove this line, the consumer threads will
run forever.
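For reference, a condensed sketch of the pattern ConsumerGroupExample uses,
assuming the connector/streams setup shown earlier in this digest (names
are illustrative). Each stream's iterator blocks until shutdown() is called
on the connector, which is why removing that call keeps the threads alive:

import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService executor = Executors.newFixedThreadPool(4); // one thread per partition
for (final KafkaStream<byte[], byte[]> stream : streams.get("my-topic")) {
    executor.submit(new Runnable() {
        public void run() {
            // blocks on the iterator until the connector is shut down
            for (MessageAndMetadata<byte[], byte[]> msg : stream) {
                System.out.println("partition " + msg.partition()
                    + ": " + new String(msg.message()));
            }
        }
    });
}
// connector.shutdown(); // this is what ends every stream iterator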
On Wed, Apr 29, 2015 at 9:42 PM, christopher palm wrote:
Unfortunately, this sounds like a Zookeeper data corruption issue on the
node in question:
https://issues.apache.org/jira/browse/ZOOKEEPER-1546
The fix from the Jira is to clean out the Zookeeper data on the affected
node (if that's possible).
On 28 April 2015 at 16:59, Emley, Andrew wrote:
Any pointers on this feature?
Thanks.
On Thu, Apr 23, 2015 at 9:57 PM, Bharath Srinivasan wrote:
> Thanks Gwen.
>
> I'm specifically looking for the consumer rewrite API (
> org.apache.kafka.clients.consumer.KafkaConsumer). Based on the wiki, this
> feature is available only in 0.9.
>
> The spe
In the current high-level consumer, you can still manually control when you
commit offsets (see this blog for details:
http://ingest.tips/2014/10/12/kafka-high-level-consumer-frequently-missing-pieces/
)
While you can't explicitly roll back a commit, you can simply avoid
committing when you have an exception.
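A minimal sketch of that approach with the 0.8.x high-level consumer,
assuming the connector/stream setup shown earlier; the process() handler is
hypothetical. Auto-commit is disabled and commitOffsets() runs only after a
message is handled successfully, so a failure means re-consumption on
restart:

// in the consumer config:
// props.put("auto.commit.enable", "false");
import kafka.consumer.ConsumerIterator;
import kafka.message.MessageAndMetadata;

ConsumerIterator<byte[], byte[]> it = stream.iterator();
while (it.hasNext()) {
    MessageAndMetadata<byte[], byte[]> msg = it.next();
    try {
        process(msg);                // hypothetical application handler
        connector.commitOffsets();   // commit only after successful processing
    } catch (Exception e) {
        // no commit: after a restart, consumption resumes from the last
        // committed offset, so this message is seen again
    }
}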
I have an application producing Avro-encoded keyed messages (Martin
Kleppmann's new Bottled Water project).
It encodes a delete as a keyed message with an id as a key and a null
payload. I have log compaction turned on.
The Avro console consumer correctly displays this as "null" in my terminal,
b
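For context, a delete marker (tombstone) in a log-compacted topic is just a
keyed message with a null payload. A minimal sketch with the 0.8.x
producer; the broker, topic, and key are made up:

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.util.Properties;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092"); // placeholder
props.put("serializer.class", "kafka.serializer.DefaultEncoder"); // byte[] payloads
props.put("key.serializer.class", "kafka.serializer.StringEncoder");
Producer<String, byte[]> producer =
    new Producer<String, byte[]>(new ProducerConfig(props));
// A keyed message with a null payload is the tombstone that log compaction
// eventually uses to drop all earlier values for that key.
producer.send(new KeyedMessage<String, byte[]>("compacted-topic", "row-42", null));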
Commenting out the example shutdown did not seem to make a difference; I
added the print statement below to highlight the fact.
The other threads still shut down, and only one thread lives on;
eventually that dies after a few minutes as well.
Could this be because the producer's default partitioner isn't
distributing messages across all partitions?
Update here: we resolved this by deleting the kafka-data directory on that
host (which had file inconsistencies in the kafka-data dir from the 'fsck'
run logged last week) and restarting Kafka. Note we also never reimaged
this host (that was another host, which we had confused with this one).
Thanks,
Kartheek
I am using Kafka 0.8.2 with Kafka-based storage for offsets.
Whenever I restart a consumer (high-level consumer API), it does not
consume the messages that were posted while the consumer was down.
I am using the following consumer properties:
Properties props = new Properties();
OK, so you turned off auto-commit and set auto.offset.reset to largest.
That means when you consume:
1. If you did not commit offsets manually, no offsets will be committed to
Kafka.
2. If you do not have an offset stored in Kafka, you will start from the
log end and ignore the existing messages.
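Putting that together, the relevant 0.8.2 consumer settings would look
something like this (values are illustrative). Note that auto.offset.reset
only kicks in when no committed offset exists for the group:

Properties props = new Properties();
props.put("zookeeper.connect", "zk1:2181");   // placeholder
props.put("group.id", "my-group");            // must be the same across restarts
props.put("offsets.storage", "kafka");        // Kafka-based offset storage (0.8.2+)
props.put("dual.commit.enabled", "false");
props.put("auto.commit.enable", "false");     // requires explicit commitOffsets()
props.put("auto.offset.reset", "smallest");   // used only when no offset is stored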
Thank you, I am using the same groupId all the time.
I printed OffsetsMessageFormatter output for the consumer, and the output
is showing as
[async_force_consumers,force_msgs,9]::OffsetAndMetadata[2,associated
metadata,1430277791077]
But If I restart the consumer, it starts consuming messages fr
I'm starting to think that the old adage "If two people say you are drunk,
lie down" applies here :)
The current API seems perfectly clear, useful, and logical to everyone who
wrote it... but we are getting multiple users asking for the old batch
behavior back.
One reason to get it back is to make upgrades easier.
Hi, would anyone be able to help me with this issue? Thanks.
- Dave
On Tue, Apr 28, 2015 at 1:32 PM -0700, "Dave Hamilton"
<dhamil...@nanigans.com> wrote:
1. We’re using version 0.8.1.1.
2. No failures in the consumer logs
3. We’re using the ConsumerOffsetChecker to see what partitions
Hey Dave,
It's hard to say why this is happening without more information. Even if
there are no errors in the log, is there anything to indicate that the
rebalance process on those hosts even started? Does this happen
occasionally or every time you start the consumer group? Can you paste the
output
The log suggests that the shutdown method was still called:
Thread 0: 2015-04-29
12:55:54.292|3|13|Normal|-74.1892627|41.33900999753
Last Shutdown via example.shutDown called!
15/04/29 13:09:38 INFO consumer.ZookeeperConsumerConnector:,
ZKConsumerConnector shutting down
Please ensur
You can do this with the existing Kafka SimpleConsumer
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L106
and probably any other Kafka client too (maybe with minor/major rework to
do the offset management).
The new consumer approach is more transparent
I see a lot of interesting features in Kafka 0.8.2 beta. I am just
wondering when it will be released. Is there a timeline for that?
Thanks & Regards,
On Wed, Apr 29, 2015 at 6:08 PM, Gwen Shapira wrote:
It has already been released, including a minor revision to fix some
critical bugs. The latest release is 0.8.2.1. The downloads page has links
and release notes: http://kafka.apache.org/downloads.html
On Wed, Apr 29, 2015 at 10:22 PM, Gomathivinayagam Muthuvinayagam
<sankarm...@gmail.com> wrote:
Thank you.
It seems the following methods are not supported in KafkaConsumer. Do you
know when they will be supported?
public OffsetMetadata commit(Map<TopicPartition, Long> offsets, boolean sync) {
    throw new UnsupportedOperationException();
}
Thanks & Regards,
On Wed, Apr 29, 2015 at 10:52 PM, E
My mistake, it seems the Java drivers are a lot more advanced than
Shopify's Kafka driver (or I am missing something) - and I haven't used
Kafka before.
With the Go driver, it seems you have to manage offsets and partitions
within the application code, while with the Scala driver it seems you have
The Go Kafka Client also supports offset storage in ZK and Kafka
https://github.com/stealthly/go_kafka_client/blob/master/docs/offset_storage.md
and has two other strategies for partition ownership with a consensus
server (it currently uses Zookeeper and will implement Consul in the near
future).
~ Joe