This gives you back two valid offsets, not a range.
Thanks,
Jun
On Fri, May 24, 2013 at 11:29 AM, Sining Ma wrote:
> Hi,
> We are currently using kafka-0.7.1.
> I have two questions:
> 1. We use SimpleConsumer to aggregate messages to log files and there is
> no zookeeper. Sometimes
If you use the Java API, you can call SimpleConsumer.getOffsetsBefore(topic,
partition, time, 1), which will return a long offset value.
The time parameter can be kafka.api.OffsetRequest.EarliestTime() or
kafka.api.OffsetRequest.LatestTime(), depending on your application's needs.
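[Editor's note: a minimal sketch of the call described above against the 0.7-era
kafka.javaapi.consumer.SimpleConsumer. The host, port, timeout, buffer size,
topic name, and partition below are placeholders, not values from this thread.]

```java
import kafka.api.OffsetRequest;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetLookup {
    public static void main(String[] args) {
        // Placeholder connection settings -- point these at your broker.
        SimpleConsumer consumer =
            new SimpleConsumer("localhost", 9092, 100000, 64 * 1024);
        try {
            // Ask for the single earliest valid offset of partition 0.
            long[] offsets = consumer.getOffsetsBefore(
                "my-topic", 0, OffsetRequest.EarliestTime(), 1);
            long earliest = offsets[0];
            // OffsetRequest.LatestTime() would instead return the offset
            // just past the last message currently on the broker.
            System.out.println("Earliest offset: " + earliest);
        } finally {
            consumer.close();
        }
    }
}
```

With maxNumOffsets = 1 the returned array holds a single offset; as Jun notes
above, asking for both EarliestTime and LatestTime gives you two valid
offsets, not a continuous range.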
On May 24, 2013, at 2:03
Thanks, Suyog.
Could you explain more about OffsetRequest?
I found new
OffsetRequest(topic: String, partition: Int, time: Long, maxNumOffsets: Int)
in the Kafka API.
How can I send this request? And where can I receive a response from this
OffsetRequest?
Could you give me an example for this API?
--
Sorry, I believe I answered my own question. Yes, it supports this based on
the group.id.
On Fri, May 24, 2013 at 4:28 PM, Jamie Johnson wrote:
> I am very new to kafka, so I'll apologize in advance for any stupid
> questions...
>
>
> That being said is it possible within kafka to have multiple
Since you are using the SimpleConsumer, you will need to handle the
OffsetOutOfRangeException in your code. This happens when your consumer
queries for an offset that is no longer persisted in Kafka (the logs have been
deleted based on the retention policy). Ideally when this happens, the cons
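[Editor's note: a hedged sketch of the recovery path described above, assuming
the 0.7-era javaapi classes. Topic, partition, and fetch size are placeholders;
depending on the exact point release, the error may surface when the fetched
message set is iterated rather than from fetch() itself.]

```java
import kafka.api.FetchRequest;
import kafka.api.OffsetRequest;
import kafka.common.OffsetOutOfRangeException;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.MessageAndOffset;

public class FetchWithReset {
    // Fetch from `offset`, falling back to the earliest valid offset if the
    // requested one has already been deleted by log retention.
    static long fetchOnce(SimpleConsumer consumer, long offset) {
        try {
            FetchRequest request =
                new FetchRequest("my-topic", 0, offset, 1024 * 1024);
            ByteBufferMessageSet messages = consumer.fetch(request);
            for (MessageAndOffset mo : messages) {
                // ... process mo.message() ...
                offset = mo.offset();  // position to resume from next time
            }
            return offset;
        } catch (OffsetOutOfRangeException e) {
            // The requested offset fell outside the retained log segments;
            // restart from the earliest offset still on the broker.
            return consumer.getOffsetsBefore(
                "my-topic", 0, OffsetRequest.EarliestTime(), 1)[0];
        }
    }
}
```

Whether to reset to the earliest or the latest offset after this exception is
an application decision: earliest replays whatever is still retained, latest
skips ahead and drops the gap.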
I am very new to kafka, so I'll apologize in advance for any stupid
questions...
That being said is it possible within kafka to have multiple consumers on a
single topic? I had assumed the answer was yes, but I am running into some
issues setting this up. Any information would be greatly appreciated
Hi,
We are currently using kafka-0.7.1.
I have two questions:
1. We use SimpleConsumer to aggregate messages to log files and there is no
ZooKeeper. Sometimes we can see kafka.common.OffsetOutOfRangeException.
And this exception happens when we start our consumer program. We do not know
Done: https://issues.apache.org/jira/browse/KAFKA-919.
I attached a very simple patch to the bug. I did not change any comments
around it, but I can verify that my use case now works as expected.
:)
On May 24, 2013, at 12:33 PM, "Jun Rao" <jun...@gmail.com> wrote:
This turns out to be a bug. It seems that we always call commitOffsets() in
ZookeeperConsumerConnector.closeFetchersForQueues() (which is called during
rebalance), whether auto commit is enabled or not. Could you file a jira?
Thanks,
Jun
On Fri, May 24, 2013 at 8:06 AM, Hargett, Phil <
phil.har
Timothy,
Kafka is not designed to support millions of topics. Zookeeper will become
a bottleneck, even if you deploy more brokers to get around the # of files
issue. In normal cases, it might work just fine with a right-sized
cluster. However, when there are failures, the time to recovery could
That sounds odd. Can you turn on DEBUG for
kafka.consumer.ZookeeperConsumerConnector in your consumer and check if you
see the following log message -
"Committed offset for topic "
Which version of Kafka are you using?
Thanks,
Neha
On Fri, May 24, 2013 at 8:06 AM, Hargett, Phil <
phil.harg...@
In one of our applications using Kafka, we are using the high-level consumer to
pull messages from our topic.
Because we pull messages from topics in discrete units (e.g., an hour's worth
of messages), we want to control explicitly when offsets are committed.
Even though "auto.commit.enable" is
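[Editor's note: a sketch of the manual-commit setup Phil describes, using the
high-level consumer. ZooKeeper address and group name are placeholders; note
that the property spellings differ across versions -- 0.7.x used "groupid" and
"autocommit.enable", later releases "group.id" and "auto.commit.enable".]

```java
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181");   // placeholder ZooKeeper
        props.put("group.id", "hourly-puller");      // "groupid" on 0.7.x
        props.put("auto.commit.enable", "false");    // we commit manually

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // ... pull one discrete unit of messages (e.g., an hour's worth) ...

        // Write the consumed positions to ZooKeeper only once the unit
        // has been durably processed.
        connector.commitOffsets();
        connector.shutdown();
    }
}
```

The bug Jun confirms above (KAFKA-919) is exactly that the connector could
call commitOffsets() internally during a rebalance even with auto commit
disabled, defeating this kind of explicit control.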