Hi Neha,

How will the new consumer help us implement the following use case?



We have a heartbeat topic to which all application servers publish
metrics. We have to meet a near-real-time consumption SLA (less
than 30 seconds).

1) How can we find out the latest message per partition that the
current consumer is connected to?

2) If the consumer lags behind by a certain offset or amount of time, it
can seek to a particular offset (we can use the seek method for this).

3) How can we start a temp consumer for the same partition to read messages
over an offset range (from the last consumed offset to the offset we
jumped to in part 2)?



Basically, is there a per-partition QoS concept where the consumer always
consumes the latest message, detects when it lags behind, and starts a
temp consumer for back-fill?
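To make the QoS idea concrete, here is a rough sketch of the decision logic we
have in mind (plain Java, no Kafka client calls; the class name, threshold, and
offsets are all made up for illustration):

```java
public class LagBackfillPlanner {

    // Hypothetical lag threshold, in offsets, standing in for our 30-second SLA.
    static final long MAX_LAG = 1_000L;

    /**
     * Given the consumer's last consumed offset and the partition's latest
     * offset, decide whether a temp consumer is needed. Returns {from, to},
     * the offset range the temp consumer should back-fill (parts 2 and 3
     * above), or null if we are within the allowed lag.
     */
    static long[] planBackfill(long lastConsumedOffset, long latestOffset) {
        long lag = latestOffset - lastConsumedOffset;
        if (lag <= MAX_LAG) {
            return null; // within SLA: keep consuming normally
        }
        // The main consumer would seek() to latestOffset; the temp consumer
        // reads everything between the two positions.
        return new long[] { lastConsumedOffset, latestOffset };
    }

    public static void main(String[] args) {
        long[] range = planBackfill(500L, 50_000L);
        if (range != null) {
            System.out.println("back-fill from " + range[0] + " to " + range[1]);
        }
    }
}
```

The actual seek and the temp consumer would use whatever the new consumer API
provides; this only shows the lag check and the offset range we would hand to
the back-fill consumer.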


How does LinkedIn handle near-real-time consumption of operational
metrics?


Thanks,


Bhavesh


On Sat, Apr 12, 2014 at 6:58 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:

> Why can't we pass a callback in subscribe itself?
>
> Mainly because it will make the processing kind of awkward since you need
> to access the other consumer APIs while processing the messages. Your
> suggestion does point out a problem with the poll() API though. Here is the
> initial proposal of the poll() API-
>
> List<ConsumerRecord> poll(long timeout, TimeUnit unit);
>
> The application subscribes to topics or partitions and expects to process
> messages per topic or per partition respectively. By just returning a list
> of ConsumerRecord objects, we make it difficult for the application to
> process messages naturally grouped by topic or partition. After some
> thought, I changed it to -
>
> Map<String, ConsumerRecordMetadata> poll(long timeout, TimeUnit unit);
>
> ConsumerRecordMetadata allows you to get records for a particular partition
> or get records for all partitions.
>
> The second change I made is to the commit APIs. To remain consistent with
> the Producer, I changed commit() to return a Future and got rid of
> commitAsync(). This will easily support the sync and async commit use
> cases.
>
> Map<TopicPartition, OffsetMetadata> commit(Map<TopicPartition, Long> offsets);
>
> (OffsetMetadata javadoc:
> http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/org/apache/kafka/clients/consumer/OffsetMetadata.html
> )
>
> I'm looking for feedback on these changes. I've published the new javadoc
> to the same location
> (http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc).
> I'd appreciate it if someone could take a look.
>
> Thanks,
> Neha
>
>
> On Tue, Apr 8, 2014 at 9:50 PM, pushkar priyadarshi <
> priyadarshi.push...@gmail.com> wrote:
>
> > I was trying to understand why poll is a separate API when we already
> > have subscribe. Why can't we pass a callback in subscribe itself?
> >
> >
> > On Mon, Apr 7, 2014 at 9:51 PM, Neha Narkhede <neha.narkh...@gmail.com
> > >wrote:
> >
> > > Hi,
> > >
> > > I'm looking for people to review the new consumers APIs. Patch is
> posted
> > at
> > > https://issues.apache.org/jira/browse/KAFKA-1328
> > >
> > > Thanks,
> > > Neha
> > >
> >
>
