You can fetch from any offset you like (provided it is still present on the broker). Since neither the client-side FetchRequest nor the server-side Log API supports reading a specific _number_ of messages, you need to specify a size in bytes to read. You can then extract as many messages as you need from what comes back and discard the rest, or keep fetching until you have read enough.
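For example, here is a rough sketch against the 0.8 SimpleConsumer API. The broker host, topic, partition, offset, and sizes are placeholders (not values from this thread), and error handling (resp.hasError) is omitted:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

// Read up to maxMessages starting at a given offset, fetching in
// fixed-size byte chunks and discarding whatever we don't need.
SimpleConsumer consumer =
    new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "bounded-fetch");
String topic = "my-topic";     // placeholder
int partition = 0;
long offset = 42L;             // any offset still present on the broker
int maxMessages = 10;
int fetchSizeBytes = 100000;   // a fetch is sized in bytes, not messages

List<byte[]> messages = new ArrayList<byte[]>();
while (messages.size() < maxMessages) {
    FetchRequest req = new FetchRequestBuilder()
        .clientId("bounded-fetch")
        .addFetch(topic, partition, offset, fetchSizeBytes)
        .build();
    FetchResponse resp = consumer.fetch(req);

    boolean gotAny = false;
    for (MessageAndOffset mao : resp.messageSet(topic, partition)) {
        gotAny = true;
        if (messages.size() >= maxMessages) break;  // discard the rest
        ByteBuffer payload = mao.message().payload();
        byte[] bytes = new byte[payload.limit()];
        payload.get(bytes);
        messages.add(bytes);
        offset = mao.nextOffset();  // resume here on the next fetch
    }
    if (!gotAny) break;  // reached the end of the log
}
consumer.close();

Each fetch may return more or fewer messages than you still need, which is why the message count is checked inside the loop rather than in the request itself.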
On Fri, Sep 26, 2014 at 10:31:17AM +0530, Sharninder wrote:
> Slight off-topic, but is it also possible to replay a specific number of
> messages? For example, using the simple consumer, can I go back/reset the
> offset so that I always read the last 10 messages, assuming the size of
> each individual message could be different? All I found in the simple
> consumer example was that replaying needs a byte parameter, but maybe I
> didn't look hard enough.
>
> --
> Sharninder
>
>
> On Thu, Sep 25, 2014 at 10:15 PM, pankaj ojha <pankajojh...@gmail.com>
> wrote:
>
> > Thank You. I will try this out.
> >
> > On Thu, Sep 25, 2014 at 10:01 PM, Gwen Shapira <gshap...@cloudera.com>
> > wrote:
> >
> > > Using the high-level consumer, and assuming you have already created
> > > an iterator:
> > >
> > > while (msgCount < maxMessages && it.hasNext()) {
> > >     bytes = it.next().message();
> > >     eventList.add(bytes);
> > >     msgCount++; // count each message so the loop stops at maxMessages
> > > }
> > >
> > > (See a complete example here:
> > > https://github.com/apache/flume/blob/trunk/flume-ng-sources/flume-kafka-source/src/main/java/org/apache/flume/source/kafka/KafkaSource.java
> > > )
> > >
> > > Gwen
> > >
> > > On Thu, Sep 25, 2014 at 9:15 AM, pankaj ojha <pankajojh...@gmail.com>
> > > wrote:
> > > > Hi,
> > > >
> > > > My requirement is to read a specific number of messages from a Kafka
> > > > topic which contains data in JSON format and, after reading that
> > > > number of messages, write them to a file and then stop. How can I
> > > > count the number of messages read by my consumer code (either
> > > > SimpleConsumer or the high-level consumer)?
> > > >
> > > > Please help.
> > > >
> > > > --
> > > > Thanks,
> > > > Pankaj Ojha
> >
> > --
> > Thanks,
> > Pankaj Ojha
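Regarding the "last 10 messages" part of the question above: with 0.8 brokers, offsets are logical (one per message), so one way to pick a starting point is to ask the broker for the latest offset and step back by a message count. A sketch along those lines, reusing the `consumer` from the earlier snippet (names are again placeholders, and error handling is omitted); note this relies on 0.8's per-message offsets and would not work on older brokers with byte offsets:

import java.util.HashMap;
import java.util.Map;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;

// Ask the broker for the latest offset, then step back ten messages.
TopicAndPartition tp = new TopicAndPartition("my-topic", 0);
Map<TopicAndPartition, PartitionOffsetRequestInfo> info =
    new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
info.put(tp, new PartitionOffsetRequestInfo(
    kafka.api.OffsetRequest.LatestTime(), 1));

kafka.javaapi.OffsetRequest req = new kafka.javaapi.OffsetRequest(
    info, kafka.api.OffsetRequest.CurrentVersion(), "bounded-fetch");
OffsetResponse resp = consumer.getOffsetsBefore(req);

long latest = resp.offsets("my-topic", 0)[0]; // offset of the next message to be written
long start = Math.max(0, latest - 10);        // replay from ten messages back
// Now fetch from `start` as in the earlier sketch, stopping after 10 messages.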