Hi Manu,

In the streaming benchmark, are you seeing the issue only when reading
with Gearpump, or is it triggered by other processing frameworks as well?

I'm asking because a Flink user, also on Kafka 0.8.2.1, is reporting a
very similar issue on SO:
http://stackoverflow.com/questions/34982483/flink-streaming-job-switched-to-failed-status/34987963
His issue is also only present under load.
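
To take Gearpump out of the equation, you could try fetching the failing
offset with a bare SimpleConsumer once the error shows up. A rough sketch
of what I mean (host, topic, partition, and offset are placeholders; point
it at the leader of the failing partition and use the "start offset" from
the exception):

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class FetchCheck {
    public static void main(String[] args) {
        // Placeholders: use the broker that leads the failing partition
        // and the "start offset" reported in the fetch exception.
        String host = "broker1";
        String topic = "benchmark-topic";
        int partition = 0;
        long offset = 123456L;

        SimpleConsumer consumer =
                new SimpleConsumer(host, 9092, 100000, 64 * 1024, "fetch-check");
        try {
            FetchRequest req = new FetchRequestBuilder()
                    .clientId("fetch-check")
                    .addFetch(topic, partition, offset, 100000)
                    .build();
            FetchResponse resp = consumer.fetch(req);
            if (resp.hasError()) {
                // A broker-side failure shows up here as an error code.
                System.out.println("error code: "
                        + resp.errorCode(topic, partition));
            } else {
                for (MessageAndOffset mo : resp.messageSet(topic, partition)) {
                    System.out.println("fetched offset " + mo.offset());
                }
            }
        } finally {
            consumer.close();
        }
    }
}

If that reproduces the error with no framework in the picture, it points
at the broker rather than at Gearpump.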



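Also, when you say you logged the produced offsets with a callback in
send, I assume something along these lines (a sketch against the 0.8.2
Java producer; broker list and topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OffsetLoggingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        ProducerRecord<byte[], byte[]> record =
                new ProducerRecord<byte[], byte[]>("benchmark-topic",
                        "payload".getBytes());
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    e.printStackTrace();
                } else {
                    // Log the partition and offset the broker assigned.
                    System.out.println("partition " + metadata.partition()
                            + ", offset " + metadata.offset());
                }
            }
        });
        producer.close();
    }
}

If onCompletion really confirmed the "start offset" before the fetch
failed, the message was acknowledged by the broker, which would make a
broker-side problem like KAFKA-725 look more likely than a consumer bug.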

On Thu, Jan 21, 2016 at 2:28 AM, Manu Zhang <owenzhang1...@gmail.com> wrote:

> Hi,
>
> Any suggestions for this issue, or do I need to provide more information?
> Any links I can refer to would also be very helpful.
>
> Thanks,
> Manu Zhang
>
>
> On Tue, Jan 19, 2016 at 8:41 PM, Manu Zhang <owenzhang1...@gmail.com> wrote:
>
> > Hi all,
> >
> > Is KAFKA-725 Broker Exception: Attempt to read with a maximum offset
> > less than start offset <https://issues.apache.org/jira/browse/KAFKA-725>
> > still valid? We are seeing a similar issue while running Yahoo's
> > streaming-benchmarks <https://github.com/yahoo/streaming-benchmarks> on a
> > 4-node cluster. Our issue is tracked at
> > https://github.com/gearpump/gearpump/issues/1872.
> >
> > We are using Kafka 0.8.2.1 (Scala 2.10 build). 4 brokers are installed
> > on 4 nodes, with ZooKeeper on 3 of them. On each node, 4 producers
> > produce data to a Kafka topic with 4 partitions and 1 replica. Each
> > producer has a throughput of 17K messages/s. 4 consumers are distributed
> > (not necessarily evenly) across the cluster and consume from Kafka as
> > fast as possible.
> >
> > I tried logging the produced offsets (via a callback in send) and found
> > that the "start offset" already existed when the consumer failed with
> > the fetch exception.
> >
> > This happens only when the producers are producing at high throughput.
> >
> > Any ideas would be much appreciated.
> >
> > Thanks,
> > Manu Zhang
> >
>
