Hi all,

Is KAFKA-725 "Broker Exception: Attempt to read with a maximum offset less than start offset" <https://issues.apache.org/jira/browse/KAFKA-725> still valid? We are seeing a similar issue while running Yahoo's streaming-benchmarks <https://github.com/yahoo/streaming-benchmarks> on a 4-node cluster. Our issue is tracked at https://github.com/gearpump/gearpump/issues/1872.
We are using Kafka 0.8.2.1 (Scala 2.10). 4 brokers are installed on 4 nodes, with ZooKeeper on 3 of them. On each node, 4 producers write to a Kafka topic with 4 partitions and 1 replica. Each producer has a throughput of 17K messages/s. 4 consumers are distributed (not necessarily evenly) across the cluster and consume from Kafka as fast as possible.

I tried logging the produced offsets (via the callback passed to the producer's send) and found that the "start offset" had already been produced when the consumer failed with the fetch exception. This happens only when the producers are producing at high throughput.

Any ideas would be much appreciated.

Thanks,
Manu Zhang
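P.S. For reference, the offset logging mentioned above was done roughly like the sketch below. The stub `RecordMetadata` and `Callback` types stand in for the Kafka 0.8.2 client's `org.apache.kafka.clients.producer.RecordMetadata` and `Callback` so the snippet is self-contained; in real code you would import those classes and pass the callback as the second argument to `KafkaProducer.send`.

```java
// Stub standing in for org.apache.kafka.clients.producer.RecordMetadata:
// topic, partition, and the offset the broker assigned to the record.
class RecordMetadata {
    private final String topic;
    private final int partition;
    private final long offset;

    RecordMetadata(String topic, int partition, long offset) {
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
    }

    String topic() { return topic; }
    int partition() { return partition; }
    long offset() { return offset; }
}

// Stub standing in for org.apache.kafka.clients.producer.Callback,
// invoked once per record when the broker acknowledges the send.
interface Callback {
    void onCompletion(RecordMetadata metadata, Exception exception);
}

class OffsetLoggingCallback implements Callback {
    // One log line per acknowledged record, so the produced offsets can
    // later be compared against the consumer's failing fetch offset.
    static String format(RecordMetadata m) {
        return "produced " + m.topic() + "-" + m.partition()
                + " offset=" + m.offset();
    }

    @Override
    public void onCompletion(RecordMetadata m, Exception e) {
        if (e != null) {
            System.err.println("send failed: " + e);
        } else {
            System.out.println(format(m));
        }
    }
}
```

With the real client this would be used as `producer.send(record, new OffsetLoggingCallback())`; the topic name and log format above are just illustrative.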