[ https://issues.apache.org/jira/browse/KAFKA-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234678#comment-14234678 ]

Lokesh Birla commented on KAFKA-727:
------------------------------------

Hi,

Is this really fixed? I still see this issue when using 4 topics, 3 
partitions, and a replication factor of 3. I am running kafka_2.9.2-0.8.1.1 
on a 3-node broker cluster with a single ZooKeeper node. I did not see this 
issue with 1, 2, or 3 topics.
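
(For reference, a topic with this layout would typically be created with the 
stock topic tool shipped with 0.8.1.1; the topic name is taken from the error 
log below and the ZooKeeper address is a placeholder:)

    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
        --replication-factor 3 --partitions 3 --topic mmetopic4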



[2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch request for partition [mmetopic4,2] offset 1940029 from consumer with correlation id 21 (kafka.server.KafkaApis)
java.lang.IllegalArgumentException: Attempt to read with a maximum offset (1818353) less than the start offset (1940029).
        at kafka.log.LogSegment.read(LogSegment.scala:136)
        at kafka.log.Log.read(Log.scala:386)
        at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
        at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
        at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
        at scala.collection.immutable.Map$Map1.map(Map.scala:107)
        at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
        at kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
        at kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
        at kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
        at java.lang.Thread.run(Thread.java:745)
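
For context, this exception comes from a bounds check in 
kafka.log.LogSegment.read that rejects a read whose upper bound (normally the 
high watermark) is below the requested start offset. A minimal standalone 
Scala sketch of that check, reproducing the message above (illustrative 
names, not the actual LogSegment source):

    // Minimal sketch of the bounds check behind the exception above;
    // names and structure are illustrative, not the real LogSegment code.
    object ReadGuardSketch {
      def checkReadBounds(startOffset: Long, maxOffset: Option[Long]): Unit =
        maxOffset.foreach { max =>
          if (max < startOffset)
            throw new IllegalArgumentException(
              "Attempt to read with a maximum offset (%d) less than the start offset (%d)."
                .format(max, startOffset))
        }

      def main(args: Array[String]): Unit = {
        // Values from the log above: max offset (HW) = 1818353, start = 1940029.
        checkReadBounds(startOffset = 1940029L, maxOffset = Some(1818353L))
      }
    }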


Thanks for your help.


> broker can still expose uncommitted data to a consumer
> ------------------------------------------------------
>
>                 Key: KAFKA-727
>                 URL: https://issues.apache.org/jira/browse/KAFKA-727
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8.0
>            Reporter: Jun Rao
>            Assignee: Jay Kreps
>            Priority: Blocker
>              Labels: p1
>         Attachments: KAFKA-727-v1.patch
>
>
> Even after KAFKA-698 is fixed, consumer clients can still occasionally see 
> uncommitted data. The following is how this can happen.
> 1. In Log.read(), we pass in startOffset < HW and maxOffset = HW.
> 2. Then we call LogSegment.read(), in which we call translateOffset on the 
> maxOffset. The offset doesn't exist yet, so translateOffset returns null.
> 3. Continuing in LogSegment.read(), we then use messageSet.sizeInBytes() to 
> fetch and return the data.
> What can happen is that between step 2 and step 3, a new message is appended 
> to the log but is not yet committed. Now we have exposed uncommitted data to 
> the client.
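
To make the race concrete, here is a toy Scala model of the three steps above 
(the names are illustrative stand-ins, not the real kafka.log classes). A 
fetch whose maxOffset equals the HW fails to translate to a file position, an 
uncommitted append lands in between, and the fallback to the current segment 
size then reads past the HW:

    import java.util.concurrent.atomic.AtomicLong

    // Toy model of the race: the translateOffset-style lookup fails for the
    // HW, an append grows the segment, and the size-based fallback overshoots.
    object UncommittedReadSketch {
      final class SegmentModel {
        private val endOfLog = new AtomicLong(100L) // current end of the segment

        // Step 2: an offset at/after the log end has no position yet
        // (modeled as None, like translateOffset returning null).
        def translatePosition(offset: Long): Option[Long] =
          if (offset < endOfLog.get) Some(offset) else None

        // Step 3 fallback: the *current* size of the segment.
        def sizeInBytes: Long = endOfLog.get

        // A producer appends an uncommitted message, growing the segment.
        def append(bytes: Long): Unit = { endOfLog.addAndGet(bytes); () }
      }

      def main(args: Array[String]): Unit = {
        val seg = new SegmentModel
        val hw  = 100L // high watermark == end of log at fetch time

        // Steps 1-2: maxOffset = HW does not translate to a position.
        val upperBound = seg.translatePosition(hw) // None

        // The race: an uncommitted append sneaks in between steps 2 and 3.
        seg.append(50L)

        // Step 3: with no translated bound, read up to sizeInBytes, which
        // now includes the 50 uncommitted bytes beyond the HW.
        val readEnd = upperBound.getOrElse(seg.sizeInBytes)
        println(s"read up to $readEnd, but HW is $hw") // reads up to 150, HW 100
      }
    }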


