[ https://issues.apache.org/jira/browse/KAFKA-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16761237#comment-16761237 ]

Jose Armando Garcia Sancio edited comment on KAFKA-7656 at 2/5/19 9:28 PM:
---------------------------------------------------------------------------

I was able to generate a similar error in the broker by having the client send 
{{Integer.MIN_VALUE}} for both the max response size and the max partition size.
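
A request along these lines can be built with the public request classes. This is only a sketch against the 2.x-era Java API (the {{FetchRequest.Builder}} and {{FetchRequest.PartitionData}} signatures differ across versions), with the topic partition taken from the log below and the wait/min-bytes values as placeholders:

{code:scala}
import java.util.Optional
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.requests.FetchRequest

// Per-partition fetch state: fetchOffset=0, logStartOffset=0, an invalid
// (negative) per-partition max bytes, and an empty currentLeaderEpoch,
// matching the values printed in the broker log below.
val partitionData = new java.util.LinkedHashMap[TopicPartition, FetchRequest.PartitionData]()
partitionData.put(
  new TopicPartition("topic0", 5),
  new FetchRequest.PartitionData(0L, 0L, Integer.MIN_VALUE, Optional.empty[Integer]()))

// The response-level max bytes is also set to Integer.MIN_VALUE.
val fetchRequest = FetchRequest.Builder
  .forConsumer(0, 0, partitionData) // maxWaitMs, minBytes, partitions
  .setMaxBytes(Integer.MIN_VALUE)
  .build()
{code}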


{noformat}
12:55:38.734 [DEBUG] [TestEventLogger] kafka.server.FetchRequestTest STARTED
12:55:39.161 [DEBUG] [TestEventLogger]
12:55:39.162 [DEBUG] [TestEventLogger] kafka.server.FetchRequestTest > testInvalidMaxBytes STARTED
12:55:41.889 [DEBUG] [TestEventLogger]
12:55:41.889 [DEBUG] [TestEventLogger] kafka.server.FetchRequestTest > testInvalidMaxBytes STANDARD_OUT
12:55:41.889 [DEBUG] [TestEventLogger]     [2019-02-05 12:55:41,884] ERROR [ReplicaManager broker=0] Error processing fetch with max size -2147483648 from consumer on partition topic0-5: (fetchOffset=0, logStartOffset=0, maxBytes=-2147483648, currentLeaderEpoch=Optional.empty) (kafka.server.ReplicaManager:76)
12:55:41.889 [DEBUG] [TestEventLogger]     java.lang.IllegalArgumentException: Invalid max size -2147483648 for log read from segment FileRecords(file= /tmp/kafka-3389591127360317062/topic0-5/00000000000000000000.log, start=0, end=2147483647)
12:55:41.889 [DEBUG] [TestEventLogger]          at kafka.log.LogSegment.read(LogSegment.scala:274)
12:55:41.889 [DEBUG] [TestEventLogger]          at kafka.log.Log.$anonfun$read$2(Log.scala:1192)
12:55:41.889 [DEBUG] [TestEventLogger]          at kafka.log.Log.maybeHandleIOException(Log.scala:1963)
12:55:41.889 [DEBUG] [TestEventLogger]          at kafka.log.Log.read(Log.scala:1147)
12:55:41.889 [DEBUG] [TestEventLogger]          at kafka.cluster.Partition.$anonfun$readRecords$1(Partition.scala:792)
12:55:41.889 [DEBUG] [TestEventLogger]          at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
12:55:41.889 [DEBUG] [TestEventLogger]          at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.cluster.Partition.readRecords(Partition.scala:768)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.ReplicaManager.read$1(ReplicaManager.scala:911)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.ReplicaManager.$anonfun$readFromLocalLog$4(ReplicaManager.scala:976)
12:55:41.890 [DEBUG] [TestEventLogger]          at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
12:55:41.890 [DEBUG] [TestEventLogger]          at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
12:55:41.890 [DEBUG] [TestEventLogger]          at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:975)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:825)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:830)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:715)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.KafkaApis.handle(KafkaApis.scala:107)
12:55:41.890 [DEBUG] [TestEventLogger]          at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
12:55:41.890 [DEBUG] [TestEventLogger]          at java.base/java.lang.Thread.run(Thread.java:834)
{noformat}

A client sending a negative max bytes value should not surface as a broker-side error; it should be reported back to the client as a client error. I suggest adding validation to the broker that checks for this kind of input and returns {{INVALID_REQUEST}} instead.
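
For illustration, here is a minimal sketch of the kind of guard I have in mind, not the actual patch. It is written standalone; the real check would live in the fetch path (e.g. {{KafkaApis.handleFetchRequest}}, visible in the stack trace above) and the function name is hypothetical:

{code:scala}
import org.apache.kafka.common.protocol.Errors

// Both the response-level max bytes and every per-partition max bytes must be
// non-negative; otherwise reject the fetch up front with INVALID_REQUEST
// instead of letting the negative size reach LogSegment.read, where it blows
// up as an IllegalArgumentException and is logged as a broker ERROR.
def validateFetchMaxBytes(requestMaxBytes: Int, partitionMaxBytes: Iterable[Int]): Errors =
  if (requestMaxBytes < 0 || partitionMaxBytes.exists(_ < 0)) Errors.INVALID_REQUEST
  else Errors.NONE
{code}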



> ReplicaManager fetch fails on leader due to long/integer overflow
> -----------------------------------------------------------------
>
>                 Key: KAFKA-7656
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7656
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.0.1
>         Environment: Linux 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 
> EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Patrick Haas
>            Priority: Major
>
> (Note: running 2.0.1-cp1 from the Confluent distribution)
> {noformat}
> [2018-11-19 21:13:13,687] ERROR [ReplicaManager broker=103] Error processing fetch operation on partition __consumer_offsets-20, offset 0 (kafka.server.ReplicaManager)
> java.lang.IllegalArgumentException: Invalid max size -2147483648 for log read from segment FileRecords(file= /prod/kafka/data/kafka-logs/__consumer_offsets-20/00000000000000000000.log, start=0, end=2147483647)
>  at kafka.log.LogSegment.read(LogSegment.scala:274)
>  at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159)
>  at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:1842)
>  at kafka.log.Log.read(Log.scala:1114)
>  at kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912)
>  at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974)
>  at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973)
>  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973)
>  at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802)
>  at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815)
>  at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:685)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:114)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}


