> On July 21, 2015, 6:57 a.m., Ewen Cheslack-Postava wrote:
> > core/src/main/scala/kafka/admin/TopicCommand.scala, line 90
> > <https://reviews.apache.org/r/36578/diff/2/?file=1015158#file1015158line90>
> >
> >     This format call isn't working because it's being called on the second 
> > string due to the split across lines and +. Needs parens added or switched 
> > back to a single string.

Sorry, probably due to last-minute IntelliJ 'auto-formatting'. Fixed.
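
For anyone following along, the precedence issue looks roughly like this (the
string contents and variable names are only illustrative, not the actual patch):

    // Scala sketch of the precedence problem, with made-up message text.
    val topic = "my-topic"

    // Broken: `.format(topic)` binds only to the second string literal, so the
    // "%s" in the first literal is printed verbatim (the extra arg is ignored).
    val broken = "WARNING for topic %s: " +
      "remember to update broker and consumer fetch sizes too.".format(topic)

    // Fixed: parenthesize the concatenation (or use a single literal) so
    // `.format` sees the whole message.
    val fixed = ("WARNING for topic %s: " +
      "remember to update broker and consumer fetch sizes too.").format(topic)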


> On July 21, 2015, 6:57 a.m., Ewen Cheslack-Postava wrote:
> > core/src/main/scala/kafka/server/AbstractFetcherThread.scala, line 166
> > <https://reviews.apache.org/r/36578/diff/2/?file=1015159#file1015159line166>
> >
> >     My quick test indicates this doesn't work -- I don't think this is the 
> > error code that would be returned anyway, since I don't think max message 
> > size is checked for fetch requests; fetch.message.max.bytes for consumers 
> > and replica.fetch.max.bytes for brokers both limit the aggregate data size. 
> > The problem occurs when the total aggregate size permitted per request is 
> > smaller than a single message. I think in that case we're just returning 0 
> > messages, but not an error code. After enough time the result is that the 
> > ISR shrinks (which is what my test showed in the broker logs).
> >     
> >     I think to properly log this we might need to log it on the leader node 
> > not on the fetcher -- probably somewhere in ReplicaManager. If you're not 
> > familiar with this code, you'll want to start in KafkaApis.scala in 
> > handleOffsetRequest. However, I'm not sure what this means about issuing a 
> > useful warning to consumers since they wouldn't have easy access to broker 
> > logs. On the other hand, a stalled consumer is less of a problem than the 
> > ISR being forced down to a single replica.
> >     
> >     For reference, I tested this with a simple config with two brokers, 
> > adjusting message.max.bytes and replica.fetch.max.bytes. Then I created a 
> > topic with replication factor 2 and used console producer to send data of 
> > different sizes to test the output.

Thanks, Ewen! I removed the call, then, and will spend some quality time with 
KafkaApis and ReplicaManager. But, as you said, since this wouldn't give 
consumers a useful warning message anyway, is it worth digging into it in this 
ticket? wdyt?
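
FWIW, the TopicCommand-side warning I have in mind looks roughly like this
(the constant names, default values and message text below are only
illustrative, not taken from the actual code base):

    // Rough sketch of a warning when a topic-level max.message.bytes override
    // exceeds the (assumed) default fetch sizes; defaults here are placeholders.
    val DefaultReplicaFetchMaxBytes = 1024 * 1024  // broker: replica.fetch.max.bytes
    val DefaultFetchMessageMaxBytes = 1024 * 1024  // consumer: fetch.message.max.bytes

    def maybeWarnOnMaxMessageBytes(configs: java.util.Properties): Unit = {
      Option(configs.getProperty("max.message.bytes")).map(_.toInt).foreach { maxBytes =>
        if (maxBytes > DefaultReplicaFetchMaxBytes || maxBytes > DefaultFetchMessageMaxBytes)
          println(("WARNING: max.message.bytes (%d) is larger than the default fetch " +
            "sizes; make sure replica.fetch.max.bytes (brokers) and " +
            "fetch.message.max.bytes (consumers) are at least as large, or replication " +
            "and consumption may stall.").format(maxBytes))
      }
    }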


- Edward


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review92374
-----------------------------------------------------------


On July 21, 2015, 4:21 p.m., Edward Ribeiro wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36578/
> -----------------------------------------------------------
> 
> (Updated July 21, 2015, 4:21 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2338
>     https://issues.apache.org/jira/browse/KAFKA-2338
> 
> 
> Repository: kafka
> 
> 
> Description
> -------
> 
> KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
> update broker and consumer settings
> 
> 
> Diffs
> -----
> 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> 4e28bf1c08414e8e96e6ca639b927d51bfeb4616 
> 
> Diff: https://reviews.apache.org/r/36578/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Edward Ribeiro
> 
>
