> On March 5, 2014, 12:24 a.m., Jun Rao wrote:
> > core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala, lines 307-308
> > <https://reviews.apache.org/r/18740/diff/3/?file=510329#file510329line307>
> >
> >     Could we compare the content of unique message set?

This would be a bit tricky, since toSet does not preserve ordering; for a 
one-to-one comparison we would need to write some dedup code that preserves 
ordering. On the other hand, since each message sent by the producer is unique, 
it is unlikely that the counts would match while the contents differ. Also, the 
ordering of the produced messages is already verified in the previous tests. So 
I think it is OK to just verify the counts.
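For what it's worth, if a one-to-one comparison were ever wanted, an order-preserving dedup is a one-liner with java.util.LinkedHashSet, which drops duplicates while keeping first-occurrence order. This is just a minimal sketch of the idea, not code from the patch; the names are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

class OrderPreservingDedup {
    // Deduplicate while keeping first-occurrence order, so the result
    // can be compared element-by-element against the sent messages.
    static List<String> dedup(List<String> messages) {
        return new ArrayList<>(new LinkedHashSet<>(messages));
    }

    public static void main(String[] args) {
        List<String> consumed = Arrays.asList("m1", "m2", "m1", "m3", "m2");
        System.out.println(dedup(consumed)); // prints [m1, m2, m3]
    }
}
```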


> On March 5, 2014, 12:24 a.m., Jun Rao wrote:
> > core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala, lines 323-324
> > <https://reviews.apache.org/r/18740/diff/3/?file=510329#file510329line323>
> >
> >     Actually, why don't we need to close producer during shutdown?

Originally, with isInterruptible of ShutdownableThread set to true, the 
producer may still be waiting for data while we try to shut down the scheduler; 
later, when I changed isInterruptible to false, I thought we could close the 
producer on shutdown, but ShutdownableThread does not expose such an API. If we 
want to enforce closing on shutdown, I could extend it with another onShutdown 
function, for example. But I think this is not really necessary, so I will 
probably just remove the comments.
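To illustrate the idea of an onShutdown hook, here is a minimal sketch of a worker thread whose shutdown() waits for the loop to exit and then invokes an overridable hook where a subclass could, e.g., close a producer. The names are hypothetical; this is not Kafka's actual ShutdownableThread API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

abstract class HookedShutdownableThread extends Thread {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final CountDownLatch stopped = new CountDownLatch(1);

    // The repeated unit of work, as in a ShutdownableThread-style loop.
    public abstract void doWork();

    // Overridable hook: release resources (e.g. close a producer) on shutdown.
    protected void onShutdown() {}

    @Override
    public void run() {
        try {
            while (running.get()) {
                doWork();
            }
        } finally {
            onShutdown();          // always runs once the loop exits
            stopped.countDown();
        }
    }

    // Stop the loop and block until onShutdown() has completed.
    public void shutdown() throws InterruptedException {
        running.set(false);
        stopped.await();
    }
}
```

A caller would subclass it, override onShutdown() to close the producer, and call shutdown() from the test's teardown.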


- Guozhang


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18740/#review36194
-----------------------------------------------------------


On March 4, 2014, 11:15 p.m., Guozhang Wang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/18740/
> -----------------------------------------------------------
> 
> (Updated March 4, 2014, 11:15 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1286
>     https://issues.apache.org/jira/browse/KAFKA-1286
> 
> 
> Repository: kafka
> 
> 
> Description
> -------
> 
> Incorporated Jay's comments.
> 
> 1. Fix the metadata-in-progress flag issue.
> 
> 2. Add backoff config for retry.
> 
> 3. Some logging level changes.
> 
> 4. Fix a minor NPE bug in delete-topic-manager.
> 
> 5. Rolling bounce test case.
> 
> 
> Diffs
> -----
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java bedd2a989a62b1ed53f006e7e2f8bd1bdc5dfa5b 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java d8e35e7d0e4cd27aad9a8d4bf14bc97458da9417 
>   clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java 699027447145837495fb56b41ad9ee5e9cb60240 
>   clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java c7fbf3c06858a6016878667b68ee29b22b604f7d 
>   clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 794262394133d8e10e52971dccc0082d3aa75047 
>   clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java 21a2592ea7c7f5d4831669196cf4e2d2b4e9bcf5 
>   clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 91b9d64aca2255ed3ae1283b2703c7a0f8757a55 
>   clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 73b700602006b881e4be75bb4b6d541e64f291e5 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java PRE-CREATION 
>   core/src/main/scala/kafka/controller/KafkaController.scala b58cdcd16ffb62ba5329b8b2776f2bd18440b3a0 
>   core/src/main/scala/kafka/log/LogManager.scala 10062af1e02af5e4238f408ba5b9f98cc226244f 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 4b7c544594dba734c8875fce2a289f81d67ba291 
> 
> Diff: https://reviews.apache.org/r/18740/diff/
> 
> 
> Testing
> -------
> 
> integration tests
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>
