Re: [DISCUSS] Apache Kafka 2.4.0 release

2019-10-08 Thread Bruno Cadonna
Hi Manikumar,

It is technically true that KIP-471 is not complete: the only missing
pieces are two metrics that I could not add due to the RocksDB version
currently used in Streams. Adding those two metrics once the RocksDB
version has been upgraded will be a minor effort. So, I would consider
KIP-471 complete, with those two metrics blocked.

Best,
Bruno

On Mon, Oct 7, 2019 at 8:44 PM Manikumar  wrote:
>
> Hi all,
>
> I have moved a couple of accepted KIPs without a PR to the next release. We
> still have quite a few KIPs with PRs that are being reviewed, but haven't
> yet been merged. I have left all of these in, assuming these PRs are ready
> and not risky to merge. Please update your assigned KIPs/JIRAs if they are
> not ready and you know they cannot make it into 2.4.0.
>
> Please ensure that all KIPs for 2.4.0 have been merged by Oct 16th. Any
> remaining KIPs
> will be moved to the next release.
>
> The KIPs still in progress are:
>
> - KIP-517: Add consumer metrics to observe user poll behavior
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior
> >
>
> - KIP-511: Collect and Expose Client's Name and Version in the Brokers
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> >
>
> - KIP-474: To deprecate WindowStore#put(key, value)
>  
>
> - KIP-471: Expose RocksDB Metrics in Kafka Streams
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471%3A+Expose+RocksDB+Metrics+in+Kafka+Streams
> >
>
> - KIP-466: Add support for List serialization and deserialization
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List
> +serialization+and+deserialization>
>
> - KIP-455: Create an Administrative API for Replica Reassignment
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> >
>
> - KIP-446: Add changelog topic configuration to KTable suppress
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
> >
>
> - KIP-444: Augment metrics for Kafka Streams
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams
> >
>
> - KIP-434: Add Replica Fetcher and Log Cleaner Count Metrics
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-434%3A+Add+Replica+Fetcher+and+Log+Cleaner+Count+Metrics
> >
>
> - KIP-401: TransformerSupplier/ProcessorSupplier StateStore connecting
>  
>
> - KIP-396: Add Reset/List Offsets Operations to AdminClient
>    >
>
> - KIP-221: Enhance DSL with Connecting Topic Creation and Repartition Hint
>  <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+DSL+with+Connecting+Topic+Creation+and+Repartition+Hint
> >
>
>
> Thanks,
> Manikumar
>
> On Thu, Oct 3, 2019 at 8:54 AM Manikumar  wrote:
>
> > Hi all,
> >
> > Let's extend the feature freeze deadline to Friday to merge the current
> > work in-progress PRs.
> > Please ensure that all major features have been merged and that any minor
> > features have PRs by EOD Friday.
> > We will be cutting the 2.4 release branch early on Monday morning.
> >
> > Thanks,
> > Manikumar
> >
> > On Thu, Sep 26, 2019 at 7:30 PM Manikumar 
> > wrote:
> >
> >> Hi Gwen,
> >>
> >> As suggested by Ismael, if required, we can extend the feature freeze to
> >> end of the week.
> >>
> >> Thanks,
> >>
> >> On Thu, Sep 26, 2019 at 7:32 AM Ismael Juma  wrote:
> >>
> >>> Hi Gwen,
> >>>
> >>> Good question.
> >>>
> >>> I think we should stick to the schedule. As usual, some features will
> >>> slip
> >>> to the next release and that's ok, they'll get a bit more time to
> >>> stabilize. In the past, we have allowed 1-2 days over the feature freeze
> >>> date (i.e. end of the week instead of Wednesday) for features that
> >>> narrowly
> >>> miss the cut. I think that's probably OK to continue.
> >>>
> >>> Ismael
> >>>
> >>> On Wed, Sep 25, 2019, 6:01 PM Gwen Shapira  wrote:
> >>>
> >>> > Hi Mani and team,
> >>> >
> >>> > The feature freeze deadline is right next to Kafka Summit, making it a
> >>> > bit stressful for the committers and community members with PRs in
> >>> > flight. What do you think about delaying the feature freeze and code
> >>> > freeze by a week, to allow our community to enjoy Kafka Summit?
> >>> >
> >>> > Gwen
> >>> >
> >>> > On Wed, Sep 25, 2019 at 1:58 PM Randall Hauch 
> >>> wrote:
> >>> > >
> >>> > > Hi, Manikumar.
> >>> > >
> >>> > > Thanks for acting as release manager. Can we please add the following
> >>> > KIPs
> >>> > > that have all been approved with PRs either merged or in progress:
> >>> > >
> >>> > >
> >>> > >

Build failed in Jenkins: kafka-trunk-jdk8 #3953

2019-10-08 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Modified Exception handling for KIP-470 (#7461)


--
[...truncated 2.68 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

Re: [VOTE] KIP-527: Add VoidSerde to Serdes

2019-10-08 Thread Nikolay Izhikov
Hello, Guozhang.

Following added to the KIP:

> If non-null parameters are passed, then a java.lang.IllegalArgumentException
> will be thrown.
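For illustration, the contract described above can be sketched as a standalone pair of serialize/deserialize functions (this is only a sketch of the documented behavior; the class and method shapes here are illustrative, not the actual Serdes API):

```java
// Standalone sketch of the VoidSerde contract discussed above:
// only null is accepted; any non-null input throws IllegalArgumentException.
public class VoidSerdeSketch {

    static byte[] serialize(String topic, Void data) {
        if (data != null) {
            throw new IllegalArgumentException(
                "Data should be null for a VoidSerializer, but was " + data);
        }
        return null; // a null value maps to null bytes
    }

    static Void deserialize(String topic, byte[] bytes) {
        if (bytes != null) {
            throw new IllegalArgumentException(
                "Bytes should be null for a VoidDeserializer");
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(serialize("t", null) == null);   // prints "true"
        System.out.println(deserialize("t", null) == null); // prints "true"
        try {
            deserialize("t", new byte[] {1});
        } catch (IllegalArgumentException e) {
            System.out.println("rejected non-null"); // prints "rejected non-null"
        }
    }
}
```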

On Mon, 07/10/2019 at 15:51 -0700, Guozhang Wang wrote:
> Hello Nikolay,
> 
> Could you clarify in the wiki doc what would happen if the passed in
> `byte[] bytes` or `Void data` parameters are not null? From the example PR
> we would throw exception here. I think it worth documenting it as part of
> the contract in the wiki page.
> 
> Otherwise, I'm +1.
> 
> 
> Guozhang
> 
> On Mon, Oct 7, 2019 at 3:19 PM Bill Bejeck  wrote:
> 
> > Thanks for the KIP.
> > 
> > +1(binding)
> > 
> > -Bill
> > 
> > On Mon, Oct 7, 2019 at 5:57 PM Matthias J. Sax 
> > wrote:
> > 
> > > +1 (binding)
> > > 
> > > 
> > > -Matthias
> > > 
> > > On 10/6/19 8:02 PM, Nikolay Izhikov wrote:
> > > > Hello,
> > > > 
> > > > Any additional feedback on this?
> > > > Do we need this in Kafka?
> > > > 
> > > > > On Wed, 02/10/2019 at 08:30 +0200, Bruno Cadonna wrote:
> > > > > Hi Nikolay,
> > > > > 
> > > > > Thank you for the KIP!
> > > > > 
> > > > > +1 (non-binding)
> > > > > 
> > > > > Best,
> > > > > Bruno
> > > > > 
> > > > > On Tue, Oct 1, 2019 at 5:57 PM Nikolay Izhikov 
> > > 
> > > wrote:
> > > > > > 
> > > > > > Hello.
> > > > > > 
> > > > > > I would like to start vote for KIP-527: Add VoidSerde to Serdes
> > > > > > 
> > > > > > KIP -
> > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > > > > Discussion thread -
> > 
> > https://lists.apache.org/thread.html/e6f95799898cc5d6e7d44dfd3fc2206117feb384a0a229a1c781ecd4@%3Cdev.kafka.apache.org%3E
> > > > > > 
> > > 
> > > 
> 
> 




[jira] [Created] (KAFKA-8996) Support health check method of TransactionManager and KafkaProducer

2019-10-08 Thread Myeonghyeon Lee (Jira)
Myeonghyeon Lee created KAFKA-8996:
--

 Summary: Support health check method of TransactionManager and 
KafkaProducer
 Key: KAFKA-8996
 URL: https://issues.apache.org/jira/browse/KAFKA-8996
 Project: Kafka
  Issue Type: Wish
  Components: clients
Affects Versions: 2.3.0
Reporter: Myeonghyeon Lee


Request to add Transactional KafkaProducer health check method to KafkaProducer 
and TransactionManager.

 
{code:java}
producer.initTransactions();

try {
    producer.beginTransaction();

    for (int i = 0; i < 100; i++) {
        producer.send(new ProducerRecord<>("my-topic", Integer.toString(i),
                Integer.toString(i)));
    }

    // check that the broker and transaction coordinator are reachable
    // and that this producer has not been fenced as a zombie
    boolean canCommit = producer.canCommitTransaction();

    if (canCommit) {
        producer.commitTransaction();
    } else {
        producer.abortTransaction();
    }
} catch (KafkaException e) {
    producer.abortTransaction();
}
{code}
 

 

I want to check whether a transactional KafkaProducer is in a normal state before 
calling commitTransaction.

If the Kafka broker is in an abnormal state, or if the producer has become a 
zombie, an error will occur when the commit is executed.

If KafkaProducer had a health check method, I could detect problems and take 
other action before committing the Kafka transaction.

A typical use case is "best efforts 1PC" for consistency with heterogeneous 
databases:
{code:java}
1. Start Kafka transaction
2. Start DB transaction
3. Insert record to DB SUCCESS
4. Send message to Kafka SUCCESS
5. Commit DB Transaction
6. Commit Kafka Transaction   // can be failed broker status or producer is 
zombie{code}
I know that "best efforts 1PC" does not guarantee full consistency.

However, if KafkaProducer provided a health check method, I would have a chance 
to check Kafka's status once before a heterogeneous database commit.

Please review.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8997) Make Errors a first class type in the auto-generated protocol

2019-10-08 Thread David Jacot (Jira)
David Jacot created KAFKA-8997:
--

 Summary: Make Errors a first class type in the auto-generated 
protocol
 Key: KAFKA-8997
 URL: https://issues.apache.org/jira/browse/KAFKA-8997
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot
Assignee: David Jacot


Errors are encoded as int16 in the auto-generated protocol, but the Errors enum 
is what is usually used in the broker and the client. Manual conversion from one 
to the other is tedious and could be avoided by supporting the `Errors` enum 
natively.





Build failed in Jenkins: kafka-trunk-jdk11 #865

2019-10-08 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Modified Exception handling for KIP-470 (#7461)


--
[...truncated 5.42 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimes

Re: [DISCUSS] Apache Kafka 2.4.0 release

2019-10-08 Thread Manikumar
Thanks Bruno. We will mark KIP-471 as complete.

On Tue, Oct 8, 2019 at 2:39 PM Bruno Cadonna  wrote:

> Hi Manikumar,
>
> It is technically true that KIP-471 is not complete: the only missing
> pieces are two metrics that I could not add due to the RocksDB version
> currently used in Streams. Adding those two metrics once the RocksDB
> version has been upgraded will be a minor effort. So, I would consider
> KIP-471 complete, with those two metrics blocked.
>
> Best,
> Bruno
>
> On Mon, Oct 7, 2019 at 8:44 PM Manikumar 
> wrote:
> >
> > Hi all,
> >
> > I have moved couple of accepted KIPs without a PR to the next release.
> We
> > still have quite a few KIPs
> > with PRs that are being reviewed, but haven't yet been merged. I have
> left
> > all of these in assuming these
> > PRs are ready and not risky to merge.  Please update your assigned
> > KIPs/JIRAs, if they are not ready and
> >  if you know they cannot make it to 2.4.0.
> >
> > Please ensure that all KIPs for 2.4.0 have been merged by Oct 16th. Any
> > remaining KIPs
> > will be moved to the next release.
> >
> > The KIPs still in progress are:
> >
> > - KIP-517: Add consumer metrics to observe user poll behavior
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior
> > >
> >
> > - KIP-511: Collect and Expose Client's Name and Version in the Brokers
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> > >
> >
> > - KIP-474: To deprecate WindowStore#put(key, value)
> >  
> >
> > - KIP-471: Expose RocksDB Metrics in Kafka Streams
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471%3A+Expose+RocksDB+Metrics+in+Kafka+Streams
> > >
> >
> > - KIP-466: Add support for List serialization and deserialization
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List
> > +serialization+and+deserialization>
> >
> > - KIP-455: Create an Administrative API for Replica Reassignment
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> > >
> >
> > - KIP-446: Add changelog topic configuration to KTable suppress
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress
> > >
> >
> > - KIP-444: Augment metrics for Kafka Streams
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams
> > >
> >
> > - KIP-434: Add Replica Fetcher and Log Cleaner Count Metrics
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-434%3A+Add+Replica+Fetcher+and+Log+Cleaner+Count+Metrics
> > >
> >
> > - KIP-401: TransformerSupplier/ProcessorSupplier StateStore connecting
> >  <
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97553756>
> >
> > - KIP-396: Add Reset/List Offsets Operations to AdminClient
> >   <
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484
> > >
> >
> > - KIP-221: Enhance DSL with Connecting Topic Creation and Repartition
> Hint
> >  <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+DSL+with+Connecting+Topic+Creation+and+Repartition+Hint
> > >
> >
> >
> > Thanks,
> > Manikumar
> >
> > On Thu, Oct 3, 2019 at 8:54 AM Manikumar 
> wrote:
> >
> > > Hi all,
> > >
> > > Let's extend the feature freeze deadline to Friday to merge the current
> > > work in-progress PRs.
> > > Please ensure that all major features have been merged and that any
> minor
> > > features have PRs by EOD Friday.
> > > We will be cutting the 2.4 release branch early on Monday morning.
> > >
> > > Thanks,
> > > Manikumar
> > >
> > > On Thu, Sep 26, 2019 at 7:30 PM Manikumar 
> > > wrote:
> > >
> > >> Hi Gwen,
> > >>
> > >> As suggested by Ismael, If required, we can extend the feature freeze
> to
> > >> end of the week.
> > >>
> > >> Thanks,
> > >>
> > >> On Thu, Sep 26, 2019 at 7:32 AM Ismael Juma 
> wrote:
> > >>
> > >>> Hi Gwen,
> > >>>
> > >>> Good question.
> > >>>
> > >>> I think we should stick to the schedule. As usual, some features will
> > >>> slip
> > >>> to the next release and that's ok, they'll get a bit more time to
> > >>> stabilize. In the past, we have allowed 1-2 days over the feature
> freeze
> > >>> date (i.e. end of the week instead of Wednesday) for features that
> > >>> narrowly
> > >>> miss the cut. I think that's probably OK to continue.
> > >>>
> > >>> Ismael
> > >>>
> > >>> On Wed, Sep 25, 2019, 6:01 PM Gwen Shapira 
> wrote:
> > >>>
> > >>> > Hi Mani and team,
> > >>> >
> > >>> > The feature freeze deadline is right next to Kafka Summit, making
> it a
> > >>> > bit stressful for the committers and community members with PRs in
> > >>> > flight. What do you think about delaying the feature freeze and
> code

[jira] [Resolved] (KAFKA-8983) AdminClient deleteRecords should not fail all partitions unnecessarily

2019-10-08 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8983.

Fix Version/s: 2.4.0
   Resolution: Fixed

> AdminClient deleteRecords should not fail all partitions unnecessarily
> --
>
> Key: KAFKA-8983
> URL: https://issues.apache.org/jira/browse/KAFKA-8983
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.4.0
>
>
> The deleteRecords API in the AdminClient groups records to be sent by the 
> partition leaders. If one of these requests fails, we currently fail all 
> futures, including those tied to requests sent to other leaders. It would be 
> better to fail only those partitions included in the failed request.
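The intended behavior can be sketched in miniature with per-partition futures grouped by leader (a standalone illustration of the behavior described in the ticket, not the actual AdminClient internals; all names here are illustrative):

```java
import java.util.*;
import java.util.concurrent.CompletableFuture;

// Miniature model of the fix: futures are grouped per leader, and a failed
// leader request fails only the futures for partitions in that request.
public class DeleteRecordsSketch {

    static Map<String, CompletableFuture<Void>> deleteRecords(
            Map<String, Integer> partitionToLeader, Set<Integer> failedLeaders) {
        Map<String, CompletableFuture<Void>> futures = new HashMap<>();
        partitionToLeader.keySet().forEach(tp -> futures.put(tp, new CompletableFuture<>()));

        // Group partitions by their leader, mirroring how requests are batched.
        Map<Integer, List<String>> byLeader = new HashMap<>();
        partitionToLeader.forEach((tp, leader) ->
                byLeader.computeIfAbsent(leader, l -> new ArrayList<>()).add(tp));

        // Complete or fail futures per leader request, not globally.
        byLeader.forEach((leader, tps) -> {
            for (String tp : tps) {
                if (failedLeaders.contains(leader)) {
                    futures.get(tp).completeExceptionally(
                            new RuntimeException("request to leader " + leader + " failed"));
                } else {
                    futures.get(tp).complete(null);
                }
            }
        });
        return futures;
    }

    public static void main(String[] args) {
        Map<String, Integer> leaders = Map.of("t-0", 1, "t-1", 2);
        Map<String, CompletableFuture<Void>> fs = deleteRecords(leaders, Set.of(2));
        // Only the partition whose leader request failed is failed:
        System.out.println(fs.get("t-0").isCompletedExceptionally()); // prints "false"
        System.out.println(fs.get("t-1").isCompletedExceptionally()); // prints "true"
    }
}
```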





[jira] [Created] (KAFKA-8998) Release Candidate Version 2.3.1-rc1 breaks backward compatibility in PATCH version

2019-10-08 Thread Juan Pablo Correa (Jira)
Juan Pablo Correa created KAFKA-8998:


 Summary: Release Candidate Version 2.3.1-rc1 breaks backward 
compatibility in PATCH version
 Key: KAFKA-8998
 URL: https://issues.apache.org/jira/browse/KAFKA-8998
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.3.1
Reporter: Juan Pablo Correa


The expected type for the `preparedResponse` method in `MetadataResponse` 
([https://github.com/apache/kafka/blob/2.3.1-rc1/clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java#L466])
 has changed from a `List` to a `Collection` between versions 2.3.0 and 2.3.1. 
This breaks the backward compatibility that one would expect from a PATCH 
version.
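To illustrate why this matters even though such a change is usually source-compatible: a caller compiled against the old signature links against the exact method descriptor, so widening `List` to `Collection` changes the descriptor and already-compiled callers fail at runtime with `NoSuchMethodError`. A standalone sketch of the two signatures (names here are illustrative, not Kafka's):

```java
import java.util.*;

// Standalone illustration of the compatibility concern: the two overloads
// below have different JVM method descriptors, so code compiled against one
// cannot link against a jar that only contains the other.
public class CompatSketch {
    // Old-style signature, taking the narrower List type.
    static String describe(List<String> xs) { return "List:" + xs.size(); }

    // New-style signature: widening to Collection is source-compatible for
    // recompiled callers, but it is a different descriptor at the JVM level.
    static String describe(Collection<String> xs) { return "Collection:" + xs.size(); }

    public static void main(String[] args) {
        List<String> xs = Arrays.asList("a", "b");
        // Overload resolution picks the most specific match at compile time:
        System.out.println(describe(xs)); // prints "List:2"
        System.out.println(describe((Collection<String>) xs)); // prints "Collection:2"
    }
}
```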





Need some clarification of Kafka and MQTT

2019-10-08 Thread Sarvesh Gupta
Hi,

We are building an IoT-driven solution for industries. I am very new to Kafka 
and I need some clarification, so please help me out with the doubts given below.
We have a use case where we want to connect Kafka with MQTT as a source of 
incoming data, but I am a little bit confused about this. I saw lots of blogs 
and videos where people connect Kafka with MQTT using Confluent, Lenses, and 
other third-party libraries. So my question is: can't we directly connect 
Kafka with MQTT without using any third-party dependencies?
I would also like to know what the ways to connect Kafka and MQTT are. If there 
is a way to connect Apache Kafka and MQTT directly, without using Confluent or 
other platforms, please give me some idea about that method. We are also using 
Python for our product, so if there is a Pythonic way to do this, please let me 
know.


Thanks and Regards,
Sarvesh Gupta



[jira] [Created] (KAFKA-8999) org.apache.kafka.common.utils.Utils.delete masks errors encountered during processing

2019-10-08 Thread Stefan Hoffmeister (Jira)
Stefan Hoffmeister created KAFKA-8999:
-

 Summary: org.apache.kafka.common.utils.Utils.delete masks errors 
encountered during processing
 Key: KAFKA-8999
 URL: https://issues.apache.org/jira/browse/KAFKA-8999
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Stefan Hoffmeister


The implementation of org.apache.kafka.common.utils.Utils.delete masks any 
errors it encounters during recursive deletes of a directory.

[https://github.com/apache/kafka/blob/2.4/clients/src/main/java/org/apache/kafka/common/utils/Utils.java#L748]
 is implemented such that, for visitFileFailed and postVisitDirectory on the 
SimpleFileVisitor, any _unknown_ exception information present is simply 
discarded.

At the very least, these exceptions should be logged at TRACE level, to be able 
to determine the root cause of any such failures.

Beyond logging, it might be worthwhile reviewing whether ignoring an exception 
flagged on postVisitDirectory is correct, as that very exception might prevent 
the visited directory from being deleted.

This relates indirectly to debugging KAFKA-6647, where the masking of errors 
here effectively hides why Kafka Streams sometimes throws 
DirectoryNotEmptyException. (There is analysis on KAFKA-6647 of why this may be 
the case; fixing this defect would at least provide sufficient error context 
information.)





[jira] [Created] (KAFKA-9000) Flaky Test KTableKTableForeignKeyJoinIntegrationTest.doLeftJoinFromRightThenDeleteRightEntity

2019-10-08 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-9000:


 Summary: Flaky Test 
KTableKTableForeignKeyJoinIntegrationTest.doLeftJoinFromRightThenDeleteRightEntity
 Key: KAFKA-9000
 URL: https://issues.apache.org/jira/browse/KAFKA-9000
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Bruno Cadonna


{code:java}
java.lang.AssertionError: expected:<[KeyValue(2, value1=1.77,value2=10), 
KeyValue(1, value1=1.33,value2=10), KeyValue(3, value1=3.77,value2=30)]> but 
was:<[KeyValue(3, value1=3.77,value2=null), KeyValue(1, value1=1.33,value2=10), 
KeyValue(3, value1=3.77,value2=30)]>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:120)
at org.junit.Assert.assertEquals(Assert.java:146)
at 
org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest.doLeftJoinFromRightThenDeleteRightEntity(KTableKTableForeignKeyJoinIntegrationTest.java:313)
{code}





[jira] [Created] (KAFKA-9001) Flaky Test KStreamAggregationIntegrationTest#shouldReduceSessionWindows

2019-10-08 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9001:
--

 Summary: Flaky Test 
KStreamAggregationIntegrationTest#shouldReduceSessionWindows
 Key: KAFKA-9001
 URL: https://issues.apache.org/jira/browse/KAFKA-9001
 Project: Kafka
  Issue Type: Bug
  Components: streams, unit tests
Reporter: Sophie Blee-Goldman
 Attachments: log.txt

*Jenkins link:* 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.3/18/testReport/junit/org.apache.kafka.streams.integration/KStreamAggregationIntegrationTest/shouldReduceSessionWindows/]

*Stacktrace*
{code:java}
java.lang.AssertionError: Expected: 
  but: was null
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) 
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) 
at 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest.shouldReduceSessionWindows(KStreamAggregationIntegrationTest.java:663)

{code}
*Standard Error*
{code:java}
Processed a total of 15 messages
Processed a total of 15 messages{code}
 





Build failed in Jenkins: kafka-trunk-jdk11 #866

2019-10-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8983; AdminClient deleteRecords should not fail all partitions


--
[...truncated 2.68 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos

[jira] [Created] (KAFKA-9002) org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenCreated

2019-10-08 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9002:
--

 Summary: 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenCreated
 Key: KAFKA-9002
 URL: https://issues.apache.org/jira/browse/KAFKA-9002
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25603/testReport/junit/org.apache.kafka.streams.integration/RegexSourceIntegrationTest/testRegexMatchesTopicsAWhenCreated/]
{noformat}
Error Message
java.lang.AssertionError: Condition not met within timeout 15000. Stream tasks not updated

Stacktrace
java.lang.AssertionError: Condition not met within timeout 15000. Stream tasks not updated
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:336)
at 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenCreated(RegexSourceIntegrationTest.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #3954

2019-10-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8983; AdminClient deleteRecords should not fail all partitions


--
[...truncated 2.68 MB...]
org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNa

[jira] [Created] (KAFKA-9003) Flaky Test RepartitionOptimizingIntegrationTest#shouldSendCorrectRecords_OPTIMIZED

2019-10-08 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9003:
--

 Summary: Flaky Test 
RepartitionOptimizingIntegrationTest#shouldSendCorrectRecords_OPTIMIZED
 Key: KAFKA-9003
 URL: https://issues.apache.org/jira/browse/KAFKA-9003
 Project: Kafka
  Issue Type: Bug
  Components: streams, unit tests
Reporter: Sophie Blee-Goldman


Trace no longer available (but willing to bet it was a timeout).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7190) Under low traffic conditions purging repartition topics cause WARN statements about UNKNOWN_PRODUCER_ID

2019-10-08 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7190.

Fix Version/s: 2.4.0
 Assignee: Bob Barrett  (was: Guozhang Wang)
   Resolution: Fixed

> Under low traffic conditions purging repartition topics cause WARN statements 
> about  UNKNOWN_PRODUCER_ID 
> -
>
> Key: KAFKA-7190
> URL: https://issues.apache.org/jira/browse/KAFKA-7190
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, streams
>Affects Versions: 1.1.0, 1.1.1
>Reporter: Bill Bejeck
>Assignee: Bob Barrett
>Priority: Major
> Fix For: 2.4.0
>
>
> When a streams application has little traffic, it is possible that 
> consumer purging would delete
> even the last message sent by a producer (i.e., all the messages sent by
> this producer have been consumed and committed), and as a result, the broker
> would delete that producer's ID. The next time this producer tries to
> send, it will get this UNKNOWN_PRODUCER_ID error code, but in this case
> the error is retriable: the producer would just get a new producer ID,
> retry, and this time it will succeed. 
>  
> Possible fixes could be on the broker side, i.e., delaying the deletion of 
> the producer IDs for a more extended period, or on the streams side, developing 
> a more conservative approach to deleting offsets from repartition topics.
>  
>  
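The broker-side retention at play in this issue is governed by the broker's transactional ID expiration setting; a hedged sketch of the relevant entry in server.properties (the value shown is the documented default of 7 days, and the comment reflects this editor's reading of the issue, not official documentation):

```properties
# Broker-side (server.properties): how long transactional IDs (and, after
# the KAFKA-7190 fix, producer state) are retained before the broker
# forgets them. The UNKNOWN_PRODUCER_ID warnings described above occur
# when a producer's ID is dropped between its sends.
transactional.id.expiration.ms=604800000
```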



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-527: Add VoidSerde to Serdes

2019-10-08 Thread Guozhang Wang
Thanks, I'm +1 (binding).

On Tue, Oct 8, 2019 at 2:33 AM Nikolay Izhikov  wrote:

> Hello, Guozhang.
>
> Following added to the KIP:
>
> > If non-null parameters are passed, then a java.lang.IllegalArgumentException
> will be thrown.
>
> On Mon, 07/10/2019 at 15:51 -0700, Guozhang Wang wrote:
> > Hello Nikolay,
> >
> > Could you clarify in the wiki doc what would happen if the passed in
> > `byte[] bytes` or `Void data` parameters are not null? From the example
> PR
> > we would throw exception here. I think it worth documenting it as part of
> > the contract in the wiki page.
> >
> > Otherwise, I'm +1.
> >
> >
> > Guozhang
> >
> > On Mon, Oct 7, 2019 at 3:19 PM Bill Bejeck  wrote:
> >
> > > Thanks for the KIP.
> > >
> > > +1(binding)
> > >
> > > -Bill
> > >
> > > On Mon, Oct 7, 2019 at 5:57 PM Matthias J. Sax 
> > > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 10/6/19 8:02 PM, Nikolay Izhikov wrote:
> > > > > Hello,
> > > > >
> > > > > Any additional feedback on this?
> > > > > Do we need this in Kafka?
> > > > >
> > > > > On Wed, 02/10/2019 at 08:30 +0200, Bruno Cadonna wrote:
> > > > > > Hi Nikolay,
> > > > > >
> > > > > > Thank you for the KIP!
> > > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > > On Tue, Oct 1, 2019 at 5:57 PM Nikolay Izhikov <
> nizhi...@apache.org>
> > > >
> > > > wrote:
> > > > > > >
> > > > > > > Hello.
> > > > > > >
> > > > > > > I would like to start vote for KIP-527: Add VoidSerde to Serdes
> > > > > > >
> > > > > > > KIP -
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > > > > > Discussion thread -
> > >
> > >
> https://lists.apache.org/thread.html/e6f95799898cc5d6e7d44dfd3fc2206117feb384a0a229a1c781ecd4@%3Cdev.kafka.apache.org%3E
> > > > > > >
> > > >
> > > >
> >
> >
>


-- 
-- Guozhang
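The null-only contract being voted on above can be sketched in plain Java. This is a hedged illustration of the semantics only; the class and method names below are illustrative and this is not the actual Kafka implementation (which implements the Serializer/Deserializer interfaces):

```java
// Illustrative sketch of the KIP-527 VoidSerde contract: the only valid
// Void value is null, and the only valid serialized form is null, so any
// non-null input on either side is rejected with IllegalArgumentException.
public final class VoidSerdeSketch {

    // Serializer side: a Void value can only ever be null.
    static byte[] serialize(Void data) {
        if (data != null) {
            throw new IllegalArgumentException("Data should be null for a VoidSerde");
        }
        return null;
    }

    // Deserializer side: reject any non-null byte payload.
    static Void deserialize(byte[] bytes) {
        if (bytes != null) {
            throw new IllegalArgumentException("Payload should be null for a VoidSerde");
        }
        return null;
    }
}
```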


Build failed in Jenkins: kafka-trunk-jdk11 #867

2019-10-08 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7190; Retain producer state until transactionalIdExpiration time


--
[...truncated 2.68 MB...]
org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.s

Build failed in Jenkins: kafka-2.4-jdk8 #4

2019-10-08 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8983; AdminClient deleteRecords should not fail all partitions

[jason] KAFKA-7190; Retain producer state until transactionalIdExpiration time


--
[...truncated 2.67 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue

[jira] [Created] (KAFKA-9004) Fetch from follower unintentionally enabled for old consumers

2019-10-08 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9004:
--

 Summary: Fetch from follower unintentionally enabled for old 
consumers
 Key: KAFKA-9004
 URL: https://issues.apache.org/jira/browse/KAFKA-9004
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: David Arthur


With KIP-392, we allow consumers to fetch from followers. This capability is 
enabled when a replica selector has been provided in the configuration. When 
not in use, the intent is to preserve current behavior of fetching only from 
leader. The leader epoch is the mechanism that keeps us honest. When there is a 
leader change, the epoch gets bumped, consumer fetches fail due to the fenced 
epoch, and we find the new leader.

However, for old consumers, there is no similar protection. The leader epoch 
was not available to clients until recently. If there is a preferred leader 
election (for example), the old consumer will happily continue fetching from 
the demoted leader until a periodic metadata fetch causes us to discover the 
new leader. This does not create any problems from a correctness 
perspective (fetches are still bound by the high watermark), but it is 
unexpected and may cause surprising performance characteristics.

To fix this (assuming we think it should be fixed), we could be stricter about 
fetches and require the leader check if the fetch request has no epoch. Or 
maybe just require the leader check for older versions of the fetch request.
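For reference, fetch-from-follower (KIP-392) is opted into broker-side by configuring a replica selector; a minimal sketch using the rack-aware selector that shipped with the KIP (the comment states the intended behavior described above, not a guarantee):

```properties
# server.properties: enables fetch-from-follower (KIP-392).
# When this is left unset, the intended behavior is leader-only fetching --
# the bug described in this issue is that old consumers can end up
# fetching from a demoted leader regardless.
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
```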


--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8944) Compiler Warning

2019-10-08 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8944.

Resolution: Fixed

> Compiler Warning
> 
>
> Key: KAFKA-8944
> URL: https://issues.apache.org/jira/browse/KAFKA-8944
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: huxihx
>Priority: Minor
>  Labels: scala
>
> When building Kafka Streams, we get the following compiler warning that we 
> should fix:
> {code:java}
> scala/src/main/scala/org/apache/kafka/streams/scala/kstream/KTable.scala:24: 
> imported `Suppressed' is permanently hidden by definition of object 
> Suppressed in package kstream import 
> org.apache.kafka.streams.kstream.{Suppressed, 
> ValueTransformerWithKeySupplier, KTable => KTableJ}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-448: Add State Stores Unit Test Support to Kafka Streams Test Utils

2019-10-08 Thread Yishun Guan
Hi,

I am currently switching between jobs - so slightly busier than usual.
But I will definitely update this KIP during the weekend.

Thanks,
Yishun

On Mon, Oct 7, 2019 at 3:18 PM Matthias J. Sax  wrote:
>
> What is the status of this KIP? Any updates?
>
>
> -Matthias
>
>
> On 9/10/19 8:08 PM, Sophie Blee-Goldman wrote:
> > Just took a look at the current KIP, and I think you should actually be
> > fine if you're just mocking the stores.
> > The issue I brought up isn't necessarily blocking this KIP, but it is
> > related -- just wanted to bring it up and
> > see if there's any overlap, or if it's better to address separately.
> >
> > The problem isn't actually with in-memory stores (as opposed to
> > persistent), I suspect
> > people just happen to have exclusively hit/reported this issue with
> > in-memory stores since
> > a lightweight in-memory store is more attractive for unit tests than a
> > bunch of RocksDB instances
> >
> > The current problem technically only affects window/session stores, but the
> > workaround for KV stores
> > is not necessarily stable or supported. The issue is that to create a
> > store, you must use the store Supplier/Builder
> > provided in the public API (e.g. Stores#windowStoreBuilder), which requires
> > `init` to be called before using the store.
> > `init` takes a ProcessorContext as a parameter, and for KV stores you can
> > just pass in a MockProcessorContext.
> > Unfortunately, window/session stores internally cast the ProcessorContext
> > to an InternalProcessorContext in order
> > to set up some metrics, so you end up with a ClassCastException and no way
> > to use window/session stores in your test.
> >
> > So if you're literally wrapping any of these stores (eg
> > InMemoryWindowStore, or RocksDBSessionStore) then I think
> > you actually would run into this.
> >
> > Anyways, the current state of things is that we don't really support using
> > state stores in unit testing at all -- you can't
> > record the number of expected put/get calls, and you can't use an actual
> > store to, well, store things. We definitely
> > need both of these things to really round out our unit tests, but we don't
> > need to solve both of them in one KIP.
> > Probably best to avoid the issue in this KIP if possible :)
> >
> > On Thu, Sep 5, 2019 at 11:23 AM Yishun Guan  wrote:
> >
> >> Thanks Sophie!
> >>
> >> I took a look at the issue and the mailing thread. So in other words,
> >> people are having trouble writing unit tests with in-memory stores
> >> (which is a very common practice due to the lack of a better
> >> alternative), so we want to provide a better solution for testing, one
> >> that hopefully works well with the current MockProcessorContext. But
> >> how can the mock stores better address the issues we currently face
> >> with the in-memory stores? Should I think more about how the mock
> >> stores will interact with MockProcessorContext? The design I present
> >> now is just a wrapper on a store. Do you think we need to address that
> >> before we go further? Instead of a wrapper, should we think about
> >> building a more comprehensive mock store?
> >>
> >> Thanks,
> >> Yishun
> >>
> >> On Thu, Aug 29, 2019 at 12:18 AM Sophie Blee-Goldman
> >>  wrote:
> >>>
> >>> Hey Yishun! Glad to see this is in the works :)
> >>>
> >>> Within the past month or so, needing state stores for unit tests has been
> >>> brought up multiple times. Unfortunately, before now some people had to
> >>> rely on internal APIs to get a store for their tests, which is unsafe as
> >>> they can (and in this case
> >>> <
> >> https://mail-archives.apache.org/mod_mbox/kafka-users/201907.mbox/%3cCAM0Vdef0h3p4gB=r3s=vvgssqqzqa4oxxkpl5cnpaxn146p...@mail.gmail.com%3e
> >>> ,
> >>> did) change. While there is an unstable workaround for KV stores, there
> >> is
> >>> unfortunately no good way to get a window or session store for your
> >> tests. This
> >>> ticket  explains that
> >>> particular issue, plus some ways to resolve it that could get kind of
> >> messy.
> >>>
> >>> I think that ticket would likely be subsumed by your KIP (and much
> >>> cleaner), but I just wanted to point to some use cases and make sure we
> >>> have them covered within this KIP. We definitely have a gap here and I
> >>> think it's pretty clear many users would benefit from state store support
> >>> in unit tests!
> >>>
> >>> Cheers,
> >>> Sophie
> >>>
> >>> On Tue, Aug 27, 2019 at 1:11 PM Yishun Guan  wrote:
> >>>
>  Hi All,
> 
>  I have finally worked on this KIP again and want to discuss with you
>  all before this KIP goes dormant.
> 
>  Recap: https://issues.apache.org/jira/browse/KAFKA-6460
> 
> 
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-448%3A+Add+State+Stores+Unit+Test+Support+to+Kafka+Streams+Test+Utils
> 
>  I have updated my KIP.
>  1. Provided an example of how the test will look.
>  2. Allo

Re: Need some clarification of Kafka and MQTT

2019-10-08 Thread Svante Karlsson
No, you need something that can speak MQTT and send that data to Kafka
(and possibly the other way around). There are many such alternatives, but
Kafka itself knows nothing about MQTT.
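Since you asked for a pythonic way: a minimal sketch of such a bridge, assuming the third-party paho-mqtt and kafka-python packages and hypothetical broker addresses (adapt hosts, topics, and error handling to your setup):

```python
def mqtt_to_kafka_topic(mqtt_topic: str) -> str:
    """Map an MQTT topic path to a legal Kafka topic name.

    Kafka topic names may not contain '/', so replace the MQTT
    level separator with '.'.
    """
    return mqtt_topic.replace("/", ".")


def run_bridge(mqtt_host: str, kafka_bootstrap: str, subscription: str) -> None:
    # Imports are local so the pure topic-mapping helper above can be
    # used without these third-party dependencies installed.
    import paho.mqtt.client as mqtt
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers=kafka_bootstrap)

    def on_message(client, userdata, msg):
        # Forward each MQTT payload to the corresponding Kafka topic.
        producer.send(mqtt_to_kafka_topic(msg.topic), msg.payload)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(mqtt_host)
    client.subscribe(subscription)
    client.loop_forever()


if __name__ == "__main__":
    # Hypothetical local brokers and a wildcard MQTT subscription.
    run_bridge("localhost", "localhost:9092", "sensors/#")
```

The same idea underlies the Confluent/Lenses MQTT connectors; writing it yourself just means you own reconnection, batching, and delivery guarantees.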

Den tis 8 okt. 2019 kl 18:01 skrev Sarvesh Gupta :

> Hi,
>
> We are building an IoT-driven solution for industries. I am very new to
> Kafka and need some clarification, so please help me with the doubts below.
> We have a use case where we want to connect Kafka with MQTT as a source of
> incoming data, but I am a little confused about this. I have seen lots of
> blogs and videos where people connect Kafka with MQTT using Confluent,
> Lenses, and other third-party libraries. So my question is: can't we
> connect Kafka with MQTT directly, without using any third-party dependencies?
> I would also like to know what the ways to connect Kafka and MQTT are. If
> there is a way to connect Apache Kafka and MQTT directly, without using
> Confluent or other platforms, please give me some idea about that method.
> Also, we are using Python for our product, so if there is a pythonic way
> to do this, please let me know.
>
>
> Thanks and Regards,
> Sarvesh Gupta
>
>