Re: [VOTE] KIP-341: Update Sticky Assignor's User Data Protocol

2018-08-06 Thread Rajini Sivaram
Hi Vahid,

Thanks for the KIP.

+1 (binding)

Regards,

Rajini

On Fri, Aug 3, 2018 at 7:27 PM, Dong Lin  wrote:

> Thanks Vahid!
>
> +1
>
> On Thu, Aug 2, 2018 at 1:27 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com
> > wrote:
>
> > Hi everyone,
> >
> > I believe the feedback on this KIP has been addressed so far. So I'd like
> > to start a vote.
> > The KIP:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 341%3A+Update+Sticky+Assignor%27s+User+Data+Protocol
> > Discussion thread:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg89733.html
> >
> > Thanks!
> > --Vahid
> >
> >
>


Re: [VOTE] KIP-342 - Add support for custom SASL extensions in OAuthBearer authentication

2018-08-06 Thread Stanislav Kozlovski
KIP-342 has passed.

Here is the vote break down:

Binding:
   - Jason Gustafson
   - Jun Rao
   - Rajini Sivaram

Non-Binding:
   - Ron Dagostino

Thanks to all of those who voted and provided feedback!

On Fri, Aug 3, 2018 at 11:50 PM Jason Gustafson  wrote:

> +1 Thanks Stanislav!
>
> On Fri, Aug 3, 2018 at 2:13 AM, Stanislav Kozlovski <
> stanis...@confluent.io>
> wrote:
>
> > Thank you, Jun. Noted in the KIP.
> >
> > On Fri, Aug 3, 2018 at 2:28 AM Jun Rao  wrote:
> >
> > > Hi, Stanislav,
> > >
> > > Thanks for the KIP. +1
> > >
> > > Just one minor comment. Since the JWT token supports customizable claim
> > > fields, it would be useful to clarify when to use the SASL extension vs
> > the
> > > customized fields in JWT.
> > >
> > > Jun
> > >
> > > On Wed, Jul 25, 2018 at 10:03 AM, Stanislav Kozlovski <
> > > stanis...@confluent.io> wrote:
> > >
> > > > Hey everybody,
> > > >
> > > > I'd like to start a vote thread for KIP-342 Add support for custom
> SASL
> > > > extensions in OAuthBearer authentication
> > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-342%3A+Add+support+for+Custom+SASL+extensions+in+OAuthBearer+authentication>
> > > >
> > > > --
> > > > Best,
> > > > Stanislav
> > > >
> > >
> >
> >
> > --
> > Best,
> > Stanislav
> >
>


-- 
Best,
Stanislav


Build failed in Jenkins: kafka-trunk-jdk10 #370

2018-08-06 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7183: Add a trogdor test that creates many connections to 
brokers

--
[...truncated 1.53 MB...]
kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsEx

Timestamp based log compaction

2018-08-06 Thread Jan Lukavský

Hi,

I have a question about log compaction. LogCleaner's JavaDoc states that:

{quote}

A message with key K and offset O is obsolete if there exists a message 
with key K and offset O' such that O < O'.


{/quote}

That works fine if messages arrive "in order", i.e. with timestamps 
assigned by log-append time (with some possible clock-synchronization 
problems during leader rebalance). But if a topic may contain late 
messages (because the producer explicitly assigns a timestamp to each 
message), then compacting purely by offset might keep a message with an 
older timestamp and discard one with a newer timestamp. Is this 
intentional? Would it be possible to relax this so that log compaction 
prefers the message's timestamp over its offset? What if the behavior 
of the LogCleaner were changed to something like this:


{quote}

A message with key K, timestamp T1 and offset O1 is obsolete if there 
exists a message with key K, timestamp T2 and offset O2 such that 
T1 < T2, or T1 = T2 and O1 < O2.


{/quote}

I'm aware that this would be much more complicated (because of the 
clock-synchronization problem that would have to be resolved), but this 
definition seems more aligned with the time characteristic of the data. 
Should I try to create a KIP, or was this already discussed and 
considered an unwanted (or even impossible) feature?
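The rule proposed above can be expressed as a small predicate. This is an illustrative sketch in plain Java, not LogCleaner code; timestamps and offsets are modeled as bare longs:

```java
// Sketch of the proposed obsolescence rule: compare by timestamp first,
// falling back to offset only to break ties. Hypothetical helper, not
// part of Kafka's LogCleaner.
public class CompactionRule {
    // A record (t1, o1) is obsolete w.r.t. a same-key record (t2, o2)
    // when t1 < t2, or t1 == t2 and o1 < o2.
    static boolean isObsolete(long t1, long o1, long t2, long o2) {
        return t1 < t2 || (t1 == t2 && o1 < o2);
    }

    public static void main(String[] args) {
        // A late-arriving record has a higher offset but an older timestamp.
        // The current offset-only rule would keep it; under the timestamp
        // rule it is the one discarded.
        System.out.println(isObsolete(100L, 7L, 200L, 5L)); // true
        System.out.println(isObsolete(200L, 5L, 100L, 7L)); // false
    }
}
```

Note that under log-append time the two rules agree, since offsets and timestamps then increase together.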


Thanks for any comments,

 Jan



Build failed in Jenkins: kafka-trunk-jdk8 #2867

2018-08-06 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7183: Add a trogdor test that creates many connections to 
brokers

--
[...truncated 426.36 KB...]
kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal STARTED

kafka.security.auth.SimpleAclAutho

Re: Timestamp based log compaction

2018-08-06 Thread Rajini Sivaram
Can you take a look at KIP-280:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction
?

On Mon, Aug 6, 2018 at 10:55 AM, Jan Lukavský  wrote:

> Hi,
>
> I have a question about log compaction. LogCleaner's JavaDoc states that:
>
> {quote}
>
> A message with key K and offset O is obsolete if there exists a message
> with key K and offset O' such that O < O'.
>
> {/quote}
>
> That works fine if messages arrive "in order", i.e. with timestamps
> assigned by log-append time (with some possible clock-synchronization
> problems during leader rebalance). But if a topic may contain late
> messages (because the producer explicitly assigns a timestamp to each
> message), then compacting purely by offset might keep a message with an
> older timestamp and discard one with a newer timestamp. Is this
> intentional? Would it be possible to relax this so that log compaction
> prefers the message's timestamp over its offset? What if the behavior
> of the LogCleaner were changed to something like this:
>
> {quote}
>
> A message with key K, timestamp T1 and offset O1 is obsolete if there
> exists a message with key K, timestamp T2 and offset O2 such that
> T1 < T2, or T1 = T2 and O1 < O2.
>
> {/quote}
>
> I'm aware that this would be much more complicated (because of the
> clock-synchronization problem that would have to be resolved), but this
> definition seems more aligned with the time characteristic of the data.
> Should I try to create a KIP, or was this already discussed and
> considered an unwanted (or even impossible) feature?
>
> Thanks for any comments,
>
>  Jan
>
>


[jira] [Created] (KAFKA-7250) Kafka-Streams-Scala DSL transform shares transformer instance

2018-08-06 Thread Michal (JIRA)
Michal created KAFKA-7250:
-

 Summary: Kafka-Streams-Scala DSL transform shares transformer 
instance
 Key: KAFKA-7250
 URL: https://issues.apache.org/jira/browse/KAFKA-7250
 Project: Kafka
  Issue Type: Bug
Reporter: Michal


The new Kafka Streams Scala DSL provides a transform function with the 
following signature:

def transform[K1, V1](transformer: Transformer[K, V, (K1, V1)], 
stateStoreNames: String*): KStream[K1, V1]

The provided 'transformer' instance (referred to below as the 
scala-transformer) is then used to derive a java Transformer instance, 
and in turn a TransformerSupplier that is passed to the underlying java 
DSL. However, that causes all the tasks to share the same instance of 
the scala-transformer, which introduces all sorts of issues. The 
simplest way to reproduce it is to implement a transformer of the 
following shape:

.transform(new Transformer[String, String, (String, String)] {

  var context: ProcessorContext = _

  def init(pc: ProcessorContext) = { context = pc }

  def transform(k: String, v: String): (String, String) = {
    context.timestamp()
    ...
  }
})

The call to timestamp() dies with the exception "This should not happen 
as timestamp() should only be called while a record is processed" 
because the record context is not set. The update of the record context 
was actually performed, but due to the shared nature of the 
scala-transformer, the local reference to the processor context points 
to the one of the last initialized task rather than the current task.

The solution is to accept a function instead:

def transform[K1, V1](getTransformer: () => Transformer[K, V, (K1, V1)], 
stateStoreNames: String*): KStream[K1, V1]

or a TransformerSupplier, like the transformValues DSL function does.
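The per-task isolation that a supplier restores can be sketched with a simplified model. The Transformer below is a stand-in type in plain Java, not the Kafka Streams interface:

```java
import java.util.function.Supplier;

// Simplified model of the issue: one shared instance means every "task"
// observes the context of whichever task called init() last, while a
// supplier hands each task its own instance.
public class SupplierDemo {
    static class Transformer {
        String context; // stands in for the per-task ProcessorContext
        void init(String ctx) { context = ctx; }
    }

    public static void main(String[] args) {
        // Shared instance: task A's context is clobbered by task B's init().
        Transformer shared = new Transformer();
        shared.init("task-A");
        shared.init("task-B");
        System.out.println(shared.context); // task-B, even when task A runs

        // Supplier: each task gets a fresh instance with its own state.
        Supplier<Transformer> supplier = Transformer::new;
        Transformer a = supplier.get();
        Transformer b = supplier.get();
        a.init("task-A");
        b.init("task-B");
        System.out.println(a.context); // task-A
        System.out.println(b.context); // task-B
    }
}
```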



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Timestamp based log compaction

2018-08-06 Thread Jan Lukavský

Hi Rajini,

thanks for pointing that out, it looks like exactly what I had in mind. 
I wasn't able to find it by searching.


Jan


On 08/06/2018 12:31 PM, Rajini Sivaram wrote:

Can you take a look at KIP-280:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction
?

On Mon, Aug 6, 2018 at 10:55 AM, Jan Lukavský  wrote:


Hi,

I have a question about log compaction. LogCleaner's JavaDoc states that:

{quote}

A message with key K and offset O is obsolete if there exists a message
with key K and offset O' such that O < O'.

{/quote}

That works fine if messages arrive "in order", i.e. with timestamps
assigned by log-append time (with some possible clock-synchronization
problems during leader rebalance). But if a topic may contain late
messages (because the producer explicitly assigns a timestamp to each
message), then compacting purely by offset might keep a message with an
older timestamp and discard one with a newer timestamp. Is this
intentional? Would it be possible to relax this so that log compaction
prefers the message's timestamp over its offset? What if the behavior
of the LogCleaner were changed to something like this:

{quote}

A message with key K, timestamp T1 and offset O1 is obsolete if there
exists a message with key K, timestamp T2 and offset O2 such that
T1 < T2, or T1 = T2 and O1 < O2.

{/quote}

I'm aware that this would be much more complicated (because of the
clock-synchronization problem that would have to be resolved), but this
definition seems more aligned with the time characteristic of the data.
Should I try to create a KIP, or was this already discussed and
considered an unwanted (or even impossible) feature?

Thanks for any comments,

  Jan






Re: [DISCUSS] KIP-349 Priorities for Source Topics

2018-08-06 Thread nick
> From: "Matthias J. Sax"
>
> One general question: The Jira is marked as "stream", so I am wondering
> what the intended scope of the KIP is, because it suggests a new consumer
> API only. Can you clarify?

Based on the thread in KAFKA-6690, I’ve changed the component from streams to 
consumer.
--
  Nick





Re: [VOTE] KIP-332: Update AclCommand to use AdminClient API

2018-08-06 Thread Manikumar
Hi All,

The vote has passed with 3 binding votes (Rajini, Jason, Dong) and 2
non-binding votes (Ted, Colin).

Thanks everyone for the votes.

Thanks,
Manikumar


On Sat, Aug 4, 2018 at 5:09 AM Dong Lin  wrote:

> Thanks Manikumar!
>
> +1
>
> On Fri, Aug 3, 2018 at 4:33 PM, Jason Gustafson 
> wrote:
>
> > +1 Thanks Manikumar!
> >
> > On Fri, Aug 3, 2018 at 4:24 PM, Colin McCabe  wrote:
> >
> > > +1 (non-binding)
> > >
> > > regards,
> > > Colin
> > >
> > > On Fri, Aug 3, 2018, at 02:27, Rajini Sivaram wrote:
> > > > Hi Manikumar,
> > > >
> > > > +1 (binding)
> > > >
> > > > Thanks for the KIP!
> > > >
> > > > On Fri, Aug 3, 2018 at 3:46 AM, Ted Yu  wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Thu, Aug 2, 2018 at 7:33 PM Manikumar <
> manikumar.re...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I would like to start voting on KIP-332 which allows AclCommand
> to
> > > use
> > > > > > AdminClient API for acl management.
> > > > > >
> > > > > > KIP:
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 332%3A+Update+AclCommand+to+use+AdminClient+API
> > > > > >
> > > > > > Discussion Thread:
> > > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg90049.html
> > > > > >
> > > > > > Thanks,
> > > > > > Manikumar
> > > > > >
> > > > >
> > >
> >
>


Re: [DISCUSS] KIP-351: Add --critical-partitions option to describe topics command

2018-08-06 Thread Kevin Lu
Hi Jason,

Thanks for the response!

I completely agree with you and Mickael about adding a
--under-minisr-partitions option to match the existing metric. I will
create a separate KIP to discuss the --under-minisr-partitions option. I
believe there is a technical challenge with retrieving the
min.insync.replicas configuration from zookeeper currently as it may also
be stored as a broker configuration, but let me do some digging to confirm.

I am going to modify KIP-351 to represent the gap that you have
mentioned (exactly at min.isr), as this is an important state that we
specifically monitor and alert on.

Any other thoughts?

Regards,
Kevin

On Thu, Aug 2, 2018 at 11:23 PM Jason Gustafson  wrote:

> Hey Kevin,
>
> Thanks for the KIP. I like Mickael's suggestion to
> add --under-minisr-partitions since it fits with the metric we already
> expose. It's also a good question whether there should be a separate
> category for partitions which are right at min.isr. I'm reluctant to add
> new categories, but I agree there might be a gap at the moment. Say you
> have a replication factor of 3 and the min isr is set to 1. Our notion of
> URP does not capture the difference between having an ISR down to a size of
> 1 and one down to a size of 2. The reason this might be significant is that
> a shrink of the ISR down to 2 may just be caused by a rolling restart or a
> transient network blip. A shrink to 1, on the other hand, might be
> indicative of a more severe problem and could be cause for a call from
> pagerduty.
>
> -Jason
>
> On Thu, Aug 2, 2018 at 9:28 AM, Kevin Lu  wrote:
>
> > Hi Mickael,
> >
> > Thanks for the suggestion!
> >
> > Correct me if I am mistaken, but if a producer attempts to send to a
> > partition that is under min ISR (and ack=all or -1) then the send will
> fail
> > with a NotEnoughReplicas or NotEnoughReplicasAfterAppend exception? At
> this
> > point, client-side has already suffered failure but the server-side is
> > still fine for now?
> >
> > If the above is true, then this would be a FATAL case for producers.
> >
> > Would it be valuable to include the CRITICAL case where a topic partition
> > has exactly min ISR so that Kafka operators can take action so it does
> not
> > become FATAL? This could be in the same option or a new one.
> >
> > Thanks!
> >
> > Regards,
> > Kevin
> >
> > On Thu, Aug 2, 2018 at 2:27 AM Mickael Maison 
> > wrote:
> >
> > > What about also adding a --under-minisr-partitions option?
> > >
> > > That would match the
> > > "kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount"
> > > broker metric and it's usually pretty relevant when investigating
> > > issues
> > >
> > > On Thu, Aug 2, 2018 at 8:54 AM, Kevin Lu 
> wrote:
> > > > Hi friends!
> > > >
> > > > This thread is to discuss KIP-351
> > > > <
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-351%3A
> > +Add+--critical-partitions+option+to+describe+topics+command
> > > >
> > > > !
> > > >
> > > > I am proposing to add a --critical-partitions option to the describe
> > > topics
> > > > command that will only list out topic partitions that have 1 ISR left
> > > (RF >
> > > > 1) as they would be in a critical state and need immediate
> > > repartitioning.
> > > >
> > > > I wonder if the name "critical" is appropriate?
> > > >
> > > > Thoughts?
> > > >
> > > > Thanks!
> > > >
> > > > Regards,
> > > > Kevin
> > >
> >
>
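The partition states discussed in this thread can be sketched as a small classifier. The names and ordering below are illustrative assumptions, not Kafka's actual tooling or metrics:

```java
// Hypothetical classifier for the states discussed: with RF=3 and
// min.insync.replicas=2, isr=3 is healthy, isr=2 is exactly at min ISR
// (one more failure breaks acks=all producers), and isr=1 is under min
// ISR (acks=all produce requests already fail).
public class IsrHealth {
    enum State { HEALTHY, UNDER_REPLICATED, AT_MIN_ISR, UNDER_MIN_ISR }

    static State classify(int replicationFactor, int isrSize, int minIsr) {
        if (isrSize < minIsr) return State.UNDER_MIN_ISR;
        if (isrSize == minIsr && replicationFactor > minIsr) return State.AT_MIN_ISR;
        if (isrSize < replicationFactor) return State.UNDER_REPLICATED;
        return State.HEALTHY;
    }

    public static void main(String[] args) {
        System.out.println(classify(3, 3, 2)); // HEALTHY
        System.out.println(classify(3, 2, 2)); // AT_MIN_ISR
        System.out.println(classify(3, 1, 2)); // UNDER_MIN_ISR
    }
}
```

A tool option like the one proposed would effectively list partitions in the last two states.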


Add to JIRA

2018-08-06 Thread Piyush Sagar
Hello All,

Can anyone please add me to JIRA ?

Also, Is there any document(Confluence page) which briefs folder structure/
sub modules of the main kafka repository ?

Thanks


Re: [DISCUSS] KIP-332: Update AclCommand to use AdminClient API

2018-08-06 Thread Manikumar
Hi All,

For this KIP, I will be using "--command-config" option name to specify
config file.
We can handle the naming discrepancies as part of KIP-14.

Thanks,





On Sat, Aug 4, 2018 at 4:53 AM Jason Gustafson  wrote:

> Here's hoping someone has time to pick up KIP-14 again:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-14+-+Tools+Standardization
> .
> The inconsistencies are so irritating.
>
> -Jason
>
> On Fri, Aug 3, 2018 at 10:27 AM, Rajini Sivaram 
> wrote:
>
> > Hi Dong,
> >
> > I don't have a preference for the option itself, but we are using "
> > --command-config" in ConfigCommand, ConsumerGroupCommand,
> > BrokerApiVersionsCommand, DeleteRecordsCommand. So it feels like we
> should
> > use the same here. I think we are currently only using "--config-file"
> > StreamsResetter. I will let others comment on whether we should change to
> > this when we add the option to other tools.
> >
> > Regards,
> >
> > Rajini
> >
> > On Fri, Aug 3, 2018 at 5:41 PM, Dong Lin  wrote:
> >
> > > Hey Rajini, Manikumar,
> > >
> > > Currently kafka-streams-application-reset.sh uses "--config-file" and a
> > > few
> > > other tools uses "--command-config". So the config name will be
> > > inconsistent no matter which config name we choose. Not sure we
> currently
> > > distinquish between core commands and non-core commands. I am wondering
> > if
> > > it is better to choose the name that is better in the long term. Do you
> > > think `config-file` would be more intuitive than `command-config` here?
> > >
> > > Thanks,
> > > Dong
> > >
> > > On Fri, Aug 3, 2018 at 2:40 AM, Manikumar 
> > > wrote:
> > >
> > > > Updated the KIP.  added a note to KIP-340 discussion thread.
> > > >
> > > > On Fri, Aug 3, 2018 at 2:52 PM Rajini Sivaram <
> rajinisiva...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Thanks Manikumar. Can you add a note to the KIP-340 discussion
> > thread?
> > > > >
> > > > > On Fri, Aug 3, 2018 at 10:04 AM, Manikumar <
> > manikumar.re...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Rajini,
> > > > > >
> > > > > > Yes, I too prefer using  "--command-config" . Recently in one of
> > the
> > > > > other
> > > > > > KIPs (KIP-340),  it was suggested to use
> > > > > > "--config-file", So I just followed the recent suggestion. But I
> > > think
> > > > we
> > > > > > should use same name in all tools (at least in core tools).
> > > > > >
> > > > > > If there are no concerns, I will change the option to
> > > > >  "--command-config".
> > > > > > Since KIP-340 PR is not yet merged, we can also change there.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > >
> > > > > > On Fri, Aug 3, 2018 at 1:57 PM Rajini Sivaram <
> > > rajinisiva...@gmail.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Manikumar,
> > > > > > >
> > > > > > > We have some tools already (ConfigCommand,
> ConsumerGroupCommand,
> > > > > > > DelegationTokenCommand) which use "--command-config" option to
> > > > specify
> > > > > > > config file. Perhaps use should use the same name for
> AclCommand
> > as
> > > > > well?
> > > > > > >
> > > > > > > On Thu, Aug 2, 2018 at 7:23 PM, Colin McCabe <
> cmcc...@apache.org
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > +1 for starting the vote
> > > > > > > >
> > > > > > > > cheers,
> > > > > > > > Colin
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, Aug 1, 2018, at 08:46, Manikumar wrote:
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > If there are no concerns, I will start the voting process
> > soon.
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > On Tue, Jul 31, 2018 at 9:08 AM Manikumar <
> > > > > manikumar.re...@gmail.com
> > > > > > >
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Colin,
> > > > > > > > > >
> > > > > > > > > > Yes,  "--authorizer-properties" option is not required
> with
> > > > > > > > > > "--bootstrap-server" option. Updated the KIP.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > >
> > > > > > > > > > On Tue, Jul 31, 2018 at 1:30 AM Ted Yu <
> > yuzhih...@gmail.com>
> > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > >> Look good to me.
> > > > > > > > > >>
> > > > > > > > > >> On Mon, Jul 23, 2018 at 7:30 AM Manikumar <
> > > > > > > manikumar.re...@gmail.com>
> > > > > > > > > >> wrote:
> > > > > > > > > >>
> > > > > > > > > >> > Hi all,
> > > > > > > > > >> >
> > > > > > > > > >> > I have created a KIP to use AdminClient API in
> > AclCommand
> > > > > > > > > >> (kafka-acls.sh)
> > > > > > > > > >> >
> > > > > > > > > >> > *
> > > > > > > > > >> >
> > > > > > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > 332%3A+Update+AclCommand+to+use+AdminClient+API*
> > > > > > > > > >> > <
> > > > > > > > > >> >
> > > > > > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > 332%3A+Update+AclCommand+to+use+AdminClient+API
> > > > > > > > > >> > >
> > > > > > > > > 

Build failed in Jenkins: kafka-trunk-jdk10 #371

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 24d0d3351a43f52f6f688143672d7c57c5f19595
error: Could not read 1e98b2aa5027eb60ee1a112a7f8ec45b9c47
error: Could not read 689bac3c6fcae927ccee89c8d4fd54c2fae83be1
remote: Counting objects: 3371, done.
remote: Compressing objects: 100% (40/40), done.
Receiving objects:   0% (1/3371)   Receiving objects:   1% (34/3371)   
Receiving objects:   2% (68/3371)   Receiving objects:   3% (102/3371)   
Receiving objects:   4% (135/3371)   Receiving objects:   5% (169/3371)   
Receiving objects:   6% (203/3371)   Receiving objects:   7% (236/3371)   
Receiving objects:   8% (270/3371)   Receiving objects:   9% (304/3371)   
Receiving objects:  10% (338/3371)   Receiving objects:  11% (371/3371)   
Receiving objects:  12% (405/3371)   Receiving objects:  13% (439/3371)   
Receiving objects:  14% (472/3371)   Receiving objects:  15% (506/3371)   
Receiving objects:  16% (540/3371)   Receiving objects:  17% (574/3371)   
Receiving objects:  18% (607/3371)   Receiving objects:  19% (641/3

Build failed in Jenkins: kafka-trunk-jdk8 #2868

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
remote: Counting objects: 7310, done.
remote: Compressing objects: 100% (45/45), done.
Receiving objects:  12% (878/7310)   [progress output trimmed; log truncated here]

Re: [DISCUSS] KIP-320: Allow fetchers to detect and handle log truncation

2018-08-06 Thread Jason Gustafson
Hey Jun,

Thanks for the review. Responses below:

50. Yes, that is right. I clarified this in the KIP.

51. Yes, updated the KIP to mention.

52. Yeah, this was a reference to a previous iteration. I've fixed it.

53. I changed the API to use an `Optional` for the leader epoch
and added a note about the default value. Does that seem reasonable?
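To make the semantics of that default concrete, here is a toy sketch (illustrative stand-in types, not the actual kafka-clients classes) of how an absent leader epoch — `Optional.empty()` — can mean "skip epoch validation":

```java
import java.util.Optional;

// Toy stand-in for an offset-plus-epoch value, illustrating how an
// absent leader epoch (Optional.empty()) makes epoch validation a
// no-op. This is NOT the real Kafka class; names are illustrative.
public class EpochSketch {
    static final class OffsetAndEpoch {
        final long offset;
        final Optional<Integer> leaderEpoch;

        OffsetAndEpoch(long offset) {           // legacy-style constructor
            this(offset, Optional.empty());     // default: no epoch known
        }

        OffsetAndEpoch(long offset, Optional<Integer> leaderEpoch) {
            this.offset = offset;
            this.leaderEpoch = leaderEpoch;
        }
    }

    // Validation passes trivially when no epoch was provided.
    static boolean epochValid(OffsetAndEpoch o, int currentLeaderEpoch) {
        return o.leaderEpoch.map(e -> e == currentLeaderEpoch).orElse(true);
    }

    public static void main(String[] args) {
        assert epochValid(new OffsetAndEpoch(42L), 7);                  // no epoch: always valid
        assert epochValid(new OffsetAndEpoch(42L, Optional.of(7)), 7);  // matching epoch
        assert !epochValid(new OffsetAndEpoch(42L, Optional.of(5)), 7); // stale epoch
        System.out.println("ok");
    }
}
```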

54. We discussed this above, but could not find a great option. The options
are to add a new API (e.g. positionAndEpoch) or to rely on the user to get
the epoch from the fetched records. We were leaning toward the latter, but
I admit it was not fully satisfying. In this case, Connect would need to
track the last consumed offsets manually instead of relying on the
consumer. We also considered adding a convenience method to ConsumerRecords
to get the offset to commit for all fetched partitions. This makes the
additional bookkeeping pretty minimal. What do you think?
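For illustration, the manual bookkeeping would look roughly like this (toy record/partition types, not the kafka-clients API): the offset to commit per partition is simply one past the last consumed offset of the fetched records.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of deriving "offset to commit" per partition from the
// records actually consumed, instead of calling consumer.position().
// Record and partition types are toy stand-ins, not the real API.
public class CommitTracker {
    record Rec(String topicPartition, long offset) {}

    // The offset to commit is (last consumed offset + 1) per partition.
    static Map<String, Long> offsetsToCommit(List<Rec> fetched) {
        Map<String, Long> commit = new HashMap<>();
        for (Rec r : fetched) {
            commit.merge(r.topicPartition, r.offset + 1, Math::max);
        }
        return commit;
    }

    public static void main(String[] args) {
        List<Rec> batch = List.of(new Rec("t-0", 10), new Rec("t-0", 11), new Rec("t-1", 5));
        Map<String, Long> commit = offsetsToCommit(batch);
        assert commit.get("t-0") == 12L;
        assert commit.get("t-1") == 6L;
        System.out.println(commit);
    }
}
```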

55. I clarified in the KIP. I was mainly thinking of situations where a
previously valid offset becomes out of range.

56. Yeah, that's a bit annoying. I decided to keep LeaderEpoch as it is and
use CurrentLeaderEpoch for both OffsetForLeaderEpoch and the Fetch APIs. I
think Dong suggested this previously as well.

57. We could, but I'm not sure there's a strong reason to do so. I was
thinking we would leave it around for convenience, but let me know if you
think we should do otherwise.


Thanks,
Jason


On Fri, Aug 3, 2018 at 4:49 PM, Jun Rao  wrote:

> Hi, Jason,
>
> Thanks for the updated KIP. Well thought-through. Just a few minor comments
> below.
>
> 50. For seek(TopicPartition partition, OffsetAndMetadata offset), I guess
> under the cover, it will make OffsetsForLeaderEpoch request to determine if
> the seeked offset is still valid before fetching? If so, it will be useful
> to document this in the wiki.
>
> 51. Similarly, if the consumer fetch request gets FENCED_LEADER_EPOCH, I
> guess the consumer will also make OffsetsForLeaderEpoch request to
> determine if the last consumed offset is still valid before fetching? If
> so, it will be useful document this in the wiki.
>
> 52. "If the consumer seeks to the middle of the log, for example, then we
> will use the sentinel value -1 and the leader will skip the epoch
> validation. " Is this true? If the consumer seeks using seek(TopicPartition
> partition, OffsetAndMetadata offset) and the seeked offset is valid, the
> consumer can/should use the leaderEpoch in the cached metadata for
> fetching?
>
> 53. OffsetAndMetadata. For backward compatibility, we need to support
> constructing OffsetAndMetadata without providing leaderEpoch. Could we
> define the default value of leaderEpoch if not provided and the semantics
> of that (e.g., skipping the epoch validation)?
>
> 54. I saw the following code in WorkerSinkTask in Connect. It saves the
> offset obtained through position(), which can be committed latter. Since
> position() doesn't return the leaderEpoch, this can lead to committed
> offset without leaderEpoch. Not sure how common this usage is, but what's
> the recommendation for such users?
>
> private class HandleRebalance implements ConsumerRebalanceListener {
> @Override
> public void onPartitionsAssigned(Collection<TopicPartition>
> partitions) {
> log.debug("{} Partitions assigned {}", WorkerSinkTask.this,
> partitions);
> lastCommittedOffsets = new HashMap<>();
> currentOffsets = new HashMap<>();
> for (TopicPartition tp : partitions) {
> long pos = consumer.position(tp);
> lastCommittedOffsets.put(tp, new OffsetAndMetadata(pos));
>
> 55. "With this KIP, the only case in which this is possible is if the
> consumer fetches from an offset earlier than the log start offset." Is that
> true? I guess a user could seek to a large offset without providing
> leaderEpoch, which can cause the offset to be larger than the log end
> offset during fetch?
>
> 56. In the schema for OffsetForLeaderEpochRequest, LeaderEpoch seems to be
> an existing field. Is LeaderEpochQuery the new field? The name is not very
> intuitive. It will be useful to document its meaning.
>
> 57. Should we deprecate the following api?
> void seek(TopicPartition partition, long offset);
>
> Thanks,
>
> Jun
>
>
> On Fri, Aug 3, 2018 at 9:32 AM, Jason Gustafson 
> wrote:
>
> > Hey All,
> >
> > I think I've addressed all pending review. If there is no additional
> > feedback, I'll plan to start a vote thread next week.
> >
> > Thanks,
> > Jason
> >
> > On Tue, Jul 31, 2018 at 9:46 AM, Dong Lin  wrote:
> >
> > > Hey Jason,
> > >
> > > Thanks for your reply. I will comment below.
> > >
> > > Regarding 1, we probably can not simply rename both to `LeaderEpoch`
> > > because we already have a LeaderEpoch field in OffsetsForLeaderEpoch.
> > >
> > > Regarding 5, I am not strong on this. I agree with the two benefits of
> > > having two error codes: 1) not having to refresh metadata when consumer
> > > sees UNKNOWN_LEADER_EPOCH and 2) provide mor

[jira] [Created] (KAFKA-7251) Add support for TLS 1.3

2018-08-06 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-7251:
-

 Summary: Add support for TLS 1.3
 Key: KAFKA-7251
 URL: https://issues.apache.org/jira/browse/KAFKA-7251
 Project: Kafka
  Issue Type: New Feature
  Components: security
Reporter: Rajini Sivaram
 Fix For: 2.1.0


Java 11 adds support for TLS 1.3. We should support this when we add support 
for Java 11.

Related issues:

[https://bugs.openjdk.java.net/browse/JDK-8206170]

[https://bugs.openjdk.java.net/browse/JDK-8206178]

[https://bugs.openjdk.java.net/browse/JDK-8208538]

[https://bugs.openjdk.java.net/browse/JDK-8207009]
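As a simple illustration of the dependency on the runtime, the sketch below (a capability probe only, not Kafka's actual SSL channel configuration) checks whether the running JVM offers a given TLS protocol version; on Java 11+ the `"TLSv1.3"` context should be available.

```java
import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;

// Probe whether the running JVM supports a given TLS protocol version.
// On Java 11+, available("TLSv1.3") should report true; older JVMs
// throw NoSuchAlgorithmException. This is only a capability check,
// not Kafka's SSL channel setup.
public class TlsProbe {
    static boolean available(String protocol) {
        try {
            SSLContext.getInstance(protocol);
            return true;
        } catch (NoSuchAlgorithmException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("TLSv1.3 available: " + available("TLSv1.3"));
    }
}
```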

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-332: Update AclCommand to use AdminClient API

2018-08-06 Thread Colin McCabe
Thanks, Manikumar.  Sounds good.

best,
Colin

On Mon, Aug 6, 2018, at 08:57, Manikumar wrote:
> Hi All,
> 
> For this KIP, I will be using "--command-config" option name to specify
> config file.
> We can handle the naming discrepancies as part of KIP-14.
> 
> Thanks,
> 
> 
> 
> 
> 
> On Sat, Aug 4, 2018 at 4:53 AM Jason Gustafson  wrote:
> 
> > Here's hoping someone has time to pick up KIP-14 again:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-14+-+Tools+Standardization
> > .
> > The inconsistencies are so irritating.
> >
> > -Jason
> >
> > On Fri, Aug 3, 2018 at 10:27 AM, Rajini Sivaram 
> > wrote:
> >
> > > Hi Dong,
> > >
> > > I don't have a preference for the option itself, but we are using "
> > > --command-config" in ConfigCommand, ConsumerGroupCommand,
> > > BrokerApiVersionsCommand, DeleteRecordsCommand. So it feels like we
> > should
> > > use the same here. I think we are currently only using "--config-file" in
> > > StreamsResetter. I will let others comment on whether we should change to
> > > this when we add the option to other tools.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Fri, Aug 3, 2018 at 5:41 PM, Dong Lin  wrote:
> > >
> > > > Hey Rajini, Manikumar,
> > > >
> > > > Currently kafka-streams-application-reset.sh uses "--config-file" and a
> > > > few
> > > > other tools uses "--command-config". So the config name will be
> > > > inconsistent no matter which config name we choose. Not sure we
> > currently
> > > > distinguish between core commands and non-core commands. I am wondering
> > > if
> > > > it is better to choose the name that is better in the long term. Do you
> > > > think `config-file` would be more intuitive than `command-config` here?
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > > On Fri, Aug 3, 2018 at 2:40 AM, Manikumar 
> > > > wrote:
> > > >
> > > > > Updated the KIP.  added a note to KIP-340 discussion thread.
> > > > >
> > > > > On Fri, Aug 3, 2018 at 2:52 PM Rajini Sivaram <
> > rajinisiva...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Thanks Manikumar. Can you add a note to the KIP-340 discussion
> > > thread?
> > > > > >
> > > > > > On Fri, Aug 3, 2018 at 10:04 AM, Manikumar <
> > > manikumar.re...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Rajini,
> > > > > > >
> > > > > > > Yes, I too prefer using  "--command-config" . Recently in one of
> > > the
> > > > > > other
> > > > > > > KIPs (KIP-340), it was suggested to use
> > > > > > > "--config-file", so I just followed the recent suggestion. But I
> > > > think
> > > > > we
> > > > > > > should use same name in all tools (at least in core tools).
> > > > > > >
> > > > > > > If there are no concerns, I will change the option to
> > > > > >  "--command-config".
> > > > > > > Since KIP-340 PR is not yet merged, we can also change there.
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Aug 3, 2018 at 1:57 PM Rajini Sivaram <
> > > > rajinisiva...@gmail.com
> > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Manikumar,
> > > > > > > >
> > > > > > > > We have some tools already (ConfigCommand,
> > ConsumerGroupCommand,
> > > > > > > > DelegationTokenCommand) which use "--command-config" option to
> > > > > specify
> > > > > > > > config file. Perhaps use should use the same name for
> > AclCommand
> > > as
> > > > > > well?
> > > > > > > >
> > > > > > > > On Thu, Aug 2, 2018 at 7:23 PM, Colin McCabe <
> > cmcc...@apache.org
> > > >
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1 for starting the vote
> > > > > > > > >
> > > > > > > > > cheers,
> > > > > > > > > Colin
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Wed, Aug 1, 2018, at 08:46, Manikumar wrote:
> > > > > > > > > > Hi all,
> > > > > > > > > >
> > > > > > > > > > If there are no concerns, I will start the voting process
> > > soon.
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > > > On Tue, Jul 31, 2018 at 9:08 AM Manikumar <
> > > > > > manikumar.re...@gmail.com
> > > > > > > >
> > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi Colin,
> > > > > > > > > > >
> > > > > > > > > > > Yes,  "--authorizer-properties" option is not required
> > with
> > > > > > > > > > > "--bootstrap-server" option. Updated the KIP.
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thanks,
> > > > > > > > > > >
> > > > > > > > > > > On Tue, Jul 31, 2018 at 1:30 AM Ted Yu <
> > > yuzhih...@gmail.com>
> > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > >> Look good to me.
> > > > > > > > > > >>
> > > > > > > > > > >> On Mon, Jul 23, 2018 at 7:30 AM Manikumar <
> > > > > > > > manikumar.re...@gmail.com>
> > > > > > > > > > >> wrote:
> > > > > > > > > > >>
> > > > > > > > > > >> > Hi all,
> > > > > > > > > > >> >
> > > > > > > > > > >> > I have created a KIP to use AdminClient API in
> > > AclCommand
> > > > > > > > > > >> (kafka-acls.sh)
> > > > > > > > > > >> >
> > > > > > >

[jira] [Created] (KAFKA-7252) Default value for 'state.dir' is incorrect in the docs

2018-08-06 Thread Brett Harper (JIRA)
Brett Harper created KAFKA-7252:
---

 Summary: Default value for 'state.dir' is incorrect in the docs
 Key: KAFKA-7252
 URL: https://issues.apache.org/jira/browse/KAFKA-7252
 Project: Kafka
  Issue Type: Task
  Components: documentation
Affects Versions: 1.0.0, 0.11.0.0
Reporter: Brett Harper


The default value for 'state.dir' in a Kafka Streams application does not match 
what is specified in the Streams Developer Guide. 

The default value specified in code is "/tmp/kafka-streams" according to the 
StreamsConfig class, but the documentation specifies it as 
"/var/lib/kafka-streams".

[https://kafka.apache.org/10/documentation/streams/developer-guide/config-streams#optional-configuration-parameters]

This affects versions 0.11.0 and 1.0.0, and possibly others as well. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-346 - Limit blast radius of log compaction failure

2018-08-06 Thread Colin McCabe
Perhaps we could start with max.uncleanable.partitions and then implement 
max.uncleanable.partitions.per.logdir in a follow-up change if it seemed to be 
necessary?  What do you think?
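A minimal sketch of the broker-wide variant, assuming a config named `max.uncleanable.partitions` (class and method names below are illustrative, not Kafka's): the cleaner tracks partitions it has marked uncleanable and signals once the threshold is exceeded.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the proposed behavior: track partitions the log cleaner
// has marked uncleanable and signal when max.uncleanable.partitions
// is exceeded. Names are illustrative, not Kafka's actual code.
public class UncleanableTracker {
    private final int maxUncleanablePartitions;
    private final Set<String> uncleanable = new HashSet<>();

    UncleanableTracker(int maxUncleanablePartitions) {
        this.maxUncleanablePartitions = maxUncleanablePartitions;
    }

    // Returns true once the threshold is exceeded, i.e. when the
    // broker would take further action under this proposal.
    boolean markUncleanable(String topicPartition) {
        uncleanable.add(topicPartition);
        return uncleanable.size() > maxUncleanablePartitions;
    }

    // Would back an uncleanable-partitions-count style metric.
    int uncleanableCount() {
        return uncleanable.size();
    }

    public static void main(String[] args) {
        UncleanableTracker t = new UncleanableTracker(2);
        assert !t.markUncleanable("logs-0");
        assert !t.markUncleanable("logs-1");
        assert t.markUncleanable("logs-2");   // third failure crosses max=2
        assert t.uncleanableCount() == 3;
        System.out.println("threshold exceeded at " + t.uncleanableCount() + " uncleanable partitions");
    }
}
```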

regards,
Colin


On Sat, Aug 4, 2018, at 10:53, Stanislav Kozlovski wrote:
> Hey Ray,
> 
> Thanks for the explanation. In regards to the configuration property - I'm
> not sure. As long as it has sufficient documentation, I find
> "max.uncleanable.partitions" to be okay. If we were to add the distinction
> explicitly, maybe it should be `max.uncleanable.partitions.per.logdir` ?
> 
> On Thu, Aug 2, 2018 at 7:32 PM Ray Chiang  wrote:
> 
> > One more thing occurred to me.  Should the configuration property be
> > named "max.uncleanable.partitions.per.disk" instead?
> >
> > -Ray
> >
> >
> > On 8/1/18 9:11 AM, Stanislav Kozlovski wrote:
> > > Yes, good catch. Thank you, James!
> > >
> > > Best,
> > > Stanislav
> > >
> > > On Wed, Aug 1, 2018 at 5:05 PM James Cheng  wrote:
> > >
> > >> Can you update the KIP to say what the default is for
> > >> max.uncleanable.partitions?
> > >>
> > >> -James
> > >>
> > >> Sent from my iPhone
> > >>
> > >>> On Jul 31, 2018, at 9:56 AM, Stanislav Kozlovski <
> > stanis...@confluent.io>
> > >> wrote:
> > >>> Hey group,
> > >>>
> > >>> I am planning on starting a voting thread tomorrow. Please do reply if
> > >> you
> > >>> feel there is anything left to discuss.
> > >>>
> > >>> Best,
> > >>> Stanislav
> > >>>
> > >>> On Fri, Jul 27, 2018 at 11:05 PM Stanislav Kozlovski <
> > >> stanis...@confluent.io>
> > >>> wrote:
> > >>>
> >  Hey, Ray
> > 
> >  Thanks for pointing that out, it's fixed now
> > 
> >  Best,
> >  Stanislav
> > 
> > > On Fri, Jul 27, 2018 at 9:43 PM Ray Chiang 
> > wrote:
> > >
> > > Thanks.  Can you fix the link in the "KIPs under discussion" table on
> > > the main KIP landing page
> > > <
> > >
> > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals#
> > >>> ?
> > > I tried, but the Wiki won't let me.
> > >
> > > -Ray
> > >
> > >> On 7/26/18 2:01 PM, Stanislav Kozlovski wrote:
> > >> Hey guys,
> > >>
> > >> @Colin - good point. I added some sentences mentioning recent
> > > improvements
> > >> in the introductory section.
> > >>
> > >> *Disk Failure* - I tend to agree with what Colin said - once a disk
> > > fails,
> > >> you don't want to work with it again. As such, I've changed my mind
> > >> and
> > >> believe that we should mark the LogDir (assume its a disk) as
> > offline
> > >> on
> > >> the first `IOException` encountered. This is the LogCleaner's
> > current
> > >> behavior. We shouldn't change that.
> > >>
> > >> *Respawning Threads* - I believe we should never re-spawn a thread.
> > >> The
> > >> correct approach in my mind is to either have it stay dead or never
> > >> let
> > > it
> > >> die in the first place.
> > >>
> > >> *Uncleanable-partition-names metric* - Colin is right, this metric
> > is
> > >> unneeded. Users can monitor the `uncleanable-partitions-count`
> > metric
> > > and
> > >> inspect logs.
> > >>
> > >>
> > >> Hey Ray,
> > >>
> > >>> 2) I'm 100% with James in agreement with setting up the LogCleaner
> > to
> > >>> skip over problematic partitions instead of dying.
> > >> I think we can do this for every exception that isn't `IOException`.
> > > This
> > >> will future-proof us against bugs in the system and potential other
> > > errors.
> > >> Protecting yourself against unexpected failures is always a good
> > thing
> > > in
> > >> my mind, but I also think that protecting yourself against bugs in
> > the
> > >> software is sort of clunky. What does everybody think about this?
> > >>
> > >>> 4) The only improvement I can think of is that if such an
> > >>> error occurs, then have the option (configuration setting?) to
> > >> create a
> > >>> .skip file (or something similar).
> > >> This is a good suggestion. Have others also seen corruption be
> > >> generally
> > >> tied to the same segment?
> > >>
> > >> On Wed, Jul 25, 2018 at 11:55 AM Dhruvil Shah  > >
> > > wrote:
> > >>> For the cleaner thread specifically, I do not think respawning will
> > > help at
> > >>> all because we are more than likely to run into the same issue
> > again
> > > which
> > >>> would end up crashing the cleaner. Retrying makes sense for
> > transient
> > >>> errors or when you believe some part of the system could have
> > healed
> > >>> itself, both of which I think are not true for the log cleaner.
> > >>>
> > >>> On Wed, Jul 25, 2018 at 11:08 AM Ron Dagostino 
> > > wrote:
> >  << > you
> > > in
> > >>> an
> >  infinite loop which consumes resources and fires off continuous
> > log
> >  messages.
> >  Hi C

Build failed in Jenkins: kafka-trunk-jdk10 #372

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 24d0d3351a43f52f6f688143672d7c57c5f19595
error: Could not read 1e98b2aa5027eb60ee1a112a7f8ec45b9c47
error: Could not read 689bac3c6fcae927ccee89c8d4fd54c2fae83be1
remote: Counting objects: 4800, done.
remote: Compressing objects: 100% (33/33), done.
Receiving objects:  28% (1344/4800)   [progress output trimmed; log truncated here]

[VOTE] KIP-310: Add a Kafka Source Connector to Kafka Connect

2018-08-06 Thread McCaig, Rhys
Hi All,

I’m starting a vote on KIP-310: Add a Kafka Source Connector to Kafka Connect

KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-310%3A+Add+a+Kafka+Source+Connector+to+Kafka+Connect
Discussion Thread: 
http://mail-archives.apache.org/mod_mbox/kafka-dev/201808.mbox/%3c17e8d696-e51c-4beb-bd70-9324d4b53...@comcast.com%3e

Cheers,
Rhys


Re: ConsumerGroupCommand tool improvement?

2018-08-06 Thread Vahid S Hashemian
Hi Colin,

Thanks for considering the idea and sharing your feedback.

The improvements I proposed can be achieved, to some extent, using the 
AdminClient API and the Consumer Group CLI tool. But they won't fully 
support the proposal.

For example,
- Regular expressions are not supported on the groups
- Topic / client filtering is not supported across all groups

So the reason for proposing the idea was to see if other Kafka users are 
also interested in some of these features so we can remove the burden of 
them writing custom code around existing consumer group features, and make 
those features built into Kafka Consumer Group Command and AdminClient 
API.
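As a rough, self-contained illustration of the kind of structural filtering grep over CLI output cannot easily express, group names could be matched against a pattern inside the tool/AdminClient (group names and the pattern below are purely illustrative):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Sketch of regex filtering of consumer group names, as the proposal
// would add to the Consumer Group tool / AdminClient. The group names
// and pattern here are illustrative only.
public class GroupFilter {
    static List<String> matchingGroups(List<String> groups, String regex) {
        Pattern p = Pattern.compile(regex);
        return groups.stream()
                     .filter(g -> p.matcher(g).matches())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> groups = List.of("billing-prod", "billing-test", "audit-prod");
        // e.g. "all prod groups"
        System.out.println(matchingGroups(groups, ".*-prod"));  // [billing-prod, audit-prod]
    }
}
```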

Thanks again!
--Vahid



From:   Colin McCabe 
To: us...@kafka.apache.org
Date:   08/03/2018 04:16 PM
Subject:Re: ConsumerGroupCommand tool improvement?



Hi Vahid,

Interesting idea.

It seems like if you're using the AdminClient APIs programmatically, you 
can just do the filtering yourself in a more flexible way than what we 
could provide.

On the other hand, if you're using the ./bin/consumer-groups.sh 
command-line tool, why not use grep or a similar tool to filter the 
output?  Maybe there is some additional functionality in supporting 
regexes in the command-line tool, but it also seems like it might be kind 
of complex as well.  Do you have some examples where having regex support 
in the tool would be much easier than the traditional way of piping the 
output to grep, awk, and sed?

best,
Colin


On Thu, Aug 2, 2018, at 14:23, Vahid S Hashemian wrote:
> Hi all,
> 
> A requirement has been raised by a colleague and I wanted to see if 
there 
> is any interest in the community in adding the functionality to Apache 
> Kafka.
> 
> ConsumerGroupCommand tool in describe ('--describe' or '--describe 
> --offsets') mode currently lists all topics the group has consumed from 
> and all consumers with assigned partitions for a single group.
> The idea is to allow filtering of topics, consumers (client ids), and 
even 
> groups using regular expressions. This will allow the tool to handle use 

> cases such as
> What's the status of a particular consumer (or consumers) in all the 
> groups they are consuming from? (for example to check if they are 
lagging 
> behind in all groups)
> What consumer groups are consuming from a topic (or topics) and what's 
the 
> lag for each group?
> Limit the existing result to the topics/consumers of interest (for 
groups 
> with several topics/consumers)
> ...
> 
> This would potentially lead to enhancing the AdminClient API as well.
> 
> If the community also sees a value in this, I could start drafting a 
KIP.
> 
> Thanks for your feedback.
> --Vahid
> 







Re: Add to JIRA

2018-08-06 Thread Matthias J. Sax
What is your user id?

I am not aware of any docs, but it's rather simple.

kafka/core (brokers)
kafka/clients (consumer,producer,admin)
kafka/connect
kafka/streams
kafka/tools (command line tools like `bin/kafka-consumer-group.sh`)
kafka/tests (system test)


Hope this helps.


-Matthias


On 8/6/18 8:56 AM, Piyush Sagar wrote:
> Hello All,
> 
> Can anyone please add me to JIRA ?
> 
> Also, Is there any document(Confluence page) which briefs folder structure/
> sub modules of the main kafka repository ?
> 
> Thanks
> 





Re: [DISCUSS] KIP-346 - Limit blast radius of log compaction failure

2018-08-06 Thread Ray Chiang

I'm okay with that.

-Ray

On 8/6/18 10:59 AM, Colin McCabe wrote:

Perhaps we could start with max.uncleanable.partitions and then implement 
max.uncleanable.partitions.per.logdir in a follow-up change if it seemed to be 
necessary?  What do you think?

regards,
Colin
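The per-logdir bookkeeping being discussed might look roughly like the following. This is purely illustrative of the proposed `max.uncleanable.partitions` semantics, not Kafka's LogCleanerManager internals; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: track uncleanable partitions per log directory and
// report when a directory crosses the proposed max.uncleanable.partitions
// threshold (at which point it could be marked failed/offline).
public class UncleanableTracker {
    private final int maxUncleanablePartitions;
    private final Map<String, Set<String>> uncleanableByLogDir = new HashMap<>();

    public UncleanableTracker(int maxUncleanablePartitions) {
        this.maxUncleanablePartitions = maxUncleanablePartitions;
    }

    // Record a partition whose cleaning failed; return true if the log dir
    // should now be treated as failed (threshold exceeded).
    public boolean markUncleanable(String logDir, String topicPartition) {
        Set<String> parts =
                uncleanableByLogDir.computeIfAbsent(logDir, d -> new HashSet<>());
        parts.add(topicPartition);
        return parts.size() > maxUncleanablePartitions;
    }

    // Backing value for an uncleanable-partitions-count style metric.
    public int uncleanablePartitionsCount(String logDir) {
        return uncleanableByLogDir.getOrDefault(logDir, Set.of()).size();
    }
}
```

Keeping the counts keyed by log directory is what makes the `.per.logdir` variant of the config a natural follow-up: the data structure is the same, only the threshold check moves.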


On Sat, Aug 4, 2018, at 10:53, Stanislav Kozlovski wrote:

Hey Ray,

Thanks for the explanation. In regards to the configuration property - I'm
not sure. As long as it has sufficient documentation, I find
"max.uncleanable.partitions" to be okay. If we were to add the distinction
explicitly, maybe it should be `max.uncleanable.partitions.per.logdir` ?

On Thu, Aug 2, 2018 at 7:32 PM Ray Chiang  wrote:


One more thing occurred to me.  Should the configuration property be
named "max.uncleanable.partitions.per.disk" instead?

-Ray


On 8/1/18 9:11 AM, Stanislav Kozlovski wrote:

Yes, good catch. Thank you, James!

Best,
Stanislav

On Wed, Aug 1, 2018 at 5:05 PM James Cheng  wrote:


Can you update the KIP to say what the default is for
max.uncleanable.partitions?

-James

Sent from my iPhone


On Jul 31, 2018, at 9:56 AM, Stanislav Kozlovski <

stanis...@confluent.io>

wrote:

Hey group,

I am planning on starting a voting thread tomorrow. Please do reply if

you

feel there is anything left to discuss.

Best,
Stanislav

On Fri, Jul 27, 2018 at 11:05 PM Stanislav Kozlovski <

stanis...@confluent.io>

wrote:


Hey, Ray

Thanks for pointing that out, it's fixed now

Best,
Stanislav


On Fri, Jul 27, 2018 at 9:43 PM Ray Chiang 

wrote:

Thanks.  Can you fix the link in the "KIPs under discussion" table on
the main KIP landing page
<https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals#>?

I tried, but the Wiki won't let me.

-Ray


On 7/26/18 2:01 PM, Stanislav Kozlovski wrote:
Hey guys,

@Colin - good point. I added some sentences mentioning recent improvements
in the introductory section.

*Disk Failure* - I tend to agree with what Colin said - once a disk fails,
you don't want to work with it again. As such, I've changed my mind and
believe that we should mark the LogDir (assume it's a disk) as offline on
the first `IOException` encountered. This is the LogCleaner's current
behavior. We shouldn't change that.

*Respawning Threads* - I believe we should never re-spawn a thread. The
correct approach in my mind is to either have it stay dead or never let it
die in the first place.

*Uncleanable-partition-names metric* - Colin is right, this metric is
unneeded. Users can monitor the `uncleanable-partitions-count` metric and
inspect logs.

Hey Ray,

2) I'm 100% with James in agreement with setting up the LogCleaner to
skip over problematic partitions instead of dying.

I think we can do this for every exception that isn't an `IOException`.
This will future-proof us against bugs in the system and potential other
errors. Protecting yourself against unexpected failures is always a good
thing in my mind, but I also think that protecting yourself against bugs in
the software is sort of clunky. What does everybody think about this?

4) The only improvement I can think of is that if such an error occurs,
then have the option (configuration setting?) to create a .skip file (or
something similar).

This is a good suggestion. Have others also seen corruption be generally
tied to the same segment?

On Wed, Jul 25, 2018 at 11:55 AM Dhruvil Shah 
wrote:

For the cleaner thread specifically, I do not think respawning will help at
all because we are more than likely to run into the same issue again, which
would end up crashing the cleaner. Retrying makes sense for transient
errors or when you believe some part of the system could have healed
itself, both of which I think are not true for the log cleaner.

On Wed, Jul 25, 2018 at 11:08 AM Ron Dagostino 

wrote:

<< you in an infinite loop which consumes resources and fires off
continuous log messages.

Hi Colin.  In case it could be relevant, one way to mitigate this effect is
to implement a backoff mechanism (if a second respawn is to occur then wait
for 1 minute before doing it; then if a third respawn is to occur wait for
2 minutes before doing it; then 4 minutes, 8 minutes, etc. up to some max
wait time).

I have no opinion on whether respawn is appropriate or not in this context,
but a mitigation like the increasing backoff described above may be
relevant in weighing the pros and cons.

Ron
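The capped exponential backoff Ron describes (wait 1 minute, then 2, 4, 8, ... up to a maximum between respawn attempts) can be sketched as follows. This is illustrative only, not code from Kafka's log cleaner:

```java
// Illustrative sketch of capped exponential backoff between thread
// respawn attempts: no wait for the first start, then baseMs doubling
// on each subsequent respawn, capped at maxMs.
public class RespawnBackoff {
    private final long baseMs;
    private final long maxMs;
    private int attempts = 0;

    public RespawnBackoff(long baseMs, long maxMs) {
        this.baseMs = baseMs;
        this.maxMs = maxMs;
    }

    // Returns the wait before the next (re)start: 0 for the first attempt,
    // then baseMs, 2*baseMs, 4*baseMs, ... capped at maxMs. The shift is
    // bounded to avoid overflow for long-running processes.
    public long nextBackoffMs() {
        if (attempts++ == 0) {
            return 0L;
        }
        long backoff = baseMs << Math.min(attempts - 2, 30);
        return Math.min(backoff, maxMs);
    }
}
```

The cap is the important part of the trade-off: without it, a persistently failing thread eventually stops respawning in practice; with it, the loop keeps running but at a bounded, low cost.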

On Wed, Jul 25, 2018 at 1:26 PM Colin McCabe 

wrote:

On Mon, Jul 23, 2018, at 23:20, James Cheng wrote:
Hi Stanislav! Thanks for this KIP!

I agree that it would be good if the LogCleaner were more tolerant of
errors. Currently, as you said, once it dies, it stays dead.

Things are better now than they used to be. We have the metric
kafka.log:type=LogCleanerManager,name=time-since-last-run-ms
which we can use to tell us if the threads are dead. And as of 1.1.0, we
have KIP-226, which allows you to restart the log cleaner
Build failed in Jenkins: kafka-2.0-jdk8 #103

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
error: missing object referenced by 'refs/tags/1.1.1-rc0'
error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 737bf43bb4e78d2d7a0ee53c27527b479972ebf8
error: Could not read 1ed1daefbc2d72e9b501b94d8c99e874b89f1137
remote: Counting objects: 7473, done.
remote: Compressing objects:   1% (1/51)   ...   remote: Compressing objects: 100% (51/51)

Build failed in Jenkins: kafka-1.0-jdk7 #224

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H28 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 1e98b2aa5027eb60ee1a112a7f8ec45b9c47
error: Could not read 5030b5d9d82bb08d520e07c4cd23e23e3caa58af
error: Could not read 2a2def1d9f52422231591c810e637433ac3fc160
remote: Counting objects: 4361, done.
remote: Compressing objects:   1% (1/57)   ...   remote: Compressing objects:  96% (55/57)

Build failed in Jenkins: kafka-2.0-jdk8 #104

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
error: missing object referenced by 'refs/tags/1.1.1-rc0'
error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 737bf43bb4e78d2d7a0ee53c27527b479972ebf8
error: Could not read 1ed1daefbc2d72e9b501b94d8c99e874b89f1137
remote: Counting objects: 7491, done.
remote: Compressing objects:   1% (1/70)   ...   remote: Compressing objects:  72% (51/70)

Build failed in Jenkins: kafka-1.0-jdk7 #225

2018-08-06 Thread Apache Jenkins Server
See 

--
[...truncated 729 B...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 1785c72ff1ef79f46217661abcf9b20ad2c4faf6
error: Could not read 060fd7c5cc2a34d43871af8452a07a84ee32bd20
error: Could not read 060fd7c5cc2a34d43871af8452a07a84ee32bd20
error: Could not read e9f6f2bdcef91f79d128e75e91a1184461ec3de6
error: Could not read 38e4be686ccd8906f936c0c010d628956c2844a7
error: missing object referenced by 'refs/tags/1.0.2'
error: Could not read 1785c72ff1ef79f46217661abcf9b20ad2c4faf6
error: Could not read f54ba7cf8528b5082471a86850ba951244a990e1
error: Could not read 50595d48edeaaf2c0ac255c1514af50b339f7d30
error: Could not read 2a121f7b1d402825cde176be346de8f91c6c7c81
remote: Counting objects: 9479, done.
remote: Compressing objects:   1% (1/71)   ...   remote: Compressing objects:  80% (57/71)

Re: [DISCUSS] KIP-320: Allow fetchers to detect and handle log truncation

2018-08-06 Thread Jason Gustafson
Hi Jun,

I spent a little more time looking at the usage in WorkerSinkTask. I think
actually the initialization of the positions in the assignment callback is
not strictly necessary. We keep a map of the current consumed offsets which
is updated as we consume the data. As far as I can tell, we could either
skip the initialization and wait until the first fetched records come in or
we could use the committed() API to initialize positions. I think the root
of it is the argument Anna made previously. The leader epoch lets us track
the history of records that we have consumed. It is only useful when we
want to tell whether records we have consumed were lost. So getting the
leader epoch of an arbitrary position that was seeked doesn't really make
sense. The dependence on the consumed records is most explicit if we only
expose the leader epoch inside the fetched records. We might consider
adding a `lastConsumedLeaderEpoch` API to expose it directly, but I'm
inclined to leave that as potential future work.

A couple additional notes:

1. I've renamed OffsetAndMetadata.leaderEpoch to
OffsetAndMetadata.lastLeaderEpoch. In fact, the consumer doesn't know what
the leader epoch of the committed offset should be, so this just clarifies
the expected usage.

2. I decided to add a helper to ConsumerRecords to get the next offsets. We
would use this in WorkerSinkTask and external storage use cases to simplify
the commit logic. If we are consuming batch by batch, then we don't need
the message-level bookkeeping.
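The role the last consumed leader epoch plays can be illustrated with a self-contained model of the truncation check, assuming the OffsetsForLeaderEpoch semantics discussed in this thread: the leader's epoch history maps each leader epoch to its start offset, so the end offset of an epoch is the start offset of the next one. Names here are illustrative, not Kafka client internals:

```java
import java.util.Map;
import java.util.NavigableMap;

// Self-contained model of the KIP-320 truncation check. A consumer that
// last consumed under `lastConsumedEpoch` and would next fetch at
// `nextFetchOffset` asks the leader for the end offset of that epoch;
// an end offset smaller than the fetch offset means records the consumer
// already saw have been truncated away.
public class TruncationCheck {
    public static boolean truncated(NavigableMap<Integer, Long> epochStartOffsets,
                                    int lastConsumedEpoch, long nextFetchOffset) {
        // End offset of the consumed epoch = start offset of the next
        // larger epoch; if the consumed epoch is still the latest, treat
        // the end as unbounded (the leader would return its log end offset).
        Map.Entry<Integer, Long> next = epochStartOffsets.higherEntry(lastConsumedEpoch);
        long epochEndOffset = (next != null) ? next.getValue() : Long.MAX_VALUE;
        return epochEndOffset < nextFetchOffset;
    }
}
```

This also shows why the epoch of an arbitrary seeked position is not meaningful: the check only tells you something about records you actually consumed under that epoch.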

Thanks,
Jason

On Mon, Aug 6, 2018 at 10:28 AM, Jason Gustafson  wrote:

> Hey Jun,
>
> Thanks for the review. Responses below:
>
> 50. Yes, that is right. I clarified this in the KIP.
>
> 51. Yes, updated the KIP to mention.
>
> 52. Yeah, this was a reference to a previous iteration. I've fixed it.
>
> 53. I changed the API to use an `Optional` for the leader epoch
> and added a note about the default value. Does that seem reasonable?
>
> 54. We discussed this above, but could not find a great option. The
> options are to add a new API (e.g. positionAndEpoch) or to rely on the user
> to get the epoch from the fetched records. We were leaning toward the
> latter, but I admit it was not fully satisfying. In this case, Connect
> would need to track the last consumed offsets manually instead of relying
> on the consumer. We also considered adding a convenience method to
> ConsumerRecords to get the offset to commit for all fetched partitions.
> This makes the additional bookkeeping pretty minimal. What do you think?
>
> 55. I clarified in the KIP. I was mainly thinking of situations where a
> previously valid offset becomes out of range.
>
> 56. Yeah, that's a bit annoying. I decided to keep LeaderEpoch as it is
> and use CurrentLeaderEpoch for both OffsetForLeaderEpoch and the Fetch
> APIs. I think Dong suggested this previously as well.
>
> 57. We could, but I'm not sure there's a strong reason to do so. I was
> thinking we would leave it around for convenience, but let me know if you
> think we should do otherwise.
>
>
> Thanks,
> Jason
>
>
> On Fri, Aug 3, 2018 at 4:49 PM, Jun Rao  wrote:
>
>> Hi, Jason,
>>
>> Thanks for the updated KIP. Well thought-through. Just a few minor
>> comments
>> below.
>>
>> 50. For seek(TopicPartition partition, OffsetAndMetadata offset), I guess
>> under the cover, it will make OffsetsForLeaderEpoch request to determine
>> if
>> the seeked offset is still valid before fetching? If so, it will be useful
>> document this in the wiki.
>>
>> 51. Similarly, if the consumer fetch request gets FENCED_LEADER_EPOCH, I
>> guess the consumer will also make OffsetsForLeaderEpoch request to
>> determine if the last consumed offset is still valid before fetching? If
>> so, it will be useful document this in the wiki.
>>
>> 52. "If the consumer seeks to the middle of the log, for example, then we
>> will use the sentinel value -1 and the leader will skip the epoch
>> validation. " Is this true? If the consumer seeks using
>> seek(TopicPartition
>> partition, OffsetAndMetadata offset) and the seeked offset is valid, the
>> consumer can/should use the leaderEpoch in the cached metadata for
>> fetching?
>>
>> 53. OffsetAndMetadata. For backward compatibility, we need to support
>> constructing OffsetAndMetadata without providing leaderEpoch. Could we
>> define the default value of leaderEpoch if not provided and the semantics
>> of that (e.g., skipping the epoch validation)?
>>
>> 54. I saw the following code in WorkerSinkTask in Connect. It saves the
>> offset obtained through position(), which can be committed latter. Since
>> position() doesn't return the leaderEpoch, this can lead to committed
>> offset without leaderEpoch. Not sure how common this usage is, but what's
>> the recommendation for such users?
>>
>> private class HandleRebalance implements ConsumerRebalanceListener {
>> @Override
>> public void onPartitionsAssigned(Collection
>> partitions) {
>> log.debug("{} Partition

Build failed in Jenkins: kafka-trunk-jdk10 #374

2018-08-06 Thread Apache Jenkins Server
See 


Changes:

[harsha] KAFKA-7142: fix joinGroup performance issues (#5354)

[harsha] Fixed Spelling. (#5432)

[wangguoz] MINOR: fix Streams docs state.dir (#5465)

--
[...truncated 1.54 MB...]
kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldRemoveTransactionsForPartitionOnEmigration STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.Transa

Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by specifying member id

2018-08-06 Thread Boyang Chen
Sounds good to me, Guozhang. I feel clear and confident about this enhanced 
design.


Kindly asking for everyone's opinion on this thread about the updated approach. If we 
are OK here, I will go ahead and update the KIP and plan out the implementation.


Best,

Boyang


From: Guozhang Wang 
Sent: Monday, August 6, 2018 8:46 AM
To: dev
Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by 
specifying member id

Re: reuse of "client.id": good question. Although client.id could be unique
to guarantee fencing, by using it we are binding the user's behavior of
specifically setting client id to the indication of using static
membership. There are, however, cases where users want to specify client id
but NOT want to use static membership protocols we proposed here. This was
also why in transactional messaging we chose to not reuse client.id as
transactional.id, since the latter also indicates that "I want to turn on
transactional messaging". If we agree that setting this "id" here has
similar semantics of enabling a certain additional feature, then we should
not piggy-back that on the existing client.id.


Guozhang

On Sat, Aug 4, 2018 at 10:13 PM, Boyang Chen  wrote:

> Hey Guozhang,
>
>
> I think this makes sense. I'm wondering whether we could reuse the
> consumer client id as a unique identifier instead of introducing new terms.
> From my experience, they should be unique in either Stream or Consumer use
> case.
>
>
> Thanks a lot for the clarification!
>
> 
> From: Guozhang Wang 
> Sent: Saturday, August 4, 2018 5:25 AM
> To: dev
> Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by
> specifying member id
>
> @Boyang,
>
> * Yes registry id should be generated via a config, similar to what you
> proposed in the KIP currently for member id. And "even if multiple
> consumers are using the same member id" (I think you meant registry id
> here?), we can fence against them.
>
> BTW I'm not married (but cannot speak for Jason :P ) to "registry id", so
> any other naming suggestions are welcome. John's suggestion of `member
> name` sounds okay to me. Another wild idea I had is just name it "consumer
> id".
>
> * About the metrics: note that during a consumer bouncing case, it will
> resume and come back with an empty member id and a pre-existing registry id.
> On the coordinator case, it is considered a "fenced" event, but it is
> really not an event that should be alerted. So I think we should not simply
> record a metric whenever the coordinator updates the registryId -> memberId
> map.
>
> We could, however, consider adding a metric for the other event: whenever a
> coordinator is rejecting a request because of incorrect member id. Note
> this event covers both static-membership-related fencing events and
> today's normal cases where members are kicked out of the group. During a
> rebalance, we may see some spikes of this metric, but during normal
> operations this metric should stay at 0; when we see this metric stay
> non-zero consistently, it means there are some zombies lurking around.
>
> * About the code base: GroupCoordinator represents the broker-side
> coordinator logic, and ConsumerCoordinator is on the client side,
> representing the related logic for consumer to communicate with the
> broker-side coordinator. When we talk about the "coordinator" in this
> discussion, unless clearly specified, it should always be to the
> broker-side logic, i.e. GroupCoordinator.
>
>
> @John
>
> Note that a consumer can only know that it is already fenced when talking
> to the coordinator (again). More specifically, once a consumer gets
> assignment, it will keep fetching from the brokers without talking to
> the coordinator until the next heartbeat request. Before that time, it is
> unavoidable that two consumers may be fetching from the same partitions.
> I used the commit request just as an example of any request sent to the
> coordinator, not saying that they will only become aware then. In fact,
> when they send a Heartbeat request to the coordinator, they could also be
> notified that "you are fenced".
>
>
> Guozhang
>
>
> On Wed, Aug 1, 2018 at 10:41 PM, Boyang Chen  wrote:
>
> > Hey Guozhang,
> >
> >
> > thanks for this detailed proposal! Quickly, I want to clarify: where is
> > this registry id generated? My understanding is that the registry id is a
> > unique id provided by user correct? This way even if multiple consumers
> are
> > using the same member id, we are fencing against polluting the commit
> > history. Very thoughtful approach! Furthermore, I feel we should export
> > alerting metrics on catching duplicate consumer instances since multiple
> of
> > them could still be reading the same topic partition and executing
> > duplicate business logic.
> >
> >
> > Also a minor comment is that our current codebase has two different
> > classes: the GroupCoordinator and ConsumerCoordinator. When talking abou
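The fencing bookkeeping Guozhang outlines above (a registry id mapped to the currently active member id, with stale-member-id rejections being the metric-worthy event) can be sketched in miniature. The class and method names, and the single-map layout, are illustrative assumptions, not the actual GroupCoordinator code:

```java
import java.util.HashMap;
import java.util.Map;

public class StaticMembershipFencing {
    // registryId -> currently active memberId (illustrative, not the actual
    // GroupCoordinator state layout)
    private final Map<String, String> registry = new HashMap<>();

    // A bouncing consumer rejoins with its pre-existing registry id and a
    // fresh member id; the coordinator silently updates the mapping, which
    // is why this path should not be alerted on.
    public void rejoin(String registryId, String newMemberId) {
        registry.put(registryId, newMemberId);
    }

    // A request carrying a stale member id is fenced; this is the event
    // worth counting, since a consistently non-zero rate suggests zombies.
    public boolean isFenced(String registryId, String memberId) {
        String current = registry.get(registryId);
        return current != null && !current.equals(memberId);
    }

    public static void main(String[] args) {
        StaticMembershipFencing f = new StaticMembershipFencing();
        f.rejoin("consumer-1", "member-A");
        f.rejoin("consumer-1", "member-B");               // bounce: mapping updated
        System.out.println(f.isFenced("consumer-1", "member-A")); // true  (zombie)
        System.out.println(f.isFenced("consumer-1", "member-B")); // false (current owner)
    }
}
```

The point of the sketch is that a rejoin and a fenced request touch the same map but are very different operationally: one is routine, the other is the signal to monitor.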

Re: [VOTE] KIP-328: Ability to suppress updates for KTables

2018-08-06 Thread Matthias J. Sax
+1 (binding)

Thanks for the KIP.


-Matthias

On 8/3/18 12:52 AM, Damian Guy wrote:
> Thanks John! +1
> 
> On Mon, 30 Jul 2018 at 23:58 Guozhang Wang  wrote:
> 
>> Yes, the addendum lgtm as well. Thanks!
>>
>> On Mon, Jul 30, 2018 at 3:34 PM, John Roesler  wrote:
>>
>>> Another thing that came up after I started working on an implementation
>> is
>>> that in addition to deprecating "retention" from the Windows interface,
>> we
>>> also need to deprecate "segmentInterval", for the same reasons. I simply
>>> overlooked it previously. I've updated the KIP accordingly.
>>>
>>> Hopefully, this doesn't change anyone's vote.
>>>
>>> Thanks,
>>> -John
>>>
>>> On Mon, Jul 30, 2018 at 5:31 PM John Roesler  wrote:
>>>
 Thanks Guozhang,

 Thanks for that catch. to clarify, currently, events are "late" only
>> when
 they are older than the retention period. Currently, we detect this in
>>> the
 processor and record it as a "skipped-record". We then do not attempt
>> to
 store the event in the window store. If a user provided a
>> pre-configured
 window store with a retention period smaller than the one they specify
>>> via
 Windows#until, the segmented store will drop the update with no metric
>>> and
 record a debug-level log.

 With KIP-328, with the introduction of "grace period" and moving
>>> retention
 fully into the state store, we need to have metrics for both "late
>>> events"
 (new records older than the grace period) and "expired window events"
>>> (new
 records for windows that are no longer retained in the state store). I
 already proposed metrics for the late events, and I've just updated the
>>> KIP
 with metrics for the expired window events. I also updated the KIP to
>>> make
 it clear that neither late nor expired events will count as
 "skipped-records" any more.

 -John

 On Mon, Jul 30, 2018 at 4:22 PM Guozhang Wang 
>>> wrote:

> Hi John,
>
> Thanks for the updated KIP, +1 from me, and one minor suggestion:
>
> Following your suggestion of the differentiation of `skipped-records`
>>> v.s.
> `late-event-drop`, we should probably consider moving the scenarios
>>> where
> records got ignored due the window not being available any more in
> windowed
> aggregation operators from the `skipped-records` metrics recording to
>>> the
> `late-event-drop` metrics recording.
>
>
>
> Guozhang
>
>
> On Mon, Jul 30, 2018 at 1:36 PM, Bill Bejeck 
>> wrote:
>
>> Thanks for the KIP!
>>
>> +1
>>
>> -Bill
>>
>> On Mon, Jul 30, 2018 at 3:42 PM Ted Yu  wrote:
>>
>>> +1
>>>
>>> On Mon, Jul 30, 2018 at 11:46 AM John Roesler 
> wrote:
>>>
 Hello devs,

 The discussion of KIP-328 has gone some time with no new
>> comments,
> so I
>>> am
 calling for a vote!

 Here's the KIP: https://cwiki.apache.org/confluence/x/sQU0BQ

 The basic idea is to provide:
 * more usable control over update rate (vs the current state
>> store
>>> caches)
 * the final-result-for-windowed-computations feature which
>>> several
>> people
 have requested

 Thanks,
 -John

>>>
>>
>
>
>
> --
> -- Guozhang
>

>>>
>>
>>
>>
>> --
>> -- Guozhang
>>
> 



signature.asc
Description: OpenPGP digital signature
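The late-event vs. expired-window-event distinction discussed in this thread can be illustrated with a small classifier. This is only a sketch of the semantics (a record misses the grace period before its window falls out of retention, since retention is at least window end plus grace); the parameter names are assumptions, not the actual Streams API:

```java
public class WindowedRecordClassifier {
    public enum Outcome { ACCEPTED, EXPIRED, LATE }

    // Illustrative classification per the discussion: a record counts as an
    // "expired window event" if its window has already been dropped from the
    // store (stream time passed the retention bound), and as a "late event"
    // if it only missed the grace period. Neither counts as "skipped-records".
    public static Outcome classify(long windowEnd, long graceMs,
                                   long retentionMs, long streamTime) {
        if (streamTime > windowEnd + retentionMs) return Outcome.EXPIRED;
        if (streamTime > windowEnd + graceMs)     return Outcome.LATE;
        return Outcome.ACCEPTED;
    }

    public static void main(String[] args) {
        // window ends at t=100, grace 10ms, retention 50ms
        System.out.println(classify(100, 10, 50, 105)); // ACCEPTED
        System.out.println(classify(100, 10, 50, 120)); // LATE
        System.out.println(classify(100, 10, 50, 160)); // EXPIRED
    }
}
```

Each outcome maps to its own metric in the KIP: `late-event-drop` for LATE, the expired-window metric for EXPIRED.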


Re: ConsumerGroupCommand tool improvement?

2018-08-06 Thread Vahid S Hashemian
Hi Colin,

Thanks for the feedback!
I understand your concerns (I was thinking about making this improvement 
in a way that's fully backward compatible, with regex syntax similar to 
that in some of the existing tools).
In any case, if there is not enough interest in the user community I'll 
rest my case.

Thanks!
--Vahid




From:   Colin McCabe 
To: us...@kafka.apache.org
Date:   08/06/2018 12:44 PM
Subject:Re: ConsumerGroupCommand tool improvement?



On Mon, Aug 6, 2018, at 12:04, Vahid S Hashemian wrote:
> Hi Colin,
> 
> Thanks for considering the idea and sharing your feedback.
> 
> The improvements I proposed can be achieved, to some extent, using the 
> AdminClient API and the Consumer Group CLI tool. But they won't fully 
> support the proposal.
> 
> For example:
> - Regular expressions are not supported on the groups
> - Topic / client filtering is not supported across all groups
> 
> So the reason for proposing the idea was to see if other Kafka users are 
> also interested in some of these features, so we can remove the burden of 
> them writing custom code around existing consumer group features, and make 
> those features built into the Kafka Consumer Group Command and AdminClient 
> API.

Hmm.  If you're writing Java code that calls the APIs, though, this is 
easy, right?  You can filter the groups in the cluster with a regular 
expression with just a single line of code in Java 8.

> adminClient.listConsumerGroups().all().get().stream()
>     .filter(listing -> listing.groupId().matches(myRegex))
>     .collect(Collectors.toList());

An option to filter ListConsumerGroups by a regular expression might 
actually be less easy-to-use for most users than this simple filter, since 
users would have to read the JavaDoc for our APIs.

Maybe there are some use-cases where it makes sense to add regex support 
to the command-line tools, though.

I guess the reason why I am pushing back on this is that regular 
expressions add a lot of complexity to the API and make it harder for us 
to meet our backwards compatibility guarantees.  For example, Java regular 
expressions changed slightly between Java 7 and Java 8

best,
Colin
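The one-liner above can be expanded into a self-contained sketch. A plain list of group ids stands in for a live AdminClient here, so the filtering logic itself can run standalone; the helper class and method names are mine, not part of any Kafka API:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class GroupFilter {
    // Filter group ids by a regular expression, mirroring the stream
    // pipeline from the snippet above (matches() requires a full match).
    static List<String> matching(List<String> groupIds, String regex) {
        Pattern p = Pattern.compile(regex);
        return groupIds.stream()
                .filter(id -> p.matcher(id).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> groups = List.of("payments-app", "payments-audit", "search-indexer");
        System.out.println(matching(groups, "payments-.*")); // prints [payments-app, payments-audit]
    }
}
```

With a real AdminClient you would feed the stream from `listConsumerGroups().all().get()` instead of a literal list; the filtering step is unchanged.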


> 
> Thanks again!
> --Vahid
> 
> 
> 
> From:   Colin McCabe 
> To: us...@kafka.apache.org
> Date:   08/03/2018 04:16 PM
> Subject:Re: ConsumerGroupCommand tool improvement?
> 
> 
> 
> Hi Vahid,
> 
> Interesting idea.
> 
> It seems like if you're using the AdminClient APIs programmatically, you 
> can just do the filtering yourself in a more flexible way than what we 
> could provide.
> 
> On the other hand, if you're using the ./bin/consumer-groups.sh 
> command-line tool, why not use grep or a similar tool to filter the 
> output?  Maybe there is some additional functionality in supporting 
> regexes in the command-line tool, but it also seems like it might be kind 
> of complex as well.  Do you have some examples where having regex support 
> in the tool would be much easier than the traditional way of piping the 
> output to grep, awk, and sed?
> 
> best,
> Colin
> 
> 
> On Thu, Aug 2, 2018, at 14:23, Vahid S Hashemian wrote:
> > Hi all,
> > 
> > A requirement has been raised by a colleague and I wanted to see if 
> there 
> > is any interest in the community in adding the functionality to Apache 

> > Kafka.
> > 
> > ConsumerGroupCommand tool in describe ('--describe' or '--describe 
> > --offsets') mode currently lists all topics the group has consumed from 
> > and all consumers with assigned partitions for a single group.
> > The idea is to allow filtering of topics, consumers (client ids), and 
> > even groups using regular expressions. This will allow the tool to 
> > handle use cases such as:
> > - What's the status of a particular consumer (or consumers) in all the 
> >   groups they are consuming from? (for example, to check if they are 
> >   lagging behind in all groups)
> > - What consumer groups are consuming from a topic (or topics), and 
> >   what's the lag for each group?
> > - Limit the existing result to the topics/consumers of interest (for 
> >   groups with several topics/consumers)
> > - ...
> > 
> > This would potentially lead to enhancing the AdminClient API as well.
> > 
> > If the community also sees a value in this, I could start drafting a 
> KIP.
> > 
> > Thanks for your feedback.
> > --Vahid
> > 
> 
> 
> 
> 
> 







Re: [DISCUSS] KIP-320: Allow fetchers to detect and handle log truncation

2018-08-06 Thread Jun Rao
Hi, Jason,

Thanks for the reply. They all make sense. Just a couple of more minor
comments.

57. I was thinking that it will be useful to encourage people to use the
new seek() api to get better semantics. Deprecating the old seek api is one
way. I guess we could also just document it for now.

60. "Log truncation is detected if the first offset of the epoch for the
committed offset is larger than this epoch and begins at an earlier
offset." It seems that we should add "that" before "is larger than"?

Thanks,

Jun


On Mon, Aug 6, 2018 at 3:07 PM, Jason Gustafson  wrote:

> Hi Jun,
>
> I spent a little more time looking at the usage in WorkerSinkTask. I think
> actually the initialization of the positions in the assignment callback is
> not strictly necessary. We keep a map of the current consumed offsets which
> is updated as we consume the data. As far as I can tell, we could either
> skip the initialization and wait until the first fetched records come in or
> we could use the committed() API to initialize positions. I think the root
> of it is the argument Anna made previously. The leader epoch lets us track
> the history of records that we have consumed. It is only useful when we
> want to tell whether records we have consumed were lost. So getting the
> leader epoch of an arbitrary position that was seeked doesn't really make
> sense. The dependence on the consumed records is most explicit if we only
> expose the leader epoch inside the fetched records. We might consider
> adding a `lastConsumedLeaderEpoch` API to expose it directly, but I'm
> inclined to leave that as potential future work.
>
> A couple additional notes:
>
> 1. I've renamed OffsetAndMetadata.leaderEpoch to
> OffsetAndMetadata.lastLeaderEpoch. In fact, the consumer doesn't know what
> the leader epoch of the committed offset should be, so this just clarifies
> the expected usage.
>
> 2. I decided to add a helper to ConsumerRecords to get the next offsets. We
> would use this in WorkerSinkTask and external storage use cases to simplify
> the commit logic. If we are consuming batch by batch, then we don't need
> the message-level bookkeeping.
>
> Thanks,
> Jason
>
> On Mon, Aug 6, 2018 at 10:28 AM, Jason Gustafson 
> wrote:
>
> > Hey Jun,
> >
> > Thanks for the review. Responses below:
> >
> > 50. Yes, that is right. I clarified this in the KIP.
> >
> > 51. Yes, updated the KIP to mention.
> >
> > 52. Yeah, this was a reference to a previous iteration. I've fixed it.
> >
> > 53. I changed the API to use an `Optional` for the leader epoch
> > and added a note about the default value. Does that seem reasonable?
> >
> > 54. We discussed this above, but could not find a great option. The
> > options are to add a new API (e.g. positionAndEpoch) or to rely on the
> user
> > to get the epoch from the fetched records. We were leaning toward the
> > latter, but I admit it was not fully satisfying. In this case, Connect
> > would need to track the last consumed offsets manually instead of relying
> > on the consumer. We also considered adding a convenience method to
> > ConsumerRecords to get the offset to commit for all fetched partitions.
> > This makes the additional bookkeeping pretty minimal. What do you think?
> >
> > 55. I clarified in the KIP. I was mainly thinking of situations where a
> > previously valid offset becomes out of range.
> >
> > 56. Yeah, that's a bit annoying. I decided to keep LeaderEpoch as it is
> > and use CurrentLeaderEpoch for both OffsetForLeaderEpoch and the Fetch
> > APIs. I think Dong suggested this previously as well.
> >
> > 57. We could, but I'm not sure there's a strong reason to do so. I was
> > thinking we would leave it around for convenience, but let me know if you
> > think we should do otherwise.
> >
> >
> > Thanks,
> > Jason
> >
> >
> > On Fri, Aug 3, 2018 at 4:49 PM, Jun Rao  wrote:
> >
> >> Hi, Jason,
> >>
> >> Thanks for the updated KIP. Well thought-through. Just a few minor
> >> comments
> >> below.
> >>
> >> 50. For seek(TopicPartition partition, OffsetAndMetadata offset), I guess
> >> under the cover, it will make an OffsetsForLeaderEpoch request to
> >> determine if the seeked offset is still valid before fetching? If so, it
> >> will be useful to document this in the wiki.
> >>
> >> 51. Similarly, if the consumer fetch request gets FENCED_LEADER_EPOCH, I
> >> guess the consumer will also make an OffsetsForLeaderEpoch request to
> >> determine if the last consumed offset is still valid before fetching? If
> >> so, it will be useful to document this in the wiki.
> >>
> >> 52. "If the consumer seeks to the middle of the log, for example, then
> we
> >> will use the sentinel value -1 and the leader will skip the epoch
> >> validation. " Is this true? If the consumer seeks using
> >> seek(TopicPartition
> >> partition, OffsetAndMetadata offset) and the seeked offset is valid, the
> >> consumer can/should use the leaderEpoch in the cached metadata for
> >> fetching?
> >>
> >> 53. OffsetAndMetadata. F
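The truncation check at the heart of this KIP discussion can be sketched with a toy leader-epoch cache: the consumer presents the (epoch, offset) pair it committed, and truncation is detected when the leader's log for that epoch ends before the committed offset. The class layout and names below are illustrative assumptions, not the broker's actual code:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class TruncationCheck {
    // epoch -> start offset of that epoch on the leader (illustrative cache)
    private final NavigableMap<Integer, Long> epochStartOffsets = new TreeMap<>();

    public void addEpoch(int epoch, long startOffset) {
        epochStartOffsets.put(epoch, startOffset);
    }

    // End offset of `epoch` on the leader: the start offset of the next
    // epoch, or the log end offset if `epoch` is the latest one.
    public long endOffsetFor(int epoch, long logEndOffset) {
        Integer next = epochStartOffsets.higherKey(epoch);
        return next == null ? logEndOffset : epochStartOffsets.get(next);
    }

    // The consumer committed (epoch, offset); truncation occurred if the
    // leader's log for that epoch ends before the committed offset.
    public boolean truncated(int committedEpoch, long committedOffset, long logEndOffset) {
        return endOffsetFor(committedEpoch, logEndOffset) < committedOffset;
    }

    public static void main(String[] args) {
        TruncationCheck c = new TruncationCheck();
        c.addEpoch(5, 100L);   // epoch 5 starts at offset 100
        c.addEpoch(6, 150L);   // epoch 6 starts at offset 150
        System.out.println(c.truncated(5, 140L, 200L)); // false: epoch 5 ends at 150
        System.out.println(c.truncated(5, 160L, 200L)); // true : offsets 150-160 were lost
    }
}
```

This also shows why an arbitrary seeked position has no meaningful epoch to validate: the check only makes sense for offsets the consumer actually observed under some leader epoch.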

Build failed in Jenkins: kafka-2.0-jdk8 #105

2018-08-06 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Fixed default streams state dir location. (#5441)

[matthias] MINOR: fix Streams docs state.dir (#5465)

--
[...truncated 438.61 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testHighConcurrencyM

Jenkins build is back to normal : kafka-1.0-jdk7 #226

2018-08-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #375

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 24d0d3351a43f52f6f688143672d7c57c5f19595
error: Could not read 1e98b2aa5027eb60ee1a112a7f8ec45b9c47
error: Could not read 689bac3c6fcae927ccee89c8d4fd54c2fae83be1
remote: Counting objects: 4878, done.
remote: Compressing objects: 100% (8/8), done.

Jenkins build is back to normal : kafka-trunk-jdk8 #2870

2018-08-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #376

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 24d0d3351a43f52f6f688143672d7c57c5f19595
error: Could not read 1e98b2aa5027eb60ee1a112a7f8ec45b9c47
error: Could not read 689bac3c6fcae927ccee89c8d4fd54c2fae83be1
remote: Counting objects: 4878, done.
remote: Compressing objects: 100% (8/8), done.

[jira] [Created] (KAFKA-7253) The connector type responded by worker is always null when creating connector

2018-08-06 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created KAFKA-7253:
-

 Summary: The connector type responded by worker is always null 
when creating connector
 Key: KAFKA-7253
 URL: https://issues.apache.org/jira/browse/KAFKA-7253
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


{code}
// Note that we use the updated connector config despite the fact that we don't have an updated
// snapshot yet. The existing task info should still be accurate.
Map<String, String> map = configState.connectorConfig(connName);
ConnectorInfo info = new ConnectorInfo(connName, config, configState.tasks(connName),
        map == null ? null : connectorTypeForClass(map.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG)));
{code}
The null map causes a null type in the response. The connector class name can 
instead be taken from the config in the request, since we already require that 
config to contain the connector class name.
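A minimal, self-contained sketch of the suggested fallback (the class and method 
names here are illustrative, not the actual Connect code):

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectorTypeFallback {
    // Same config key Connect requires in every connector config.
    static final String CONNECTOR_CLASS_CONFIG = "connector.class";

    // Prefer the (possibly not-yet-updated) config snapshot, but fall back to
    // the config from the create request, so the connector class name is never
    // null just because the snapshot has not caught up yet.
    static String connectorClass(Map<String, String> snapshotConfig,
                                 Map<String, String> requestConfig) {
        Map<String, String> source = snapshotConfig != null ? snapshotConfig : requestConfig;
        return source == null ? null : source.get(CONNECTOR_CLASS_CONFIG);
    }

    public static void main(String[] args) {
        Map<String, String> request = new HashMap<>();
        request.put(CONNECTOR_CLASS_CONFIG, "org.example.MySourceConnector");

        // Snapshot not updated yet: the current code would report a null type here.
        System.out.println(connectorClass(null, request)); // prints org.example.MySourceConnector
    }
}
```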



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #377

2018-08-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 24d0d3351a43f52f6f688143672d7c57c5f19595
error: Could not read 1e98b2aa5027eb60ee1a112a7f8ec45b9c47
error: Could not read 689bac3c6fcae927ccee89c8d4fd54c2fae83be1
remote: Counting objects: 4894, done.

Re: Add to JIRA

2018-08-06 Thread Piyush Sagar
Thanks, Matthias J. Sax, for the help.

My username: *piyush_sagar*

I'm looking to contribute to the Java codebase; any pointers for getting
started would be helpful.

Thanks

On Tue, 7 Aug 2018 at 00:53, Matthias J. Sax  wrote:

> What is your user id?
>
> I am not aware of any docs, but it's rather simple.
>
> kafka/core (brokers)
> kafka/clients (consumer,producer,admin)
> kafka/connect
> kafka/streams
> kafka/tools (command line tools like `bin/kafka-consumer-group.sh`)
> kafka/tests (system test)
>
>
> Hope this helps.
>
>
> -Matthias
>
>
> On 8/6/18 8:56 AM, Piyush Sagar wrote:
> > Hello All,
> >
> > Can anyone please add me to JIRA ?
> >
> > Also, Is there any document(Confluence page) which briefs folder
> structure/
> > sub modules of the main kafka repository ?
> >
> > Thanks
> >
>
>


Build failed in Jenkins: kafka-trunk-jdk8 #2871

2018-08-06 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Log AdminClient configs (#5457)

--
[...truncated 1.97 MB...]
org.apache.kafka.common.security.scram.internals.ScramFormatterTest > 
rfc7677Example STARTED

org.apache.kafka.common.security.scram.internals.ScramFormatterTest > 
rfc7677Example PASSED

org.apache.kafka.common.security.scram.internals.ScramFormatterTest > saslName 
STARTED

org.apache.kafka.common.security.scram.internals.ScramFormatterTest > saslName 
PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseOldPrincipalBuilderForPlaintextIfProvided STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseOldPrincipalBuilderForPlaintextIfProvided PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalBuilderScram STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalBuilderScram PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalBuilderGssapi STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testPrincipalBuilderGssapi PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseSessionPeerPrincipalForSsl STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseSessionPeerPrincipalForSsl PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseOldPrincipalBuilderForSslIfProvided STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testUseOldPrincipalBuilderForSslIfProvided PASSED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testReturnAnonymousPrincipalForPlaintext STARTED

org.apache.kafka.common.security.auth.DefaultKafkaPrincipalBuilderTest > 
testReturnAnonymousPrincipalForPlaintext PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testToLowerCase 
STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testToLowerCase 
PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testInvalidRules 
STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testInvalidRules 
PASSED

org.apache.kafka.common.security.kerberos.KerberosRuleTest > 
testReplaceParameters STARTED

org.apache.kafka.common.security.kerberos.KerberosRuleTest > 
testReplaceParameters PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testKeyStoreTrustStoreValidation STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testKeyStoreTrustStoreValidation PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testUntrustedKeyStoreValidation STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testUntrustedKeyStoreValidation PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testCertificateEntriesValidation STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testCertificateEntriesValidation PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.security.plain.internals.PlainSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.plain.internals.PlainSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.plain.internals.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.internals.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.plain.internals.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.internals.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue 
STARTED

org.a

Re: [VOTE] KIP-341: Update Sticky Assignor's User Data Protocol

2018-08-06 Thread Vahid S Hashemian
The KIP passes with 3 binding and 2 non-binding +1 votes, and no -1 or 0 
votes.

Binding +1s
* Jason Gustafson
* Dong Lin
* Rajini Sivaram

Non-binding +1s
* Ted Yu
* Mike Freyberger

Thank you all for providing feedback or voting on this KIP.

--Vahid




From:   "Vahid S Hashemian" 
To: dev 
Date:   08/02/2018 01:27 PM
Subject:[VOTE] KIP-341: Update Sticky Assignor's User Data 
Protocol



Hi everyone,

I believe the feedback on this KIP has been addressed so far. So I'd like 
to start a vote.
The KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-341%3A+Update+Sticky+Assignor%27s+User+Data+Protocol

Discussion thread: 
https://www.mail-archive.com/dev@kafka.apache.org/msg89733.html


Thanks!
--Vahid







Build failed in Jenkins: kafka-trunk-jdk10 #378

2018-08-06 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Log AdminClient configs (#5457)

[lindong28] KAFKA-5928; Avoid redundant requests to zookeeper when reassign 
topic

--
[...truncated 1.59 MB...]
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:234)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:140)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:37)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy72.output(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.results.StateTrackingTestResultProcessor.output(StateTrackingTestResultProcessor.java:112)
at 
org.gradle.api.internal.tasks.testing.results.AttachParentTestResultProcessor.output(AttachParentTestResultProcessor.java:48)
at jdk.internal.reflect.GeneratedMethodAccessor281.invoke(Unknown 
Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.FailureHandlingDispatch.dispatch(FailureHandlingDispatch.java:29)
at 
org.gradle.internal.dispatch.AsyncDispatch.dispatchMessages(AsyncDispatch.java:133)
at 
org.gradle.internal.dispatch.AsyncDispatch.access$000(AsyncDispatch.java:34)
at 
org.gradle.internal.dispatch.AsyncDispatch$1.run(AsyncDispatch.java:73)
at 
org.gradle.internal.operations.CurrentBuildOperationPreservingRunnable.run(CurrentBuildOperationPreservingRunnable.java:42)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at 
org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at 
org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.io.IOException: No space left on device
at java.base/java.io.FileOutputStream.writeBytes(Native Method)
at java.base/java.io.FileOutputStream.write(FileOutputStream.java:355)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:154)
... 54 more
java.io.IOException: No space left on device
com.esotericsoftware.kryo.KryoException: java.io.IOException: No space left on 
device
at com.esotericsoftware.kryo.io.Output.flush(Output.java:156)
at com.esotericsoftware.kryo.io.Output.require(Output.java:134)
at com.esotericsoftware.kryo.io.Output.writeBoolean(Output.java:578)
at 
org.gradle.internal.serialize.kryo.KryoBackedEncoder.writeBoolean(KryoBackedEncoder.java:63)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestOutputStore$Writer.onOutput(TestOutputStore.java:99)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestReportDataCollector.onOutput(TestReportDataCollector.java:144)
at jdk.internal.reflect.GeneratedMethodAccessor279.invoke(Unknown 
Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:230)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at 
org.gradle.int

Re: [DISCUSS] KIP-340: Allow kafka-reassign-partitions.sh and kafka-log-dirs.sh to take admin client property file

2018-08-06 Thread Dong Lin
Hi all,

Since we are going to use "command-config" as the option name in KIP-332, I
have updated the KIP to use "command-config" for consistency.

Please comment if you think "config-file" is better.

Thanks,
Dong
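
For illustration, a sketch of the AdminClient property file these tools would 
consume under the proposed option (the file name, broker address, and paths 
below are placeholders, not taken from the KIP):

```properties
# admin.properties: AdminClient configs passed to the command-line tool
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
```

Once the KIP lands, this would be passed as, e.g., `bin/kafka-log-dirs.sh 
--describe --bootstrap-server broker:9093 --command-config admin.properties`.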

On Fri, Aug 3, 2018 at 2:39 AM, Manikumar  wrote:

> Hi Dong,
>
> In KIP-332 discussion, It was agreed to use "--command-config" option name
> for passing config property file.
> We can also use same name in here,  to make it consistent across all
> tools.
>
> Thanks,
>
> On Thu, Jul 12, 2018 at 9:20 AM Dong Lin  wrote:
>
> > wiki page is currently read-only and it is unavailable for write
> operation.
> > Will update it later.
> >
> > On Wed, Jul 11, 2018 at 8:48 PM, Dong Lin  wrote:
> >
> > > Ah I see. Thanks for the suggestion. It is updated now.
> > >
> > > On Wed, Jul 11, 2018 at 8:13 PM, Ted Yu  wrote:
> > >
> > >> bq. the same approach used by "--config-file" in ConfigCommand.
> > >>
> > >> I should have copied more from the KIP.
> > >> What I meant was that ConfigCommand doesn't use "--config-file"
> option.
> > So
> > >> 'same approach' implies StreamsResetter class, not ConfigCommand.
> > >>
> > >> I didn't mean to change ConfigCommand w.r.t. name of the option.
> > >>
> > >> Cheers
> > >>
> > >> On Wed, Jul 11, 2018 at 8:06 PM Dong Lin  wrote:
> > >>
> > >> > Do you mean we should replace "--command-config" with
> "--config-file"
> > in
> > >> > ConfigCommand? There is backward compatibility concern with the
> > change.
> > >> I
> > >> > am not sure the benefit of this change is worth the effort to
> > deprecate
> > >> the
> > >> > old config. Maybe we should do it separately if more people think it
> > is
> > >> > necessary?
> > >> >
> > >> > On Wed, Jul 11, 2018 at 8:01 PM, Ted Yu 
> wrote:
> > >> >
> > >> > > bq. "--config-file" in ConfigCommand.
> > >> > >
> > >> > > Please update the above - it should be StreamsResetter
> > >> > >
> > >> > >
> > >> > > On Wed, Jul 11, 2018 at 7:59 PM Dong Lin 
> > wrote:
> > >> > >
> > >> > > > Hey Ted,
> > >> > > >
> > >> > > > Thanks much for the suggestion. Yeah "config-file" looks better
> > than
> > >> > > > "command-config". I have updated the KIP as suggested.
> > >> > > >
> > >> > > > Thanks,
> > >> > > > Dong
> > >> > > >
> > >> > > > On Wed, Jul 11, 2018 at 5:57 PM, Ted Yu 
> > >> wrote:
> > >> > > >
> > >> > > > > Looking at StreamsResetter.java :
> > >> > > > >
> > >> > > > >commandConfigOption = optionParser.accepts("config-
> file",
> > >> > > > "Property
> > >> > > > > file containing configs to be passed to admin cl
> > >> > > > >
> > >> > > > > Not sure you have considered naming the option in the above
> > >> fashion.
> > >> > > > >
> > >> > > > > Probably add the above to Alternative section.
> > >> > > > >
> > >> > > > > Cheers
> > >> > > > >
> > >> > > > > On Wed, Jul 11, 2018 at 2:04 PM Dong Lin  >
> > >> > wrote:
> > >> > > > >
> > >> > > > > > Hi all,
> > >> > > > > >
> > >> > > > > > I have created KIP-340: Allow kafka-reassign-partitions.sh
> and
> > >> > > > > > kafka-log-dirs.sh to take admin client property file. See
> > >> > > > > >
> > >> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > > > > 340%3A+Allow+kafka-reassign-partitions.sh+and+kafka-log-
> > >> > > > > dirs.sh+to+take+admin+client+property+file
> > >> > > > > > .
> > >> > > > > >
> > >> > > > > > This KIP provides a way to allow
> kafka-reassign-partitions.sh
> > >> and
> > >> > > > > > kafka-log-dirs.sh to talk to broker over SSL. Please review
> > the
> > >> KIP
> > >> > > if
> > >> > > > > you
> > >> > > > > > have time.
> > >> > > > > >
> > >> > > > > >
> > >> > > > > > Thanks!
> > >> > > > > > Dong
> > >> > > > > >
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> > >
> >
>