[GitHub] kafka pull request #2956: KAFKA-4422: Drop support for Scala 2.10 (KIP-119)

2017-05-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2956




[jira] [Updated] (KAFKA-4422) Drop support for Scala 2.10 (KIP-119)

2017-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4422:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2956
[https://github.com/apache/kafka/pull/2956]

> Drop support for Scala 2.10 (KIP-119)
> -
>
> Key: KAFKA-4422
> URL: https://issues.apache.org/jira/browse/KAFKA-4422
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>  Labels: kip
> Fix For: 0.11.0.0
>
>
> Now that Scala 2.12 has been released, we should drop support for Scala 2.10  
> in the next major Kafka version so that we keep the number of supported 
> versions at 2. Since we have to compile and run the tests on each supported 
> version, there is a non-trivial cost from a development and testing 
> perspective.
> The clients library is in Java and we recommend people use the Java clients 
> instead of the Scala ones, so dropping support for Scala 2.10 should have a 
> smaller impact than it would have had in the past. Scala 2.10 was released in 
> January 2013 and support ended in March 2015. 
> Once we drop support for Scala 2.10, we can take advantage of APIs and 
> compiler improvements introduced in Scala 2.11 (released in April 2014): 
> http://scala-lang.org/news/2.11.0
> Link to KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-119%3A+Drop+Support+for+Scala+2.10+in+Kafka+0.11





[jira] [Commented] (KAFKA-4422) Drop support for Scala 2.10 (KIP-119)

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005992#comment-16005992
 ] 

ASF GitHub Bot commented on KAFKA-4422:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2956







Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-11 Thread Michal Borowiecki

Also, wrt:

> In the case of the window store, the "key" of the single-key iterator is
> the actual timestamp of the underlying entry, not just the range of the
> window, so if we were to wrap the result key in a window we wouldn't be
> getting back the equivalent of the single-key iterator.

I believe the timestamp in the entry *is* the window start time (the end
time is implicitly known by adding the window size to the window start time):


https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamWindowReduce.java#L109 



https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamWindowAggregate.java#L111

Both use window.start() as the timestamp when storing into the WindowStore.

Or am I confusing something horribly here? Hope not ;-)


If the above is correct, then using KeyValueIterator<Windowed<K>, V> as
the return type of the new fetch method would indeed not lose anything
the single-key iterator is offering.


The window end time could simply be calculated as window start time + 
window size (window size would have to be passed from the store supplier 
to the store implementation, which I think it isn't now but that's an 
implementation detail).
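
For illustration, here is a minimal sketch of that derivation (TimeWindow is
an internal Streams class, and windowSizeMs is the hypothetical parameter the
store implementation would need to be given):

    import org.apache.kafka.streams.kstream.Windowed;
    import org.apache.kafka.streams.kstream.internals.TimeWindow;

    // Sketch only: rebuild the Windowed<K> key from a raw store entry, assuming
    // the stored timestamp is the window start time and the store knows the
    // (hypothetical) windowSizeMs it was created with.
    public final class WindowedKeys {
        public static <K> Windowed<K> windowedKey(final K key,
                                                  final long windowStartMs,
                                                  final long windowSizeMs) {
            // the window end time is implicit: start + size
            return new Windowed<>(key, new TimeWindow(windowStartMs, windowStartMs + windowSizeMs));
        }
    }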


If you object to exposing the window end time because the single-key
iterator doesn't do that, then an alternative could also be to have
the return type of the new fetch be something like
KeyValueIterator<KeyValue<K, Long>, V>, since the key is composed of the
actual key and the timestamp together. peekNextKey() would then allow
you to glimpse both the actual key and the associated window start time.
This feels like a better workaround than putting the KeyValue pair in
the V of the WindowStoreIterator.


All-in-all, I'd still prefer KeyValueIterator<Windowed<K>, V> as it more
clearly names what's what.
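
To make the comparison concrete, here is a rough sketch of that shape of API
(RangeFetchableWindowStore is a made-up name for illustration, not something
proposed in the KIP):

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.Windowed;
    import org.apache.kafka.streams.state.KeyValueIterator;

    // Made-up interface for illustration: the iterator key carries both the
    // record key and its window, so peekNextKey() exposes key and window start.
    interface RangeFetchableWindowStore<K, V> {
        KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
    }

    class RangeFetchExample {
        static <K, V> void scan(final RangeFetchableWindowStore<K, V> store,
                                final K from, final K to,
                                final long timeFrom, final long timeTo) {
            try (final KeyValueIterator<Windowed<K>, V> iter =
                         store.fetch(from, to, timeFrom, timeTo)) {
                while (iter.hasNext()) {
                    final Windowed<K> nextKey = iter.peekNextKey(); // glimpse without advancing
                    final KeyValue<Windowed<K>, V> entry = iter.next();
                    System.out.println(nextKey.key() + " @ "
                            + nextKey.window().start() + " -> " + entry.value);
                }
            }
        }
    }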


What do you think?

Thanks,

Michal

On 11/05/17 07:51, Michal Borowiecki wrote:
Well, another concern, apart from potential confusion, is that you
won't be able to peek at the actual next key, just the timestamp. So the
tradeoff is between having consistency in return types and having the
ability to know the next key without moving the iterator. To me the
latter just feels more important.


Thanks,
Michal
On 11 May 2017 12:46 a.m., Xavier Léauté  wrote:

Thank you for the feedback Michal.

While I agree the return type may be a little bit more confusing to reason
about, the reason for doing so was to keep the range query interfaces
consistent with their single-key counterparts.

In the case of the window store, the "key" of the single-key iterator is
the actual timestamp of the underlying entry, not just the range of the
window, so if we were to wrap the result key in a window we wouldn't be
getting back the equivalent of the single-key iterator.

In both cases peekNextKey is just returning the timestamp of the next
entry in the window store that matches the query.

In the case of the session store, we already return Windowed<K> for the
single-key method, so it made sense there to also return Windowed<K> for
the range method.

Hope this makes sense? Let me know if you still have concerns about this.

Thank you,
Xavier

On Wed, May 10, 2017 at 12:25 PM Michal Borowiecki <
michal.borowie...@openbet.com> wrote:

> Apologies, I missed the discussion (or lack thereof) about the return
> type of:
>
> WindowStoreIterator<KeyValue<K, V>> fetch(K from, K to, long timeFrom,
> long timeTo)
>
> WindowStoreIterator<V> (as the KIP mentions) is a subclass of
> KeyValueIterator<Long, V>
>
> KeyValueIterator has the following method:
>
> /**
>  * Peek at the next key without advancing the iterator
>  * @return the key of the next value that would be returned from the
>  *  next call to next
>  */
> K peekNextKey();
>
> Given the type in this case will be Long, I assume what it would return
> is the window timestamp of the next found record?
>
> In the case of WindowStoreIterator<V> fetch(K key, long timeFrom, long
> timeTo);
> all records found by fetch have the same key, so it's harmless to return
> the timestamp of the next found window, but here we have varying keys and
> varying windows, so won't it be too confusing?
>
> KeyValueIterator<Windowed<K>, V> (as in the proposed
> ReadOnlySessionStore.fetch) just feels much more intuitive.
>
> Apologies again for jumping onto this only once the voting has
already
> begun.
> Thanks,
> Michał
>
> On 10/05/17 20:08, Sriram Subramanian wrote:
> > +1
> >
> > On Wed, May 10, 2017 at 11:42 AM, Bill Bejeck
 wrote:
> >
> >> +1
> >>
> >> Thanks,
> >> Bill
> >>
> >> On Wed, May 10, 2017 at 2:38 PM, Guozhang Wang

> wrote:
> >>
> >>> +1. Thank you!
> >>>
> >>> On Wed, M

[GitHub] kafka pull request #3019: [WIP] Test that Scala PR builder for 2.10 doesn't ...

2017-05-11 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3019

[WIP] Test that Scala PR builder for 2.10 doesn't trigger



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka test-scala-2.10-disabled

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3019.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3019


commit d525231c84a4c8af1352e06e3989fc287a2ca01f
Author: Ismael Juma 
Date:   2017-05-11T08:15:45Z

TEST






[GitHub] kafka pull request #3020: [WIP] Test PR builder for Scala 2.10 runs if targe...

2017-05-11 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3020

[WIP] Test PR builder for Scala 2.10 runs if target is 0.10.2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka test-pr-builder-for-scala-2.10-enabled-if-target-0.10.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3020.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3020


commit c5907bc613afa51ff7414dc145fb3e561fb248b6
Author: Ismael Juma 
Date:   2017-05-11T08:15:45Z

TEST






[GitHub] kafka pull request #3019: [WIP] Test that Scala PR builder for 2.10 doesn't ...

2017-05-11 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/3019




[GitHub] kafka pull request #3020: [WIP] Test PR builder for Scala 2.10 runs if targe...

2017-05-11 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/3020




provide rc.d scripts

2017-05-11 Thread Werner Daehn
It would be nice to add autostart (rc.d) scripts for the various
distributions.
Preferably not only for Kafka but also for the Confluent Platform services
(Schema Registry, REST Proxy).


[GitHub] kafka pull request #3021: KAFKA-5006: change exception path

2017-05-11 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/3021

KAFKA-5006: change exception path

This should be backported to 0.10.2 as well. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-5006-put-exception

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3021.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3021


commit 237b6624289ac937c2781a3c513462269efe373e
Author: Eno Thereska 
Date:   2017-05-11T09:23:29Z

Change exception path






[jira] [Commented] (KAFKA-5006) KeyValueStore.put may throw exception unrelated to the current put attempt

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006136#comment-16006136
 ] 

ASF GitHub Bot commented on KAFKA-5006:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/3021

KAFKA-5006: change exception path

This should be backported to 0.10.2 as well. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-5006-put-exception

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3021.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3021


commit 237b6624289ac937c2781a3c513462269efe373e
Author: Eno Thereska 
Date:   2017-05-11T09:23:29Z

Change exception path




> KeyValueStore.put may throw exception unrelated to the current put attempt
> --
>
> Key: KAFKA-5006
> URL: https://issues.apache.org/jira/browse/KAFKA-5006
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.1.0, 0.10.2.0
>Reporter: Xavier Léauté
>Assignee: Eno Thereska
>  Labels: user-experience
> Fix For: 0.11.0.0
>
>
> It is possible for {{KeyValueStore.put(K key, V value)}} to throw an 
> exception unrelated to the store in question. Due to [the way that 
> {{RecordCollector.send}} is currently 
> implemented|https://github.com/confluentinc/kafka/blob/3.2.x/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L76]
> the exception thrown would be for any previous record produced by the stream 
> task, possibly for a completely unrelated topic the same task is producing to.
> This can be very confusing for someone attempting to correctly handle 
> exceptions thrown by put(), as they would not be able to add any additional 
> debugging information to understand the operation that caused the problem. 
> Worse, such logging would likely confuse the user, since they might mislead 
> themselves into thinking the changelog record created by calling put() caused 
> the problem.
> Given that there is likely no way for the user to recover from an exception 
> thrown by an unrelated produce request, it is questionable whether we should 
> even try to raise the exception at this level. A short-term fix would be to 
> simply delegate this exception to the uncaught exception handler.
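
For illustration, a minimal sketch of the failure mode described above (this
is not the actual RecordCollectorImpl code; FirstErrorCollector is a made-up
name): the callback of an earlier send stashes its exception, and the next
send, possibly for a completely unrelated topic, is the one that throws it.

    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Sketch only: mimics how an asynchronously reported send failure can
    // surface during a later, unrelated send (and hence a later put()).
    class FirstErrorCollector {
        private final Producer<byte[], byte[]> producer;
        private volatile Exception sendException; // set by producer callbacks

        FirstErrorCollector(final Producer<byte[], byte[]> producer) {
            this.producer = producer;
        }

        void send(final ProducerRecord<byte[], byte[]> record) {
            if (sendException != null) {
                // thrown here, on a send that had nothing to do with the failure
                throw new RuntimeException("previous send failed", sendException);
            }
            producer.send(record, (metadata, exception) -> {
                if (exception != null && sendException == null) {
                    sendException = exception;
                }
            });
        }
    }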





[jira] [Work started] (KAFKA-5006) KeyValueStore.put may throw exception unrelated to the current put attempt

2017-05-11 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5006 started by Eno Thereska.
---





[jira] [Work started] (KAFKA-5156) Options for handling exceptions in streams

2017-05-11 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5156 started by Eno Thereska.
---
> Options for handling exceptions in streams
> --
>
> Key: KAFKA-5156
> URL: https://issues.apache.org/jira/browse/KAFKA-5156
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.11.0.0
>
>
> This is a task around options for handling exceptions in streams. It focuses 
> around options for dealing with corrupt data (keep going, stop streams, log, 
> retry, etc).





[jira] [Updated] (KAFKA-5006) KeyValueStore.put may throw exception unrelated to the current put attempt

2017-05-11 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-5006:

Status: Patch Available  (was: In Progress)






[jira] [Updated] (KAFKA-5006) KeyValueStore.put may throw exception unrelated to the current put attempt

2017-05-11 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-5006:

Priority: Blocker  (was: Major)






[jira] [Assigned] (KAFKA-5129) TransactionCoordinator - Add ACL check for each request

2017-05-11 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-5129:
-

Assignee: Damian Guy

> TransactionCoordinator - Add ACL check for each request
> ---
>
> Key: KAFKA-5129
> URL: https://issues.apache.org/jira/browse/KAFKA-5129
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> We need to add the ACL check for each of the new requests in TC





[jira] [Created] (KAFKA-5219) Move transaction expiration logic and scheduling to the Transaction Manager

2017-05-11 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5219:
-

 Summary: Move transaction expiration logic and scheduling to the 
Transaction Manager
 Key: KAFKA-5219
 URL: https://issues.apache.org/jira/browse/KAFKA-5219
 Project: Kafka
  Issue Type: Sub-task
Reporter: Damian Guy


Presently the transaction expiration logic is spread between the 
{{TransactionStateManager}} and the {{TransactionCoordinator}}. It would be 
best if it was all in the {{TransactionStateManager}}. This requires moving the 
bulk of the commit/abort logic, too. 





Jenkins build is back to normal : kafka-trunk-jdk8 #1502

2017-05-11 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3022: HOTFIX: change compression codec in TransactionSta...

2017-05-11 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3022

HOTFIX: change compression codec in TransactionStateManager to 
UncompressedCodec

Change the compression codec used for the transaction log to 
UncompressedCodec, as creation fails when the codec is set to 
NoCompressionCodec. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka hotfix-tsm

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3022.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3022


commit da2089ada81ba72484300047904c3884a2b4d3bf
Author: Damian Guy 
Date:   2017-05-11T10:37:14Z

change compression codec in TSM to UncompressedCodec






Re: [DISCUSS] Modify / Remove "Unstable" annotations in Streams API

2017-05-11 Thread Eno Thereska
Sounds reasonable.

Thanks,
Eno
> On May 11, 2017, at 7:39 AM, Ismael Juma  wrote:
> 
> Thanks for the proposal Guozhang. This sounds good to me.
> 
> Ismael
> 
> On Thu, May 11, 2017 at 6:02 AM, Guozhang Wang  wrote:
> 
>> Hello folks,
>> 
>> As we are approaching the feature freeze deadline of 0.11.0.0, one thing I
>> realized is that currently the Streams public APIs are still marked as
>> "Unstable", which is to indicate that the API itself does not provide
>> guarantees about backward compatibility across releases. On the other hand,
>> since Streams has now been widely adopted in production use cases by many
>> organizations, we are in fact evolving its APIs in a much stricter manner
>> than "Unstable" implies: for all the current Streams-related KIP
>> proposals under discussion right now [1], people have been working hard to
>> make sure none of them are going to break backward compatibility in the
>> coming releases. So I think it would be good timing to change the Streams
>> API annotations.
>> 
>> My proposal would be the following:
>> 
>> 1. For "o.a.k.streams.errors" and "o.a.k.streams.state" packages: remove
>> the annotations except `StreamsMetrics`.
>> 
>> 2. For "o.a.k.streams.kstream": remove the annotations except "KStream",
>> "KTable", "GroupedKStream", "GroupedKTable", "GlobalKTable" and
>> "KStreamBuilder".
>> 
>> 3. For all the other public classes, including "o.a.k.streams.processor"
>> and the above mentioned classes, change the annotation to "Evolving", which
>> means "we might break compatibility at minor releases (i.e. 0.12.x, 0.13.x,
>> 1.0.x etc) only".
>> 
>> 
>> The ultimate goal is to make sure we won't break anything going forward,
>> hence in the future we should remove all the annotations to make that
>> clear. The above changes in 0.11.0.0 are to give us some "buffer time" in
>> case there are some major API change proposals after the release.
>> 
>> Would love to hear your thoughts.
>> 
>> 
>> [1]
>> 
>> KIP-95: Incremental Batch Processing for Kafka Streams
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-95%3A+Incremental+Batch+Processing+for+Kafka+Streams>
>> 
>> KIP-120: Cleanup Kafka Streams builder API
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-120%3A+Cleanup+Kafka+Streams+builder+API>
>> 
>> KIP-123: Allow per stream/table timestamp extractor
>> 
>> KIP 130: Expose states of active tasks to KafkaStreams public API
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-130%3A+Expose+states+of+active+tasks+to+KafkaStreams+public+API>
>> 
>> KIP-132: Augment KStream.print to allow extra parameters in the printed string
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-132+-+Augment+KStream.print+to+allow+extra+parameters+in+the+printed+string>
>> 
>> KIP-138: Change punctuate semantics
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%3A+Change+punctuate+semantics>
>> 
>> KIP-147: Add missing type parameters to StateStoreSupplier factories and KGroupedStream/Table methods
>> 
>> KIP-149: Enabling key access in ValueTransformer, ValueMapper, and ValueJoiner
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-149%3A+Enabling+key+access+in+ValueTransformer%2C+ValueMapper%2C+and+ValueJoiner#KIP-149:EnablingkeyaccessinValueTransformer,ValueMapper,andValueJoiner-RejectedAlternatives>
>> 
>> KIP-150 - Kafka-Streams Cogroup
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup>
>> 
>> KIP 155 - Add range scan for windowed state stores
>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-155+-+Add+range+scan+for+windowed+state+stores>
>> 
>> KIP 156 Add option "dry run" to Streams application reset tool
>> 
>> 
>> --
>> -- Guozhang
>> 
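
For reference, the annotations under discussion live in
org.apache.kafka.common.annotation; a class marked "Evolving" would look like
this (HypotheticalStreamsApi is a placeholder, not a real class):

    import org.apache.kafka.common.annotation.InterfaceStability;

    // Placeholder class for illustration: "Evolving" signals that compatibility
    // may be broken at minor releases only, as described in the proposal.
    @InterfaceStability.Evolving
    public class HypotheticalStreamsApi {
    }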



[jira] [Work started] (KAFKA-4603) the argument of shell in doc wrong and command parsed error

2017-05-11 Thread Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4603 started by Xin.
--
> the argument of shell in doc wrong and command parsed error
> ---
>
> Key: KAFKA-4603
> URL: https://issues.apache.org/jira/browse/KAFKA-4603
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, documentation
>Affects Versions: 0.10.0.1, 0.10.2.0
> Environment: suse
>Reporter: Xin
>Assignee: Xin
>Priority: Minor
>
> According to section 7.6.2 "Migrating clusters" of the documentation:
> ./zookeeper-security-migration.sh --zookeeper.acl=secure 
> --zookeeper.connection=localhost:2181
> joptsimple.OptionArgumentConversionException: Cannot parse argument 
> 'localhost:2181' of option zookeeper.connection.timeout
>   at joptsimple.AbstractOptionSpec.convertWith(AbstractOptionSpec.java:93)
>   at 
> joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:274)
>   at joptsimple.OptionSet.valuesOf(OptionSet.java:223)
>   at joptsimple.OptionSet.valueOf(OptionSet.java:172)
>   at kafka.admin.ZkSecurityMigrator$.run(ZkSecurityMigrator.scala:111)
>   at kafka.admin.ZkSecurityMigrator$.main(ZkSecurityMigrator.scala:119)
>   at kafka.admin.ZkSecurityMigrator.main(ZkSecurityMigrator.scala)
> Caused by: joptsimple.internal.ReflectionException: 
> java.lang.NumberFormatException: For input string: "localhost:2181"
>   at 
> joptsimple.internal.Reflection.reflectionException(Reflection.java:140)
>   at joptsimple.internal.Reflection.invoke(Reflection.java:122)
>   at 
> joptsimple.internal.MethodInvokingValueConverter.convert(MethodInvokingValueConverter.java:48)
>   at joptsimple.internal.Reflection.convertWith(Reflection.java:128)
>   at joptsimple.AbstractOptionSpec.convertWith(AbstractOptionSpec.java:90)
>   ... 6 more
> Caused by: java.lang.NumberFormatException: For input string: "localhost:2181"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:492)
>   at java.lang.Integer.valueOf(Integer.java:582)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at joptsimple.internal.Reflection.invoke(Reflection.java:119)
>   ... 9 more
> ===> the argument "zookeeper.connection" has been parsed as 
> "zookeeper.connection.timeout"
> Using --help I found that the actual arguments are:
> --zookeeper.connect              Sets the ZooKeeper connect string 
>                                  (ensemble). This parameter takes a 
>                                  comma-separated list of host:port 
>                                  pairs. (default: localhost:2181)
> --zookeeper.connection.timeout   Sets the ZooKeeper connection timeout.
> So the documentation is wrong, and the code also has a problem:
> in ZkSecurityMigrator.scala,
>   val parser = new OptionParser() ==>
> Any of --v, --ve, ... are accepted on the command line and treated as though 
> you had typed --verbose.
> To suppress this behavior, use the OptionParser constructor 
> OptionParser(boolean allowAbbreviations) and pass a value of false.
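
For illustration, a small self-contained sketch of the abbreviation behaviour
described above and the constructor that disables it:

    import joptsimple.OptionParser;
    import joptsimple.OptionSet;

    public class StrictParserExample {
        public static void main(final String[] args) {
            // With the no-arg OptionParser(), "--zookeeper.connection" is accepted
            // as an abbreviation of "zookeeper.connection.timeout"; passing
            // allowAbbreviations=false forces exact option names instead.
            final OptionParser parser = new OptionParser(false);
            parser.accepts("zookeeper.connect").withRequiredArg();
            parser.accepts("zookeeper.connection.timeout").withRequiredArg().ofType(Integer.class);

            final OptionSet options = parser.parse("--zookeeper.connect", "localhost:2181");
            System.out.println(options.valueOf("zookeeper.connect")); // localhost:2181
        }
    }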





[GitHub] kafka pull request #3023: KAFKA-4317: Checkpoint StateStores on commit inter...

2017-05-11 Thread dguy
Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/3023




[GitHub] kafka pull request #3023: KAFKA-4317: Checkpoint StateStores on commit inter...

2017-05-11 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3023

KAFKA-4317: Checkpoint StateStores on commit interval

This is a backport of: https://github.com/apache/kafka/pull/2471 from trunk

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka k4881-bp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3023.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3023


commit 7946b2f35c0e3811ba7d706fc9c761d73ece47bb
Author: Damian Guy 
Date:   2017-01-16T19:40:47Z

MINOR: Remove unused constructor param from ProcessorStateManager

Remove applicationId parameter as it is no longer used.

Author: Damian Guy 

Reviewers: Guozhang Wang 

Closes #2385 from dguy/minor-remove-unused-param

commit 621dff22e79dc64b9a8748186dd985774044f91a
Author: Rajini Sivaram 
Date:   2017-01-17T11:16:29Z

KAFKA-4363; Documentation for sasl.jaas.config property

Author: Rajini Sivaram 

Reviewers: Ismael Juma 

Closes #2316 from rajinisivaram/KAFKA-4363

commit e3f4cdd0e249f78a7f4e8f064533bcd15eb11cbf
Author: Rajini Sivaram 
Date:   2017-01-17T12:55:07Z

KAFKA-4590; SASL/SCRAM system tests

Runs sanity test and one replication test using SASL/SCRAM.

Author: Rajini Sivaram 

Reviewers: Ewen Cheslack-Postava , Ismael Juma 


Closes #2355 from rajinisivaram/KAFKA-4590

commit 2b19ad9d8c47fb0f78a6e90d2f5711df6110bf1f
Author: Rajini Sivaram 
Date:   2017-01-17T18:42:55Z

KAFKA-4580; Use sasl.jaas.config for some system tests

Switched console_consumer, verifiable_consumer and verifiable_producer to 
use new sasl.jaas_config property instead of static JAAS configuration file 
when used with SASL_PLAINTEXT.

Author: Rajini Sivaram 

Reviewers: Ewen Cheslack-Postava , Ismael Juma 


Closes #2323 from rajinisivaram/KAFKA-4580

(cherry picked from commit 3f6c4f63c9c17424cf717ca76c74554bcf3b2e9a)
Signed-off-by: Ismael Juma 

commit 60d759a227087079e6fd270c68fd9e38441cb34a
Author: Jason Gustafson 
Date:   2017-01-17T18:42:05Z

MINOR: Some cleanups and additional testing for KIP-88

Author: Jason Gustafson 

Reviewers: Vahid Hashemian , Ismael Juma 


Closes #2383 from hachikuji/minor-cleanup-kip-88

commit c9b9acf6a8b542c2d0d825c17a4a20cf3fa5
Author: Damian Guy 
Date:   2017-01-17T20:33:11Z

KAFKA-4588: Wait for topics to be created in 
QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable

After debugging this I can see that, in the cases where it fails, there is a race 
between when the topic is actually created/ready on the broker and when the 
assignment happens. When it fails, `StreamPartitionAssignor.assign(..)` gets 
called with a `Cluster` with no topics. Hence the test hangs as no tasks get 
assigned. To fix this I added a `waitForTopics` method to 
`EmbeddedKafkaCluster`. This will wait until the topics have been created.

Author: Damian Guy 

Reviewers: Matthias J. Sax, Guozhang Wang

Closes #2371 from dguy/integration-test-fix

(cherry picked from commit 825f225bc5706b16af8ec44ca47ee1452c11e6f3)
Signed-off-by: Guozhang Wang 

commit 6f72a5a53c444278187fa6be58031168bcaffb26
Author: Damian Guy 
Date:   2017-01-17T22:13:46Z

KAFKA-3452 Follow-up: Refactoring StateStore hierarchies

This is a follow up of https://github.com/apache/kafka/pull/2166 - 
refactoring the store hierarchies as requested

Author: Damian Guy 

Reviewers: Guozhang Wang 

Closes #2360 from dguy/state-store-refactor

(cherry picked from commit 73b7ae0019d387407375f3865e263225c986a6ce)
Signed-off-by: Guozhang Wang 

commit eb62e5695506ae13bd37102c3c08e8a067eca0c8
Author: Ismael Juma 
Date:   2017-01-18T02:43:10Z

KAFKA-4591; Create Topic Policy follow-up

1. Added javadoc to public classes
2. Removed `s` from config name for consistency with interface name
3. The policy interface now implements Configurable and AutoCloseable as 
per the KIP
4. Use `null` instead of `-1` in `RequestMetadata`
5. Perform all broker validation before invoking the policy
6. Add tests

Author: Ismael Juma 

Reviewers: Jason Gustafson 

Closes #2388 from ijuma/create-topic-policy-docs-and-config-name-change

(cherry picked from commit fd6d7bcf335166a524dc9a29a50c96af8f1c1c02)
Signed-off-by: Ismael Juma 

commit e38794e020951adec5a5d0bbfe42c57294bf67bd
Author: Guozhang Wang 
Date:   2017-01-18T04:29:55Z

KAFKA-3502; move RocksDB options construction to init()

In RocksDBStore, options / wOptions / fOptions are constructed in the 
constructor, which needs to be dismissed 

[jira] [Commented] (KAFKA-4317) RocksDB checkpoint files lost on kill -9

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006289#comment-16006289
 ] 

ASF GitHub Bot commented on KAFKA-4317:
---

Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/3023


> RocksDB checkpoint files lost on kill -9
> 
>
> Key: KAFKA-4317
> URL: https://issues.apache.org/jira/browse/KAFKA-4317
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Greg Fodor
>Assignee: Damian Guy
>Priority: Critical
>  Labels: architecture, needs-kip, user-experience
> Fix For: 0.11.0.0
>
>
> Right now, the checkpoint files for logged RocksDB stores are written during 
> a graceful shutdown, and removed upon restoration. Unfortunately this means 
> that in a scenario where the process is forcibly killed, the checkpoint files 
> are not there, so all RocksDB stores are rematerialized from scratch on the 
> next launch.
> In a way, this is good, because it simulates bootstrapping a new node (for 
> example, it's a good way to see how much I/O is used to rematerialize the 
> stores); however, it leads to longer recovery times when a non-graceful 
> shutdown occurs and we want to get the job up and running again.
> It seems there are two possible things to consider:
> - Simply do not remove checkpoint files on restoring. This way a kill -9 will 
> result in only repeating the restoration of all the data generated in the 
> source topics since the last graceful shutdown.
> - Continually update the checkpoint files (perhaps on commit) -- this would 
> result in the least amount of overhead/latency in restarting, but the 
> additional complexity may not be worth it.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-116%3A+Add+State+Store+Checkpoint+Interval+Configuration
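
A minimal sketch of the second option, assuming the internal OffsetCheckpoint
helper (the file name and class here are illustrative, not the actual fix):

    import java.io.File;
    import java.io.IOException;
    import java.util.Map;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.streams.state.internals.OffsetCheckpoint;

    // Sketch only: write the checkpoint on every commit instead of only on a
    // graceful shutdown, so a kill -9 loses at most one commit interval of
    // restoration work.
    class CommitTimeCheckpointer {
        private final OffsetCheckpoint checkpoint;

        CommitTimeCheckpointer(final File stateDir) {
            this.checkpoint = new OffsetCheckpoint(new File(stateDir, ".checkpoint"));
        }

        void onCommit(final Map<TopicPartition, Long> changelogOffsets) throws IOException {
            checkpoint.write(changelogOffsets); // atomically replaces the checkpoint file
        }
    }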





[jira] [Commented] (KAFKA-4317) RocksDB checkpoint files lost on kill -9

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006287#comment-16006287
 ] 

ASF GitHub Bot commented on KAFKA-4317:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3023

KAFKA-4317: Checkpoint StateStores on commit interval

This is a backport of: https://github.com/apache/kafka/pull/2471 from trunk

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka k4881-bp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3023.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3023



[GitHub] kafka pull request #3024: KAFKA-4317: Checkpoint state stores on commit inte...

2017-05-11 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3024

KAFKA-4317: Checkpoint state stores on commit interval

This is a backport of https://github.com/apache/kafka/pull/2471

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka k4881-bp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3024


commit b3440c4a1c8328c9de3447c7830b3d2e59176628
Author: Damian Guy 
Date:   2017-02-17T22:41:28Z

backport from trunk






[jira] [Commented] (KAFKA-4317) RocksDB checkpoint files lost on kill -9

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006290#comment-16006290
 ] 

ASF GitHub Bot commented on KAFKA-4317:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3024

KAFKA-4317: Checkpoint state stores on commit interval

This is a backport of https://github.com/apache/kafka/pull/2471

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka k4881-bp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3024


commit b3440c4a1c8328c9de3447c7830b3d2e59176628
Author: Damian Guy 
Date:   2017-02-17T22:41:28Z

backport from trunk









[GitHub] kafka pull request #3025: KAFKA-4881: add internal.leave.group.config to con...

2017-05-11 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3025

KAFKA-4881: add internal.leave.group.config to consumer

Backport from https://github.com/apache/kafka/pull/2650

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4881-bp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3025.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3025


commit 1eb6b1af11100b406b067f8c1e1d0e99c6543836
Author: Damian Guy 
Date:   2017-03-27T17:30:38Z

backport from trunk






[jira] [Commented] (KAFKA-4881) Add internal leave.group.on.close config to consumer

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006306#comment-16006306
 ] 

ASF GitHub Bot commented on KAFKA-4881:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3025

KAFKA-4881: add internal.leave.group.config to consumer

Backport from https://github.com/apache/kafka/pull/2650

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4881-bp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3025.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3025


commit 1eb6b1af11100b406b067f8c1e1d0e99c6543836
Author: Damian Guy 
Date:   2017-03-27T17:30:38Z

backport from trunk




> Add internal leave.group.on.close config to consumer 
> -
>
> Key: KAFKA-4881
> URL: https://issues.apache.org/jira/browse/KAFKA-4881
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.11.0.0
>
>
> In streams we need to reduce the number of rebalances as they cause expensive 
> shuffling of state during {{onPartitionsAssigned}} and 
> {{onPartitionsRevoked}}. To achieve this we can choose to not send leave the 
> group when a streams consumer is closed. This means that during bounces (with 
> appropriate session timeout settings) we will see at most one rebalance per 
> instance bounce.
> As this is an optimization that is only relevant to streams at the moment, 
> initially we will do this by adding an internal config to the consumer 
> {{leave.group.on.close}}, this will default to true. When it is set to false 
> {{AbstractCoordinator}} won't send the {{LeaveGroupRequest}}
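
A sketch of how such an internal flag would be set on a plain consumer,
assuming the key ends up as "internal.leave.group.on.close" (internal configs
are not part of the public API):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class NoLeaveGroupConsumer {
        public static void main(final String[] args) {
            final Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-streams-app");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            // Assumed internal key: when false, close() skips the LeaveGroupRequest,
            // so a quick bounce causes at most one rebalance per instance.
            props.put("internal.leave.group.on.close", "false");

            try (final KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                // subscribe/poll as usual; closing will not trigger a rebalance
            }
        }
    }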





Re: [DISCUSS] KIP-133: List and Alter Configs Admin APIs

2017-05-11 Thread Ismael Juma
Hey Colin,

I added Config#get(String) to the KIP.

Ismael

On Mon, May 8, 2017 at 6:47 PM, Colin McCabe  wrote:
>
> Good idea.  Should we add a Config#get(String) that can get the value of
> a specific ConfigEntry?
>
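
For example, with the AdminClient APIs proposed in KIP-133 the lookup could be
used like this (a sketch only; names may differ from the final API):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeConfigExample {
        public static void main(final String[] args) throws Exception {
            final Properties adminProps = new Properties();
            adminProps.put("bootstrap.servers", "localhost:9092");
            try (final AdminClient admin = AdminClient.create(adminProps)) {
                final ConfigResource topic =
                        new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
                final Config config = admin.describeConfigs(Collections.singleton(topic))
                        .all().get().get(topic);
                // the newly added lookup by config name
                final ConfigEntry retention = config.get("retention.ms");
                System.out.println(retention.name() + " = " + retention.value());
            }
        }
    }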


[jira] [Resolved] (KAFKA-5182) Transient failure: RequestQuotaTest.testResponseThrottleTime

2017-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5182.

   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 3014
[https://github.com/apache/kafka/pull/3014]

> Transient failure: RequestQuotaTest.testResponseThrottleTime
> 
>
> Key: KAFKA-5182
> URL: https://issues.apache.org/jira/browse/KAFKA-5182
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
> Fix For: 0.11.0.0
>
>
> Stacktrace
> {code}
> java.util.concurrent.ExecutionException: java.lang.AssertionError: Response 
> not throttled: Client JOIN_GROUP apiKey JOIN_GROUP requests 3 requestTime 
> 0.009982775502048156 throttleTime 0.0
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:326)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:324)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:324)
>   at 
> kafka.server.RequestQuotaTest.testResponseThrottleTime(RequestQuotaTest.scala:105)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.di

[GitHub] kafka pull request #3014: KAFKA-5182: Close txn coordinator threads during b...

2017-05-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3014




[jira] [Commented] (KAFKA-5182) Transient failure: RequestQuotaTest.testResponseThrottleTime

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006339#comment-16006339
 ] 

ASF GitHub Bot commented on KAFKA-5182:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3014


> Transient failure: RequestQuotaTest.testResponseThrottleTime
> 
>
> Key: KAFKA-5182
> URL: https://issues.apache.org/jira/browse/KAFKA-5182
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
> Fix For: 0.11.0.0
>
>
> Stacktrace
> {code}
> java.util.concurrent.ExecutionException: java.lang.AssertionError: Response 
> not throttled: Client JOIN_GROUP apiKey JOIN_GROUP requests 3 requestTime 
> 0.009982775502048156 throttleTime 0.0
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:326)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:324)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:324)
>   at 
> kafka.server.RequestQuotaTest.testResponseThrottleTime(RequestQuotaTest.scala:105)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatc

[GitHub] kafka pull request #3022: HOTFIX: change compression codec in TransactionSta...

2017-05-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3022


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #2167

2017-05-11 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5182; Close txn coordinator threads during broker shutdown

--
[...truncated 849.37 KB...]
kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
STARTED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
PASSED

kafka.log.ProducerStateManagerTest > testEvictUnretainedPids STARTED

kafka.log.ProducerStateManagerTest > testEvictUnretainedPids PASSED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire STARTED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire PASSED

kafka.log.ProducerStateManagerTest > testBasicIdMapping STARTED

kafka.log.ProducerStateManagerTest > testBasicIdMapping PASSED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState STARTED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState PASSED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshot STARTED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshot PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset PASSED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached STARTED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload PASSED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch STARTED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch PASSED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout STARTED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout PASSED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord STARTED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > test

Build failed in Jenkins: kafka-trunk-jdk8 #1503

2017-05-11 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5182; Close txn coordinator threads during broker shutdown

--
[...truncated 850.73 KB...]
kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding STARTED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.UtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.UtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.UtilsTest > testAbs STARTED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix STARTED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator STARTED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes STARTED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList STARTED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt STARTED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.Asyn

Re: [DISCUSS]: KIP-149: Enabling key access in ValueTransformer, ValueMapper, and ValueJoiner

2017-05-11 Thread Jeyhun Karimov
Hi,

Thanks for your comments. I think we cannot extend the two interfaces if we
want to keep lambdas. I updated the KIP [1]. Maybe I should change the
title, because now we are not limiting the KIP to only ValueMapper,
ValueTransformer and ValueJoiner.
Please feel free to comment.

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-149%3A+Enabling+key+access+in+ValueTransformer%2C+ValueMapper%2C+and+ValueJoiner


Cheers,
Jeyhun

On Tue, May 9, 2017 at 7:36 PM Matthias J. Sax 
wrote:

> If `ValueMapperWithKey` extends `ValueMapper` we don't need the new
> overload.
>
> And yes, we need to do one check -- but this happens when building the
> topology. At runtime (I mean after KafkaStreams#start() we don't need any
> check as we will always use `ValueMapperWithKey`)
>
>
> -Matthias
>
>
> On 5/9/17 2:55 AM, Jeyhun Karimov wrote:
> > Hi,
> >
> > Thanks for feedback.
> > Then we need to overload the method
> >    KStream<K, VR> mapValues(ValueMapper<V, VR> mapper);
> > with
> >    KStream<K, VR> mapValues(ValueMapperWithKey<K, V, VR> mapper);
> >
> > and at runtime (inside the processor) we still have to check whether it is a
> > ValueMapper
> > or a ValueMapperWithKey before wrapping it into the rich function.
> >
> >
> > Please correct me if I am wrong.
> >
> > Cheers,
> > Jeyhun
> >
> >
> >
> >
> > On Tue, May 9, 2017 at 10:56 AM Michal Borowiecki <
> > michal.borowie...@openbet.com> wrote:
> >
> >> +1 :)
> >>
> >>
> >> On 08/05/17 23:52, Matthias J. Sax wrote:
> >>> Hi,
> >>>
> >>> I was reading the updated KIP and I am wondering if we should do the
> >>> design a little differently.
> >>>
> >>> Instead of distinguishing between a RichFunction and non-RichFunction
> at
> >>> runtime level, we would use RichFunctions all the time. Thus, on the
> DSL
> >>> entry level, if a user provides a non-RichFunction, we wrap it in a
> >>> RichFunction that is fully implemented by Streams. This RichFunction
> >>> would just forward the call omitting the key:
> >>>
> >>> Just to sketch the idea (incomplete code snippet):
> >>>
> >>>>  public class StreamsRichValueMapper<K, V, VR> implements RichValueMapper<K, V, VR> {
> >>>>    private final ValueMapper<V, VR> userProvidedMapper; // set by constructor
> >>>>
> >>>>    public VR apply(final K key, final V value) {
> >>>>        return userProvidedMapper.apply(value); // forward the call, omitting the key
> >>>>    }
> >>>>  }
> >>>
> >>>  From a performance point of view, I am not sure if the
> >>> "if(isRichFunction)" including casts etc would have more overhead than
> >>> this approach (we would do one more nested method call for non-RichFunctions,
> >>> which should be more common than RichFunctions).
> >>>
> >>> This approach should not affect lambdas (or do I miss something?) and
> >>> might be cleaner, as we could have one more top level interface
> >>> `RichFunction` with methods `init()` and `close()` and also interfaces
> >>> for `RichValueMapper` etc. (thus, no abstract classes required).
> >>>
> >>>
> >>> Any thoughts?
> >>>
> >>>
> >>> -Matthias
> >>>
> >>>
> >>> On 5/6/17 5:29 PM, Jeyhun Karimov wrote:
>  Hi,
> 
>  Thanks for comments. I extended PR and KIP to include rich functions.
> I
>  will still have to evaluate the cost of deep copying of keys.
> 
>  Cheers,
>  Jeyhun
> 
>  On Fri, May 5, 2017 at 8:02 PM Mathieu Fenniak <
> >> mathieu.fenn...@replicon.com>
>  wrote:
> 
> > Hey Matthias,
> >
> > My opinion would be that documenting the immutability of the key is
> the
> > best approach available.  Better than requiring the key to be
> >> serializable
> > (as with Jeyhun's last pass at the PR), no performance risk.
> >
> > It'd be different if Java had immutable type constraints of some
> kind.
> >> :-)
> >
> > Mathieu
> >
> >
> > On Fri, May 5, 2017 at 11:31 AM, Matthias J. Sax <
> >> matth...@confluent.io>
> > wrote:
> >
> >> Agreed about RichFunction. If we follow this path, it should cover
> >> all(?) DSL interfaces.
> >>
> >> About guarding the key -- I am still not sure what to do about it...
> >> Maybe it might be enough to document it (and name the key parameter
> >> like
> >> `readOnlyKey` to make it very clear). Ultimately, I would prefer to
> >> guard against any modification, but I have no good idea how to do
> >> this.
> >> Not sure what others think about the risk of corrupted partitioning
> >> (which would be a user error and we could say, well, bad luck, you
> got
> >> a
> >> bug in your code, that's not our fault), vs deep copy with a
> potential
> >> performance hit (that we can't quantify atm without any performance
> > test).
> >> We do have a performance system test. Maybe it's worthwhile for you to
> >> apply
> >> the deep copy strategy and run the test. It's very basic performance
> >> test only, but might give some insight. If you want to do this, look
> >> into folder "tests" for general test setup, and into
> >> "tests/kafaktests/benchmarks/streams" to find find th

Re: [VOTE] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-11 Thread Ismael Juma
Thanks for the KIP, Randall. +1 (binding) from me.

On Mon, May 8, 2017 at 8:51 PM, Randall Hauch  wrote:

> Hi, everyone.
>
> Given the simple and non-controversial nature of the KIP, I would like to
> start the voting process for KIP-154: Add Kafka Connect configuration
> properties for creating internal topics:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 154+Add+Kafka+Connect+configuration+properties+for+
> creating+internal+topics
>
> The vote will run for a minimum of 72 hours.
>
> Thanks,
>
> Randall
>


Re: [VOTE] KIP-138: Change punctuate semantics

2017-05-11 Thread Ismael Juma
Thanks for the KIP, Michal. +1(binding) from me.

Ismael

On Sat, May 6, 2017 at 6:18 PM, Michal Borowiecki <
michal.borowie...@openbet.com> wrote:

> Hi all,
>
> Given I'm not seeing any contentious issues remaining on the discussion
> thread, I'd like to initiate the vote for:
>
> KIP-138: Change punctuate semantics
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 138%3A+Change+punctuate+semantics
>
>
> Thanks,
> Michał
> --
>  Michal Borowiecki
> Senior Software Engineer L4
> T: +44 208 742 1600
>
>
> +44 203 249 8448
>
>
>
> E: michal.borowie...@openbet.com
> W: www.openbet.com
> OpenBet Ltd
>
> Chiswick Park Building 9
>
> 566 Chiswick High Rd
>
> London
>
> W4 5XT
>
> UK
> 
> This message is confidential and intended only for the addressee. If you
> have received this message in error, please immediately notify the
> postmas...@openbet.com and delete it from your system as well as any
> copies. The content of e-mails as well as traffic data may be monitored by
> OpenBet for employment and security purposes. To protect the environment
> please do not print this e-mail unless necessary. OpenBet Ltd. Registered
> Office: Chiswick Park Building 9, 566 Chiswick High Road, London, W4 5XT,
> United Kingdom. A company registered in England and Wales. Registered no.
> 3134634. VAT no. GB927523612
>


[jira] [Commented] (KAFKA-4750) KeyValueIterator returns null values

2017-05-11 Thread Dmitry Minkovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006486#comment-16006486
 ] 

Dmitry Minkovsky commented on KAFKA-4750:
-

Affects 0.10.2.1 also

> KeyValueIterator returns null values
> 
>
> Key: KAFKA-4750
> URL: https://issues.apache.org/jira/browse/KAFKA-4750
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1
>Reporter: Michal Borowiecki
>Assignee: Kamal Chandraprakash
>  Labels: newbie
>
> The API for ReadOnlyKeyValueStore.range method promises the returned iterator 
> will not return null values. However, after upgrading from 0.10.0.0 to 
> 0.10.1.1 we found null values are returned causing NPEs on our side.
> I found this happens after removing entries from the store, and I found a 
> resemblance to the SAMZA-94 defect. The problem seems to be as it was there: when 
> deleting entries with a serializer that does not return null when null 
> is passed in, the state store doesn't actually delete that key/value pair, but 
> the iterator will return a null value for that key.
> When I modified our serializer to return null when null is passed in, the 
> problem went away. However, I believe this should be fixed in kafka streams, 
> perhaps with a similar approach as SAMZA-94.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
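
A minimal sketch of the serializer-side workaround described above, i.e. returning null for null input so that deletes become proper tombstones (the class name and String payload are illustrative; the real fix belongs in Kafka Streams itself):

{code}
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;

public class NullSafeStringSerializer implements Serializer<String> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {}

    @Override
    public byte[] serialize(String topic, String data) {
        if (data == null)
            return null; // propagate the delete as a tombstone instead of encoding null
        return data.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public void close() {}
}
{code}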


[jira] [Updated] (KAFKA-4343) Connect REST API should expose whether each connector is a source or sink

2017-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4343:
---
Labels: kip newbie++  (was: needs-kip newbie++)

> Connect REST API should expose whether each connector is a source or sink
> -
>
> Key: KAFKA-4343
> URL: https://issues.apache.org/jira/browse/KAFKA-4343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: dan norwood
>  Labels: kip, newbie++
> Fix For: 0.11.0.0
>
>
> Currently we don't expose information about whether a connector is a source 
> or sink. This is useful when, e.g., categorizing connectors in a UI. Given 
> naming conventions we try to encourage you *might* be able to determine this 
> via the connector's class name, but that isn't reliable.
> This may be related to https://issues.apache.org/jira/browse/KAFKA-4279 as we 
> might just want to expose this information on a per-connector-plugin basis 
> and expect any users to tie the results from multiple API requests together. 
> An alternative that would probably be simpler for users would be to include 
> it in the /connectors//status output.
> Note that this will require a KIP as it adds to the public REST API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4343) Connect REST API should expose whether each connector is a source or sink

2017-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4343:
---
Fix Version/s: 0.11.0.0

> Connect REST API should expose whether each connector is a source or sink
> -
>
> Key: KAFKA-4343
> URL: https://issues.apache.org/jira/browse/KAFKA-4343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: dan norwood
>  Labels: kip, newbie++
> Fix For: 0.11.0.0
>
>
> Currently we don't expose information about whether a connector is a source 
> or sink. This is useful when, e.g., categorizing connectors in a UI. Given 
> naming conventions we try to encourage you *might* be able to determine this 
> via the connector's class name, but that isn't reliable.
> This may be related to https://issues.apache.org/jira/browse/KAFKA-4279 as we 
> might just want to expose this information on a per-connector-plugin basis 
> and expect any users to tie the results from multiple API requests together. 
> An alternative that would probably be simpler for users would be to include 
> it in the /connectors//status output.
> Note that this will require a KIP as it adds to the public REST API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4343) Connect REST API should expose whether each connector is a source or sink

2017-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4343:
---
Status: Patch Available  (was: Open)

> Connect REST API should expose whether each connector is a source or sink
> -
>
> Key: KAFKA-4343
> URL: https://issues.apache.org/jira/browse/KAFKA-4343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: dan norwood
>  Labels: needs-kip, newbie++
>
> Currently we don't expose information about whether a connector is a source 
> or sink. This is useful when, e.g., categorizing connectors in a UI. Given 
> naming conventions we try to encourage you *might* be able to determine this 
> via the connector's class name, but that isn't reliable.
> This may be related to https://issues.apache.org/jira/browse/KAFKA-4279 as we 
> might just want to expose this information on a per-connector-plugin basis 
> and expect any users to tie the results from multiple API requests together. 
> An alternative that would probably be simpler for users would be to include 
> it in the /connectors//status output.
> Note that this will require a KIP as it adds to the public REST API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
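
For illustration, the status response could carry the connector type as one extra top-level field, along these lines (a hypothetical payload shape; the exact field name and placement would be settled in the KIP):

{code}
GET /connectors/local-file-sink/status

{
  "name": "local-file-sink",
  "connector": { "state": "RUNNING", "worker_id": "127.0.0.1:8083" },
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "127.0.0.1:8083" } ],
  "type": "sink"
}
{code}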


[jira] [Updated] (KAFKA-5218) New Short serializer, deserializer, serde

2017-05-11 Thread Mario Molina (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Molina updated KAFKA-5218:

External issue URL: https://github.com/apache/kafka/pull/3017

> New Short serializer, deserializer, serde
> -
>
> Key: KAFKA-5218
> URL: https://issues.apache.org/jira/browse/KAFKA-5218
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.1.1, 0.10.2.0
>Reporter: Mario Molina
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> There is no Short serializer/deserializer in the current clients component.
> It could be useful when using Kafka-Connect to write data to databases with 
> SMALLINT fields (or similar), avoiding conversions to int and improving 
> performance a bit in terms of memory and network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5218) New Short serializer, deserializer, serde

2017-05-11 Thread Mario Molina (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Molina updated KAFKA-5218:

External issue URL:   (was: https://github.com/apache/kafka/pull/3017)

> New Short serializer, deserializer, serde
> -
>
> Key: KAFKA-5218
> URL: https://issues.apache.org/jira/browse/KAFKA-5218
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.1.1, 0.10.2.0
>Reporter: Mario Molina
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> There is no Short serializer/deserializer in the current clients component.
> It could be useful when using Kafka-Connect to write data to databases with 
> SMALLINT fields (or similar), avoiding conversions to int and improving 
> performance a bit in terms of memory and network.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
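
A minimal sketch of what such a pair could look like, mirroring the big-endian byte layout of the existing Integer/Long serializers (names and the error message are illustrative):

{code}
import java.util.Map;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

public class ShortSerializer implements Serializer<Short> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) {}

    @Override
    public byte[] serialize(String topic, Short data) {
        if (data == null)
            return null;
        // two bytes, big-endian, like the Integer/Long serializers
        return new byte[] { (byte) (data >>> 8), (byte) data.shortValue() };
    }

    @Override public void close() {}
}

class ShortDeserializer implements Deserializer<Short> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) {}

    @Override
    public Short deserialize(String topic, byte[] data) {
        if (data == null)
            return null;
        if (data.length != 2)
            throw new SerializationException("Size of data received by ShortDeserializer is not 2");
        return (short) (((data[0] & 0xFF) << 8) | (data[1] & 0xFF));
    }

    @Override public void close() {}
}
{code}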


0.11.0.0 KIP Freeze Update

2017-05-11 Thread Ismael Juma
Hi all,

As you hopefully know, the KIP freeze for 0.11.0.0 was yesterday. Since
this is the first release with a KIP freeze and a number of people from the
community were at Kafka Summit NY for a few days this week, I have
added KIPs that have enough votes to pass (even if the 72-hour
period hasn't elapsed yet) to the release plan:

https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.0

For the next release, I suggest we aim to have all the KIP votes closed by
the KIP freeze.

As a reminder, here are the upcoming important dates (also documented in
the release plan):

   - Feature Freeze: May 17, 2017 (major features merged & working on
   stabilization, minor features have PR, release branch cut; anything not in
   this state will be automatically moved to the next release in JIRA)
   - Code Freeze: May 31, 2017 (first RC created now)
   - Release: June 14, 2017

The feature freeze is less than 1 week away and it's possible (perhaps even
likely) that some of the KIPs in the release plan may not make it.
Thankfully, the subsequent release will happen 4 months later, so the wait
won't be long. :)

KIPs: we have 33 adopted with 11 already committed and 19 with patches in
flight. The feature freeze is 6 days away so we have a lot of reviewing to
do, but significant changes have been merged already.

Open JIRAs: As usual, we have a lot!

https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%200.11.0.0%20ORDER%20BY%20priority%20DESC

152 at the moment. As we get nearer to the feature freeze, I will start
moving JIRAs out of this release.

* Closed JIRAs: So far ~212 closed tickets for 0.11.0.0:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%200.11.0.0%20ORDER%20BY%20priority%20DESC

* Release features:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.0 has
a "Release Features" section that will be included with the release
notes/email for the release. I added some items to get it going. Please add to
this list anything you think is worth noting.

If you missed any of the updates on the new time-based releases we'll be
following, see
https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan
for an explanation.

I'll plan to give another update next week, around the feature freeze. As
usual, feedback and/or questions are welcome.

Thanks,
Ismael


Jenkins build is back to normal : kafka-trunk-jdk7 #2168

2017-05-11 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-138: Change punctuate semantics

2017-05-11 Thread Ewen Cheslack-Postava
+1 (binding)

-Ewen

On Thu, May 11, 2017 at 7:12 AM, Ismael Juma  wrote:

> Thanks for the KIP, Michal. +1(binding) from me.
>
> Ismael
>
> On Sat, May 6, 2017 at 6:18 PM, Michal Borowiecki <
> michal.borowie...@openbet.com> wrote:
>
>> Hi all,
>>
>> Given I'm not seeing any contentious issues remaining on the discussion
>> thread, I'd like to initiate the vote for:
>>
>> KIP-138: Change punctuate semantics
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%
>> 3A+Change+punctuate+semantics
>>
>>
>> Thanks,
>> Michał
>> --
>>  Michal Borowiecki
>> Senior Software Engineer L4
>> T: +44 208 742 1600
>>
>>
>> +44 203 249 8448
>>
>>
>>
>> E: michal.borowie...@openbet.com
>> W: www.openbet.com
>> OpenBet Ltd
>>
>> Chiswick Park Building 9
>>
>> 566 Chiswick High Rd
>>
>> London
>>
>> W4 5XT
>>
>> UK
>> 
>> This message is confidential and intended only for the addressee. If you
>> have received this message in error, please immediately notify the
>> postmas...@openbet.com and delete it from your system as well as any
>> copies. The content of e-mails as well as traffic data may be monitored by
>> OpenBet for employment and security purposes. To protect the environment
>> please do not print this e-mail unless necessary. OpenBet Ltd. Registered
>> Office: Chiswick Park Building 9, 566 Chiswick High Road, London, W4 5XT,
>> United Kingdom. A company registered in England and Wales. Registered no.
>> 3134634. VAT no. GB927523612
>>
>
>


Re: [VOTE] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-11 Thread Ewen Cheslack-Postava
Randall is out atm and given the KIP deadline and 72h passed w/ only +1s,
I'm going to close this vote out.

The KIP passes:

4 binding +1s
5 non-binding +1s
0 -1s

Thanks everyone for voting!

-Ewen

On Thu, May 11, 2017 at 7:03 AM, Ismael Juma  wrote:

> Thanks for the KIP, Randall. +1 (binding) from me.
>
> On Mon, May 8, 2017 at 8:51 PM, Randall Hauch  wrote:
>
> > Hi, everyone.
> >
> > Given the simple and non-controversial nature of the KIP, I would like to
> > start the voting process for KIP-154: Add Kafka Connect configuration
> > properties for creating internal topics:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 154+Add+Kafka+Connect+configuration+properties+for+
> > creating+internal+topics
> >
> > The vote will run for a minimum of 72 hours.
> >
> > Thanks,
> >
> > Randall
> >
>


Surprising behaviour with a Kafka 0.8 Producer writing to a Kafka 0.10 Cluster

2017-05-11 Thread david.franklin
Hi,

I've noticed something a bit surprising (to me at least) when a Kafka 0.8 
producer writes to a Kafka 0.10 cluster where the messages are subsequently 
processed by a Kafka Connect sink.  The messages are Avro encoded (a suitable 
Avro key/value converter is specified via worker.properties), which 
makes them a little difficult to inspect, but creating a String from the byte[] 
gives a reasonable idea of the message contents.

The general pattern seems to be:

The toConnectData() method in the org.apache.kafka.connect.storage.Converter 
interface takes 2 parameters: the topic and a byte[].

On the first invocation, the byte[] only contains part of the message, 
specifically the id attribute - this causes the Avro decoder to fail with an 
EOFException (not surprisingly).  This is followed by a second invocation of 
the toConnectData method and this time the byte[] contains an entire and 
parseable message - in this case the message id component is prefixed by the 
character 'H' (i.e. the first element in the byte[]).

Is this expected behaviour?  Is there any way to suppress the first invocation?

Thanks,
David
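
For anyone wanting to work around this while it is investigated, a defensive wrapper around the configured converter is one option. This is a sketch only, assuming the undecodable first chunk should simply be skipped; silently dropping records can of course hide real corruption:

    import java.util.Map;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.data.SchemaAndValue;
    import org.apache.kafka.connect.storage.Converter;

    // Wraps the real Avro converter and maps undecodable payloads
    // (e.g. the EOFException case above) to a null SchemaAndValue
    // instead of failing the sink task.
    public class LenientConverter implements Converter {
        private final Converter delegate;

        public LenientConverter(Converter delegate) {
            this.delegate = delegate;
        }

        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {
            delegate.configure(configs, isKey);
        }

        @Override
        public byte[] fromConnectData(String topic, Schema schema, Object value) {
            return delegate.fromConnectData(topic, schema, value);
        }

        @Override
        public SchemaAndValue toConnectData(String topic, byte[] value) {
            try {
                return delegate.toConnectData(topic, value);
            } catch (Exception e) { // e.g. an EOFException from a truncated payload
                return SchemaAndValue.NULL;
            }
        }
    }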



Jenkins build is back to normal : kafka-trunk-jdk8 #1504

2017-05-11 Thread Apache Jenkins Server
See 




[jira] [Commented] (KAFKA-5072) Kafka topics should allow custom metadata configs within some config namespace

2017-05-11 Thread Soumabrata Chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006738#comment-16006738
 ] 

Soumabrata Chakraborty commented on KAFKA-5072:
---

Hi [~ijuma], [~benstopford], [~mjsax],

I am not sure who would be the right folks to reach out to for a discussion on 
this JIRA (and the linked PR).
Could you please suggest?  Thanks for your time.


> Kafka topics should allow custom metadata configs within some config namespace
> --
>
> Key: KAFKA-5072
> URL: https://issues.apache.org/jira/browse/KAFKA-5072
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Affects Versions: 0.10.2.0
>Reporter: Soumabrata Chakraborty
>Assignee: Soumabrata Chakraborty
>Priority: Minor
>
> Kafka topics should allow custom metadata configs
> Such config properties may have some fixed namespace, e.g. metadata.* or custom.*.
> This is handy for governance.  For example, in large organizations sharing a 
> kafka cluster - it might be helpful to be able to configure properties like 
> metadata.contact.info, metadata.project, metadata.description on a topic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
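
To illustrate the proposal (today's brokers reject unknown topic configs, so this is hypothetical syntax for the namespaced keys suggested in the JIRA):

{code}
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic payments \
  --partitions 6 --replication-factor 3 \
  --config metadata.project=payments-platform \
  --config metadata.contact.info=team-payments@example.com \
  --config "metadata.description=Payment events owned by the payments team"
{code}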


Re: [DISCUSS] KIP-143: Controller Health Metrics

2017-05-11 Thread Becket Qin
Hi Ismael,

Yes, a follow up KIP after the controller code settles down sounds good.

Thanks,

Jiangjie (Becket) Qin

On Wed, May 10, 2017 at 6:11 PM, Ismael Juma  wrote:

> Thanks Jun. Discussed this offline with Onur and Jun and I believe there's
> agreement so updated the KIP:
>
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?
> pageId=69407758&selectedPageVersions=8&selectedPageVersions=7
>
> Ismael
>
> On Wed, May 10, 2017 at 4:46 PM, Jun Rao  wrote:
>
> > Hi, Onur,
> >
> > We probably don't want to do the 1-to-1 mapping from the event type to
> the
> > controller state since some of the event types are implementation
> details.
> > How about the following mapping?
> >
> > 0 - idle
> > 1 - controller change (Startup, ControllerChange, Reelect)
> > 2 - broker change (BrokerChange)
> > 3 - topic creation/change (TopicChange, PartitionModifications)
> > 4 - topic deletion (TopicDeletion, TopicDeletionStopReplicaResult)
> > 5 - partition reassigning (PartitionReassignment,
> > PartitionReassignmentIsrChange)
> > 6 - auto leader balancing (AutoPreferredReplicaLeaderElection)
> > 7 - manual leader balancing (PreferredReplicaLeaderElection)
> > 8 - controlled shutdown (ControlledShutdown)
> > 9 - isr change (IsrChangeNotification)
> >
> > For each state, we will add a corresponding timer to track the rate and
> the
> > latency, if it's not there already (e.g., broker change and controlled
> > shutdown). If there are future changes to the controller, we can make a
> > call whether the new event should be mapped to one of the existing states
> > or a new state.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, May 9, 2017 at 6:17 PM, Onur Karaman <
> onurkaraman.apa...@gmail.com
> > >
> > wrote:
> >
> > > @Ismael, Jun
> > > After bringing up an earlier point twice now, it still doesn't feel
> like
> > > it's been commented on/addressed, so I'm going to give it one more
> shot:
> > > Assuming that ControllerState should reflect the current event being
> > > processed, the KIP is missing states.
> > >
> > > The controller currently has 14 event types:
> > > BrokerChange
> > > TopicChange
> > > PartitionModifications
> > > TopicDeletion
> > > PartitionReassignment
> > > PartitionReassignmentIsrChange
> > > IsrChangeNotification
> > > PreferredReplicaLeaderElection
> > > AutoPreferredReplicaLeaderElection
> > > ControlledShutdown
> > > TopicDeletionStopReplicaResult
> > > Startup
> > > ControllerChange
> > > Reelect
> > >
> > > The KIP only shows 10 event types (and it's not a proper subset of the
> > > above set).
> > >
> > > I think this mismatch would cause the ControllerState to incorrectly be
> > in
> > > the Idle state when in fact the controller could be doing a lot of
> work.
> > >
> > > 1. Should ControllerState exactly consist of the 14 controller event
> > types
> > > + the 1 Idle state?
> > > 2. If so, what's the process for adding/removing/merging event types
> > > w.r.t. this metric?
> > >
> > > On Tue, May 9, 2017 at 4:45 PM, Ismael Juma  wrote:
> > >
> > >> Becket,
> > >>
> > >> Are you OK with extending the metrics via a subsequent KIP (assuming
> > that
> > >> what we're doing here doesn't prevent that)? The KIP freeze is
> tomorrow
> > >> (although I will give it an extra day or two as many in the community
> > have
> > >> been attending the Kafka Summit this week), so we should avoid
> > increasing
> > >> the scope unless it's important for future improvements.
> > >>
> > >> Thanks,
> > >> Ismael
> > >>
> > >> On Wed, May 10, 2017 at 12:09 AM, Jun Rao  wrote:
> > >>
> > >> > Hi, Becket,
> > >> >
> > >> > q10. The reason why there is not a timer metric for broker change
> > event
> > >> is
> > >> > that the controller currently always has a
> LeaderElectionRateAndTimeMs
> > >> > timer metric (in ControllerStats).
> > >> >
> > >> > q11. I agree that it's useful to know the queue time in the
> > >> > controller event queue and suggested that earlier. Onur thinks that
> > >> it's a
> > >> > bit too early to add that since we are about to change how to queue
> > >> events
> > >> > from ZK. Similarly, we will probably also make changes to batch
> > requests
> > >> > from the controller to the broker. So, perhaps we can add more
> metrics
> > >> once
> > >> > those changes in the controller have been made. For now, knowing the
> > >> > controller state and the controller channel queue size is probably
> > good
> > >> > enough.
> > >> >
> > >> > Thanks,
> > >> >
> > >> > Jun
> > >> >
> > >> >
> > >> >
> > >> > On Mon, May 8, 2017 at 10:05 PM, Becket Qin 
> > >> wrote:
> > >> >
> > >> >> @Ismael,
> > >> >>
> > >> >> About the stage and event type. Yes, I think each event handling
> > should
> > >> >> have those stages covered. It is similar to what we are doing for
> the
> > >> >> requests on the broker side. We have benefited from such a systematic
> > >> metric
> > >> >> structure a lot, so I think it would be worth following the same approach
> > in
> > >> the
> > >> >> controller.
> > >
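
Jun's proposed grouping, rendered as a Java enum to make the state-to-event mapping concrete (a sketch only; the real controller is Scala and the eventual metric may name states differently):

    public enum ControllerState {
        IDLE(0),
        CONTROLLER_CHANGE(1),       // Startup, ControllerChange, Reelect
        BROKER_CHANGE(2),           // BrokerChange
        TOPIC_CHANGE(3),            // TopicChange, PartitionModifications
        TOPIC_DELETION(4),          // TopicDeletion, TopicDeletionStopReplicaResult
        PARTITION_REASSIGNMENT(5),  // PartitionReassignment, PartitionReassignmentIsrChange
        AUTO_LEADER_BALANCE(6),     // AutoPreferredReplicaLeaderElection
        MANUAL_LEADER_BALANCE(7),   // PreferredReplicaLeaderElection
        CONTROLLED_SHUTDOWN(8),     // ControlledShutdown
        ISR_CHANGE(9);              // IsrChangeNotification

        private final int value;

        ControllerState(final int value) {
            this.value = value;
        }

        public int value() {
            return value;
        }
    }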

[GitHub] kafka pull request #3026: KAFKA-5173: Shutdown brokers in tests

2017-05-11 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3026

KAFKA-5173: Shutdown brokers in tests

Add broker shutdown for `LeaderEpochIntegrationTest`. Move broker shutdown 
in other tests to `tearDown` to ensure brokers are shut down even if tests fail. 
Also added assertion to `ZooKeeperTestHarness` to verify that controller event 
thread is not running since this thread may load JAAS configuration if ZK ports 
are reused.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5173

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3026.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3026


commit 41bbe2c35f616e32c5eb0ae0ab5f8cb307db95b6
Author: Rajini Sivaram 
Date:   2017-05-11T15:31:59Z

KAFKA-5173: Shutdown brokers in tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
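
The pattern the PR applies — moving shutdown into tearDown so cleanup runs even when an assertion fails — looks roughly like this in JUnit 4 (the actual harness is Scala; the broker type here is a stand-in):

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.After;
    import org.junit.Test;

    public class BrokerShutdownPatternTest {
        // stand-in for the KafkaServer instances the harness starts
        private final List<AutoCloseable> brokers = new ArrayList<>();

        @After
        public void tearDown() throws Exception {
            // @After runs even if the test body threw, so brokers never
            // leak (and JAAS/ZK state never bleeds) into the next test
            for (AutoCloseable broker : brokers)
                broker.close();
            brokers.clear();
        }

        @Test
        public void testSomething() {
            // start brokers, run assertions; no explicit shutdown needed here
        }
    }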


Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-11 Thread Xavier Léauté
Thanks Michal, you are correct. I can see your point now, and I can get
behind returning Windowed<K> as well for windowed stores.

It might make sense to revisit the single key iterator in the future and do
the same for consistency, but I'd rather not break backwards compatibility
unless we have a good reason to do so.

Everyone else who already voted on the KIP, are there any objections to
this change? I have updated the KIP accordingly.

Thank you,
Xavier

On Thu, May 11, 2017 at 12:44 AM Michal Borowiecki <
michal.borowie...@openbet.com> wrote:

> Also, wrt
>
> > In the case of the window store, the "key" of the single-key iterator is
> > the actual timestamp of the underlying entry, not just range of the
> > window,
> > so if we were to wrap the result key a window we wouldn't be getting back
> > the equivalent of the single key iterator.
> I believe the timestamp in the entry *is* the window start time (the end
> time is implicitly known by adding the window size to the window start
> time)
>
>
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamWindowReduce.java#L109
>
>
>
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamWindowAggregate.java#L111
>
> Both use window.start() as the timestamp when storing into the WindowStore.
>
> Or am I confusing something horribly here? Hope not ;-)
>
>
> If the above is correct, then using KeyValueIterator<Windowed<K>, V> as
> the return type of the new fetch method would indeed not lose anything
> the single key iterator is offering.
>
> The window end time could simply be calculated as window start time +
> window size (window size would have to be passed from the store supplier
> to the store implementation, which I think it isn't now but that's an
> implementation detail).
>
> If you take objection to exposing the window end time because the single
> key iterator doesn't do that, then an alternative could also be to have
> the return type of the new fetch be something like
> KeyValueIterator<KeyValue<K, Long>, V>, since the key is composed of the
> actual key and the timestamp together. peekNextKey() would then allow
> you to glimpse both the actual key and the associated window start time.
> This feels like a better workaround than putting the KeyValue pair in
> the V of the WindowStoreIterator.
>
> All in all, I'd still prefer KeyValueIterator<Windowed<K>, V> as it more
> clearly names what's what.
>
> What do you think?
>
> Thanks,
>
> Michal
>
> On 11/05/17 07:51, Michal Borowiecki wrote:
> > Well, another concern, apart from potential confusion, is that you
> > won't be able to peek the actual next key, just the timestamp. So the
> > tradeoff is between having consistency in return types versus
> > consistency in having the ability to know the next key without moving
> > the iterator. To me the latter just feels more important.
> >
> > Thanks,
> > Michal
> > On 11 May 2017 12:46 a.m., Xavier Léauté  wrote:
> >
> > Thank you for the feedback Michal.
> >
> > While I agree the return type may be a little bit more confusing to reason
> > about, the reason for doing so was to keep the range query interfaces
> > consistent with their single-key counterparts.
> >
> > In the case of the window store, the "key" of the single-key
> > iterator is
> > the actual timestamp of the underlying entry, not just range of
> > the window,
> > so if we were to wrap the result key a window we wouldn't be
> > getting back
> > the equivalent of the single key iterator.
> >
> > In both cases peekNextKey is just returning the timestamp of the
> > next entry
> > in the window store that matches the query.
> >
> > In the case of the session store, we already return Windowed<K>
> > for the
> > single-key method, so it made sense there to also return
> > Windowed<K> for
> > the range method.
> >
> > Hope this makes sense? Let me know if you still have concerns about
> > this.
> >
> > Thank you,
> > Xavier
> >
> > On Wed, May 10, 2017 at 12:25 PM Michal Borowiecki <
> > michal.borowie...@openbet.com> wrote:
> >
> > > Apologies, I missed the discussion (or lack thereof) about the
> > return
> > > type of:
> > >
> > > WindowStoreIterator<KeyValue<K, V>> fetch(K from, K to, long
> > > timeFrom,
> > > long timeTo)
> > >
> > >
> > > WindowStoreIterator<E> (as the KIP mentions) is a subclass of
> > > KeyValueIterator<Long, E>
> > >
> > > KeyValueIterator has the following method:
> > >
> > > /**
> > >  * Peek at the next key without advancing the iterator
> > >  * @return the key of the next value that would be returned from the next call to next
> > >  */
> > > K peekNextKey();
> > >
> > > Given the type in this case will be Long, I assume what it would
> > return
> > > is the window timestamp of the next found record?
> > >
> > >
> > > In the case of W
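
For reference, a usage sketch of the range fetch with Windowed<K> keys as updated in the KIP (assuming the signature lands on ReadOnlyWindowStore as discussed; store wiring via KafkaStreams#store is elided):

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.Windowed;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.ReadOnlyWindowStore;

    public final class WindowRangeScanExample {
        // Scan all windows for keys in ["a", "f"] whose window start falls
        // in the last hour, printing key, window bounds and value.
        static void scan(final ReadOnlyWindowStore<String, Long> store) {
            final long now = System.currentTimeMillis();
            try (KeyValueIterator<Windowed<String>, Long> iter =
                     store.fetch("a", "f", now - 3600_000L, now)) {
                while (iter.hasNext()) {
                    final KeyValue<Windowed<String>, Long> entry = iter.next();
                    final Windowed<String> wk = entry.key;
                    System.out.printf("key=%s window=[%d,%d) value=%d%n",
                        wk.key(), wk.window().start(), wk.window().end(), entry.value);
                }
            }
        }
    }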

[jira] [Work started] (KAFKA-5217) Improve Streams internal exception handling

2017-05-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5217 started by Matthias J. Sax.
--
> Improve Streams internal exception handling
> ---
>
> Key: KAFKA-5217
> URL: https://issues.apache.org/jira/browse/KAFKA-5217
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Streams does not handle all exceptions gracefully atm, but tends to throw 
> exceptions to the user, even if we could handle them internally and recover 
> automatically. We want to revisit this exception handling to be more 
> resilient.
> For example, for any kind of rebalance exception, we should just log it, and 
> rejoin the consumer group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5217) Improve Streams internal exception handling

2017-05-11 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007013#comment-16007013
 ] 

Eno Thereska commented on KAFKA-5217:
-

Do you want this as a subtask of 
https://issues.apache.org/jira/browse/KAFKA-5156? Probably doesn't matter 
either way, just checking if we want to group all exception handling under one 
big jira.

> Improve Streams internal exception handling
> ---
>
> Key: KAFKA-5217
> URL: https://issues.apache.org/jira/browse/KAFKA-5217
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Streams does not handle all exceptions gracefully atm, but tends to throw 
> exceptions to the user, even if we could handle them internally and recover 
> automatically. We want to revisit this exception handling to be more 
> resilient.
> For example, for any kind of rebalance exception, we should just log it, and 
> rejoin the consumer group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5156) Options for handling exceptions in streams

2017-05-11 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska reassigned KAFKA-5156:
---

Assignee: (was: Eno Thereska)

> Options for handling exceptions in streams
> --
>
> Key: KAFKA-5156
> URL: https://issues.apache.org/jira/browse/KAFKA-5156
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
> Fix For: 0.11.0.0
>
>
> This is a task around options for handling exceptions in streams. It focuses 
> on options for dealing with corrupt data (keep going, stop streams, log, 
> retry, etc).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2985: KAFKA-5173: Log jaas.config if broker fails to sta...

2017-05-11 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/2985


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5173) SASL tests failing with Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007029#comment-16007029
 ] 

ASF GitHub Bot commented on KAFKA-5173:
---

Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/2985


> SASL tests failing with Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration
> --
>
> Key: KAFKA-5173
> URL: https://issues.apache.org/jira/browse/KAFKA-5173
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> I've seen this a few times. One example:
> {code}
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is /tmp/kafka8162725028002772063.tmp
>   at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:73)
>   at kafka.network.Processor.<init>(SocketServer.scala:423)
>   at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.SocketServer.startup(SocketServer.scala:90)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:51)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextTopicMetadataTest.scala:23)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:31)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.setUp(SaslPlaintextTopicMetadataTest.scala:23)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1479/testReport/junit/kafka.integration/SaslPlaintextTopicMetadataTest/testIsrAfterBrokerShutDownAndJoinsBack/
> [~rsivaram], any ideas?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2936: MINOR: JoinGroupRequest V0 invalid rebalance timeo...

2017-05-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2936


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-146: Isolation of dependencies and classes in Kafka Connect (restarted voting thread)

2017-05-11 Thread Konstantine Karantasis
A little after the 72h have passed, I'm happy to announce that this KIP has been
accepted.

Additionally, since all votes were submitted within the KIP deadline for
KIPs targeting the 0.11.0.0 release, I intend to publish an implementation
for review, targeting the forthcoming Apache Kafka release.

Here are the votes:

+1s (binding): 5 - Guozhang Wang, Sriram Subramanian, Ismael Juma, Gwen
Shapira, Ewen Cheslack-Postava
+1s (non-binding): 3 - Stephane Maarek, Dan Norwood, Colin McCabe
-1s: 0

Thanks to everyone who reviewed and commented on this KIP.

-Konstantine


On Wed, May 10, 2017 at 9:54 PM, Konstantine Karantasis <
konstant...@confluent.io> wrote:

> The KIP description has been updated to reflect the use of the term
> plugin.path instead.
>
> -Konstantine
>
>
>
>
> On Wed, May 10, 2017 at 2:10 PM, Ismael Juma  wrote:
>
>> Konstantine, I am not convinced that it will represent similar
>> functionality as the goals are different. Also, I don't see a migration
>> path. To use Jigsaw, it makes sense to configure the module path during
>> start-up (-mp) like one configures the classpath. Whatever we are
>> implementing in Connect will be its own thing and it will be with us for
>> many years.
>>
>> Ewen, as far as the JVM goes, I think `module.path` is probably the name
>> most likely to create confusion since it refers to a concept that was
>> recently introduced, has very specific (and some would say unexpected)
>> behaviour and it will be supported by java/javac launchers, build tools,
>> etc.
>>
>> Gwen, `plugin.path` sounds good to me.
>>
>> In any case, I will leave it to you all to decide. :)
>>
>> Ismael
>>
>> On Wed, May 10, 2017 at 8:11 PM, Konstantine Karantasis <
>> konstant...@confluent.io> wrote:
>>
>> > Thank you Ismael for your vote as well as your comment.
>> >
>> > To give some context, it's exactly because of the similarities with
>> Jigsaw
>> > that module.path was selected initially.
>> > The thought was that it could allow for a potential integration with
>> Jigsaw
>> > in the future, without having to change property names significantly.
>> >
>> > Of course there are differences, as the ones you mention, mainly because
>> > Connect's module path currently is composed as a list of top-level
>> > directories that include the modules as subdirectories. However I'd be
>> > inclined to agree with Ewen. Maybe using a property name that is similar
>> > to other concepts in the JVM ecosystem gives us more
>> > flexibility than using a different name for something that will
>> eventually
>> > end up representing similar functionality.
>> >
>> > In any case, I don't feel very strong about it. Let me know if you
>> insist
>> > on a name change.
>> >
>> > -Konstantine
>> >
>> >
>> > On Wed, May 10, 2017 at 10:24 AM, Ewen Cheslack-Postava <
>> e...@confluent.io
>> > >
>> > wrote:
>> >
>> > > +1 binding, and I'm flexible on the config name. Somehow I am
>> guessing no
>> > > matter what terminology we use there somebody will find a way to be
>> > > confused.
>> > >
>> > > -Ewen
>> > >
>> > > On Wed, May 10, 2017 at 9:27 AM, Gwen Shapira 
>> wrote:
>> > >
>> > > > +1 and proposing 'plugin.path' as we use the term connector plugins
>> > when
>> > > > referring to the jars themselves.
>> > > >
>> > > > Gwen
>> > > >
>> > > > On Wed, May 10, 2017 at 8:31 AM, Ismael Juma 
>> > wrote:
>> > > >
>> > > > > Thanks for the KIP Konstantine, +1 (binding) from me. One comment;
>> > > > >
>> > > > > 1. One thing to think about: the config name `module.path` could
>> be
>> > > > > confusing in the future as Jigsaw introduces the concept of a
>> module
>> > > > > path[1] in Java 9. The module path co-exists with the classpath,
>> but
>> > > its
>> > > > > behaviour is quite different. To many people's surprise, Jigsaw
>> > doesn't
>> > > > > handle versioning and it disallows split packages (i.e. if the
>> same
>> > > > package
>> > > > > appears in 2 different modules, it is an error). What we are
>> > proposing
>> > > is
>> > > > > quite different and perhaps it may make sense to use a different
>> name
>> > > to
>> > > > > avoid confusion.
>> > > > >
>> > > > > Ismael
>> > > > >
>> > > > > [1] https://www.infoq.com/articles/Latest-Project-
>> > > Jigsaw-Usage-Tutorial
>> > > > >
>> > > > > On Mon, May 8, 2017 at 7:48 PM, Konstantine Karantasis <
>> > > > > konstant...@confluent.io> wrote:
>> > > > >
>> > > > > > ** Restarting the voting thread here, with a different title to
>> > avoid
>> > > > > > collapsing this thread's messages with the discussion thread's
>> > > messages
>> > > > > in
>> > > > > > mail clients. Apologies for the inconvenience. **
>> > > > > >
>> > > > > >
>> > > > > > Hi all,
>> > > > > >
>> > > > > > Given that the comments during the discussion seem to have been
>> > > > > addressed,
>> > > > > > I'm pleased to bring
>> > > > > >
>> > > > > > KIP-146: Classloading Isolation in Connect
>> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > > > 146+

[jira] [Created] (KAFKA-5220) Application Reset Tool does not work with SASL

2017-05-11 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5220:
--

 Summary: Application Reset Tool does not work with SASL
 Key: KAFKA-5220
 URL: https://issues.apache.org/jira/browse/KAFKA-5220
 Project: Kafka
  Issue Type: Improvement
  Components: streams, tools
Reporter: Matthias J. Sax


Resetting an application with SASL enabled fails with
{noformat}
ERROR: Request GROUP_COORDINATOR failed on brokers List(localhost:9092 (id: -1 
rack: null))
{noformat}

We would need to allow additional configuration options that get picked up by 
the internally used ZK client and KafkaConsumer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
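
To illustrate the gap: the embedded clients would need to accept the usual SASL
client settings, along the lines of the snippet below (values are illustrative;
the pass-through mechanism for these options is exactly what this ticket asks
for):

{noformat}
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" password="alice-secret";
{noformat}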


[jira] [Updated] (KAFKA-5217) Improve Streams internal exception handling

2017-05-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5217:
---
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-5156

> Improve Streams internal exception handling
> ---
>
> Key: KAFKA-5217
> URL: https://issues.apache.org/jira/browse/KAFKA-5217
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Streams does not currently handle all exceptions gracefully, but tends to 
> throw them to the user even when we could handle them internally and recover 
> automatically. We want to revisit this exception handling to make it more 
> resilient.
> For example, for any kind of rebalance exception, we should just log it and 
> rejoin the consumer group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5217) Improve Streams internal exception handling

2017-05-11 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007159#comment-16007159
 ] 

Matthias J. Sax commented on KAFKA-5217:


Sure. Done.

> Improve Streams internal exception handling
> ---
>
> Key: KAFKA-5217
> URL: https://issues.apache.org/jira/browse/KAFKA-5217
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Streams does not currently handle all exceptions gracefully, but tends to 
> throw them to the user even when we could handle them internally and recover 
> automatically. We want to revisit this exception handling to make it more 
> resilient.
> For example, for any kind of rebalance exception, we should just log it and 
> rejoin the consumer group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
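
As a rough illustration of the intended behaviour (not Streams' actual
internals): a poll loop that treats a rebalance-induced CommitFailedException
as recoverable, logs it, and lets the next poll() rejoin the group instead of
surfacing the error.

{code}
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ResilientPollLoop {
    private final KafkaConsumer<byte[], byte[]> consumer;

    public ResilientPollLoop(KafkaConsumer<byte[], byte[]> consumer) {
        this.consumer = consumer;
    }

    public void run() {
        while (true) {
            try {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                // ... hand records to the processing code here ...
                consumer.commitSync();
            } catch (CommitFailedException e) {
                // The group rebalanced underneath us; the next poll() rejoins
                // the group, so log and continue instead of failing the thread.
                System.err.println("Commit failed due to rebalance, rejoining: " + e);
            }
        }
    }
}
{code}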


Build failed in Jenkins: kafka-trunk-jdk7 #2169

2017-05-11 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: JoinGroupRequest V0 invalid rebalance timeout

--
[...truncated 846.85 KB...]
kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms STARTED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms PASSED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsum

Re: [VOTE] KIP-133: List and Alter Configs Admin APIs (second attempt)

2017-05-11 Thread Ismael Juma
Thanks for the feedback Jun.

1. This is a good point. After thinking about it, I concluded that quotas
should be handled via separate APIs, so I moved it to future work. I quote
the reasoning from the KIP:

"Support for reading and updating client, user and replication quotas. We
initially included that in the KIP, but it subsequently became apparent
that a separate protocol and AdminClient API would be more appropriate. The
reason is that client/user quotas can be applied on a client id, user or
(client id, user) tuple. In the future, the hierarchy may get even more
complicated. So, it makes sense to keep the API simple for the simple
cases while introducing a more sophisticated API for the more complex case."

2. I have clarified it.

Ismael

On Wed, May 10, 2017 at 4:55 PM, Jun Rao  wrote:

> Hi, Ismael,
>
> Thanks for the KIP. Looks good overall. A couple of minor comments.
>
> 1. Currently, quotas can be updated at the (user, client-id) combination
> level. So, it seems that we need to reflect that somehow in both the wire
> protocol and the admin api.
> 2. It would be useful to clarify what configs are considered read-only.
>
> Jun
>
> On Mon, May 8, 2017 at 8:52 AM, Ismael Juma  wrote:
>
> > Quick update, I renamed ListConfigs to DescribeConfigs (and related
> classes
> > and methods) as that is more consistent with other protocols (like
> > ListGroups and DescribeGroups). So the new link is:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 133%3A+Describe+and+Alter+Configs+Admin+APIs
> >
> > Ismael
> >
> > On Mon, May 8, 2017 at 5:01 AM, Ismael Juma  wrote:
> >
> > > [Seems like the original message ended up in the discuss thread in
> GMail,
> > > so trying again]
> > >
> > > Hi everyone,
> > >
> > > I believe I addressed the comments in the discussion thread and given
> the
> > > impending KIP freeze, I would like to start the voting process for
> > KIP-133:
> > > List and Alter Configs Admin APIs:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-133%3A
> > > +List+and+Alter+Configs+Admin+APIs
> > >
> > > As mentioned previously, this KIP and KIP-140 (Add administrative RPCs
> > for
> > > adding, deleting, and listing ACLs) complete the AdminClient work that
> > was
> > > originally proposed as part KIP-4.
> > >
> > > If you have additional feedback, please share it in the discuss thread.
> > >
> > > The vote will run for a minimum of 72 hours.
> > >
> > > Thanks,
> > > Ismael
> > >
> >
>


Re: [VOTE] KIP-151: Expose Connector type in REST API (first attempt :)

2017-05-11 Thread dan
here we are, only 72hrs later and we have reached agreement.

+4 binding from Guozhang, Ismael, Ram, and Ewen.
+2 non-binding from Bharat and Konstantine.

thanks everyone who voted/commented.
dan



On Wed, May 10, 2017 at 11:38 AM, Guozhang Wang  wrote:

> +1
>
> On Wed, May 10, 2017 at 10:27 AM, Ismael Juma  wrote:
>
> > Thanks for the KIP, +1 (binding).
> >
> > Ismael
> >
> > On Mon, May 8, 2017 at 11:39 PM, dan  wrote:
> >
> > > i'd like to begin voting on
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 151+Expose+Connector+type+in+REST+API
> > >
> > > discussion should remain on
> > > http://mail-archives.apache.org/mod_mbox/kafka-dev/201705.
> > > mbox/%3CCAFJy-U-pF7YxSRadx_zAQYCX2+SswmVPSBcA4tDMPP5834s6Kg@mail.
> > > gmail.com%3E
> > >
> > > This voting thread will stay active for a minimum of 72 hours.
> > >
> > > thanks
> > > dan
> > >
> >
>
>
>
> --
> -- Guozhang
>


[GitHub] kafka pull request #3027: [WIP] KIP-155 KAFKA-5192 WindowStore range scan

2017-05-11 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/3027

[WIP] KIP-155 KAFKA-5192 WindowStore range scan

Note: this implementation is based on the initial version of the KIP and does 
not reflect the latest changes, which have yet to be agreed on.

@dguy @mjsax @guozhangwang let me know if you have any initial feedback to 
share

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka windowstore-range-scan

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3027.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3027


commit 36856f386f17c6ee03e297ec0857765f213f
Author: Xavier Léauté 
Date:   2017-05-11T22:50:21Z

fix Bytes.get() javadocs

commit b8be1489da12c40b0c827393846aeca7d756a8fd
Author: Xavier Léauté 
Date:   2017-05-10T19:37:15Z

range scan implementation for window / session stores




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3028: KAFKA-3487: Support classloading isolation in Conn...

2017-05-11 Thread kkonstantine
GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/3028

KAFKA-3487: Support classloading isolation in Connect.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-3487-Support-classloading-isolation-in-Connect

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3028.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3028


commit 6e3e92cfadfa7c03090476a0f025b5a6b7989d5c
Author: Konstantine Karantasis 
Date:   2017-05-05T00:00:29Z

KAFKA-3487: Support classloading isolation in Connect.

  * Add module.path in worker config.

commit 84967f43712dd55523ca33e855fa0252f9cdb677
Author: Konstantine Karantasis 
Date:   2017-05-09T23:10:08Z

Add isolation package.

  * Add a delegating class loader
  * Add per module class loaders
  * Add module factories

commit 7e1169b081c451d0f4a3920e75e0bf1961ee84e8
Author: Konstantine Karantasis 
Date:   2017-05-12T00:24:08Z

Add config property only for standalone currently.

commit d8cd6330c6304267a5d7b07c8ef48aa201d95ac8
Author: Konstantine Karantasis 
Date:   2017-05-09T23:19:24Z

Add maven-artifact dependency for module versioning.

commit 50c8a1b79366ce04231b7c88b9abc36871afc143
Author: Konstantine Karantasis 
Date:   2017-05-09T23:21:15Z

Replace connector factory with new module factory.

commit df22a391e920b2f24ad795df929968cd0d5e2dd1
Author: Konstantine Karantasis 
Date:   2017-05-11T18:19:39Z

Consolidating Modules factory class.

commit ba3a78dbcf10a94a2080d7f7ef87585826c27ea8
Author: Konstantine Karantasis 
Date:   2017-05-12T00:20:41Z

Setting thread context class loader to a modules loader.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3487) Support per-connector/per-task classloaders in Connect

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007425#comment-16007425
 ] 

ASF GitHub Bot commented on KAFKA-3487:
---

GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/3028

KAFKA-3487: Support classloading isolation in Connect.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-3487-Support-classloading-isolation-in-Connect

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3028.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3028


commit 6e3e92cfadfa7c03090476a0f025b5a6b7989d5c
Author: Konstantine Karantasis 
Date:   2017-05-05T00:00:29Z

KAFKA-3487: Support classloading isolation in Connect.

  * Add module.path in worker config.

commit 84967f43712dd55523ca33e855fa0252f9cdb677
Author: Konstantine Karantasis 
Date:   2017-05-09T23:10:08Z

Add isolation package.

  * Add a delegating class loader
  * Add per module class loaders
  * Add module factories

commit 7e1169b081c451d0f4a3920e75e0bf1961ee84e8
Author: Konstantine Karantasis 
Date:   2017-05-12T00:24:08Z

Add config property only for standalone currently.

commit d8cd6330c6304267a5d7b07c8ef48aa201d95ac8
Author: Konstantine Karantasis 
Date:   2017-05-09T23:19:24Z

Add maven-artifact dependency for module versioning.

commit 50c8a1b79366ce04231b7c88b9abc36871afc143
Author: Konstantine Karantasis 
Date:   2017-05-09T23:21:15Z

Replace connector factory with new module factory.

commit df22a391e920b2f24ad795df929968cd0d5e2dd1
Author: Konstantine Karantasis 
Date:   2017-05-11T18:19:39Z

Consolidating Modules factory class.

commit ba3a78dbcf10a94a2080d7f7ef87585826c27ea8
Author: Konstantine Karantasis 
Date:   2017-05-12T00:20:41Z

Setting thread context class loader to a modules loader.




> Support per-connector/per-task classloaders in Connect
> --
>
> Key: KAFKA-3487
> URL: https://issues.apache.org/jira/browse/KAFKA-3487
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Critical
>  Labels: needs-kip
>
> Currently we just use the default ClassLoader in Connect. However, this 
> limits how we can compatibly load conflicting connector plugins. Ideally we 
> would use a separate class loader per connector/task that is instantiated to 
> avoid potential conflicts.
> Note that this also opens up options for other ways to provide jars to 
> instantiate connectors. For example, Spark uses this to dynamically publish 
> classes defined in the REPL and load them via URL: 
> https://ardoris.wordpress.com/2014/03/30/how-spark-does-class-loading/ But 
> much simpler examples (include URL in the connector class instead of just 
> class name) are also possible and could be a nice way to better support dynamic 
> sets of connectors, multiple versions of the same connector, etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
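
For readers skimming the commit list, the core idea is small: give each
connector its own classloader that prefers the plugin's jars over the worker's.
A minimal child-first sketch (illustrative only, the class and its behaviour
here are not the PR's actual code):

{code}
import java.net.URL;
import java.net.URLClassLoader;

public class PluginClassLoader extends URLClassLoader {

    public PluginClassLoader(URL[] pluginJars, ClassLoader parent) {
        super(pluginJars, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // Plugin-first: look in the connector's own jars before
                    // delegating, so its dependency versions win. A real
                    // implementation must still delegate framework classes
                    // (e.g. org.apache.kafka.connect.*) to the parent.
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
{code}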


[GitHub] kafka pull request #3029: fix reference to SchemaBuilder call

2017-05-11 Thread smferguson
GitHub user smferguson opened a pull request:

https://github.com/apache/kafka/pull/3029

fix reference to SchemaBuilder call



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/smferguson/kafka schemabuilder_doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3029.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3029


commit 8f0349a565411bd8d6473c869773981786017089
Author: Scott Ferguson 
Date:   2017-05-12T00:46:33Z

fix reference to SchemaBuilder call




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-133: List and Alter Configs Admin APIs (second attempt)

2017-05-11 Thread Jun Rao
Hi, Ismael,

Thanks for the updated KIP. +1

Jun

On Thu, May 11, 2017 at 4:13 PM, Ismael Juma  wrote:

> Thanks for the feedback Jun.
>
> 1. This is a good point. After thinking about it, I concluded that quotas
> should be handled via separate APIs, so I moved it to future work. I quote
> the reasoning from the KIP:
>
> "Support for reading and updating client, user and replication quotas. We
> initially included that in the KIP, but it subsequently became apparent
> that a separate protocol and AdminClient API would be more appropriate. The
> reason is that client/user quotas can be applied on a client id, user or
> (client id, user) tuple. In the future, the hierarchy may get even more
> complicated. So, it makes sense to keep the API simple for the simple
> cases while introducing a more sophisticated API for the more complex
> case."
>
> 2. I have clarified it.
>
> Ismael
>
> On Wed, May 10, 2017 at 4:55 PM, Jun Rao  wrote:
>
> > Hi, Ismael,
> >
> > Thanks for the KIP. Looks good overall. A couple of minor comments.
> >
> > 1. Currently, quotas can be updated at the (user, client-id) combination
> > level. So, it seems that we need to reflect that somehow in both the wire
> > protocol and the admin api.
> > 2. It would be useful to clarify what configs are considered read-only.
> >
> > Jun
> >
> > On Mon, May 8, 2017 at 8:52 AM, Ismael Juma  wrote:
> >
> > > Quick update, I renamed ListConfigs to DescribeConfigs (and related
> > classes
> > > and methods) as that is more consistent with other protocols (like
> > > ListGroups and DescribeGroups). So the new link is:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 133%3A+Describe+and+Alter+Configs+Admin+APIs
> > >
> > > Ismael
> > >
> > > On Mon, May 8, 2017 at 5:01 AM, Ismael Juma  wrote:
> > >
> > > > [Seems like the original message ended up in the discuss thread in
> > GMail,
> > > > so trying again]
> > > >
> > > > Hi everyone,
> > > >
> > > > I believe I addressed the comments in the discussion thread and given
> > the
> > > > impending KIP freeze, I would like to start the voting process for
> > > KIP-133:
> > > > List and Alter Configs Admin APIs:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-133%3A
> > > > +List+and+Alter+Configs+Admin+APIs
> > > >
> > > > As mentioned previously, this KIP and KIP-140 (Add administrative
> RPCs
> > > for
> > > > adding, deleting, and listing ACLs) complete the AdminClient work
> that
> > > was
> > > > originally proposed as part KIP-4.
> > > >
> > > > If you have additional feedback, please share it in the discuss
> thread.
> > > >
> > > > The vote will run for a minimum of 72 hours.
> > > >
> > > > Thanks,
> > > > Ismael
> > > >
> > >
> >
>


Re: [VOTE] KIP-133: List and Alter Configs Admin APIs (second attempt)

2017-05-11 Thread Ismael Juma
Thanks for all who provided feedback and voted. The vote passed with 5
binding +1s (Sriram, Guozhang, Jason, Jun, Ismael) and 3 non-binding +1s
(Colin, Robert, James).

Ismael

On Fri, May 12, 2017 at 1:54 AM, Jun Rao  wrote:

> Hi, Ismael,
>
> Thanks for the updated KIP. +1
>
> Jun
>
> On Thu, May 11, 2017 at 4:13 PM, Ismael Juma  wrote:
>
> > Thanks for the feedback Jun.
> >
> > 1. This is a good point. After thinking about it, I concluded that quotas
> > should be handled via separate APIs, so I moved it to future work. I
> quote
> > the reasoning from the KIP:
> >
> > "Support for reading and updating client, user and replication quotas. We
> > initially included that in the KIP, but it subsequently became apparent
> > that a separate protocol and AdminClient API would be more appropriate.
> The
> > reason is that client/user quotas can be applied on a client id, user or
> > (client id, user) tuple. In the future, the hierarchy may get even more
> > complicated. So, it makes sense to keep the API simple for the simple
> > cases while introducing a more sophisticated API for the more complex
> > case."
> >
> > 2. I have clarified it.
> >
> > Ismael
> >
> > On Wed, May 10, 2017 at 4:55 PM, Jun Rao  wrote:
> >
> > > Hi, Ismael,
> > >
> > > Thanks for the KIP. Looks good overall. A couple of minor comments.
> > >
> > > 1. Currently, quotas can be updated at the (user, client-id) combination
> > > level. So, it seems that we need to reflect that somehow in both the
> wire
> > > protocol and the admin api.
> > > 2. It would be useful to clarify what configs are considered read-only.
> > >
> > > Jun
> > >
> > > On Mon, May 8, 2017 at 8:52 AM, Ismael Juma  wrote:
> > >
> > > > Quick update, I renamed ListConfigs to DescribeConfigs (and related
> > > classes
> > > > and methods) as that is more consistent with other protocols (like
> > > > ListGroups and DescribeGroups). So the new link is:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 133%3A+Describe+and+Alter+Configs+Admin+APIs
> > > >
> > > > Ismael
> > > >
> > > > On Mon, May 8, 2017 at 5:01 AM, Ismael Juma 
> wrote:
> > > >
> > > > > [Seems like the original message ended up in the discuss thread in
> > > GMail,
> > > > > so trying again]
> > > > >
> > > > > Hi everyone,
> > > > >
> > > > > I believe I addressed the comments in the discussion thread and
> given
> > > the
> > > > > impending KIP freeze, I would like to start the voting process for
> > > > KIP-133:
> > > > > List and Alter Configs Admin APIs:
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-133%3A
> > > > > +List+and+Alter+Configs+Admin+APIs
> > > > >
> > > > > As mentioned previously, this KIP and KIP-140 (Add administrative
> > RPCs
> > > > for
> > > > > adding, deleting, and listing ACLs) complete the AdminClient work
> > that
> > > > was
> > > > > originally proposed as part KIP-4.
> > > > >
> > > > > If you have additional feedback, please share it in the discuss
> > thread.
> > > > >
> > > > > The vote will run for a minimum of 72 hours.
> > > > >
> > > > > Thanks,
> > > > > Ismael
> > > > >
> > > >
> > >
> >
>
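
For a flavour of the API that was voted in, a DescribeConfigs call against a
topic could look roughly like the sketch below (class and method names follow
the KIP as written at the time and may differ from what finally ships):

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeTopicConfigs {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // describeConfigs() returns futures; all().get() blocks for the result.
            Map<ConfigResource, Config> configs =
                    admin.describeConfigs(Collections.singleton(topic)).all().get();
            configs.get(topic).entries().forEach(entry ->
                    System.out.println(entry.name() + " = " + entry.value()));
        }
    }
}
{code}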


Re: [VOTE] KIP-138: Change punctuate semantics

2017-05-11 Thread Ismael Juma
Michal, you have enough votes, would you like to close the vote?

Ismael

On Thu, May 11, 2017 at 4:49 PM, Ewen Cheslack-Postava 
wrote:

> +1 (binding)
>
> -Ewen
>
> On Thu, May 11, 2017 at 7:12 AM, Ismael Juma  wrote:
>
> > Thanks for the KIP, Michal. +1(binding) from me.
> >
> > Ismael
> >
> > On Sat, May 6, 2017 at 6:18 PM, Michal Borowiecki <
> > michal.borowie...@openbet.com> wrote:
> >
> >> Hi all,
> >>
> >> Given I'm not seeing any contentious issues remaining on the discussion
> >> thread, I'd like to initiate the vote for:
> >>
> >> KIP-138: Change punctuate semantics
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%
> >> 3A+Change+punctuate+semantics
> >>
> >>
> >> Thanks,
> >> Michał
> >> --
> >>  Michal Borowiecki
> >> Senior Software Engineer L4
> >> T: +44 208 742 1600
> >>
> >>
> >> +44 203 249 8448
> >>
> >>
> >>
> >> E: michal.borowie...@openbet.com
> >> W: www.openbet.com
> >> OpenBet Ltd
> >>
> >> Chiswick Park Building 9
> >>
> >> 566 Chiswick High Rd
> >>
> >> London
> >>
> >> W4 5XT
> >>
> >> UK
> >> 
> >> This message is confidential and intended only for the addressee. If you
> >> have received this message in error, please immediately notify the
> >> postmas...@openbet.com and delete it from your system as well as any
> >> copies. The content of e-mails as well as traffic data may be monitored
> by
> >> OpenBet for employment and security purposes. To protect the environment
> >> please do not print this e-mail unless necessary. OpenBet Ltd.
> Registered
> >> Office: Chiswick Park Building 9, 566 Chiswick High Road, London, W4
> 5XT,
> >> United Kingdom. A company registered in England and Wales. Registered
> no.
> >> 3134634. VAT no. GB927523612
> >>
> >
> >
>


[GitHub] kafka pull request #2960: KAFKA-4343: Expose Connector type in REST API

2017-05-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2960


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4343) Connect REST API should expose whether each connector is a source or sink

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007470#comment-16007470
 ] 

ASF GitHub Bot commented on KAFKA-4343:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2960


> Connect REST API should expose whether each connector is a source or sink
> -
>
> Key: KAFKA-4343
> URL: https://issues.apache.org/jira/browse/KAFKA-4343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: dan norwood
>  Labels: kip, newbie++
> Fix For: 0.11.0.0
>
>
> Currently we don't expose information about whether a connector is a source 
> or sink. This is useful when, e.g., categorizing connectors in a UI. Given 
> naming conventions we try to encourage you *might* be able to determine this 
> via the connector's class name, but that isn't reliable.
> This may be related to https://issues.apache.org/jira/browse/KAFKA-4279 as we 
> might just want to expose this information on a per-connector-plugin basis 
> and expect any users to tie the results from multiple API requests together. 
> An alternative that would probably be simpler for users would be to include 
> it in the /connectors/<name>/status output.
> Note that this will require a KIP as it adds to the public REST API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4343) Connect REST API should expose whether each connector is a source or sink

2017-05-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4343:
-
Resolution: Fixed
  Reviewer: Ewen Cheslack-Postava
Status: Resolved  (was: Patch Available)

> Connect REST API should expose whether each connector is a source or sink
> -
>
> Key: KAFKA-4343
> URL: https://issues.apache.org/jira/browse/KAFKA-4343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: dan norwood
>  Labels: kip, newbie++
> Fix For: 0.11.0.0
>
>
> Currently we don't expose information about whether a connector is a source 
> or sink. This is useful when, e.g., categorizing connectors in a UI. Given 
> naming conventions we try to encourage you *might* be able to determine this 
> via the connector's class name, but that isn't reliable.
> This may be related to https://issues.apache.org/jira/browse/KAFKA-4279 as we 
> might just want to expose this information on a per-connector-plugin basis 
> and expect any users to tie the results from multiple API requests together. 
> An alternative that would probably be simpler for users would be to include 
> it in the /connectors/<name>/status output.
> Note that this will require a KIP as it adds to the public REST API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
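
To make the change concrete: after KIP-151, connector info responses carry the
new type field. A response might look roughly like the following (the connector
name and config are hypothetical):

{noformat}
GET /connectors/my-file-source

{
  "name": "my-file-source",
  "config": { "connector.class": "FileStreamSource", ... },
  "tasks": [ { "connector": "my-file-source", "task": 0 } ],
  "type": "source"
}
{noformat}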


[jira] [Commented] (KAFKA-4195) support throttling on request rate

2017-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007481#comment-16007481
 ] 

Jun Rao commented on KAFKA-4195:


[~rsivaram], are you working on the two remaining subtasks? It would be useful 
to at least complete the one on documentation before the 0.11.0.0 feature 
freeze.

> support throttling on request rate
> --
>
> Key: KAFKA-4195
> URL: https://issues.apache.org/jira/browse/KAFKA-4195
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Rajini Sivaram
>  Labels: needs-kip
>
> Currently, we can throttle the client by data volume. However, if a client 
> sends requests too quickly (e.g., a consumer with fetch.min.bytes configured to 0), 
> it can still overwhelm the broker. It would be useful to additionally support 
> throttling by request rate. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
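
As a toy illustration of throttling by request rate (the quota work that
followed measures request handler time rather than a raw request count, so
treat this purely as a sketch of the basic idea):

{code}
public class RequestRateLimiter {
    private final int maxRequestsPerSecond;
    private long windowStartMs = System.currentTimeMillis();
    private int requestsInWindow;

    public RequestRateLimiter(int maxRequestsPerSecond) {
        this.maxRequestsPerSecond = maxRequestsPerSecond;
    }

    /** Records one request and returns the delay (ms) to apply; 0 means no throttling. */
    public synchronized long recordAndGetThrottleMs() {
        long now = System.currentTimeMillis();
        if (now - windowStartMs >= 1000) {
            // Start a fresh one-second window.
            windowStartMs = now;
            requestsInWindow = 0;
        }
        requestsInWindow++;
        if (requestsInWindow <= maxRequestsPerSecond) {
            return 0;
        }
        // Over budget: delay the request until the current window expires.
        return windowStartMs + 1000 - now;
    }
}
{code}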


[GitHub] kafka pull request #3026: KAFKA-5713: Shutdown brokers in tests

2017-05-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3026


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3487) Support per-connector/per-task classloaders in Connect

2017-05-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3487:
-
 Assignee: Konstantine Karantasis  (was: Liquan Pei)
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 0.11.0.0
   Status: Patch Available  (was: Open)

> Support per-connector/per-task classloaders in Connect
> --
>
> Key: KAFKA-3487
> URL: https://issues.apache.org/jira/browse/KAFKA-3487
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Konstantine Karantasis
>Priority: Critical
>  Labels: needs-kip
> Fix For: 0.11.0.0
>
>
> Currently we just use the default ClassLoader in Connect. However, this 
> limits how we can compatibly load conflicting connector plugins. Ideally we 
> would use a separate class loader per connector/task that is instantiated to 
> avoid potential conflicts.
> Note that this also opens up options for other ways to provide jars to 
> instantiate connectors. For example, Spark uses this to dynamically publish 
> classes defined in the REPL and load them via URL: 
> https://ardoris.wordpress.com/2014/03/30/how-spark-does-class-loading/ But 
> much simpler examples (include URL in the connector class instead of just 
> class name) are also possible and could be a nice way to better support dynamic 
> sets of connectors, multiple versions of the same connector, etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5173) SASL tests failing with Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration

2017-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5173.

Resolution: Fixed
  Assignee: Rajini Sivaram

We believe this was fixed by https://github.com/apache/kafka/pull/3026 (the PR 
had a typo in the JIRA id).

> SASL tests failing with Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration
> --
>
> Key: KAFKA-5173
> URL: https://issues.apache.org/jira/browse/KAFKA-5173
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
> Fix For: 0.11.0.0
>
>
> I've seen this a few times. One example:
> {code}
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is /tmp/kafka8162725028002772063.tmp
>   at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:73)
>   at kafka.network.Processor.<init>(SocketServer.scala:423)
>   at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.SocketServer.startup(SocketServer.scala:90)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:51)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextTopicMetadataTest.scala:23)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:31)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.setUp(SaslPlaintextTopicMetadataTest.scala:23)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1479/testReport/junit/kafka.integration/SaslPlaintextTopicMetadataTest/testIsrAfterBrokerShutDownAndJoinsBack/
> [~rsivaram], any ideas?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5192) Range Scan for Windowed State Stores

2017-05-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté reassigned KAFKA-5192:


Assignee: Xavier Léauté

> Range Scan for Windowed State Stores
> 
>
> Key: KAFKA-5192
> URL: https://issues.apache.org/jira/browse/KAFKA-5192
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>
> Windowed state stores currently do not support key range scans, even though 
> it seems reasonable to be able to – at least in a given window – do the same 
> operations you would do on a key-value store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5192) Range Scan for Windowed State Stores

2017-05-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5192 started by Xavier Léauté.

> Range Scan for Windowed State Stores
> 
>
> Key: KAFKA-5192
> URL: https://issues.apache.org/jira/browse/KAFKA-5192
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>
> Windowed state stores currently do not support key range scans, even though 
> it seems reasonable to be able to – at least in a given window – do the same 
> operations you would do on a key-value store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
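
For a sense of what the feature enables, a key-range scan over a window store
could look like the sketch below (the fetch signature follows the initial
KIP-155 draft and, as noted in the PR, was still under discussion):

{code}
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.WindowStore;

public class WindowRangeScan {
    // Prints every (windowed key, value) pair for keys in ["keyA", "keyZ"]
    // across the given time range.
    public static void printRange(WindowStore<String, Long> store, long timeFrom, long timeTo) {
        try (KeyValueIterator<Windowed<String>, Long> iter =
                     store.fetch("keyA", "keyZ", timeFrom, timeTo)) {
            while (iter.hasNext()) {
                System.out.println(iter.next());
            }
        }
    }
}
{code}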


Build failed in Jenkins: kafka-trunk-jdk8 #1506

2017-05-11 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-4343: Expose Connector type in REST API (KIP-151)

--
[...truncated 850.11 KB...]

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString STARTED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducer

Jenkins build is back to normal : kafka-trunk-jdk7 #2170

2017-05-11 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #2864: KAFKA-5078; PartitionRecords.fetchRecords(...) sho...

2017-05-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2864


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5078) PartitionRecords.fetchRecords(...) should defer exception to the next call if iterator has already moved across any valid record

2017-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16007565#comment-16007565
 ] 

ASF GitHub Bot commented on KAFKA-5078:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2864


> PartitionRecords.fetchRecords(...) should defer exception to the next call if 
> iterator has already moved across any valid record
> 
>
> Key: KAFKA-5078
> URL: https://issues.apache.org/jira/browse/KAFKA-5078
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> Suppose there are two valid records followed by one invalid record in the 
> FetchResponse.PartitionData(). In the current implementation, 
> PartitionRecords.fetchRecords(...) will throw an exception without returning 
> the two valid records. The next call to PartitionRecords.fetchRecords(...) 
> will not return those two valid records either, because the iterator has 
> already moved past them.
> We can fix this problem by deferring the exception to the next call of 
> PartitionRecords.fetchRecords(...) if the iterator has already moved past 
> any valid record.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
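
A sketch of the "defer the exception" idea described above: drain valid records
until the iterator fails, hand back what was collected, and rethrow the saved
error on the next call so no valid record is lost. The names here are
illustrative, not the consumer's actual internals.

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class DeferringFetcher<R> {
    private final Iterator<R> records;
    private RuntimeException deferred;

    public DeferringFetcher(Iterator<R> records) {
        this.records = records;
    }

    public List<R> fetchRecords(int maxRecords) {
        if (deferred != null) {
            // A previous call hit an error after returning valid records;
            // surface it now.
            RuntimeException e = deferred;
            deferred = null;
            throw e;
        }
        List<R> out = new ArrayList<>();
        try {
            while (out.size() < maxRecords && records.hasNext()) {
                out.add(records.next());
            }
        } catch (RuntimeException e) {
            if (out.isEmpty()) {
                throw e;      // nothing valid to return, fail immediately
            }
            deferred = e;     // return the valid records first
        }
        return out;
    }
}
{code}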


[jira] [Updated] (KAFKA-3959) KIP-115: __consumer_offsets wrong number of replicas at startup

2017-05-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3959:
-
Summary: KIP-115: __consumer_offsets wrong number of replicas at startup  
(was: __consumer_offsets wrong number of replicas at startup)

> KIP-115: __consumer_offsets wrong number of replicas at startup
> ---
>
> Key: KAFKA-3959
> URL: https://issues.apache.org/jira/browse/KAFKA-3959
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, offset manager, replication
>Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1, 0.10.0.2, 0.10.1.0, 
> 0.10.1.1, 0.10.1.2
> Environment: Brokers of 3 kafka nodes running Red Hat Enterprise 
> Linux Server release 7.2 (Maipo)
>Reporter: Alban Hurtaud
>Assignee: Onur Karaman
>Priority: Blocker
>  Labels: needs-kip, reliability
> Fix For: 0.11.0.0
>
>
> When creating a stack of 3 Kafka brokers, the consumer starts faster than 
> the Kafka nodes, so when it first tries to read a topic only one Kafka node 
> is available.
> As a result, __consumer_offsets is created with a replication factor of 1 
> (instead of the configured 3):
> offsets.topic.replication.factor=3
> default.replication.factor=3
> min.insync.replicas=2
> Then the other Kafka nodes come up and exceptions are thrown, because the 
> replica count for __consumer_offsets is 1 while min.insync.replicas is 2.
> What I don't understand is: why is __consumer_offsets created with a 
> replication factor of 1 (when 1 broker is running) when server.properties 
> sets it to 3?
> To reproduce:
> - Prepare 3 Kafka nodes with the 3 lines above added to server.properties.
> - Run one Kafka node.
> - Run one consumer (__consumer_offsets is created with replicas=1).
> - Run 2 more Kafka nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3487) KIP-146: Support per-connector/per-task classloaders in Connect

2017-05-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3487:
-
Summary: KIP-146: Support per-connector/per-task classloaders in Connect  
(was: Support per-connector/per-task classloaders in Connect)

> KIP-146: Support per-connector/per-task classloaders in Connect
> ---
>
> Key: KAFKA-3487
> URL: https://issues.apache.org/jira/browse/KAFKA-3487
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Konstantine Karantasis
>Priority: Critical
>  Labels: needs-kip
> Fix For: 0.11.0.0
>
>
> Currently we just use the default ClassLoader in Connect. However, this 
> limits how we can compatibly load conflicting connector plugins. Ideally we 
> would use a separate class loader per connector/task that is instantiated to 
> avoid potential conflicts.
> Note that this also opens up options for other ways to provide jars to 
> instantiate connectors. For example, Spark uses this to dynamically publish 
> classes defined in the REPL and load them via URL: 
> https://ardoris.wordpress.com/2014/03/30/how-spark-does-class-loading/ But 
> much simpler examples (include URL in the connector class instead of just 
> class name) are also possible and could be a nice way to better support dynamic 
> sets of connectors, multiple versions of the same connector, etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4343) KIP-151: Connect REST API should expose whether each connector is a source or sink

2017-05-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4343:
-
Summary: KIP-151: Connect REST API should expose whether each connector is 
a source or sink  (was: Connect REST API should expose whether each connector 
is a source or sink)

> KIP-151: Connect REST API should expose whether each connector is a source or 
> sink
> --
>
> Key: KAFKA-4343
> URL: https://issues.apache.org/jira/browse/KAFKA-4343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: dan norwood
>  Labels: kip, newbie++
> Fix For: 0.11.0.0
>
>
> Currently we don't expose information about whether a connector is a source 
> or sink. This is useful when, e.g., categorizing connectors in a UI. Given 
> naming conventions we try to encourage you *might* be able to determine this 
> via the connector's class name, but that isn't reliable.
> This may be related to https://issues.apache.org/jira/browse/KAFKA-4279 as we 
> might just want to expose this information on a per-connector-plugin basis 
> and expect any users to tie the results from multiple API requests together. 
> An alternative that would probably be simpler for users would be to include 
> it in the /connectors/<name>/status output.
> Note that this will require a KIP as it adds to the public REST API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4783) KIP-128: Blackbox or pass through converter or ByteArrayConverter for connect

2017-05-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4783:
-
Summary: KIP-128: Blackbox or pass through converter or ByteArrayConverter 
for connect  (was: Blackbox or pass through converter or ByteArrayConverter for 
connect)

> KIP-128: Blackbox or pass through converter or ByteArrayConverter for connect
> -
>
> Key: KAFKA-4783
> URL: https://issues.apache.org/jira/browse/KAFKA-4783
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Antony Stubbs
>Assignee: Ewen Cheslack-Postava
>  Labels: needs-kip
> Fix For: 0.11.0.0
>
>
> Connect is missing a pass-through converter / ByteArrayConverter that doesn't 
> manipulate the message payload. This is needed for binary messages that don't 
> have a plaintext, JSON or Avro interpretation. For example, messages that 
> contain a binary encoding of a proprietary or esoteric protocol, to be 
> decoded later.
> Currently there's a public class available here: 
> https://github.com/qubole/streamx/blob/8b9d43008d9901b8e6020404f137944ed97522e2/src/main/java/com/qubole/streamx/ByteArrayConverter.java
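For reference, a pass-through converter against the public
org.apache.kafka.connect.storage.Converter interface can be very small. The
sketch below is an illustration, not the class proposed by the KIP; error
handling (e.g. rejecting non-byte[] values) is omitted:

{code:java}
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.storage.Converter;

// Minimal pass-through sketch: bytes in, bytes out, payload never inspected.
public class PassThroughConverter implements Converter {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // Nothing to configure: the payload is treated as opaque bytes.
    }

    @Override
    public byte[] fromConnectData(String topic, Schema schema, Object value) {
        // Assumes the framework hands us raw bytes (or null).
        return (byte[]) value;
    }

    @Override
    public SchemaAndValue toConnectData(String topic, byte[] value) {
        return new SchemaAndValue(Schema.OPTIONAL_BYTES_SCHEMA, value);
    }
}
{code}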



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4667) KIP-154: Connect should create internal topics

2017-05-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4667:
-
Summary: KIP-154: Connect should create internal topics  (was: Connect 
should create internal topics)

> KIP-154: Connect should create internal topics
> --
>
> Key: KAFKA-4667
> URL: https://issues.apache.org/jira/browse/KAFKA-4667
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Emanuele Cesena
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> I'm reporting this as an issue but in fact it requires more investigation 
> (which unfortunately I'm not able to perform at this time).
> Repro steps:
> - configure Kafka for consistency, for example:
> default.replication.factor=3
> min.insync.replicas=2
> unclean.leader.election.enable=false
> - run Connect for the first time, which should create its internal topics
> I believe these topics are created with the broker's defaults, in particular:
> min.insync.replicas=2
> unclean.leader.election.enable=false
> but Connect doesn't produce with acks=all, which in turn may cause the 
> cluster to get into a bad state (see, e.g., 
> https://issues.apache.org/jira/browse/KAFKA-4666).
> A solution would be to force availability mode, i.e. force:
> unclean.leader.election.enable=true
> when creating the Connect topics, or, vice versa, detect availability vs. 
> consistency mode and set acks=all if needed.
> I assume the same happens with other Kafka-based services such as Streams.
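To make the suggestion concrete, here is a hedged sketch that pre-creates one
of Connect's internal topics with explicit config overrides, using the Java
AdminClient added in 0.11.0.0, instead of relying on broker defaults. The
topic name, partition count, replication factor, and bootstrap address are
illustrative assumptions:

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateConnectOffsetsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address

        // Explicit overrides so the topic doesn't inherit broker defaults.
        Map<String, String> configs = new HashMap<>();
        configs.put("min.insync.replicas", "2");
        configs.put("unclean.leader.election.enable", "false");
        configs.put("cleanup.policy", "compact"); // Connect's internal topics are compacted

        // 25 partitions / replication factor 3 are illustrative values.
        NewTopic offsets = new NewTopic("connect-offsets", 25, (short) 3)
            .configs(configs);

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singleton(offsets)).all().get();
        }
    }
}
{code}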



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-3107) Error when trying to shut down auto balancing scheduler of controller

2017-05-11 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman resolved KAFKA-3107.
-
Resolution: Fixed

This problem should no longer exist after KAFKA-5028.

> Error when trying to shut down auto balancing scheduler of controller
> -
>
> Key: KAFKA-3107
> URL: https://issues.apache.org/jira/browse/KAFKA-3107
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Flavio Junqueira
>
> We observed the following exception when a controller was shutting down:
> {noformat}
> [run] Error handling event ZkEvent[New session event sent to 
> kafka.controller.KafkaController$SessionExpirationListener@3278c211]
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
> at 
> kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
> at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1108)
> at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
> at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at 
> kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1107)
> at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {noformat}
> The scheduler should have been started.
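One defensive fix, sketched here under the assumption that shutting down a
never-started scheduler should be a no-op rather than an error (the names
below are illustrative, not KafkaScheduler's actual API):

{code:java}
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical scheduler wrapper: shutdown() tolerates being called before
// startup() or more than once, instead of throwing IllegalStateException.
public class IdempotentScheduler {
    private ScheduledThreadPoolExecutor executor; // null until startup()

    public synchronized void startup() {
        if (executor == null) {
            executor = new ScheduledThreadPoolExecutor(1);
        }
    }

    public synchronized void shutdown() throws InterruptedException {
        if (executor != null) { // silently ignore if never started
            executor.shutdown();
            executor.awaitTermination(30, TimeUnit.SECONDS);
            executor = null;
        }
    }
}
{code}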



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk8 #1507

2017-05-11 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-5120) Several controller metrics block if controller lock is held by another thread

2017-05-11 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman resolved KAFKA-5120.
-
Resolution: Fixed

KAFKA-5028 has been checked in so this should no longer be an issue.

> Several controller metrics block if controller lock is held by another thread
> -
>
> Key: KAFKA-5120
> URL: https://issues.apache.org/jira/browse/KAFKA-5120
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, metrics
>Affects Versions: 0.10.2.0
>Reporter: Tim Carey-Smith
>Priority: Minor
>
> We have been tracking latency issues surrounding queries to Controller 
> MBeans. Upon digging into the root causes, we discovered that several metrics 
> acquire the controller lock within the gauge. 
> The affected metrics are: 
> * {{ActiveControllerCount}}
> * {{OfflinePartitionsCount}}
> * {{PreferredReplicaImbalanceCount}}
> If the controller is currently holding the lock and an MBean request is 
> received, the thread executing the request will block until the controller 
> releases the lock. 
> We discovered this in a cluster where the controller was holding the lock for 
> extended periods of time for normal operations. We have documented this issue 
> in KAFKA-5116. 
> Several possible solutions exist: 
> * Remove the lock from inside these {{Gauge}} s. 
> * Store and update the metric values in {{AtomicLong}} s. 
> Modifying the {{ActiveControllerCount}} metric seems to be straightforward, 
> while the other two metrics seem to be more involved. 
> We're happy to contribute a patch, but wanted to discuss potential solutions 
> and their tradeoffs before proceeding. 
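As a sketch of the second option (the controller updates an AtomicLong on its
own event path, so a metrics read never contends for the controller lock); all
names below are illustrative:

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Illustrative sketch: the metrics thread reads the AtomicLong directly and
// never blocks on the controller lock.
public class NonBlockingControllerMetrics {
    private final AtomicLong offlinePartitionsCount = new AtomicLong(0);

    // Called from the controller's own event-handling path, where it already
    // holds whatever locks it needs to compute the new value.
    public void updateOfflinePartitionsCount(long newCount) {
        offlinePartitionsCount.set(newCount);
    }

    // Handed to the metrics registry as the gauge's value supplier.
    public Supplier<Long> offlinePartitionsGauge() {
        return offlinePartitionsCount::get;
    }
}
{code}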



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)