[jira] [Created] (KAFKA-6292) ReplicaManager not respect the rolled log

2017-12-01 Thread Terence Yi (JIRA)
Terence Yi created KAFKA-6292:
-

 Summary: ReplicaManager not respect the rolled log
 Key: KAFKA-6292
 URL: https://issues.apache.org/jira/browse/KAFKA-6292
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.11.0.0
 Environment: OS:Red Hat Enterprise Linux Server release 7.3 (Maipo)
Kafka: kafka_2.12-0.11.0.0
JDK: jdk1.8.0_121


Reporter: Terence Yi



The consumer constantly polls from the Kafka cluster, consumes the messages in 
order, and ends with a manual commit for each record.
Here is the log in consumer side:
2017-11-28 20:44:26.560 WARN X 
[HOSTNAME][device-message-subscriber-pool-3-thread-1] 
org.apache.kafka.clients.consumer.internals.Fetcher {} Unknown error fetching 
data for topic-partition 
DDI.DISPATCHER.MESSAGE_FORWARD_d694b9fa-d99a-4f4d-9062-b75e73b466a0-3


Observe the ERROR log below in server.log:

[2017-11-27 12:16:24,182] INFO Rolled new log segment for 
'DDI.DISPATCHER.P_TVIN.W_SL.P_appx.P_ul.P_pos-3' in 1 ms. (kafka.log.Log)
[2017-11-27 12:16:35,555] INFO Rolled new log segment for 
'DDI.DISPATCHER.MESSAGE_FORWARD_d694b9fa-d99a-4f4d-9062-b75e73b466a0-3' in 1 
ms. (kafka.log.Log)
[2017-11-27 12:16:35,569] ERROR [Replica Manager on Broker 4]: Error processing 
fetch operation on partition 
DDI.DISPATCHER.MESSAGE_FORWARD_d694b9fa-d99a-4f4d-9062-b75e73b466a0-3, offset 
12813782 (kafka.server.ReplicaManager)
org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read 
`log header` from file channel `sun.nio.ch.FileChannelImpl@1a493fba`. Expected 
to read 17 bytes, but reached end of file after reading 0 bytes. Started read 
from position 2147483635.
at 
org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40)
at 
org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24)
at 
org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
at 
org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
at 
org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:279)
at kafka.log.LogSegment.translateOffset(LogSegment.scala:176)
at kafka.log.LogSegment.read(LogSegment.scala:228)
at kafka.log.Log.read(Log.scala:938)
at kafka.server.ReplicaManager.read$1(ReplicaManager.scala:719)
at 
kafka.server.ReplicaManager.$anonfun$readFromLocalLog$6(ReplicaManager.scala:780)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:779)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:617)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:615)
at kafka.server.KafkaApis.handle(KafkaApis.scala:98)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:66)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException: Failed to read `log header` from file channel 
`sun.nio.ch.FileChannelImpl@1a493fba`. Expected to read 17 bytes, but reached 
end of file after reading 0 bytes. Started read from position 2147483635.
at org.apache.kafka.common.utils.Utils.readFullyOrFail(Utils.java:751)
at 
org.apache.kafka.common.record.FileLogInputStream.nextBatch(FileLogInputStream.java:66)
at 
org.apache.kafka.common.record.FileLogInputStream.nextBatch(FileLogInputStream.java:40)
at 
org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:35)
... 18 more
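
For reference, the failing read starts at position 2147483635, only 12 bytes short of 
Integer.MAX_VALUE, i.e. at or beyond the end of the segment that had just been rolled. 
The following standalone sketch (plain java.nio, not Kafka code; the file path is a 
placeholder) shows how requesting a 17-byte log header at or past the end of a file 
yields exactly this kind of EOFException:

{code}
// Standalone illustration (plain java.nio, not Kafka code) of the failure mode:
// reading a fixed-size header from a position at or beyond the end of the file
// returns fewer bytes than requested, which surfaces as the EOFException above.
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class HeaderReadSketch {
    static final int LOG_HEADER_SIZE = 17; // header size mentioned in the stack trace

    static void readFullyOrFail(FileChannel channel, ByteBuffer buffer, long position) throws IOException {
        int totalRead = 0;
        while (buffer.hasRemaining()) {
            int read = channel.read(buffer, position + totalRead);
            if (read < 0) // end of file reached before the header was complete
                throw new EOFException("Expected to read " + buffer.capacity()
                        + " bytes, but reached end of file after reading " + totalRead
                        + " bytes. Started read from position " + position + ".");
            totalRead += read;
        }
    }

    public static void main(String[] args) throws IOException {
        // args[0] is a placeholder path to any file standing in for a log segment
        try (FileChannel channel = FileChannel.open(Paths.get(args[0]), StandardOpenOption.READ)) {
            long position = channel.size(); // simulate a fetch that maps past the last byte
            readFullyOrFail(channel, ByteBuffer.allocate(LOG_HEADER_SIZE), position);
        }
    }
}
{code}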



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-12-01 Thread Jan Filipiak

Hi,

Good catch about the rotation.
This is probably not too big a blocker. Plenty of ideas spring to mind
for how this can be done. Maybe one can offer different algorithms here
(nothing, random shuffle, client sends a bitmask of what it wants to fetch 
first, broker logic... many more)


Thank you for considering my ideas. I am pretty convinced we don't need
to aim for the 100% empty fetch request across TCP sessions. Maybe my ideas
offer decent tradeoffs.

Best Jan





On 01.12.2017 08:43, Becket Qin wrote:

Hi Jan,

I agree that we probably don't want to make the protocol too complicated
just for exception cases.

The current FetchRequest contains an ordered list of partitions that may
rotate based on the priority. Therefore it is kind of difficult to do the
order matching. But you brought up a good point about order; we may want to
migrate the rotation logic from the clients to the server. Not sure if this
will introduce some complexity to the broker. Intuitively it seems fine.
The logic would basically be similar to the draining logic in the
RecordAccumulator of the producer.

Thanks,

Jiangjie (Becket) Qin

On Thu, Nov 30, 2017 at 11:29 PM, Jan Filipiak 
wrote:


Hi,

this discussion is going a little bit far from what I intended this thread
for.
I can see all of this being related.

To let you guys know what I am currently thinking is the following:

I do think the handling of IDs and epochs is rather complicated. I think
the complexity
comes from aiming for too much.

1. Currently all the work is towards making fetchRequest
completely empty. This brings all sorts of pain because the broker
actually needs
to know what it sent, even though it tries to use sendfile as much as
possible.
2. Currently all the work is towards also making empty fetch request
across TCP sessions.

In this thread I aimed to relax our goals with regards to point 2.
Connection resets for us
are really the exceptions and I would argue, trying to introduce
complexity for sparing
1 full request on connection reset is not worth it. Therefore I argued to
keep the Server
side information with the Session instead of somewhere global. It's not going
to bring in the
results.

As the discussion unfolds I also want to challenge our approach for point
1.
I do not see a reason to introduce complexity (and especially on the fetch
answer path). Did we consider that from the
client we just send the offsets
we want to fetch and skip the topic partition description and just use the
order to match the information
on the broker side again? This would also reduce the fetch sizes a lot
while skipping a ton of complexity.

Hope these ideas are interesting

best Jan



On 01.12.2017 01:47, Becket Qin wrote:


Hi Colin,

Thanks for updating the KIP. I have two comments:

1. The session epoch seems to introduce some complexity. It would be good
if we don't have to maintain the epoch.
2. If all the partitions have data returned (even a few messages), the next
fetch would be equivalent to a full request. This means clusters with
continuously small throughput may not save much from the incremental
fetch.

I am wondering if we can avoid session epoch maintenance and address the
fetch efficiency in general with some modifications to the solution. Not
sure if the following would work, but just want to give my ideas.

To solve 1, the basic idea is to let the leader return the partition data
with its expected client position for each partition. If the client
disagrees with the leader's expectation, a full FetchRequest is then sent
to ask the leader to update the client's position.
To solve 2, when possible, we just let the leader infer the client's
position instead of asking the client to provide it, so the
incremental fetch can be empty in most cases.

More specifically, the protocol will have the following change.
1. Add a new flag called FullFetch to the FetchRequest.
 1) A full FetchRequest is the same as the current FetchRequest with
FullFetch=true.
 2) An incremental FetchRequest is always empty with FullFetch=false.
2. Add a new field called ExpectedPosition(INT64) to each partition data
in
the FetchResponse.

The leader logic:
1. The leader keeps a map from client-id (client-uuid) to the interested
partitions of that client. For each interested partition, the leader keeps
the client's position for that client.
2. When the leader receives a full fetch request (FullFetch=true), the
leader
  1) replaces the interested partitions for the client id with the
partitions in that full fetch request.
  2) updates the client position with the offset specified in that full
fetch request.
  3) if the client is a follower, update the high watermark, etc.
3. When the leader receives an incremental fetch request (typically
empty),
the leader returns the data from all the interested partitions (if any)
according to the position in the interested partitions map.
4. In the FetchResponse, the leader will include an ExpectedFetching

Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-12-01 Thread Jan Filipiak

BTW:

The shuffle problem would exist in all our solutions. An empty fetch
request would have the same issue about
what order to serve the topics and partitions in. So my suggestion does not
introduce this problem.


Best Jan

On 01.12.2017 08:29, Jan Filipiak wrote:

Hi,

this discussion is going a little bit far from what I intended this 
thread for.

I can see all of this being related.

To let you guys know what I am currently thinking is the following:

I do think the handling of IDs and epochs is rather complicated. I 
think the complexity
comes from aiming for too much.

1. Currently all the work is towards making fetchRequest
completely empty. This brings all sorts of pain because the 
broker actually needs
to know what it sent, even though it tries to use sendfile as much as 
possible.
2. Currently all the work is towards also making empty fetch request 
across TCP sessions.


In this thread I aimed to relax our goals with regards to point 2. 
Connection resets for us
are really the exceptions and I would argue, trying to introduce 
complexity for sparing
1 full request on connection reset is not worth it. Therefore I argued 
to keep the Server
side information with the Session instead of somewhere global. It's not 
going to bring in the
results.

As the discussion unfolds I also want to challenge our approach for 
point 1.

I do not see a reason to introduce complexity (and especially on the fetch
answer path). Did we consider that from the 
client we just send the offsets
we want to fetch and skip the topic partition description and just use 
the order to match the information
on the broker side again? This would also reduce the fetch sizes a lot 
while skipping a ton of complexity.


Hope these ideas are interesting

best Jan


On 01.12.2017 01:47, Becket Qin wrote:

Hi Colin,

Thanks for updating the KIP. I have two comments:

1. The session epoch seems to introduce some complexity. It would be 
good if
we don't have to maintain the epoch.
2. If all the partitions have data returned (even a few messages), the 
next
fetch would be equivalent to a full request. This means clusters 
with
continuously small throughput may not save much from the incremental 
fetch.


I am wondering if we can avoid session epoch maintenance and address the
fetch efficiency in general with some modifications to the solution. Not
sure if the following would work, but just want to give my ideas.

To solve 1, the basic idea is to let the leader return the partition 
data
with its expected client position for each partition. If the client
disagrees with the leader's expectation, a full FetchRequest is then 
sent to
ask the leader to update the client's position.
To solve 2, when possible, we just let the leader infer the client's
position instead of asking the client to provide it, so the
incremental fetch can be empty in most cases.

More specifically, the protocol will have the following change.
1. Add a new flag called FullFetch to the FetchRequest.
1) A full FetchRequest is the same as the current FetchRequest with
FullFetch=true.
2) An incremental FetchRequest is always empty with FullFetch=false.
2. Add a new field called ExpectedPosition(INT64) to each partition 
data in

the FetchResponse.

The leader logic:
1. The leader keeps a map from client-id (client-uuid) to the interested
partitions of that client. For each interested partition, the leader 
keeps

the client's position for that client.
2. When the leader receives a full fetch request (FullFetch=true), the
leader
 1) replaces the interested partitions for the client id with the
partitions in that full fetch request.
 2) updates the client position with the offset specified in that 
full

fetch request.
 3) if the client is a follower, update the high watermark, etc.
3. When the leader receives an incremental fetch request (typically 
empty),

the leader returns the data from all the interested partitions (if any)
according to the position in the interested partitions map.
4. In the FetchResponse, the leader will include an 
ExpectedFetchingOffset
that the leader thinks the client is fetching at. The value is the 
client
position of the partition in the interested partition map. This is 
just to
confirm with the client that the client position in the leader is 
correct.
5. After sending back the FetchResponse, the leader updates the 
position of

the client's interested partitions. (There may be some overhead for the
leader to know of offsets, but I think the trick of returning at index
entry boundary or log end will work efficiently).
6. The leader will expire the client interested partitions if the client
hasn't fetch for some time. And if an incremental request is received 
when
the map does not contain the client info, an error will be returned 
to the

client to ask for a FullFetch.

The clients logic:
1. Start with sending a full FetchRequest, including partitions and 
offsets.
2. When get a response, check the Ex
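
To make the bookkeeping above concrete, here is a rough sketch of the leader-side
state the proposal implies. All names (FetchSessionCache, ClientFetchState) are made
up for this mail and are not part of the KIP: a full fetch replaces the remembered
state, an empty incremental fetch is answered from it, and an unknown client is told
to send a full fetch.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; class and method names are invented here.
class ClientFetchState {
    // Interested partition -> position the leader believes the client has reached.
    final Map<String, Long> positions = new HashMap<>();
}

class FetchSessionCache {
    private final Map<String, ClientFetchState> stateByClientId = new HashMap<>();

    // FullFetch=true: replace the interested partitions and positions for this client.
    void handleFullFetch(String clientId, Map<String, Long> requestedOffsets) {
        ClientFetchState state = new ClientFetchState();
        state.positions.putAll(requestedOffsets);
        stateByClientId.put(clientId, state);
    }

    // FullFetch=false: the request carries no partitions, so serve from the
    // remembered positions; if the state expired, ask the client for a full fetch.
    Map<String, Long> handleIncrementalFetch(String clientId) {
        ClientFetchState state = stateByClientId.get(clientId);
        if (state == null)
            throw new IllegalStateException("Unknown client, a FullFetch is required");
        return state.positions;
    }

    // After a response is sent, advance the remembered position for each partition
    // (this is also where the ExpectedPosition returned to the client would come from).
    void onResponseSent(String clientId, Map<String, Long> newEndOffsets) {
        ClientFetchState state = stateByClientId.get(clientId);
        if (state != null)
            state.positions.putAll(newEndOffsets);
    }
}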

[GitHub] kafka pull request #4281: [WIP] KAFKA-5693 rationalise policy interfaces

2017-12-01 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/4281

[WIP] KAFKA-5693 rationalise policy interfaces

As described in KIP-201 (not yet accepted), this:

* deprecates the CreateTopicPolicy and AlterConfigPolicy 
* adds a new TopicManagementPolicy. 
* adds validateOnly() option to AdminClient.deleteTopics() and 
AdminClient.deleteRecords()

The existing policy tests are duplicated to test both old and new policy 
interfaces. A new DeleteRecordsRequestTest (and *WithPolicy subclass) are added 
to further test delete records with and without policy. A new 
DeleteTopicsRequestTestWithPolicy is added, subclassing the existing (but 
updated) DeleteTopicsRequestTest.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka 
KAFKA-5693-rationalise-policy-interfaces

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4281.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4281


commit e92fdb3e40529d275d431ba6bf1ce234da462223
Author: Tom Bentley 
Date:   2017-10-04T10:36:12Z

Add TopicManagementPolicy.java

commit a6c3f5e9697ed9dc29308b5126146f8a972fd0a9
Author: Tom Bentley 
Date:   2017-10-04T15:32:48Z

fixup

commit f18766de07f013332f9fb0778bb34bdf0ac66eaa
Author: Tom Bentley 
Date:   2017-10-05T13:30:03Z

Fixup TopicManagementPolicy

commit 186281d5345718ed9415543779b5fd6a28aba1a8
Author: Tom Bentley 
Date:   2017-10-05T13:30:40Z

Use TopicManagementPolicy in AdminManager, with back-compat with old 
policies. TESTS STILL PASS

commit 23b5b3404c6fc7d90eac326cc78313499599c5ba
Author: Tom Bentley 
Date:   2017-10-05T19:21:57Z

Minor fix. Note the change in AdminManager was prompted by a test failure 
that looked a lot like an assumed Map ordering problem

commit 728103664bf35d37b5ab57119fae128a4835a84d
Author: Tom Bentley 
Date:   2017-10-05T19:22:34Z

Deprecate tests for old policies and add equivalent tests for new policy

commit 6258b063e19809114df94bae30241796d35781b8
Author: Tom Bentley 
Date:   2017-10-06T11:03:37Z

Add validate_only to DeleteTopicsRequest and DeleteRecordsRequest

commit ae701f3a908b033943efedf53d8907a883ae570b
Author: Tom Bentley 
Date:   2017-10-06T11:16:54Z

Add TODO list

commit b4d933122bd1877184d2cd6340ddebd3fef1cd2f
Author: Tom Bentley 
Date:   2017-10-06T14:48:15Z

Add messages to responses

commit bedc161140b5e1578dfb3afe5265cdc1452152d2
Author: Tom Bentley 
Date:   2017-10-09T08:22:27Z

Implement support for validateOnly on the broker

commit 51e424bd9395d8b6ac1d5ca3ea5d8df32cab388f
Author: Tom Bentley 
Date:   2017-10-09T11:16:41Z

Policy checks for delete(topics|records) and move policy instantiation to 
KafkaServer

commit 5a025bd9b2a2559dd6f5b69b22ef1897a860f4d1
Author: Tom Bentley 
Date:   2017-10-09T11:17:00Z

fixup

commit 21f1709c9f83864d68d2f5b633c0c15a28d5bf55
Author: Tom Bentley 
Date:   2017-10-09T11:22:14Z

fix long lines

commit 51e9fcca3f097dab36fbfd0bf391276c104fbe03
Author: Tom Bentley 
Date:   2017-10-09T15:37:25Z

Add test for deleteTopics(validateOnly=true)

commit ed303a1827e20fc44356a4e662369db34db74b20
Author: Tom Bentley 
Date:   2017-10-10T09:24:52Z

Small fixup in DeleteRecordsResponse

commit efec8c875501eac54c2b05c532c87c5302daadfd
Author: Tom Bentley 
Date:   2017-10-10T09:25:35Z

Small fix in ClusterStateImpl

commit 1deba2cbd2383fe6acccdd2d8c45669b3e80bdc5
Author: Tom Bentley 
Date:   2017-10-10T09:28:33Z

Small fix in ClusterStateImpl

commit 33f1f82f53f52197968d424fd2f3984806379239
Author: Tom Bentley 
Date:   2017-10-10T09:29:17Z

Add DeleteRecordsRequestTest

commit 8fce29596fbc3472807a18f5145b674d9bd3fbcb
Author: Tom Bentley 
Date:   2017-10-10T09:30:06Z

Fix DeleteTopicsRequestTest for Errors -> ApiError and add a test with a 
policy.

commit feb6600f334da680584879c105677b7c89910172
Author: Tom Bentley 
Date:   2017-10-10T16:45:39Z

Version-aware responses

commit 5e7b8dd9430f5edcfeece509e16860c29ab13003
Author: Tom Bentley 
Date:   2017-10-10T16:47:29Z

Don't timeout a validateOnly DeleteTopicRequest

commit e3507e51514aef407763cd0e7415b8e19d9c7e82
Author: Tom Bentley 
Date:   2017-10-10T16:49:43Z

Handle errors pertaining to the topic specifically

commit 63ff19771bb4a0ecd0aabdd3277343925a3044f8
Author: Tom Bentley 
Date:   2017-10-10T16:51:46Z

Tests

commit 97550942e1582aa25e7b0bb3634aebe8655d9b78
Author: Tom Bentley 
Date:   2017-10-11T10:48:58Z

Add test with policy

commit 30f9246c9260993e21fa33426eeb83372cff37f9
Author: Tom Bentley 
Date

Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-12-01 Thread Tom Bentley
I have a PR for this (https://github.com/apache/kafka/pull/4281) in case
anyone wants to look at the implementation in detail, but right now this
KIP still lacks any committer votes.

Cheers,

Tom

On 22 November 2017 at 17:32, Tom Bentley  wrote:

> Hi everyone,
>
> I just wanted to highlight to committers that although this KIP has three
> non-binding votes, it currently lacks any binding votes. Any feedback would
> be appreciated.
>
> Cheers,
>
> Tom
>
> On 7 November 2017 at 20:42, Stephane Maarek  au> wrote:
>
>> Okay makes sense thanks! As you said maybe in the future (or now), it's
>> worth starting a server java dependency jar that's not called "client".
>> Probably a debate for another day (
>>
>> Tom, crossing fingers to see more votes on this! Good stuff
>>
>>
>> On 7/11/17, 9:51 pm, "Ismael Juma" > ism...@juma.me.uk> wrote:
>>
>> The idea is that you only depend on a Java jar. The core jar includes
>> the
>> Scala version in the name and you should not care about that when
>> implementing a Java interface.
>>
>> Ismael
>>
>> On Tue, Nov 7, 2017 at 10:37 AM, Stephane Maarek <
>> steph...@simplemachines.com.au> wrote:
>>
>> > Thanks !
>> >
>> > How about a java folder package in the core then ? It's not a
>> separate jar
>> > and it's still java?
>> >
>> > Nonetheless I agree these are details. I just got really confused
>> when
>> > trying to write my policy and would hope that confusion is not
>> shared by
>> > others because it's a "client" class although it should only reside
>> within a
>> > broker
>> >
>> > On 7 Nov. 2017 9:04 pm, "Ismael Juma"  wrote:
>> >
>> > The location of the policies is fine. Note that the package _does
>> not_
>> > include clients in the name. If we ever have enough server side
>> only code
>> > to merit a separate JAR, we can do that and it's mostly compatible
>> (users
>> > would only have to update their build dependency). Generally, all
>> public
>> > APIs going forward will be in Java.
>> >
>> > Ismael
>> >
>> > On Tue, Nov 7, 2017 at 9:44 AM, Stephane Maarek <
>> > steph...@simplemachines.com.au> wrote:
>> >
>> > > Hi Tom,
>> > >
>> > > Regarding the java / scala compilation, I believe this is fine
>> (the
>> > > compiler will know), but any reason why you don't want the policy
>> to be
>> > > implemented using Scala ? (like the Authorizer)
>> > > It's usually not best practice to mix in scala / java code.
>> > >
>> > > Thanks!
>> > > Stephane
>> > >
>> > > Kind regards,
>> > > Stephane
>> > >
>> > > [image: Simple Machines]
>> > >
>> > > Stephane Maarek | Developer
>> > >
>> > > +61 416 575 980
>> > > steph...@simplemachines.com.au
>> > > simplemachines.com.au
>> > > Level 2, 145 William Street, Sydney NSW 2010
>> > >
>> > > On 7 November 2017 at 20:27, Tom Bentley 
>> wrote:
>> > >
>> > > > Hi Stephane,
>> > > >
>> > > > The vote on this KIP is on-going.
>> > > >
>> > > > I think it would be OK to make minor changes, but Edoardo and
>> Mickael
>> > > would
>> > > > have to not disagree with them.
>> > > >
>> > > > The packages have not been brought up as a problem before now.
>> I don't
>> > > know
>> > > > the reason they're in the client's package, but I agree that
>> it's not
>> > > > ideal. To me the situation with the policies is analogous to the
>> > > situation
>> > > > with the Authorizer which is in core: They're both broker-side
>> > extensions
>> > > > points which users can provide their own implementations of. I
>> don't
>> > know
>> > > > whether the scala compiler is OK compiling interdependent scala
>> and
>> > java
>> > > > code (maybe Ismael knows?), but if it is, I would be happy if
>> these
>> > > > server-side policies were moved.
>> > > >
>> > > > Cheers,
>> > > >
>> > > > Tom
>> > > >
>> > > > On 7 November 2017 at 08:45, Stephane Maarek <
>> > > steph...@simplemachines.com.
>> > > > au
>> > > > > wrote:
>> > > >
>> > > > > Hi Tom,
>> > > > >
>> > > > > What's the status of this? I was about to create a KIP to
>> implement a
>> > > > > SimpleCreateTopicPolicy
>> > > > > (and Alter, etc...)
>> > > > > These policies would have some basic parameters to check
>> for
>> > > > > replication factor and min insync replicas (mostly) so that
>> end users
>> > > can
>> > > > > leverage them out of the box. This KIP obviously changes the
>> > interface
>> > > so
>> > > > > I'd like this to be in before I propose my KIP
>> > > > >
>> > > > > I'll add my +1 to this, and hopefully we get quick progress
>> so I can
>> > > > > propose my KIP.
>> > > > >
>> > > > > Finally, have the packages been discussed?
>> > > > > I find it extremely awkward 
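
For reference, the kind of out-of-the-box policy described above can already be written
against the existing CreateTopicPolicy interface from KIP-108. A minimal sketch follows;
the config key policy.min.replication.factor and the default of 3 are made up purely for
illustration and are not part of any KIP.

import java.util.Map;
import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

// Sketch of a "SimpleCreateTopicPolicy"-style check on replication factor.
public class MinReplicationFactorPolicy implements CreateTopicPolicy {
    private short minReplicationFactor = 3; // arbitrary default, for illustration only

    @Override
    public void configure(Map<String, ?> configs) {
        Object min = configs.get("policy.min.replication.factor"); // hypothetical config key
        if (min != null)
            minReplicationFactor = Short.parseShort(min.toString());
    }

    @Override
    public void validate(RequestMetadata requestMetadata) throws PolicyViolationException {
        Short replicationFactor = requestMetadata.replicationFactor();
        if (replicationFactor != null && replicationFactor < minReplicationFactor)
            throw new PolicyViolationException("Replication factor " + replicationFactor
                    + " is below the required minimum of " + minReplicationFactor);
    }

    @Override
    public void close() {}
}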

Re: [DISCUSS] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-12-01 Thread Tom Bentley
Hi Steven,

I'm particularly interested in seeing progress on this KIP as the work for
KIP-183 needs a public version of BiConsumer. Do you have any idea when the
KIP might be ready for voting?

Thanks,

Tom
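
For readers following the quoted thread below: the usability gap is that
KafkaFuture.Function is an abstract class, so callers cannot pass a lambda to
thenApply; a functional interface (FunctionInterface is the placeholder name from
Colin's mail further down, not an existing API) would allow it. A small sketch of
the difference:

import org.apache.kafka.common.KafkaFuture;

public class Kip218Illustration {
    // Today: thenApply takes KafkaFuture.Function, an abstract class, so an
    // anonymous subclass is required even for a one-line transformation.
    static KafkaFuture<Integer> withAbstractClass(KafkaFuture<String> future) {
        return future.thenApply(new KafkaFuture.Function<String, Integer>() {
            @Override
            public Integer apply(String value) {
                return value.length();
            }
        });
    }

    // With a functional interface as proposed in the KIP, the same call could be
    // written as a lambda (placeholder API, not yet in Kafka):
    //     return future.thenApply(value -> value.length());
}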

On 10 November 2017 at 13:38, Steven Aerts  wrote:

> Collin, Ben,
>
> Thanks for the input.
>
> I will work out this proposal, so I get an idea of the impact.
>
> Do you think it is a good idea to line up the new method names with those
> of CompletableFuture?
>
> Thanks,
>
>
>Steven
>
> Op vr 10 nov. 2017 om 12:12 schreef Ben Stopford :
>
> > Sounds like a good middle ground to me. What do you think Steven?
> >
> > On Mon, Nov 6, 2017 at 8:18 PM Colin McCabe  wrote:
> >
> > > It would definitely be nice to use the jdk8 CompletableFuture.  I think
> > > that's a bit of a separate discussion, though, since it has such heavy
> > > compatibility implications.
> > >
> > > How about making KIP-218 backwards compatible?  As a starting point,
> you
> > > can change KafkaFuture#BiConsumer to an interface with no compatibility
> > > implications, since there are currently no public functions exposed
> that
> > > use it.  That leaves KafkaFuture#Function, which is publicly used now.
> > >
> > > For the purposes of KIP-218, how about adding a new interface
> > > FunctionInterface?  Then you can add a function like this:
> > >
> > > >  public abstract <R> KafkaFuture<R> thenApply(FunctionInterface<T, R>
> > > function);
> > >
> > > And mark the older declaration as deprecated:
> > >
> > > >  @deprecated
> > > >  public abstract <R> KafkaFuture<R> thenApply(Function<T, R>
> function);
> > >
> > > This is a 100% compatible way to make things nicer for java 8.
> > >
> > > cheers,
> > > Colin
> > >
> > >
> > > On Thu, Nov 2, 2017, at 10:38, Steven Aerts wrote:
> > > > Hi Tom,
> > > >
> > > > Nice observation.
> > > > I changed "Rejected Alternatives" section to "Other Alternatives", as
> > > > I see myself as too much of an outsider to the kafka community to be
> > > > able to decide without this discussion.
> > > >
> > > > I see two major factors to decide:
> > > >  - how soon will KIP-118 (drop support of java 7) be implemented?
> > > >  - for which reasons do we drop backwards compatibility for public
> > > > interfaces marked as Evolving
> > > >
> > > > If KIP-118 which is scheduled for version 2.0.0 is going to be
> > > > implemented soon, I agree with you that replacing KafkaFuture with
> > > > CompletableFuture (or CompletionStage) is a preferable option.
> > > > But as I am not familiar with the roadmap it is difficult to tell for
> > me.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > >
> > > >Steven
> > > >
> > > >
> > > > 2017-11-02 11:27 GMT+01:00 Tom Bentley :
> > > > > Hi Steven,
> > > > >
> > > > > I notice you've renamed the template's "Rejected Alternatives"
> > section
> > > to
> > > > > "Other Alternatives", suggesting they're not rejected yet (or, if
> you
> > > have
> > > > > rejected them, I think you should give your reasons).
> > > > >
> > > > > Personally, I'd like to understand the arguments against simply
> > > replacing
> > > > > KafkaFuture with CompletableFuture in Kafka 2.0. In other words, if
> > we
> > > were
> > > > > starting without needing to support Java 7 what would be the
> > arguments
> > > for
> > > > > having our own KafkaFuture?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Tom
> > > > >
> > > > > On 1 November 2017 at 16:01, Ted Yu  wrote:
> > > > >
> > > > >> KAFKA-4423 is still open.
> > > > >> When would Java 7 be dropped ?
> > > > >>
> > > > >> Thanks
> > > > >>
> > > > >> On Wed, Nov 1, 2017 at 8:56 AM, Ismael Juma 
> > > wrote:
> > > > >>
> > > > >> > On Wed, Nov 1, 2017 at 3:51 PM, Ted Yu 
> > wrote:
> > > > >> >
> > > > >> > > bq. Wait for a kafka release which will not support java 7
> > anymore
> > > > >> > >
> > > > >> > > Do you want to raise a separate thread for the above ?
> > > > >> > >
> > > > >> >
> > > > >> > There is already a KIP for this so a separate thread is not
> > needed.
> > > > >> >
> > > > >> > Ismael
> > > > >> >
> > > > >>
> > >
> >
>


[jira] [Created] (KAFKA-6293) Support for Avro formatter in ConsoleConsumer With Confluent Schema Registry

2017-12-01 Thread Eric Thiebaut-George (JIRA)
Eric Thiebaut-George created KAFKA-6293:
---

 Summary: Support for Avro formatter in ConsoleConsumer With 
Confluent Schema Registry
 Key: KAFKA-6293
 URL: https://issues.apache.org/jira/browse/KAFKA-6293
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Eric Thiebaut-George


Add the ability to display Avro payloads when listening for messages in 
kafka-console-consumer.sh.

The proposed PR will display Avro payloads (in JSON) when executed with the 
following parameters:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic 
--confluent-server localhost:8081 --formatter kafka.tools.AvroMessageFormatter
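
The core of such a formatter would be Confluent's KafkaAvroDeserializer, which looks 
up the writer schema in the Schema Registry and returns a generic record whose string 
form is JSON. A rough sketch of that core (not the PR's actual code; the registry URL 
is the example value above):

{code}
import java.io.PrintStream;
import java.util.Collections;
import java.util.Map;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;

// Rough sketch only; the actual formatter in the proposed PR may differ.
public class AvroPayloadPrinter {
    private final KafkaAvroDeserializer deserializer = new KafkaAvroDeserializer();

    public AvroPayloadPrinter(String schemaRegistryUrl) {
        Map<String, String> config =
                Collections.singletonMap("schema.registry.url", schemaRegistryUrl);
        deserializer.configure(config, false); // false = configure for message values
    }

    public void write(String topic, byte[] value, PrintStream output) {
        // For generic Avro data this prints a JSON rendering of the record.
        Object record = deserializer.deserialize(topic, value);
        output.println(record);
    }
}
{code}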



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4282: KAFKA-6293 Support for Avro formatter in ConsoleCo...

2017-12-01 Thread ethiebaut
GitHub user ethiebaut opened a pull request:

https://github.com/apache/kafka/pull/4282

KAFKA-6293 Support for Avro formatter in ConsoleConsumer

Support for Avro formatter in ConsoleConsumer With Confluent Schema 
Registry as per https://issues.apache.org/jira/browse/KAFKA-6293

This adds the ability to display Avro payloads when listening for messages 
in kafka-console-consumer.sh.
This proposed PR will display Avro payloads (in JSON) when executed with 
the following parameters:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
mytopic --confluent-server localhost:8081 --formatter 
kafka.tools.AvroMessageFormatter

This contribution is my original work and I license the work to the project 
under the project's open source license.

All tests passed:
BUILD SUCCESSFUL in 21m 18s
90 actionable tasks: 60 executed, 30 up-to-date


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ethiebaut/kafka avro-deserializer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4282.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4282


commit a49102979ddd0183070e8aef2cf1cafb9ee317cd
Author: Eric Thiebaut-George 
Date:   2017-12-01T12:54:39Z

KAFKA-6293 Support for Avro formatter in ConsoleConsumer With Confluent 
Schema Registry




---


[GitHub] kafka pull request #4283: (WIP) KAFKA-6193: Flaky shouldPerformMultipleReass...

2017-12-01 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4283

(WIP) KAFKA-6193: Flaky 
shouldPerformMultipleReassignmentOperationsOverVariousTopics

I have been unable to reproduce it locally, so I enabled additional
logging while running it in Jenkins.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-6193-flaky-shouldPerformMultipleReassignmentOperationsOverVariousTopics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4283.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4283


commit 3bbbae786432275ed690733f69289fff76f78a9e
Author: Ismael Juma 
Date:   2017-12-01T15:21:56Z

KAFKA-6193: Flaky 
shouldPerformMultipleReassignmentOperationsOverVariousTopics




---


Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-12-01 Thread Rajini Sivaram
Hi all,

If there are no other suggestions or comments, I will start vote next week.
In the meantime, please feel free to review and add any suggestions or
improvements.

Regards,

Rajini
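
For anyone trying the KIP out while reviewing: once kafka-configs.sh is backed by the
AdminClient (KAFKA-5722, discussed in the quoted thread below), a dynamic broker update
is just an alterConfigs call against a BROKER ConfigResource, and invalid values are
rejected with an error instead of silently lingering in ZooKeeper. A minimal sketch;
the broker id, bootstrap address and the chosen config are illustrative:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicBrokerConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        try (AdminClient admin = AdminClient.create(props)) {
            // Target broker 0 and update one config assumed to be dynamically updatable.
            ConfigResource broker0 = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config update = new Config(Collections.singletonList(
                    new ConfigEntry("log.cleaner.threads", "2")));
            admin.alterConfigs(Collections.singletonMap(broker0, update)).all().get();
        }
    }
}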

On Wed, Nov 29, 2017 at 11:52 AM, Rajini Sivaram 
wrote:

> Hi Jason,
>
> Thanks for reviewing the KIP.
>
> I hadn't included *inter.broker.protocol.version*, but you have provided
> a good reason to do that in order to avoid an additional rolling restart
> during upgrade. I had included *log.message.format.version* along with
> other default topic configs, but it probably makes sense to do these two
> together.
>
>
> On Wed, Nov 29, 2017 at 12:00 AM, Jason Gustafson 
> wrote:
>
>> Hi Rajini,
>>
>> One quick question I was wondering about is whether this could be used to
>> update the inter-broker protocol version or the message format version?
>> Potentially then we'd only need one rolling restart to upgrade the
>> cluster.
>> I glanced quickly through the uses of this config in the code and it seems
>> like it might be possible. Not sure if there are any complications you or
>> others can think of.
>>
>> Thanks,
>> Jason
>>
>> On Tue, Nov 28, 2017 at 2:48 PM, Rajini Sivaram 
>> wrote:
>>
>> > Hi Colin,
>> >
>> > Thank you for reviewing the KIP.
>> >
>> > *kafka-configs.sh* will be converted to use *AdminClient* under
>> KAFKA-5722.
>> > This is targeted for the next release (1.1.0). Under this KIP, we will
>> > implement *AdminClient#alterConfigs* for the dynamic configs listed in
>> the
>> > KIP. This will include validation of the configs and will return
>> > appropriate errors if configs are invalid. Integration tests will also
>> be
>> > added using AdminClient. Only the actual conversion of ConfigCommand to
>> use
>> > AdminClient will be left to be done under KAFKA-5722.
>> >
>> > Once KAFKA-5722 is implemented, *kafka-configs.sh* can be used to obtain
>> the
>> > current configuration, which can be redirected to a text file to create
>> a
>> > static *server.properties* file. This should help when downgrading -
>> but it
>> > does require brokers to be running. We can also document how to obtain
>> the
>> > properties using *zookeeper-shell.sh* while downgrading if brokers are
>> > down.
>> >
>> > If we rename properties, we should add the new property to ZK based on
>> the
>> > value of the old property when the upgraded broker starts up. But we
>> would
>> > probably leave the old property as is. The old property will be returned
>> > and used as a synonym only as long as the broker is on a version where
>> it
>> > is still valid. But it can remain in ZK and be updated if downgrading -
>> it
>> > will be up to the user to update the old property if downgrading or
>> delete
>> > it if not needed. Renaming properties is likely to be confusing in any
>> case
>> > even without dynamic configs, so hopefully it will be very rare.
>> >
>> >
>> > Rajini
>> >
>> > On Tue, Nov 28, 2017 at 7:47 PM, Colin McCabe 
>> wrote:
>> >
>> > > Hi Rajini,
>> > >
>> > > This seems like a nice improvement!
>> > >
>> > > One thing that is a bit concerning is that, if bin/kafka-configs.sh is
>> > > used, there is no  way for the broker to give feedback or error
>> > > messages.  The broker can't say "sorry, I can't reconfigure that in
>> that
>> > > way."  Or even "that configuration property is not reconfigurable in
>> > > this version of the software."  It seems like in the examples given,
>> > > users will simply set a configuration using bin/kafka-configs.sh, but
>> > > then they have to check the broker log files to see if it could
>> actually
>> > > be applied.  And even if it couldn't be applied, then it still lingers
>> > > in ZooKeeper.
>> > >
>> > > This seems like it would lead to a lot of user confusion, since they
>> get
>> > > no feedback when reconfiguring something.  For example, there will be
>> a
>> > > lot of scenarios where someone finds a reconfiguration command on
>> > > Google, runs it, but then it doesn't do anything because the software
>> > > version is different.  And there's no error message or feedback about
>> > > this.  It just doesn't work.
>> > >
>> > > To prevent this, I think we should convert bin/kafka-configs.sh to use
>> > > AdminClient's AlterConfigsRequest.  This allows us to detect scenarios
>> > > where, because of a typo, different software version, or a value of
>> the
>> > > wrong type (eg. string vs. int), the given configuration cannot be
>> > > applied.  We really should convert kafka-configs.sh to use AdminClient
>> > > anyway, for all the usual reasons-- people want to lock down
>> ZooKeeper,
>> > > ACLs should be enforced, internal representations should be hidden, we
>> > > should support environments where ZK is not exposed, etc. etc.
>> > >
>> > > Another issue that I see here is, how does this interact with
>> downgrade?
>> > >  Presumably, if the user downgrades to a version that doesn't support
>> > > KIP-226, all the dynamic configurations stored in ZK revert to
>> whatever
>> > > v

Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-12-01 Thread Rajini Sivaram
Made one change to the KIP - I added a *validate* method to the
*Reconfigurable* interface so that we can validate new configs before they
are applied.

A couple of initial implementations:

   1. Dynamic updates of keystores:
   
https://github.com/rajinisivaram/kafka/commit/3071b686973a59a45546e9db6bdb05578d2edc0b
   2. Metrics reporter reconfiguration:
   
https://github.com/rajinisivaram/kafka/commit/7c0aa1ea1d81273fe6c082d47fff16208885d3df
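
To illustrate the shape of the contract with the new validation step: a component
advertises the configs it can change at runtime, gets a chance to validate a proposed
update, and only then is asked to apply it. The interface below is a local stand-in
for illustration only; the exact method names and package are whatever the KIP
finally specifies.

import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Stand-in interface for illustration; see the KIP for the real one.
interface ReconfigurableSketch {
    Set<String> reconfigurableConfigs();
    void validate(Map<String, ?> newConfigs);   // the newly added validation hook
    void reconfigure(Map<String, ?> newConfigs);
}

// Example component: a metrics reporter whose sampling window can be resized at runtime.
class SamplingMetricsReporter implements ReconfigurableSketch {
    private static final String WINDOW_CONFIG = "metrics.sample.window.seconds"; // illustrative key
    private volatile int sampleWindowSeconds = 30;

    @Override
    public Set<String> reconfigurableConfigs() {
        return Collections.singleton(WINDOW_CONFIG);
    }

    @Override
    public void validate(Map<String, ?> newConfigs) {
        Object value = newConfigs.get(WINDOW_CONFIG);
        if (value != null && Integer.parseInt(value.toString()) <= 0)
            throw new IllegalArgumentException(WINDOW_CONFIG + " must be positive");
    }

    @Override
    public void reconfigure(Map<String, ?> newConfigs) {
        Object value = newConfigs.get(WINDOW_CONFIG);
        if (value != null)
            sampleWindowSeconds = Integer.parseInt(value.toString());
    }
}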


On Fri, Dec 1, 2017 at 4:04 PM, Rajini Sivaram 
wrote:

> Hi all,
>
> If there are no other suggestions or comments, I will start vote next
> week. In the meantime, please feel free to review and add any suggestions
> or improvements.
>
> Regards,
>
> Rajini
>
> On Wed, Nov 29, 2017 at 11:52 AM, Rajini Sivaram 
> wrote:
>
>> Hi Jason,
>>
>> Thanks for reviewing the KIP.
>>
>> I hadn't included *inter.broker.protocol.version*, but you have provided
>> a good reason to do that in order to avoid an additional rolling restart
>> during upgrade. I had included *log.message.format.version* along with
>> other default topic configs, but it probably makes sense to do these two
>> together.
>>
>>
>> On Wed, Nov 29, 2017 at 12:00 AM, Jason Gustafson 
>> wrote:
>>
>>> Hi Rajini,
>>>
>>> One quick question I was wondering about is whether this could be used to
>>> update the inter-broker protocol version or the message format version?
>>> Potentially then we'd only need one rolling restart to upgrade the
>>> cluster.
>>> I glanced quickly through the uses of this config in the code and it
>>> seems
>>> like it might be possible. Not sure if there are any complications you or
>>> others can think of.
>>>
>>> Thanks,
>>> Jason
>>>
>>> On Tue, Nov 28, 2017 at 2:48 PM, Rajini Sivaram >> >
>>> wrote:
>>>
>>> > Hi Colin,
>>> >
>>> > Thank you for reviewing the KIP.
>>> >
>>> > *kafka-configs.sh* will be converted to use *AdminClient* under
>>> KAFKA-5722.
>>> > This is targeted for the next release (1.1.0). Under this KIP, we will
>>> > implement *AdminClient#alterConfigs* for the dynamic configs listed in
>>> the
>>> > KIP. This will include validation of the configs and will return
>>> > appropriate errors if configs are invalid. Integration tests will also
>>> be
>>> > added using AdminClient. Only the actual conversion of ConfigCommand to
>>> use
>>> > AdminClient will be left to be done under KAFKA-5722.
>>> >
>>> > Once KAFKA-5722 is implemented, *kafka-configs.sh* can be used to
>>> obtain the
>>> > current configuration, which can be redirected to a text file to
>>> create a
>>> > static *server.properties* file. This should help when downgrading -
>>> but it
>>> > does require brokers to be running. We can also document how to obtain
>>> the
>>> > properties using *zookeeper-shell.sh* while downgrading if brokers are
>>> > down.
>>> >
>>> > If we rename properties, we should add the new property to ZK based on
>>> the
>>> > value of the old property when the upgraded broker starts up. But we
>>> would
>>> > probably leave the old property as is. The old property will be
>>> returned
>>> > and used as a synonym only as long as the broker is on a version where
>>> it
>>> > is still valid. But it can remain in ZK and be updated if downgrading
>>> - it
>>> > will be up to the user to update the old property if downgrading or
>>> delete
>>> > it if not needed. Renaming properties is likely to be confusing in any
>>> case
>>> > even without dynamic configs, so hopefully it will be very rare.
>>> >
>>> >
>>> > Rajini
>>> >
>>> > On Tue, Nov 28, 2017 at 7:47 PM, Colin McCabe 
>>> wrote:
>>> >
>>> > > Hi Rajini,
>>> > >
>>> > > This seems like a nice improvement!
>>> > >
>>> > > One thing that is a bit concerning is that, if bin/kafka-configs.sh
>>> is
>>> > > used, there is no  way for the broker to give feedback or error
>>> > > messages.  The broker can't say "sorry, I can't reconfigure that in
>>> that
>>> > > way."  Or even "that configuration property is not reconfigurable in
>>> > > this version of the software."  It seems like in the examples given,
>>> > > users will simply set a configuration using bin/kafka-configs.sh, but
>>> > > then they have to check the broker log files to see if it could
>>> actually
>>> > > be applied.  And even if it couldn't be applied, then it still
>>> lingers
>>> > > in ZooKeeper.
>>> > >
>>> > > This seems like it would lead to a lot of user confusion, since they
>>> get
>>> > > no feedback when reconfiguring something.  For example, there will
>>> be a
>>> > > lot of scenarios where someone finds a reconfiguration command on
>>> > > Google, runs it, but then it doesn't do anything because the software
>>> > > version is different.  And there's no error message or feedback about
>>> > > this.  It just doesn't work.
>>> > >
>>> > > To prevent this, I think we should convert bin/kafka-configs.sh to
>>> use
>>> > > AdminClient's AlterConfigsRequest.  This allows us to detect
>>> scenarios
>>> > > where, because of a typo, different software version

[jira] [Created] (KAFKA-6294) ZkClient is not joining it's event thread during an interrupt

2017-12-01 Thread Zeynep Arikoglu (JIRA)
Zeynep Arikoglu created KAFKA-6294:
--

 Summary: ZkClient is not joining it's event thread during an 
interrupt
 Key: KAFKA-6294
 URL: https://issues.apache.org/jira/browse/KAFKA-6294
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.1
Reporter: Zeynep Arikoglu


ZkClient is not joining its event thread when there is an active interrupt.
There was a similar issue with the KafkaProducer thread which was partially solved 
(the thread join was fixed, but the interrupted state is still not propagated):
KAFKA-4767
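
The behaviour being asked for is the standard interrupt-handling pattern: on close,
join the event thread, and if the join itself is interrupted, restore the interrupted
status instead of swallowing it. A generic sketch (not ZkClient code):

{code}
// Generic illustration, not ZkClient code.
public class EventThreadOwner implements AutoCloseable {
    private final Thread eventThread = new Thread(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Thread.sleep(100); // stand-in for processing queued events
            }
        } catch (InterruptedException e) {
            // fall through: the thread is shutting down
        }
    }, "event-thread");

    public EventThreadOwner() {
        eventThread.start();
    }

    @Override
    public void close() {
        eventThread.interrupt();
        try {
            eventThread.join(); // actually wait for the event thread to finish
        } catch (InterruptedException e) {
            // Do not swallow the interrupt: restore the interrupted state so the
            // caller can still observe it.
            Thread.currentThread().interrupt();
        }
    }
}
{code}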



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-12-01 Thread Ted Yu
Thanks for the update.

bq. To enable this, the version of the broker will be added to the JSON
registered by each broker during startup at */brokers/ids/id*

*It seems the path has a typo. Should it be:*
*/config/brokers/id*

*Cheers*

On Fri, Dec 1, 2017 at 8:32 AM, Rajini Sivaram 
wrote:

> Made one change to the KIP - I added a *validate* method to the
> *Reconfigurable* interface so that we can validate new configs before they
> are applied.
>
> A couple of initial implementations:
>
>1. Dynamic updates of keystores:
>https://github.com/rajinisivaram/kafka/commit/
> 3071b686973a59a45546e9db6bdb05578d2edc0b
>2. Metrics reporter reconfiguration:
>https://github.com/rajinisivaram/kafka/commit/
> 7c0aa1ea1d81273fe6c082d47fff16208885d3df
>
>
> On Fri, Dec 1, 2017 at 4:04 PM, Rajini Sivaram 
> wrote:
>
> > Hi all,
> >
> > If there are no other suggestions or comments, I will start vote next
> > week. In the meantime, please feel free to review and add any suggestions
> > or improvements.
> >
> > Regards,
> >
> > Rajini
> >
> > On Wed, Nov 29, 2017 at 11:52 AM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > wrote:
> >
> >> Hi Jason,
> >>
> >> Thanks for reviewing the KIP.
> >>
> >> I hadn't included *inter.broker.protocol.version*, but you have
> provided
> >> a good reason to do that in order to avoid an additional rolling restart
> >> during upgrade. I had included *log.message.format.version* along with
> >> other default topic configs, but it probably makes sense to do these two
> >> together.
> >>
> >>
> >> On Wed, Nov 29, 2017 at 12:00 AM, Jason Gustafson 
> >> wrote:
> >>
> >>> Hi Rajini,
> >>>
> >>> One quick question I was wondering about is whether this could be used
> to
> >>> update the inter-broker protocol version or the message format version?
> >>> Potentially then we'd only need one rolling restart to upgrade the
> >>> cluster.
> >>> I glanced quickly through the uses of this config in the code and it
> >>> seems
> >>> like it might be possible. Not sure if there are any complications you
> or
> >>> others can think of.
> >>>
> >>> Thanks,
> >>> Jason
> >>>
> >>> On Tue, Nov 28, 2017 at 2:48 PM, Rajini Sivaram <
> rajinisiva...@gmail.com
> >>> >
> >>> wrote:
> >>>
> >>> > Hi Colin,
> >>> >
> >>> > Thank you for reviewing the KIP.
> >>> >
> > >>> > *kafka-configs.sh* will be converted to use *AdminClient* under
> >>> KAFKA-5722.
> >>> > This is targeted for the next release (1.1.0). Under this KIP, we
> will
> >>> > implement *AdminClient#alterConfigs* for the dynamic configs listed
> in
> >>> the
> >>> > KIP. This will include validation of the configs and will return
> >>> > appropriate errors if configs are invalid. Integration tests will
> also
> >>> be
> > >>> > added using AdminClient. Only the actual conversion of ConfigCommand
> to
> >>> use
> >>> > AdminClient will be left to be done under KAFKA-5722.
> >>> >
> > >>> > Once KAFKA-5722 is implemented, *kafka-configs.sh* can be used to
> >>> obtain the
> >>> > current configuration, which can be redirected to a text file to
> >>> create a
> >>> > static *server.properties* file. This should help when downgrading -
> >>> but it
> >>> > does require brokers to be running. We can also document how to
> obtain
> >>> the
> >>> > properties using *zookeeper-shell.sh* while downgrading if brokers
> are
> >>> > down.
> >>> >
> >>> > If we rename properties, we should add the new property to ZK based
> on
> >>> the
> >>> > value of the old property when the upgraded broker starts up. But we
> >>> would
> >>> > probably leave the old property as is. The old property will be
> >>> returned
> >>> > and used as a synonym only as long as the broker is on a version
> where
> >>> it
> >>> > is still valid. But it can remain in ZK and be updated if downgrading
> >>> - it
> >>> > will be up to the user to update the old property if downgrading or
> >>> delete
> >>> > it if not needed. Renaming properties is likely to be confusing in
> any
> >>> case
> >>> > even without dynamic configs, so hopefully it will be very rare.
> >>> >
> >>> >
> >>> > Rajini
> >>> >
> >>> > On Tue, Nov 28, 2017 at 7:47 PM, Colin McCabe 
> >>> wrote:
> >>> >
> >>> > > Hi Rajini,
> >>> > >
> >>> > > This seems like a nice improvement!
> >>> > >
> >>> > > One thing that is a bit concerning is that, if bin/kafka-configs.sh
> >>> is
> >>> > > used, there is no  way for the broker to give feedback or error
> >>> > > messages.  The broker can't say "sorry, I can't reconfigure that in
> >>> that
> >>> > > way."  Or even "that configuration property is not reconfigurable
> in
> >>> > > this version of the software."  It seems like in the examples given,
> >>> > > users will simply set a configuration using bin/kafka-configs.sh,
> but
> >>> > > then they have to check the broker log files to see if it could
> >>> actually
> >>> > > be applied.  And even if it couldn't be applied, then it still
> >>> lingers
> >>> > > in ZooKeeper.
> >>> > >
> >>> > > This seems like it would lead t

[jira] [Created] (KAFKA-6295) Add 'Coordinator Id' to consumer metrics

2017-12-01 Thread Ryan P (JIRA)
Ryan P created KAFKA-6295:
-

 Summary: Add 'Coordinator Id' to consumer metrics
 Key: KAFKA-6295
 URL: https://issues.apache.org/jira/browse/KAFKA-6295
 Project: Kafka
  Issue Type: Improvement
Reporter: Ryan P


It would be incredibly helpful to be able to review which broker was the 
coordinator for a consumer at a given point in time. The easiest way to achieve 
this in my opinion would be to expose a coordinator id metric. 
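
One possible shape for this, sketched against the client-side Metrics registry; the 
metric name, group and the way the coordinator id is supplied are made up here for 
illustration:

{code}
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Measurable;
import org.apache.kafka.common.metrics.Metrics;

// Illustration only: expose the current coordinator's broker id as a gauge-like metric.
public class CoordinatorIdMetric {
    private volatile int coordinatorId = -1; // -1 while no coordinator is known

    public void register(Metrics metrics) {
        MetricName name = metrics.metricName(
                "coordinator-id", "consumer-coordinator-metrics",
                "Broker id of the group coordinator, or -1 if unknown");
        Measurable value = (config, now) -> coordinatorId;
        metrics.addMetric(name, value);
    }

    public void onCoordinatorChange(int newCoordinatorId) {
        coordinatorId = newCoordinatorId;
    }
}
{code}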



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-12-01 Thread Rajini Sivaram
Hi Ted,

Thank you for the review. */config/brokers/id* contains the persistent
configuration of broker *id*. That will be configured using
kafka-configs.sh/AdminClient. */brokers/ids/id* contains the ephemeral
metadata registered by broker *id*. For the broker version, we don't want the
data to outlive the broker. Hence adding it to */brokers/ids/id*.

On Fri, Dec 1, 2017 at 5:38 PM, Ted Yu  wrote:

> Thanks for the update.
>
> bq. To enable this, the version of the broker will be added to the JSON
> registered by each broker during startup at */brokers/ids/id*
>
> *It seems the path has a typo. Should it be:*
> */config/brokers/id*
>
> *Cheers*
>
> On Fri, Dec 1, 2017 at 8:32 AM, Rajini Sivaram 
> wrote:
>
> > Made one change to the KIP - I added a *validate* method to the
> > *Reconfigurable* interface so that we can validate new configs before
> they
> > are applied.
> >
> > A couple of initial implementations:
> >
> >1. Dynamic updates of keystores:
> >https://github.com/rajinisivaram/kafka/commit/
> > 3071b686973a59a45546e9db6bdb05578d2edc0b
> >2. Metrics reporter reconfiguration:
> >https://github.com/rajinisivaram/kafka/commit/
> > 7c0aa1ea1d81273fe6c082d47fff16208885d3df
> >
> >
> > On Fri, Dec 1, 2017 at 4:04 PM, Rajini Sivaram 
> > wrote:
> >
> > > Hi all,
> > >
> > > If there are no other suggestions or comments, I will start vote next
> > > week. In the meantime, please feel free to review and add any
> suggestions
> > > or improvements.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Wed, Nov 29, 2017 at 11:52 AM, Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > wrote:
> > >
> > >> Hi Jason,
> > >>
> > >> Thanks for reviewing the KIP.
> > >>
> > >> I hadn't included *inter.broker.protocol.version*, but you have
> > provided
> > >> a good reason to do that in order to avoid an additional rolling
> restart
> > >> during upgrade. I had included *log.message.format.version* along with
> > >> other default topic configs, but it probably makes sense to do these
> two
> > >> together.
> > >>
> > >>
> > >> On Wed, Nov 29, 2017 at 12:00 AM, Jason Gustafson  >
> > >> wrote:
> > >>
> > >>> Hi Rajini,
> > >>>
> > >>> One quick question I was wondering about is whether this could be
> used
> > to
> > >>> update the inter-broker protocol version or the message format
> version?
> > >>> Potentially then we'd only need one rolling restart to upgrade the
> > >>> cluster.
> > >>> I glanced quickly through the uses of this config in the code and it
> > >>> seems
> > >>> like it might be possible. Not sure if there are any complications
> you
> > or
> > >>> others can think of.
> > >>>
> > >>> Thanks,
> > >>> Jason
> > >>>
> > >>> On Tue, Nov 28, 2017 at 2:48 PM, Rajini Sivaram <
> > rajinisiva...@gmail.com
> > >>> >
> > >>> wrote:
> > >>>
> > >>> > Hi Colin,
> > >>> >
> > >>> > Thank you for reviewing the KIP.
> > >>> >
> > >>> > *kafka-configs.sh* will be converted to use *AdminClient* under
> > >>> KAFKA-5722.
> > >>> > This is targeted for the next release (1.1.0). Under this KIP, we
> > will
> > >>> > implement *AdminClient#alterConfigs* for the dynamic configs listed
> > in
> > >>> the
> > >>> > KIP. This will include validation of the configs and will return
> > >>> > appropriate errors if configs are invalid. Integration tests will
> > also
> > >>> be
> > >>> > added using AdminClient. Only the actual conversion of ConfigCommand
> > to
> > >>> use
> > >>> > AdminClient will be left to be done under KAFKA-5722.
> > >>> >
> > >>> > Once KAFKA-5722 is implemented, *kafka-configs.sh* can be used to
> > >>> obtain the
> > >>> > current configuration, which can be redirected to a text file to
> > >>> create a
> > >>> > static *server.properties* file. This should help when downgrading
> -
> > >>> but it
> > >>> > does require brokers to be running. We can also document how to
> > obtain
> > >>> the
> > >>> > properties using *zookeeper-shell.sh* while downgrading if brokers
> > are
> > >>> > down.
> > >>> >
> > >>> > If we rename properties, we should add the new property to ZK based
> > on
> > >>> the
> > >>> > value of the old property when the upgraded broker starts up. But
> we
> > >>> would
> > >>> > probably leave the old property as is. The old property will be
> > >>> returned
> > >>> > and used as a synonym only as long as the broker is on a version
> > where
> > >>> it
> > >>> > is still valid. But it can remain in ZK and be updated if
> downgrading
> > >>> - it
> > >>> > will be up to the user to update the old property if downgrading or
> > >>> delete
> > >>> > it if not needed. Renaming properties is likely to be confusing in
> > any
> > >>> case
> > >>> > even without dynamic configs, so hopefully it will be very rare.
> > >>> >
> > >>> >
> > >>> > Rajini
> > >>> >
> > >>> > On Tue, Nov 28, 2017 at 7:47 PM, Colin McCabe 
> > >>> wrote:
> > >>> >
> > >>> > > Hi Rajini,
> > >>> > >
> > >>> > > This seems like a nice improvement!
> > >>> > >
> > >>> > > One thing that is a bit concernin

Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2017-12-01 Thread Ted Yu
Thanks for the explanation.

On Fri, Dec 1, 2017 at 10:13 AM, Rajini Sivaram 
wrote:

> Hi Ted,
>
> Thank you for the review. */config/brokers/id* contains the persistent
> configuration of broker *id*. That will be configured using
> kafka-configs.sh/AdminClient. */brokers/ids/id* contains the ephemeral
> metadata registered by broker *id*. For the broker version, we don't want the
> data to outlive the broker. Hence adding it to */brokers/ids/id*.
>
> On Fri, Dec 1, 2017 at 5:38 PM, Ted Yu  wrote:
>
> > Thanks for the update.
> >
> > bq. To enable this, the version of the broker will be added to the JSON
> > registered by each broker during startup at */brokers/ids/id*
> >
> > *It seems the path has a typo. Should it be:*
> > */config/brokers/id*
> >
> > *Cheers*
> >
> > On Fri, Dec 1, 2017 at 8:32 AM, Rajini Sivaram 
> > wrote:
> >
> > > Made one change to the KIP - I added a *validate* method to the
> > > *Reconfigurable* interface so that we can validate new configs before
> > they
> > > are applied.
> > >
> > > A couple of initial implementations:
> > >
> > >1. Dynamic updates of keystores:
> > >https://github.com/rajinisivaram/kafka/commit/
> > > 3071b686973a59a45546e9db6bdb05578d2edc0b
> > >2. Metrics reporter reconfiguration:
> > >https://github.com/rajinisivaram/kafka/commit/
> > > 7c0aa1ea1d81273fe6c082d47fff16208885d3df
> > >
> > >
> > > On Fri, Dec 1, 2017 at 4:04 PM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > If there are no other suggestions or comments, I will start vote next
> > > > week. In the meantime, please feel free to review and add any
> > suggestions
> > > > or improvements.
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > > On Wed, Nov 29, 2017 at 11:52 AM, Rajini Sivaram <
> > > rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > >> Hi Jason,
> > > >>
> > > >> Thanks for reviewing the KIP.
> > > >>
> > > >> I hadn't included *inter.broker.protocol.version*, but you have
> > > provided
> > > >> a good reason to do that in order to avoid an additional rolling
> > restart
> > > >> during upgrade. I had included *log.message.format.version* along
> with
> > > >> other default topic configs, but it probably makes sense to do these
> > two
> > > >> together.
> > > >>
> > > >>
> > > >> On Wed, Nov 29, 2017 at 12:00 AM, Jason Gustafson <
> ja...@confluent.io
> > >
> > > >> wrote:
> > > >>
> > > >>> Hi Rajini,
> > > >>>
> > > >>> One quick question I was wondering about is whether this could be
> > used
> > > to
> > > >>> update the inter-broker protocol version or the message format
> > version?
> > > >>> Potentially then we'd only need one rolling restart to upgrade the
> > > >>> cluster.
> > > >>> I glanced quickly through the uses of this config in the code and
> it
> > > >>> seems
> > > >>> like it might be possible. Not sure if there are any complications
> > you
> > > or
> > > >>> others can think of.
> > > >>>
> > > >>> Thanks,
> > > >>> Jason
> > > >>>
> > > >>> On Tue, Nov 28, 2017 at 2:48 PM, Rajini Sivaram <
> > > rajinisiva...@gmail.com
> > > >>> >
> > > >>> wrote:
> > > >>>
> > > >>> > Hi Colin,
> > > >>> >
> > > >>> > Thank you for reviewing the KIP.
> > > >>> >
> > > >>> > *kafka-configs.sh* will be converted to use *AdminClient* under
> > > >>> KAFKA-5722.
> > > >>> > This is targeted for the next release (1.1.0). Under this KIP, we
> > > will
> > > >>> > implement *AdminClient#alterConfigs* for the dynamic configs
> listed
> > > in
> > > >>> the
> > > >>> > KIP. This will include validation of the configs and will return
> > > >>> > appropriate errors if configs are invalid. Integration tests will
> > > also
> > > >>> be
> > > >>> > added using AdminClient. Only the actual conversion of
> ConfigCommand
> > > to
> > > >>> use
> > > >>> > AdminClient will be left to be done under KAFKA-5722.
> > > >>> >
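For illustration, a minimal sketch of what such a dynamic update could look like
through AdminClient#alterConfigs (the broker id and config name below are only
examples; which configs are dynamically updatable is defined by the KIP):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicBrokerConfigUpdate {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // target broker 0; "num.io.threads" is used purely as an example name
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config update = new Config(Collections.singleton(
                    new ConfigEntry("num.io.threads", "16")));
            // the broker validates the new values and returns per-resource errors
            admin.alterConfigs(Collections.singletonMap(broker, update)).all().get();
        }
    }
}
{code}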
> > > >>> > Once KAFKA-5722 is implemented, *kafka-configs.sh* can be used to
> > > >>> obtain the
> > > >>> > current configuration, which can be redirected to a text file to
> > > >>> create a
> > > >>> > static *server.properties* file. This should help when
> downgrading
> > -
> > > >>> but it
> > > >>> > does require brokers to be running. We can also document how to
> > > obtain
> > > >>> the
> > > >>> > properties using *zookeeper-shell.sh* while downgrading if
> brokers
> > > are
> > > >>> > down.
> > > >>> >
> > > >>> > If we rename properties, we should add the new property to ZK
> based
> > > on
> > > >>> the
> > > >>> > value of the old property when the upgraded broker starts up. But
> > we
> > > >>> would
> > > >>> > probably leave the old property as is. The old property will be
> > > >>> returned
> > > >>> > and used as a synonym only as long as the broker is on a version
> > > where
> > > >>> it
> > > >>> > is still valid. But it can remain in ZK and be updated if
> > downgrading
> > > >>> - it
> > > >>> > will be up to the user to update the old property if downgrading
> or
> > > >>> de

[jira] [Created] (KAFKA-6296) Transient failure in NetworkClientTest.testConnectionDelayDisconnected

2017-12-01 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6296:
--

 Summary: Transient failure in 
NetworkClientTest.testConnectionDelayDisconnected
 Key: KAFKA-6296
 URL: https://issues.apache.org/jira/browse/KAFKA-6296
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


{code}
java.lang.AssertionError: expected:<17048.0> but was:<23983.0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:553)
at org.junit.Assert.assertEquals(Assert.java:683)
at 
org.apache.kafka.clients.NetworkClientTest.testConnectionDelayDisconnected(NetworkClientTest.java:303)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4284: MINOR: imporve EOS docs

2017-12-01 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4284

MINOR: imporve EOS docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka minor-improve-eos-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4284.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4284


commit 7776d85f05ebffc8286eecfdfdf9c397ccbbb03d
Author: Matthias J. Sax 
Date:   2017-12-01T18:59:38Z

MINOR: imporve EOS docs




---


[GitHub] kafka pull request #4285: KAFKA-6296: Increase jitter to fix transient failu...

2017-12-01 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/4285

KAFKA-6296: Increase jitter to fix transient failure in 
NetworkClientTest.testConnectionDelayDisconnected

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-6296

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4285.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4285


commit 04dd99039670c4fbcf8f6030c41fa8f39297838c
Author: Jason Gustafson 
Date:   2017-12-01T19:09:27Z

KAFKA-6296: Increase jitter to fix transient failure in 
NetworkClientTest.testConnectionDelayDisconnected




---


[jira] [Created] (KAFKA-6297) Consumer fetcher should handle UnsupportedVersionException more diligently

2017-12-01 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-6297:


 Summary: Consumer fetcher should handle 
UnsupportedVersionException more diligently
 Key: KAFKA-6297
 URL: https://issues.apache.org/jira/browse/KAFKA-6297
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Guozhang Wang


Today, if the consumer is talking to a broker on an older version that does not 
support newer fetch versions, it will simply block without printing any warning 
logs. This is because when {{UnsupportedVersionException}} gets raised inside 
{{ConsumerNetworkClient}}, the {{Fetcher}}'s handling logic only logs it at debug 
level and moves on (and hence retries forever):

{code}
@Override
public void onFailure(RuntimeException e) {
    log.debug("Fetch request {} to {} failed",
        request.fetchData(), fetchTarget, e);
}
{code}

We should at least log {{UnsupportedVersionException}} specifically as WARN, 
or even let the consumer fail fast and gracefully upon this error.
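
As a sketch only (not an actual patch), the quoted callback could special-case the 
exception, for example:

{code}
@Override
public void onFailure(RuntimeException e) {
    if (e instanceof UnsupportedVersionException) {
        // surface the incompatibility loudly instead of silently retrying;
        // alternatively, propagate it so the consumer fails fast
        log.warn("Fetch request {} to {} failed because the broker does not support " +
                "the requested fetch version", request.fetchData(), fetchTarget, e);
    } else {
        log.debug("Fetch request {} to {} failed", request.fetchData(), fetchTarget, e);
    }
}
{code}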



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4276: KAFKA-6260: Ensure selection keys are removed from...

2017-12-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4276


---


Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-12-01 Thread Matt Farmer
Bump! It's been three days here and I haven't seen any further feedback.
Eager to get this resolved, approved, and merged. =)

On Tue, Nov 28, 2017 at 9:53 AM Matt Farmer  wrote:

> Hi there, sorry for the delay in responding. Last week had a holiday and
> several busy work days in it so I'm just now getting around to responding.
>
> > We would only exclude
> > exceptions Streams can handle itself (like ProducerFencedException) --
> > thus, if the handler has code to react to this, it would not be bad, as
> > this code is just never called.
> [...]
> > Thus, I am still in favor of not calling the ProductionExceptionHandler
> > for fatal exceptions.
>
> Acknowledged, so is ProducerFencedException the only kind of exception I
> need to change my behavior on? Or are there other types I need to check? Is
> there a comprehensive list somewhere?
>
> > About the "always continue" case. Sounds good to me to remove it (not
> > sure why we need it in the test package?)
>
> I include it in the test package because I have tests that assert that if
> the record collector impl encounters an Exception and receives a CONTINUE
> that it actually does CONTINUE.
>
> > What is the reasoning for invoking the handler only for the first
> > exception?
>
> I didn't want to invoke the handler in places where the CONTINUE or FAIL
> result would just be ignored. Presumably, after a FAIL has been returned,
> subsequent exceptions are likely to be repeats or noise from my
> understanding of the code paths. If this is incorrect we can revisit.
>
> Once I get the answers to these questions I can make another pass on the
> pull request!
>
> Matt
>
> On Mon, Nov 20, 2017 at 4:07 PM Matthias J. Sax 
> wrote:
>
>> Thanks for following up!
>>
>> One thought about an older reply from you:
>>
>>  I strongly disagree here. The purpose of this handler isn't *just* to
>>  make a decision for streams. There may also be desirable side
>> effects that
>>  users wish to cause when production exceptions occur. There may be
>>  side-effects that they wish to cause when AuthenticationExceptions
>> occur,
>>  as well. We should still give them the hooks to perform those side
>> effects.
>>
>> And your follow up:
>>
>> >>- I think I would rather invoke it for all exceptions that could
>> occur
>> >>from attempting to produce - even those exceptions were returning
>> CONTINUE
>> >>may not be a good idea (e.g. Authorization exception). Until there
>> is a
>> >>different type for exceptions that are totally fatal (for example a
>> >>FatalStreamsException or some sort), maintaining a list of
>> exceptions that
>> >>you can intercept with this handler and exceptions you cannot would
>> be
>> >>really error-prone and hard to keep correct.
>>
>> I understand what you are saying; however, consider that Streams needs
>> to die for a fatal exception. Thus, if you call the handler for a fatal
>> exception, we would need to ignore the return value and fail -- this
>> seems to be rather counter-intuitive. Furthermore, users can register an
>> uncaught-exception-handler and side effects for fatal errors can be
>> triggered there.
>>
>> Btw: an AuthorizationException is fatal -- not sure what you mean by a
>> "totally fatal" exception -- there is no superlative to fatal from my
>> understanding.
>>
>> About maintaining a list of exceptions: I don't think this is too hard,
>> and users also don't need to worry about it IMHO. We would only exclude
>> exceptions Streams can handle itself (like ProducerFencedException) --
>> thus, if the handler has code to react to this, it would not be bad, as
>> this code is just never called. In case there is an exception
>> Streams could actually handle but we still call the handler (i.e., a bug),
>> there should be no harm either -- the handler needs to return either
>> CONTINUE or FAIL and we would obey. It could only happen that Streams
>> dies, as requested by the user(!), even if Streams could actually handle
>> the error and move on. But this seems to be not an issue from my point of
>> view.
>>
>> Thus, I am still in favor of not calling the ProductionExceptionHandler
>> for fatal exceptions.
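
As a rough, self-contained illustration of the CONTINUE/FAIL contract under
discussion (placeholder names, not the final KIP-210 API; RecordTooLargeException
is just an example of a non-fatal send error a handler might choose to skip):

{code}
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;

// Hypothetical handler shape, for illustration only.
interface ProductionExceptionHandlerSketch {
    enum Response { CONTINUE, FAIL }
    Response handle(ProducerRecord<byte[], byte[]> record, Exception exception);
}

class SkipOversizedRecordsHandler implements ProductionExceptionHandlerSketch {
    @Override
    public Response handle(ProducerRecord<byte[], byte[]> record, Exception exception) {
        // side effects (metrics, alerting) can be triggered here before deciding
        if (exception instanceof RecordTooLargeException)
            return Response.CONTINUE;  // drop this record and keep processing
        return Response.FAIL;          // anything else: let Streams die
    }
}
{code}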
>>
>>
>>
>> About the "always continue" case. Sounds good to me to remove it (not
>> sure why we need it in the test package?) and to rename the "failing"
>> handler to "Default" (even if "default" is less descriptive and I would
>> still prefer "Fail" in the name).
>>
>>
>> Last question:
>>
>> >>   - Continue to *only* invoke it on the first exception that we
>> >>   encounter (before sendException is set)
>>
>>
>> What is the reasoning for invoking the handler only for the first
>> exception?
>>
>>
>>
>>
>> -Matthias
>>
>> On 11/20/17 10:43 AM, Matt Farmer wrote:
>> > Alright, here are some updates I'm planning to make after thinking on
>> this
>> > for awhile:
>> >
>> >- Given that the "always continue" handler isn't something I'd
>> recommend
>> >for production use as is, I'm going to 

Re: [DISCUSS] KIP-227: Introduce Incremental FetchRequests to Increase Partition Scalability

2017-12-01 Thread Dong Lin
On Thu, Nov 30, 2017 at 9:37 AM, Colin McCabe  wrote:

> On Wed, Nov 29, 2017, at 18:59, Dong Lin wrote:
> > Hey Colin,
> >
> > Thanks much for the update. I have a few questions below:
> >
> > 1. I am not very sure that we need Fetch Session Epoch. It seems that
> > Fetch
> > Session Epoch is only needed to help the leader distinguish between "a full
> > fetch request" and "a full fetch request and request a new incremental
> > fetch session". Alternatively, the follower can also indicate "a full fetch
> > request and request a new incremental fetch session" by setting Fetch
> > Session ID to -1 without using Fetch Session Epoch. Does this make sense?
>
> Hi Dong,
>
> The fetch session epoch is very important for ensuring correctness.  It
> prevents corrupted or incomplete fetch data due to network reordering or
> loss.
>
> For example, consider a scenario where the follower sends a fetch
> request to the leader.  The leader responds, but the response is lost
> because of network problems which affected the TCP session.  In that
> case, the follower must establish a new TCP session and re-send the
> incremental fetch request.  But the leader does not know that the
> follower didn't receive the previous incremental fetch response.  It is
> only the incremental fetch epoch which lets the leader know that it
> needs to resend that data, and not data which comes afterwards.
>
> You could construct similar scenarios with message reordering,
> duplication, etc.  Basically, this is a stateful protocol on an
> unreliable network, and you need to know whether the follower got the
> previous data you sent before you move on.  And you need to handle
> issues like duplicated or delayed requests.  These issues do not affect
> the full fetch request, because it is not stateful-- any full fetch
> request can be understood and properly responded to in isolation.
>

Thanks for the explanation. This makes sense. On the other hand I would be
interested in learning more about whether Becket's solution can help
simplify the protocol by not having the epoch field and whether that is
worth doing.
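
To make the epoch argument concrete, here is an illustrative, self-contained sketch
(hypothetical types, not the KIP-227 implementation) of how a per-session epoch lets
the leader tell a normal next request apart from a retry of a request whose response
was lost:

{code}
// Illustration only -- hypothetical types, not the actual protocol code.
enum FetchPlan { SEND_NEW_DATA, RESEND_LAST_RESPONSE, INVALID_SESSION }

final class FetchSessionEpochTracker {
    private int expectedEpoch = 1;  // epoch the leader expects in the next request

    synchronized FetchPlan onIncrementalFetch(int requestEpoch) {
        if (requestEpoch == expectedEpoch) {
            expectedEpoch++;                        // follower saw the previous response
            return FetchPlan.SEND_NEW_DATA;
        }
        if (requestEpoch == expectedEpoch - 1) {
            return FetchPlan.RESEND_LAST_RESPONSE;  // previous response was lost in transit
        }
        return FetchPlan.INVALID_SESSION;           // stale/duplicate: fall back to a full fetch
    }
}
{code}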



>
> >
> > 2. It is said that Incremental FetchRequest will include partitions whose
> > fetch offset or maximum number of fetch bytes has changed. If the
> > follower's logStartOffset of a partition has changed, should this
> > partition also be included in the next FetchRequest to the leader?
> Otherwise, it
> > may affect the handling of DeleteRecordsRequest because the leader may not
> know
> > the corresponding data has been deleted on the follower.
>
> Yeah, the follower should include the partition if the logStartOffset
> has changed.  That should be spelled out on the KIP.  Fixed.
>
> >
> > 3. In the section "Per-Partition Data", a partition is not considered
> > dirty if its log start offset has changed. Later in the section
> "FetchRequest
> > Changes", it is said that incremental fetch responses will include a
> > partition if its logStartOffset has changed. It seems inconsistent. Can
> > you update the KIP to clarify it?
> >
>
> In the "Per-Partition Data" section, it does say that logStartOffset
> changes make a partition dirty, though, right?  The first bullet point
> is:
>
> > * The LogCleaner deletes messages, and this changes the log start offset
> of the partition on the leader, or
>

Ah I see. I think I didn't notice this because the statement assumes that the
LogStartOffset in the leader only changes due to LogCleaner. In fact the
LogStartOffset can change on the leader due to either log retention or
DeleteRecordsRequest. I haven't verified whether LogCleaner can change
LogStartOffset though. It may be a bit better to just say that a partition
is considered dirty if LogStartOffset changes.


>
> > 4. In "Fetch Session Caching" section, it is said that each broker has a
> > limited number of slots. How is this number determined? Does this require
> > a new broker config for this number?
>
> Good point.  I added two broker configuration parameters to control this
> number.
>

I am curious to see whether we can avoid some of these new configs. For
example, incremental.fetch.session.cache.slots.per.broker is probably not
necessary because if a leader knows that a FetchRequest comes from a
follower, we probably want the leader to always cache the information from
that follower. Does this make sense?

Maybe we can discuss the config later after there is agreement on how the
protocol would look like.


>
> > What is the error code if broker does
> > not have new log for the incoming FetchRequest?
>
> Hmm, is there a typo in this question?  Maybe you meant to ask what
> happens if there is no new cache slot for the incoming FetchRequest?
> That's not an error-- the incremental fetch session ID just gets set to
> 0, indicating no incremental fetch session was created.
>

Yeah there is a typo. You have answered my question.


>
> >
> > 5. Can you clarify what happens if follower adds a partition to the
> > ReplicaFetcherThread after recei

Build failed in Jenkins: kafka-trunk-jdk8 #2251

2017-12-01 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6260; Ensure selection keys are removed from all collections on

--
[...truncated 1.42 MB...]

org.apache.kafka.common.config.ConfigDefTest > testValidateCannotParse STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidateCannotParse PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidate STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidate PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testInternalConfigDoesntShowUpInDocs STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testInternalConfigDoesntShowUpInDocs PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords STARTED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > toRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig STARTED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
PASSED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice STARTED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass STARTED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs STARTED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
STARTED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
STARTED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > 
testValuesWithPrefixOverride STARTED

org.apache.kafka.common.config.AbstractConfigTest > 
testValuesWithPrefixOverri

Jenkins build is back to normal : kafka-trunk-jdk9 #232

2017-12-01 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6296) Transient failure in NetworkClientTest.testConnectionDelayDisconnected

2017-12-01 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6296.

   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4285
[https://github.com/apache/kafka/pull/4285]

> Transient failure in NetworkClientTest.testConnectionDelayDisconnected
> --
>
> Key: KAFKA-6296
> URL: https://issues.apache.org/jira/browse/KAFKA-6296
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>  Labels: test-failure
> Fix For: 1.1.0
>
>
> {code}
> java.lang.AssertionError: expected:<17048.0> but was:<23983.0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:553)
>   at org.junit.Assert.assertEquals(Assert.java:683)
>   at 
> org.apache.kafka.clients.NetworkClientTest.testConnectionDelayDisconnected(NetworkClientTest.java:303)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4285: KAFKA-6296: Increase jitter to fix transient failu...

2017-12-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4285


---


Build failed in Jenkins: kafka-trunk-jdk7 #3012

2017-12-01 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6260; Ensure selection keys are removed from all collections on

--
[...truncated 408.39 KB...]

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthori

[GitHub] kafka pull request #4286: Minor fix broker compatibility tests

2017-12-01 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/4286

Minor fix broker compatibility tests

*More detailed description of your change,
if necessary. The PR title and PR message become
the squashed commit message, so use a separate
comment to ping reviewers.*

*Summary of testing strategy (including rationale)
for the feature or bug fix. Unit and/or integration
tests are expected for any behaviour change and
system tests should be considered for larger changes.*

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka 
MINOR_fix_broker_compatibility_tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4286.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4286


commit 104f5f6d42296e11d3a8a3f20559db53aed9a028
Author: Bill Bejeck 
Date:   2017-11-30T21:01:47Z

Changes for BrokerCompatibilityTest in system tests

commit 8949fe4c3f315e378ace88358157192198917367
Author: Bill Bejeck 
Date:   2017-12-01T00:40:36Z

updated error messages added condition for older brokers timeout

commit 5c17ec897c56e70e050ddd7104f8323b11e91e7f
Author: Bill Bejeck 
Date:   2017-12-01T04:12:25Z

updated error messages added condition for older brokers timeout

commit fc4dc13a05d4424789f6a462bfb318abb18d202f
Author: Bill Bejeck 
Date:   2017-12-01T12:19:51Z

increase timeout, flush System.err

commit 683d96b4b05445f387dc8c9dbdb8513be50387e0
Author: Bill Bejeck 
Date:   2017-12-01T12:50:17Z

updated error message

commit 22ad2b1d0026fb17a6afb28121077b7c0dacba29
Author: Bill Bejeck 
Date:   2017-12-01T13:33:37Z

increase timeout

commit 3581d6006c46dc704c6efa303eea6d2a757f5d62
Author: Bill Bejeck 
Date:   2017-12-01T17:17:17Z

put back DEV_BRANCH taken out by accident

commit 899c828dab0e3185670f308cf8474a407b864837
Author: Bill Bejeck 
Date:   2017-12-01T17:43:20Z

Reverting kafkatest.version imports

commit 705463f5bca44e58d3b1370f0efba850a00287d2
Author: Bill Bejeck 
Date:   2017-12-01T18:15:05Z

Trying a portion of the error message

commit 119dc5857d68f74ea38a6733b2ac42d9e1c12514
Author: Bill Bejeck 
Date:   2017-12-01T19:38:08Z

ignore pre 10 brokers test, set timeout back to original value

commit a5558aff1ac6a58a8cfc4c96f3daed4f5da05565
Author: Bill Bejeck 
Date:   2017-12-01T20:16:46Z

proper use of ignore

commit 49e7d178516c37cc0e93fea3304dc5ae041e26e2
Author: Bill Bejeck 
Date:   2017-12-01T20:32:00Z

 proper use of ignore round two

commit ab22c2b96cabeb98d92860794114b56db6bcecc9
Author: Bill Bejeck 
Date:   2017-12-01T20:50:56Z

put @ignore first




---


Build failed in Jenkins: kafka-trunk-jdk8 #2252

2017-12-01 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6296; Increase jitter to fix transient failure in

--
[...truncated 409.25 KB...]

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProdu

Build failed in Jenkins: kafka-trunk-jdk7 #3013

2017-12-01 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6296; Increase jitter to fix transient failure in

--
[...truncated 403.30 KB...]

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > t

[jira] [Reopened] (KAFKA-3508) Transient failure in kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls

2017-12-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-3508:
--
  Assignee: (was: Grant Henke)

> Transient failure in 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> --
>
> Key: KAFKA-3508
> URL: https://issues.apache.org/jira/browse/KAFKA-3508
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 1.0.0
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> {code}
> Stacktrace
> java.lang.AssertionError: Should support many concurrent calls failed with 
> exception(s) ArrayBuffer(java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Failed to update ACLs for Topic:test after 
> trying a maximum of 10 times)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at kafka.utils.TestUtils$.assertConcurrent(TestUtils.scala:1123)
>   at 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls(SimpleAclAuthorizerTest.scala:335)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableE

Re: [VOTE] KIP-203: Add toLowerCase support to sasl.kerberos.principal.to.local rule

2017-12-01 Thread Guozhang Wang
Made a pass over the KIP. +1.

Thanks!


Guozhang

On Wed, Nov 29, 2017 at 3:30 PM, Jason Gustafson  wrote:

> +1 Thanks for the KIP!
>
> On Fri, Nov 3, 2017 at 10:08 AM, Manikumar 
> wrote:
>
> > Bump up. waiting for few more binding votes.
> >
> > On Wed, Oct 18, 2017 at 6:57 PM, Rajini Sivaram  >
> > wrote:
> >
> > > +1 (binding)
> > >
> > > On Mon, Oct 9, 2017 at 5:32 PM, Manikumar 
> > > wrote:
> > >
> > > > I'm bumping this up to get some attention :)
> > > >
> > > > On Wed, Sep 27, 2017 at 8:46 PM, Tom Bentley 
> > > > wrote:
> > > >
> > > > > +1 (nonbinding)
> > > > >
> > > > > On 27 September 2017 at 16:10, Manikumar <
> manikumar.re...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I'd like to start the vote on KIP-203. Details are here:
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 203%3A+Add+toLowerCase+support+to+sasl.kerberos.
> > > > principal.to.local+rule
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > >
> > > >
> > >
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #4284: MINOR: imporve EOS docs

2017-12-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4284


---


[jira] [Resolved] (KAFKA-6136) Transient test failure: SaslPlainSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials

2017-12-01 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6136.

Resolution: Duplicate

> Transient test failure: 
> SaslPlainSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials
> --
>
> Key: KAFKA-6136
> URL: https://issues.apache.org/jira/browse/KAFKA-6136
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
> Fix For: 1.1.0
>
>
> Looks like a cleanup issue:
> {code}
> testTwoConsumersWithDifferentSaslCredentials – 
> kafka.api.SaslPlainSslEndToEndAuthorizationTest
> a few seconds
> Error
> org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to 
> access group: group
> Stacktrace
> org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to 
> access group: group
> Standard Output
> [2017-10-27 00:37:47,919] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:admin has Allow permission for operations: ClusterAction from 
> hosts: * 
> Current ACLs for resource `Cluster:kafka-cluster`: 
>   User:admin has Allow permission for operations: ClusterAction from 
> hosts: * 
> [2017-10-27 00:37:48,961] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 0 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> [2017-10-27 00:37:48,967] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 0 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> Adding ACLs for resource `Topic:*`: 
>   User:admin has Allow permission for operations: Read from hosts: * 
> Current ACLs for resource `Topic:*`: 
>   User:admin has Allow permission for operations: Read from hosts: * 
> [2017-10-27 00:37:52,330] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-10-27 00:37:52,345] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:admin has Allow permission for operations: ClusterAction from 
> hosts: * 
> Current ACLs for resource `Cluster:kafka-cluster`: 
>   User:admin has Allow permission for operations: ClusterAction from 
> hosts: * 
> [2017-10-27 00:37:53,459] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 0 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> [2017-10-27 00:37:53,462] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 0 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> Adding ACLs for resource `Topic:*`: 
>   User:admin has Allow permission for operations: Read from hosts: * 
> Current ACLs for resource `Topic:*`: 
>   User:admin has Allow permission for operations: Read from hosts: * 
> Adding ACLs for resource `Topic:e2etopic`: 
>   User:user has Allow permission for operations: Write from hosts: *
>   User:user has Allow permission for operations: Describe from hosts: * 
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:user has Allow permission for operations: Create from hosts: * 
> Current ACLs for resource `Topic:e2etopic`: 
>   User:user has Allow permission for operations: Write from hosts: *
>   User:user has Allow permission for operations: Describe from hosts: * 
> Adding ACLs for resource `Topic:e2etopic`: 
>   User:user has Allow permission for operations: Read from hosts: *
>   User:user has Allow permission for operations: Describe from hosts: * 
> Adding ACLs for resource `Group:group`: 
>   User:user has Allow permission for operations: Read from hosts: * 
> Current ACLs for resource `Topic:e2etopic`: 
>   User:user has Allow permission for

[jira] [Created] (KAFKA-6298) Line numbers on log messages are incorrect

2017-12-01 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-6298:
--

 Summary: Line numbers on log messages are incorrect
 Key: KAFKA-6298
 URL: https://issues.apache.org/jira/browse/KAFKA-6298
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe


The line numbers on log messages are all incorrect now.

For example, AdminClient should have this log message on line 394:
{code}
394 log.debug("Kafka admin client initialized")
{code}

But instead, it shows up as being on line 177:
{code}
[2017-12-01 15:42:18,710] DEBUG [AdminClient clientId=adminclient-1] Kafka 
admin client initialized (org.apache.kafka.clients.admin.KafkaAdminClient:177)
{code}

The line numbers appear to be coming from {{LogContext.java}}:
{code}
174    @Override
175    public void debug(String message) {
176        if (logger.isDebugEnabled())
177            logger.debug(logPrefix + message);
178    }
{code}
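
One possible direction (a sketch assuming the underlying logger is an slf4j
LocationAwareLogger; not necessarily the eventual fix for this ticket) is to pass
the wrapper's class name as the fully-qualified caller name, so the logging
framework skips the wrapper frame when resolving the caller location:

{code}
import org.slf4j.Logger;
import org.slf4j.spi.LocationAwareLogger;

final class LocationAwareLogContextSketch {
    // frames belonging to this class name are skipped when locating the caller
    private static final String FQCN = LocationAwareLogContextSketch.class.getName();

    private final Logger logger;
    private final String logPrefix;

    LocationAwareLogContextSketch(Logger logger, String logPrefix) {
        this.logger = logger;
        this.logPrefix = logPrefix;
    }

    void debug(String message) {
        if (!logger.isDebugEnabled())
            return;
        if (logger instanceof LocationAwareLogger) {
            ((LocationAwareLogger) logger).log(null, FQCN,
                    LocationAwareLogger.DEBUG_INT, logPrefix + message, null, null);
        } else {
            logger.debug(logPrefix + message);  // falls back to today's behaviour
        }
    }
}
{code}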



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-0.11.0-jdk7 #342

2017-12-01 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Improve EOS related config docs

--
[...truncated 2.45 MB...]
org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED

org.apache.kafka.

Build failed in Jenkins: kafka-trunk-jdk7 #3014

2017-12-01 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Improve EOS related config docs

--
[...truncated 406.45 KB...]
kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

Jenkins build is back to normal : kafka-trunk-jdk8 #2253

2017-12-01 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-232: Detect outdated metadata by adding ControllerMetadataEpoch field

2017-12-01 Thread Dong Lin
Hi all,

I have created KIP-232: Detect outdated metadata by adding
ControllerMetadataEpoch field:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-232%3A+Detect+outdated+metadata+by+adding+ControllerMetadataEpoch+field
.

The KIP proposes to add fields to MetadataResponse and UpdateMetadataRequest
so that clients can reject outdated metadata and avoid unnecessary
OffsetOutOfRangeExceptions. Without this, there is a race condition that can
cause a consumer to reset its offset, which negatively affects the consumer's
availability.
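
As a rough sketch of the idea (the class and field names below are
hypothetical, not the KIP's actual API), a client could track the highest
controller metadata epoch it has accepted and ignore any metadata response
that carries an older epoch:

{code}
// Rough sketch only: class and field names are hypothetical, not KIP-232's API.
// The client remembers the highest controller metadata epoch it has accepted
// and ignores any metadata response that carries an older epoch.
public final class MetadataEpochGuard {

    private long highestAcceptedEpoch = -1L;

    /**
     * Returns true if the metadata response should be applied, or false if it
     * is stale (its epoch is lower than one that was already accepted).
     */
    public synchronized boolean acceptIfCurrent(long controllerMetadataEpoch) {
        if (controllerMetadataEpoch < highestAcceptedEpoch) {
            return false; // outdated metadata: drop it instead of resetting offsets
        }
        highestAcceptedEpoch = controllerMetadataEpoch;
        return true;
    }
}
{code}

Since the controller epoch only moves forward, keeping the maximum seen so far
is enough to filter out responses produced by an older controller.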

Feedback and suggestions are welcome!

Regards,
Dong


[GitHub] kafka pull request #4287: KAFKA-6118: Fix transient failure testTwoConsumers...

2017-12-01 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/4287

KAFKA-6118: Fix transient failure 
testTwoConsumersWithDifferentSaslCredentials

It's rare, but it can happen that the initial FindCoordinator request returns 
before the first Metadata request does. Either authorization error is fine for 
this test case.
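
As an illustrative sketch (the helper name is hypothetical and the actual test
is written in Scala), the check can simply accept either a group or a topic
authorization error:

{code}
// Illustrative helper only (hypothetical name; the real test is written in Scala).
// Depending on whether FindCoordinator or the first Metadata request fails first,
// the consumer may surface either a group or a topic authorization error, and the
// test should treat both as a pass.
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.GroupAuthorizationException;
import org.apache.kafka.common.errors.TopicAuthorizationException;

import static org.junit.Assert.assertTrue;

public class AuthorizationFailureAssertion {
    static void assertAuthorizationFailure(KafkaException e) {
        assertTrue("Expected a group or topic authorization error but got: " + e,
                e instanceof GroupAuthorizationException
                        || e instanceof TopicAuthorizationException);
    }
}
{code}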

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-6118

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4287.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4287


commit 334cd1d9150351f38c736efc0ae364bd9698e8c5
Author: Jason Gustafson 
Date:   2017-12-02T06:03:16Z

KAFKA-6118: Fix transient failure 
testTwoConsumersWithDifferentSaslCredentials




---


Re: Doc says when doing re-balance, sort by leader then partition, but the code seems sort only on partition

2017-12-01 Thread Guozhang Wang
Hello Xiang,

I took a look at the PartitionAssignor and I agree with you that the doc now
refers to an older version of the code. It should be updated.

BTW, this logic refers to the old ZK-dependent consumer behavior, which has
been deprecated and will likely be removed in a future major release, so we'd
probably just remove this part of the web docs when we do that.
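
As a minimal Java sketch of the two orderings under discussion (the
PartitionInfo type below is hypothetical, not a real Kafka class, and the
actual RangeAssignor is written in Scala):

{code}
// Illustrative sketch only; the hypothetical PartitionInfo type is not a real
// Kafka class and the actual RangeAssignor is written in Scala.
import java.util.Comparator;
import java.util.List;

class PartitionSortOrders {

    static final class PartitionInfo {
        final int leaderBrokerId;
        final int partitionId;

        PartitionInfo(int leaderBrokerId, int partitionId) {
            this.leaderBrokerId = leaderBrokerId;
            this.partitionId = partitionId;
        }
    }

    // What the documentation describes: cluster partitions by leader broker first,
    // then order by partition id within each leader.
    static void sortAsDocumented(List<PartitionInfo> partitions) {
        partitions.sort(Comparator
                .comparingInt((PartitionInfo p) -> p.leaderBrokerId)
                .thenComparingInt(p -> p.partitionId));
    }

    // What the current code effectively does: numeric order on partition id only,
    // ignoring the leader.
    static void sortAsImplemented(List<PartitionInfo> partitions) {
        partitions.sort(Comparator.comparingInt((PartitionInfo p) -> p.partitionId));
    }
}
{code}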



Guozhang


On Thu, Nov 30, 2017 at 4:31 AM, 李响  wrote:

> Dear Kafka community,
>
> In the doc -> https://kafka.apache.org/documentation/#distributionimpl
> 4. sort Pt (so partitions on the same broker are clustered together)
> and
> During rebalancing, we try to assign partitions to consumers in such a way
> that reduces the number of broker nodes each consumer has to connect to
>
> If I understand it correctly, this means that when sorting the partitions,
> they are first sorted by leader, and then, among partitions with the same
> leader, by partition id in numeric order.
>
> However, the code of RangeAssignor in
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/consumer/PartitionAssignor.scala
> seems to show that partitions are sorted only by partition id, not by leader.
>
> My own experiment also confirmed that this is what the code does.
>
> Is the doc out of date? Does it describe some previous version of Kafka?
> Or is my reading of the code wrong?
>
> Please kindly advise, thanks!!
>
>
> --
>
>李响 Xiang Li
>
> 邮件 e-mail  :wate...@gmail.com
>



-- 
-- Guozhang


[GitHub] kafka pull request #4287: KAFKA-6118: Fix transient failure testTwoConsumers...

2017-12-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4287


---


[jira] [Resolved] (KAFKA-6118) Transient failure in kafka.api.SaslScramSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials

2017-12-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6118.
--
    Resolution: Fixed
Fix Version/s: 0.11.0.3, 1.0.1, 1.1.0

Issue resolved by pull request 4287
[https://github.com/apache/kafka/pull/4287]

> Transient failure in 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials
> -
>
> Key: KAFKA-6118
> URL: https://issues.apache.org/jira/browse/KAFKA-6118
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 1.0.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
> Fix For: 1.1.0, 1.0.1, 0.11.0.3
>
>
> Saw this failure on trunk jenkins job:
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2274/testReport/junit/kafka.api/SaslScramSslEndToEndAuthorizationTest/testTwoConsumersWithDifferentSaslCredentials/
> {code}
> Stacktrace
> org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to 
> access group: group
> Standard Output
> [2017-10-25 15:09:49,986] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-admin has Allow permission for operations: ClusterAction 
> from hosts: * 
> Current ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-admin has Allow permission for operations: ClusterAction 
> from hosts: * 
> Completed Updating config for entity: user-principal 'scram-admin'.
> [2017-10-25 15:09:50,654] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 2 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> [2017-10-25 15:09:50,654] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 2 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> Adding ACLs for resource `Topic:*`: 
>   User:scram-admin has Allow permission for operations: Read from hosts: 
> * 
> Current ACLs for resource `Topic:*`: 
>   User:scram-admin has Allow permission for operations: Read from hosts: 
> * 
> Completed Updating config for entity: user-principal 'scram-user'.
> Completed Updating config for entity: user-principal 'scram-user2'.
> Adding ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Write from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: * 
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-user has Allow permission for operations: Create from hosts: 
> * 
> Current ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Write from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: * 
> Adding ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Read from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: * 
> Adding ACLs for resource `Group:group`: 
>   User:scram-user has Allow permission for operations: Read from hosts: * 
> Current ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Write from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: *
>   User:scram-user has Allow permission for operations: Read from hosts: * 
> Current ACLs for resource `Group:group`: 
>   User:scram-user has Allow permission for operations: Read from hosts: * 
> [2017-10-25 15:09:52,788] ERROR Error while creating ephemeral at /controller 
> with return code: OK 
> (kafka.controller.KafkaControllerZkUtils$CheckedEphemeral:101)
> [2017-10-25 15:09:54,078] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-10-25 15:09:54,112] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-admin has Allow permission for operations: ClusterAction 
> from hosts

Build failed in Jenkins: kafka-0.11.0-jdk7 #343

2017-12-01 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6118: Fix transient failure

--
[...truncated 9.42 KB...]
if(l == null)
 ^
:202:
 method round in class RichLong is deprecated: This is an integer type; there 
is no reason to round it.  Perhaps you meant to call this on a floating-point 
value?
throttleTimeMs = throttleTime(clientMetric, 
getQuotaMetricConfig(clientQuotaEntity.quota)).round.toInt

   ^
:308:
 class ZKGroupTopicDirs in package utils is deprecated: This class has been 
deprecated and will be removed in a future release.
val topicDirs = new ZKGroupTopicDirs(offsetCommitRequest.groupId, 
topicPartition.topic)
^
:347:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
^
:350:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
offsetRetention + partitionData.timestamp
^
:650:
 method offsetData in class ListOffsetRequest is deprecated: see corresponding 
Javadoc for more information.
val (authorizedRequestInfo, unauthorizedRequestInfo) = 
offsetRequest.offsetData.asScala.partition {
 ^
:650:
 class PartitionData in object ListOffsetRequest is deprecated: see 
corresponding Javadoc for more information.
val (authorizedRequestInfo, unauthorizedRequestInfo) = 
offsetRequest.offsetData.asScala.partition {

  ^
:655:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  new ListOffsetResponse.PartitionData(Errors.UNKNOWN_TOPIC_OR_PARTITION, 
List[JLong]().asJava)
  ^
:680:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
(topicPartition, new ListOffsetResponse.PartitionData(Errors.NONE, 
offsets.map(new JLong(_)).asJava))
 ^
:687:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e), List[JLong]().asJava))
   ^
:690:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e), List[JLong]().asJava))
   ^
:1016:
 class ZKGroupTopicDirs in package utils is deprecated: This class has been 
deprecated and will be removed in a future release.
  val topicDirs = new ZKGroupTopicDirs(offsetFetchRequest.groupId, 
topicPartition.topic)
  ^
:119:
 object ConsumerConfig in package consumer is deprecated: This object has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.ConsumerConfig instead.
  val ReplicaSocketTimeoutMs = ConsumerConfig.SocketTimeout
   ^
:120:
 object ConsumerConfig in package consumer is deprecated: This object has been 
deprecated and will be removed i