[jira] [Created] (KAFKA-6591) Move check for super user in SimpleAclProvider before ACL evaluation

2018-02-25 Thread JIRA
Sönke Liebau created KAFKA-6591:
---

 Summary: Move check for super user in SimpleAclProvider before ACL 
evaluation
 Key: KAFKA-6591
 URL: https://issues.apache.org/jira/browse/KAFKA-6591
 Project: Kafka
  Issue Type: Improvement
  Components: core, security
Affects Versions: 1.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau


Currently the check whether a user is a super user in SimpleAclAuthorizer is 
[performed only after all other ACLs have been 
evaluated|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/security/auth/SimpleAclAuthorizer.scala#L124].
 Since all requests from a super user are granted anyway, we don't really need to 
apply the ACLs at all.

I believe this is unnecessary effort that could easily be avoided. I've rigged 
a small test that created 1000 ACLs for a topic and performed a million 
authorize calls with a principal that was a super user but didn't match any 
ACLs.

The implementation from trunk took 43 seconds, whereas a version with the super 
user check moved up took only half a second. Granted, this is a constructed 
case, but the effect will be the same, if less pronounced, for setups with 
fewer rules.
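
A minimal sketch of the proposed reordering with a toy authorizer (illustrative 
only -- the real change would be in the Scala SimpleAclAuthorizer, and none of 
the names below are Kafka's):

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy authorizer, illustrative only. The point is that the super-user check is
// a cheap set lookup and can short-circuit the O(#ACLs) evaluation below it.
final class ToyAuthorizer {
    private final Set<String> superUsers;
    private final List<String> allowedPrincipals; // stand-in for the resource's ACLs

    ToyAuthorizer(Set<String> superUsers, List<String> allowedPrincipals) {
        this.superUsers = superUsers;
        this.allowedPrincipals = allowedPrincipals;
    }

    boolean authorize(String principal) {
        // Proposed ordering: return immediately for super users, before any ACL matching.
        if (superUsers.contains(principal)) {
            return true;
        }
        // Only non-super-users pay the cost of scanning the ACLs.
        return allowedPrincipals.contains(principal);
    }

    public static void main(String[] args) {
        ToyAuthorizer auth = new ToyAuthorizer(
                new HashSet<>(Arrays.asList("admin")),
                Arrays.asList("alice", "bob"));
        System.out.println(auth.authorize("admin")); // true, without touching the ACLs
        System.out.println(auth.authorize("carol")); // false, after scanning the ACLs
    }
}
{code}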





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-25 Thread Dong Lin
Hey Gwen,

Thanks for the use-case. This is certainly useful. I have updated the KIP
to allow partition deletion as well.

Dong

On Wed, Feb 21, 2018 at 9:49 PM, Gwen Shapira  wrote:

> Re: Why would anyone want to delete partitions:
>
> Imagine a system that is built to handle 200K events per sec most of the
> time, but they have 2-4 days per year where traffic spikes to 11M events
> per sec. We'll almost certainly need more partitions (and more consumers)
> during the spike to process all the events, but the rest of the 360 days
> these are pure overhead. Overhead for consumers, producers, controller,
> etc, etc. If we have the ability to add partitions when needed, it will be
> good to also remove them when no longer needed.
>
> On Wed, Feb 21, 2018 at 12:24 PM Dong Lin  wrote:
>
> > Hey Jun,
> >
> > Thanks much for your comments.
> >
> > On Wed, Feb 21, 2018 at 10:17 AM, Jun Rao  wrote:
> >
> > > Hi, Dong,
> > >
> > > Thanks for the KIP. At the high level, this makes sense. A few comments
> > > below.
> > >
> > > 1. It would be useful to support removing partitions as well. The
> general
> > > idea could be bumping the leader epoch for the remaining partitions.
> For
> > > the partitions to be removed, we can make them read-only and remove
> them
> > > after the retention time.
> > >
> >
> > I think we should be able to find a way to delete partitions of an
> existing
> > topic. But it will also add complexity to our broker and client
> > implementation. I am just not sure whether this feature is worth the
> > complexity. Could you explain a bit more why user would want to delete
> > partitions of an existing topic? Is it to handle the human error where a
> > topic is created with too many partitions by mistake?
> >
> >
> > >
> > > 2. If we support removing partitions, I am not sure if it's enough to
> > fence
> > > off the producer using total partition number since the total partition
> > > number may remain the same after adding and then removing partitions.
> > > Perhaps we need some notion of partition epoch.
> > >
> > > 3. In step 5) of the Proposed Changes, I am not sure that we can always
> > > rely upon position 0 for dealing with the new partitions. A consumer
> will
> > > start consuming the new partition when some of the existing records
> have
> > > been removed due to retention.
> > >
> >
> >
> > You are right. I have updated the KIP to compare the startPosition with
> the
> > earliest offset of the partition. If the startPosition > earliest offset,
> > then the consumer can consume messages from the given partition directly.
> > This should handle the case where some of the existing records have been
> > removed before consumer starts consumption.
> >
> >
> > >
> > > 4. When the consumer is allowed to read messages after the partition
> > > expansion point, a key may be moved from one consumer instance to
> > another.
> > > In this case, similar to consumer rebalance, it's useful to inform the
> > > application about this so that the consumer can save and reload the per
> > key
> > > state. So, we need to either add some new callbacks or reuse the
> existing
> > > rebalance callbacks.
> > >
> >
> >
> > Good point. I will add the callback later after we discuss the need for
> > partition deletion.
> >
> >
> > >
> > > 5. There is some subtlety in assigning partitions. Currently, the
> > consumer
> > > assigns partitions without needing to know the consumption offset. This
> > > could mean that a particular consumer may be assigned some new
> partitions
> > > that are not consumable yet, which could lead to imbalanced load
> > > temporarily. Not sure if this is super important to address though.
> > >
> >
> > Personally I think it is not worth adding more complexity just to
> optimize
> > this scenario. This imbalance should exist only for a short period of
> time.
> > If it is important I can think more about how to handle it.
> >
> >
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > >
> > > On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I have created KIP-253: Support in-order message delivery with
> > partition
> > > > expansion. See
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 253%3A+Support+in-order+message+delivery+with+partition+expansion
> > > > .
> > > >
> > > > This KIP provides a way to allow messages of the same key from the
> same
> > > > producer to be consumed in the same order they are produced even if
> we
> > > > expand partition of the topic.
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-25 Thread Dong Lin
Hey Jun,

Yeah, I think this definitely makes sense. I have updated the KIP to support
partition deletion and also added the callback you previously suggested. Can
you take another look?

Thanks!
Dong

On Thu, Feb 22, 2018 at 11:52 AM, Jun Rao  wrote:

> Hi, Dong,
>
> Regarding deleting partitions, Gwen's point is right on. In some of the
> usage of Kafka, the traffic can be bursty. When the traffic goes up, adding
> partitions is a quick way of shifting some traffic to the newly added
> brokers. Once the traffic goes down, the newly added brokers will be
> reclaimed (potentially by moving replicas off those brokers). However, if
> one can only add partitions without removing, eventually, one will hit the
> limit.
>
> Thanks,
>
> Jun
>
> On Wed, Feb 21, 2018 at 12:23 PM, Dong Lin  wrote:
>
> > Hey Jun,
> >
> > Thanks much for your comments.
> >
> > On Wed, Feb 21, 2018 at 10:17 AM, Jun Rao  wrote:
> >
> > > Hi, Dong,
> > >
> > > Thanks for the KIP. At the high level, this makes sense. A few comments
> > > below.
> > >
> > > 1. It would be useful to support removing partitions as well. The
> general
> > > idea could be bumping the leader epoch for the remaining partitions.
> For
> > > the partitions to be removed, we can make them read-only and remove
> them
> > > after the retention time.
> > >
> >
> > I think we should be able to find a way to delete partitions of an
> existing
> > topic. But it will also add complexity to our broker and client
> > implementation. I am just not sure whether this feature is worth the
> > complexity. Could you explain a bit more why user would want to delete
> > partitions of an existing topic? Is it to handle the human error where a
> > topic is created with too many partitions by mistake?
> >
> >
> > >
> > > 2. If we support removing partitions, I am not sure if it's enough to
> > fence
> > > off the producer using total partition number since the total partition
> > > number may remain the same after adding and then removing partitions.
> > > Perhaps we need some notion of partition epoch.
> > >
> > > 3. In step 5) of the Proposed Changes, I am not sure that we can always
> > > rely upon position 0 for dealing with the new partitions. A consumer
> will
> > > start consuming the new partition when some of the existing records
> have
> > > been removed due to retention.
> > >
> >
> >
> > You are right. I have updated the KIP to compare the startPosition with
> the
> > earliest offset of the partition. If the startPosition > earliest offset,
> > then the consumer can consume messages from the given partition directly.
> > This should handle the case where some of the existing records have been
> > removed before consumer starts consumption.
> >
> >
> > >
> > > 4. When the consumer is allowed to read messages after the partition
> > > expansion point, a key may be moved from one consumer instance to
> > another.
> > > In this case, similar to consumer rebalance, it's useful to inform the
> > > application about this so that the consumer can save and reload the per
> > key
> > > state. So, we need to either add some new callbacks or reuse the
> existing
> > > rebalance callbacks.
> > >
> >
> >
> > Good point. I will add the callback later after we discuss the need for
> > partition deletion.
> >
> >
> > >
> > > 5. There is some subtlety in assigning partitions. Currently, the
> > consumer
> > > assigns partitions without needing to know the consumption offset. This
> > > could mean that a particular consumer may be assigned some new
> partitions
> > > that are not consumable yet, which could lead to imbalanced load
> > > temporarily. Not sure if this is super important to address though.
> > >
> >
> > Personally I think it is not worth adding more complexity just to
> optimize
> > this scenario. This imbalance should exist only for a short period of
> time.
> > If it is important I can think more about how to handle it.
> >
> >
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > >
> > > On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I have created KIP-253: Support in-order message delivery with
> > partition
> > > > expansion. See
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 253%3A+Support+in-order+message+delivery+with+partition+expansion
> > > > .
> > > >
> > > > This KIP provides a way to allow messages of the same key from the
> same
> > > > producer to be consumed in the same order they are produced even if
> we
> > > > expand partition of the topic.
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-25 Thread Dong Lin
Hey Allen,

Thanks for your comment. I will comment inline.

On Thu, Feb 22, 2018 at 3:05 PM, Allen Wang 
wrote:

> Overall this is a very useful feature. With this we can finally scale keyed
> messages.
>
> +1 on the ability to remove partitions. This will greatly increase Kafka's
> scalability in cloud.
>
> For example, when there is traffic increase, we can add brokers and assign
> new partitions to the new brokers. When traffic decreases, we can mark
> these new partitions as read only and remove them afterwards, together with
> the brokers that host these partitions. This will be a light-weight
> approach to scale a Kafka cluster compared to partition reassignment where
> you will always have to move data.
>
> I have some suggestions:
>
> - The KIP described each step in detail which is great. However, it lacks
> the "why" part to explain the high level goal we want to achieve with each
> step. For example, the purpose of step 5 may be described as "Make sure
> consumers always first finish consuming all data prior to partition
> expansion to enforce message ordering".
>

Yeah, I think this is useful. This is a non-trivial KIP, and explaining the
motivation of each change will help readers. I will add the motivation for
each change in the KIP. Please let me know if there is anything else that
would make the KIP more readable.


>
> - The rejection of produce request at partition expansion should be
> configurable because it does not matter for non-keyed messages. Same with
> the consumer behavior for step 5. This will ensure that for non-keyed
> messages, partition expansion does not add the cost of possible message
> drop on producer or message latency on the consumer.
>

Ideally we would like to avoid adding extra configs to keep the interface
simple. I think the current overhead in the producer is actually very small,
and partition expansion or deletion should happen very infrequently. Note
that our producer today needs to refresh metadata whenever there is
leadership movement, i.e. the producer will receive
NotLeaderForPartitionException from the old leader and keep refreshing
metadata until it learns the new leader of the partition, which happens much
more frequently than partition expansion or deletion. So I am not sure we
should add a config to optimize this.


>
> - Since we now allow adding partitions for keyed messages while preserving
> the message ordering on the consumer side, the default producer partitioner
> seems to be inadequate as it rehashes all keys. As part of this KIP, should
> we also include a partitioner that better handles partition changes, for
> example, with consistent hashing?
>

I am not sure I understand the problem with the default partitioner. Can
you explain a bit more why the default producer partitioner is inadequate
with this KIP, and why consistent hashing would help?


>
> Thanks,
> Allen
>
>
> On Thu, Feb 22, 2018 at 11:52 AM, Jun Rao  wrote:
>
> > Hi, Dong,
> >
> > Regarding deleting partitions, Gwen's point is right on. In some of the
> > usage of Kafka, the traffic can be bursty. When the traffic goes up,
> adding
> > partitions is a quick way of shifting some traffic to the newly added
> > brokers. Once the traffic goes down, the newly added brokers will be
> > reclaimed (potentially by moving replicas off those brokers). However, if
> > one can only add partitions without removing, eventually, one will hit
> the
> > limit.
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, Feb 21, 2018 at 12:23 PM, Dong Lin  wrote:
> >
> > > Hey Jun,
> > >
> > > Thanks much for your comments.
> > >
> > > On Wed, Feb 21, 2018 at 10:17 AM, Jun Rao  wrote:
> > >
> > > > Hi, Dong,
> > > >
> > > > Thanks for the KIP. At the high level, this makes sense. A few
> comments
> > > > below.
> > > >
> > > > 1. It would be useful to support removing partitions as well. The
> > general
> > > > idea could be bumping the leader epoch for the remaining partitions.
> > For
> > > > the partitions to be removed, we can make them read-only and remove
> > them
> > > > after the retention time.
> > > >
> > >
> > > I think we should be able to find a way to delete partitions of an
> > existing
> > > topic. But it will also add complexity to our broker and client
> > > implementation. I am just not sure whether this feature is worth the
> > > complexity. Could you explain a bit more why user would want to delete
> > > partitions of an existing topic? Is it to handle the human error where
> a
> > > topic is created with too many partitions by mistake?
> > >
> > >
> > > >
> > > > 2. If we support removing partitions, I am not sure if it's enough to
> > > fence
> > > > off the producer using total partition number since the total
> partition
> > > > number may remain the same after adding and then removing partitions.
> > > > Perhaps we need some notion of partition epoch.
> > > >
> > > > 3. In step 5) of the Proposed Changes, I am not sure that we can
> always
> > > > rely upon position 0 for dealing with the 

Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-25 Thread Dong Lin
Hey Jay,

Thanks for the comment!

I have not specifically thought about how this works with Streams and
Connect. The current KIP is specified w.r.t. the interface that our producer
and consumer expose to the user. It ensures that if two messages with the
same key are produced by the same producer, say messageA and messageB, and
messageB is produced after messageA to a different partition than messageA,
then the following sequence is guaranteed to happen in order (see the sketch
after the list):

- The consumer of messageA executes a callback, in which the user can flush
state related to the key of messageA.
- messageA is delivered by its consumer to the application.
- The consumer of messageB executes a callback, in which the user can load
the state related to the key of messageB.
- messageB is delivered by its consumer to the application.
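
Here is a minimal sketch of how an application could react to that sequence;
the callback names are hypothetical, not an interface proposed in the KIP:

import java.util.HashMap;
import java.util.Map;

// Illustrative only: hypothetical callback names, not the KIP's API. The point
// is that, given the ordering above, the state flushed for messageA's key is
// visible before the consumer of messageB is asked to load that key's state.
final class PerKeyStateHandler {
    private final Map<String, String> localState = new HashMap<>();
    private final Map<String, String> sharedStore; // e.g. an external state store

    PerKeyStateHandler(Map<String, String> sharedStore) {
        this.sharedStore = sharedStore;
    }

    // Called on the consumer that owned the key before the partition change.
    void onKeyMovingAway(String key) {
        String state = localState.remove(key);
        if (state != null) {
            sharedStore.put(key, state); // flush state related to the key
        }
    }

    // Called on the consumer that owns the key after the partition change.
    void onKeyArrived(String key) {
        String state = sharedStore.get(key);
        if (state != null) {
            localState.put(key, state); // load state related to the key
        }
    }
}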

So it seems that it should support Streams and Connect properly. But I am
not entirely sure, because I have not looked into how Streams and Connect
work. I can think about it more if you can provide an example where this
does not work for Streams or Connect.

Regarding the second question, I think the linear hashing approach provides
a way to reduce the number of partitions that can "conflict" with a given
partition to *log_2(n)*, as compared to *n* in the current KIP, where n is
the total number of partitions of the topic (e.g. roughly 10 rather than
1000 for a 1000-partition topic). This will be useful when the number of
partitions is large and the asymptotic complexity matters.

I personally don't think this optimization is worth the additional
complexity in Kafka. This is because partition expansion or deletion should
happen infrequently, and the largest number of partitions of a single topic
today is not that large -- probably 1000 or less. And when the partitions of
a topic change, each consumer will likely need to query and wait for the
positions of a large percentage of the topic's partitions anyway, even with
this optimization. I think this algorithm is largely orthogonal to this KIP;
we can extend the KIP to support it in the future (a sketch of the scheme
follows for reference).
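
For reference, a self-contained sketch of the textbook linear hashing scheme
(illustrative only -- this is not Kafka code and not part of the KIP):

// Textbook linear hashing: partitions split one at a time, in order, so a key
// that starts in partition p can only ever move to p + initial*2^level, which
// is where the log2(n)-sized "conflict" set mentioned above comes from.
final class LinearHashingSketch {

    static int partitionFor(String key, int initialPartitions, int totalPartitions) {
        int h = key.hashCode() & 0x7fffffff;
        int round = initialPartitions;            // partition count at the start of this split round
        while (round * 2 <= totalPartitions) {
            round *= 2;
        }
        int p = h % round;                        // bucket before this round's splits
        int splitSoFar = totalPartitions - round; // buckets of this round that have already split
        if (p < splitSoFar) {
            p = h % (round * 2);                  // bucket already split: rehash at the next level
        }
        return p;
    }

    public static void main(String[] args) {
        // Growing from 4 partitions: the key only ever moves between a handful of partitions.
        for (int total = 4; total <= 16; total++) {
            System.out.println(total + " partitions -> key lands in "
                    + partitionFor("order-42", 4, total));
        }
    }
}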

Thanks,
Dong

On Thu, Feb 22, 2018 at 5:19 PM, Jay Kreps  wrote:

> Hey Dong,
>
> Two questions:
> 1. How will this work with Streams and Connect?
> 2. How does this compare to a solution where we physically split partitions
> using a linear hashing approach (the partition number is equivalent to the
> hash bucket in a hash table)? https://en.wikipedia.org/wiki/Linear_hashing
>
> -Jay
>
> On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
>
> > Hi all,
> >
> > I have created KIP-253: Support in-order message delivery with partition
> > expansion. See
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 253%3A+Support+in-order+message+delivery+with+partition+expansion
> > .
> >
> > This KIP provides a way to allow messages of the same key from the same
> > producer to be consumed in the same order they are produced even if we
> > expand partition of the topic.
> >
> > Thanks,
> > Dong
> >
>


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-25 Thread Dong Lin
Hey Matthias,

Thanks for the comments.

I think when we compact topics, we only delete messages (those for which a
later message with the same key exists) and we do not change the offset of
any remaining message. For example, if offsets 0-5 hold keys a, b, a, c, a,
b, compaction keeps the latest record per key at offsets 3, 4 and 5, still
at their original offsets. As long as the offsets of existing messages are
not changed, I think the KIP should still work in the sense that it enforces
the property explained in the Goals section of the updated KIP. Can you see
if there is anything that does not work for compacted topics?

It is true that if a topic is not subject to either time-based or size-based
retention, then we can not delete its partitions. I don't think this is
necessarily specific to compacted topics. Today we can have a non-compacted
topic with a very long retention, such that we can not delete partitions
from this topic for a long period. And we can also specify retention for a
compacted topic such that all messages in a partition of this compacted
topic will eventually be deleted. This KIP ensures that a partition can be
deleted after all messages in the partition have been deleted, which is the
common case. If a user wants to keep very old messages while still deleting
partitions, maybe we should have a separate KIP that can merge-sort the
existing partitions into a new (and smaller) set of partitions. What do you
think?

Regarding older producers/consumers, my current understanding is that old
clients can still produce/consume with the current behavior, i.e. consumers
may consume messages out of order if there is partition expansion or
deletion. Old clients still have the ordering guarantee if the partitions of
the topic do not change. It should be backward compatible. Users need to
upgrade the client library in order to use the new feature.

Thanks,
Dong

On Thu, Feb 22, 2018 at 6:24 PM, Matthias J. Sax 
wrote:

> Dong,
>
> thanks a lot for the KIP!
>
> Can you elaborate how this would work for compacted topics? If it does
> not work for compacted topics, I think Streams API cannot allow to scale
> input topics.
>
> This question seems to be particularly interesting for deleting
> partitions: assume that a key is never (or for a very long time)
> updated, a partition cannot be deleted.
>
>
> -Matthias
>
>
> On 2/22/18 5:19 PM, Jay Kreps wrote:
> > Hey Dong,
> >
> > Two questions:
> > 1. How will this work with Streams and Connect?
> > 2. How does this compare to a solution where we physically split
> partitions
> > using a linear hashing approach (the partition number is equivalent to
> the
> > hash bucket in a hash table)? https://en.wikipedia.org/wiki/
> Linear_hashing
> >
> > -Jay
> >
> > On Sat, Feb 10, 2018 at 3:35 PM, Dong Lin  wrote:
> >
> >> Hi all,
> >>
> >> I have created KIP-253: Support in-order message delivery with partition
> >> expansion. See
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 253%3A+Support+in-order+message+delivery+with+partition+expansion
> >> .
> >>
> >> This KIP provides a way to allow messages of the same key from the same
> >> producer to be consumed in the same order they are produced even if we
> >> expand partition of the topic.
> >>
> >> Thanks,
> >> Dong
> >>
> >
>
>


Build failed in Jenkins: kafka-trunk-jdk8 #2438

2018-02-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fixes lgtm.com warnings (#4582)

--
[...truncated 3.92 MB...]

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldSuccessfullyReInitializeStateStores STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldSuccessfullyReInitializeStateStores PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionOnRegisterWhenStoreHasAlreadyBeenRegistered 
STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionOnRegisterWhenStoreHasAlreadyBeenRegistered 
PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldRestoreStoreWithSinglePutRestoreSpecification STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldRestoreStoreWithSinglePutRestoreSpecification PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldNotChangeOffsetsIfAckedOffsetsIsNull STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldNotChangeOffsetsIfAckedOffsetsIsNull PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionIfStoreNameIsSameAsCheckpointFileName STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionIfStoreNameIsSameAsCheckpointFileName PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCloseStateManagerWithOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCloseStateManagerWithOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForTopic STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForTopic PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeProcessorTopology STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeProcessorTopology PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeContext STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeContext PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCheckpointOffsetsWhenStateIsFlushed STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCheckpointOffsetsWhenStateIsFlushed PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeStateManager STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeStateManager PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForOtherTopic STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForOtherTopic PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testThroughputMetrics STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testThroughputMetrics PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testLatencyMetrics STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testLatencyMetrics PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testRemoveSensor STARTED

org.apache.kafka.streams.proce

Build failed in Jenkins: kafka-trunk-jdk7 #3213

2018-02-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fixes lgtm.com warnings (#4582)

--
[...truncated 120.44 KB...]
unit.kafka.admin.DeleteConsumerGroupTest > testDeleteWithShortInitialization 
STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteWithShortInitialization 
PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdWithShortInitialization 
STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdWithShortInitialization 
PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteWithTopicOption STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteWithTopicOption PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteNonExistingGroup STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteNonExistingGroup PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdNonEmptyGroup STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdNonEmptyGroup PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdNonExistingGroup STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdNonExistingGroup PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteEmptyGroup STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteEmptyGroup PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteWithMixOfSuccessAndError 
STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteWithMixOfSuccessAndError 
PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdEmptyGroup STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteCmdEmptyGroup PASSED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteInvalidGroupId STARTED

unit.kafka.admin.DeleteConsumerGroupTest > testDeleteInvalidGroupId PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest >

[DISCUSS] KIP-263: Allow broker to skip sanity check of inactive segments on broker startup

2018-02-25 Thread Dong Lin
Hi all,

I have created KIP-263: Allow broker to skip sanity check of inactive
segments on broker startup. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-263%3A+Allow+broker+to+skip+sanity+check+of+inactive+segments+on+broker+startup
.

This KIP provides a way to significantly reduce the time it takes to do a
rolling bounce of a Kafka cluster.

Comments are welcome!

Thanks,
Dong


[jira] [Created] (KAFKA-6592) NullPointerException thrown when executing ConsoleConsumer with deserializer set to `WindowedDeserializer`

2018-02-25 Thread huxihx (JIRA)
huxihx created KAFKA-6592:
-

 Summary: NullPointerException thrown when executing ConsoleConsumer 
with deserializer set to `WindowedDeserializer`
 Key: KAFKA-6592
 URL: https://issues.apache.org/jira/browse/KAFKA-6592
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 1.0.0
Reporter: huxihx


When reading a Streams app's output topic with the WindowedDeserializer 
deserializer using kafka-console-consumer.sh, a NullPointerException is thrown 
because the inner deserializer was never initialized: there is no place in 
ConsoleConsumer to set this class.

Complete stack trace is shown below:
{code:java}
[2018-02-26 14:56:04,736] ERROR Unknown error when running consumer:  
(kafka.tools.ConsoleConsumer$)

java.lang.NullPointerException

at 
org.apache.kafka.streams.kstream.internals.WindowedDeserializer.deserialize(WindowedDeserializer.java:89)

at 
org.apache.kafka.streams.kstream.internals.WindowedDeserializer.deserialize(WindowedDeserializer.java:35)

at 
kafka.tools.DefaultMessageFormatter.$anonfun$writeTo$2(ConsoleConsumer.scala:544)

at scala.Option.map(Option.scala:146)

at kafka.tools.DefaultMessageFormatter.write$1(ConsoleConsumer.scala:545)

at kafka.tools.DefaultMessageFormatter.writeTo(ConsoleConsumer.scala:560)

at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:147)

at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:84)

at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)

at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
{code}
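
The failure mode can be reproduced with any delegating deserializer whose inner 
deserializer is never set. The class below is a made-up stand-in for 
illustration, not the actual WindowedDeserializer:

{code:java}
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

// Made-up stand-in, illustrative only. Like WindowedDeserializer, it delegates
// to an inner Deserializer. kafka-console-consumer.sh instantiates the class
// reflectively and offers no way to supply the inner deserializer, so `inner`
// stays null and the first deserialize() call throws a NullPointerException.
public class DelegatingDeserializer implements Deserializer<String> {

    private Deserializer<String> inner;          // never set on the console-consumer path

    public DelegatingDeserializer() { }          // the constructor the tool can reach

    public DelegatingDeserializer(Deserializer<String> inner) {
        this.inner = inner;                      // only reachable from application code
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public String deserialize(String topic, byte[] data) {
        return inner.deserialize(topic, data);   // NPE here when inner == null
    }

    @Override
    public void close() { }
}
{code}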



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5809) when zookeeper set acl on path /. then kafka can't connect zookeeper

2018-02-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5809.
--
Resolution: Won't Fix

Either we need to remove the ACLs on the ZooKeeper nodes 
(e.g. `setAcl / world:anyone:cdrwa` from zkCli) or enable ZooKeeper SASL 
authentication: http://kafka.apache.org/documentation/#zk_authz

Please reopen if you think the issue still exists.

> when zookeeper set acl on path /. then kafka can't connect zookeeper
> 
>
> Key: KAFKA-5809
> URL: https://issues.apache.org/jira/browse/KAFKA-5809
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: heping
>Priority: Major
> Attachments: 20170830135831.png
>
>
> hi,all
> when i set zookeeper acl on zookeeper path /. using setAcl /  
> digest:zhangsan:jA/7JI9gsuLp0ZQn5J5dcnDQkHA=:cdrwa
>  then restart kafka,it  can't connect zookeeper. kafka version is 0.10.1.0
> [2017-08-30 13:50:14,108] INFO [Kafka Server 0], shut down completed 
> (kafka.server.KafkaServer)
> [2017-08-30 13:50:14,108] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> org.I0Itec.zkclient.exception.ZkException: 
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth
>   at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
>   at kafka.utils.ZKCheckedEphemeral.create(ZkUtils.scala:1139)
>   at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:390)
>   at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:379)
>   at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:70)
>   at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:51)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:270)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
>   at kafka.Kafka$.main(Kafka.scala:67)
>   at kafka.Kafka.main(Kafka.scala)
> Caused by: org.apache.zookeeper.KeeperException$NoAuthException: 
> KeeperErrorCode = NoAuth
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
>   ... 9 more
> how to solve this problem? 
>  thanks 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)