[jira] [Comment Edited] (KAFKA-5351) Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' state permanently

2017-05-31 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030729#comment-16030729
 ] 

Apurva Mehta edited comment on KAFKA-5351 at 5/31/17 6:59 AM:
--

I saw the following in the logs for two of the brokers. At 06:12:05, the first 
broker returns COORDINATOR_NOT_AVAILABLE, at which point the producer sends a 
FindCoordinator request and gets routed back to the same broker (knode09) 
below. The producer then keeps retrying the EndTxn request and keeps getting 
the CONCURRENT_TRANSACTIONS error code back. 

At 06:12:17, the producer gets a NOT_COORDINATOR error. It then sends another 
FindCoordinator request and gets routed to knode03, at which point it keeps 
getting CONCURRENT_TRANSACTIONS until the process is terminated. 
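The client-side behavior described above can be sketched as a retry loop. This is a hypothetical simulation, not the real producer code: `send_end_txn` and `find_coordinator` are stand-in callables, and only the error-code names mirror the Kafka protocol.

```python
def end_transaction(send_end_txn, find_coordinator, max_attempts=10):
    """Keep retrying EndTxn, re-discovering the coordinator when it moves.

    send_end_txn(coordinator) returns None on success or an error-code string.
    """
    coordinator = find_coordinator()
    for _ in range(max_attempts):
        error = send_end_txn(coordinator)
        if error is None:
            return True  # commit completed
        if error in ("COORDINATOR_NOT_AVAILABLE", "NOT_COORDINATOR"):
            # Re-discover the coordinator; it may route back to the same broker.
            coordinator = find_coordinator()
        elif error != "CONCURRENT_TRANSACTIONS":
            raise RuntimeError(error)  # non-retriable error
        # CONCURRENT_TRANSACTIONS: just retry. In the failure above, the
        # broker keeps answering this forever, so the loop never makes progress.
    return False
```

With a permanently CONCURRENT_TRANSACTIONS coordinator, every attempt is consumed by the retry branch, matching the hang seen in the test.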

{noformat}
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592/knode09#
 find . -name 'server.log' -exec grep -Hni 'my-second-transactional-id' {} \;
./info/server.log:956:[2017-05-31 06:12:05,724] INFO TransactionalId 
my-second-transactional-id prepare transition from Ongoing to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit, topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211125723) 
(kafka.coordinator.transaction.TransactionMetadata)
./info/server.log:963:[2017-05-31 06:12:05,736] INFO [Transaction Log Manager 
3]: Appending transaction message TxnTransitMetadata(producerId=1001, 
producerEpoch=0, txnTimeoutMs=6, txnState=PrepareCommit, 
topicPartitions=Set(output-topic-2, __consumer_offsets-47, output-topic-0, 
output-topic-1), txnStartTimestamp=1496211125265, 
txnLastUpdateTimestamp=1496211125723) for my-second-transactional-id failed due 
to org.apache.kafka.common.errors.NotEnoughReplicasException, returning 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transaction.TransactionStateManager)
./info/server.log:964:[2017-05-31 06:12:05,737] INFO [Transaction Coordinator 
3]: Updating my-second-transactional-id's transaction state to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit, topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211125723) with 
coordinator epoch 1 for my-second-transactional-id failed since the transaction 
message cannot be appended to the log. Returning error code 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transaction.TransactionCoordinator)
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592/knode09#
 cd ..
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592#
 cd knode03/
root@f82eb723774d:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=clean_bounce.bounce_target=brokers/1/KafkaService-0-140388221024592/knode03#
 find . -name 'server.log' -exec grep -Hni 'my-second-transactional-id' {} \;
./info/server.log:1737:[2017-05-31 06:12:17,906] INFO TransactionalId 
my-second-transactional-id prepare transition from Ongoing to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit, topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211137906) 
(kafka.coordinator.transaction.TransactionMetadata)
./info/server.log:1741:[2017-05-31 06:12:17,926] INFO [Transaction Log Manager 
1]: Appending transaction message TxnTransitMetadata(producerId=1001, 
producerEpoch=0, txnTimeoutMs=6, txnState=PrepareCommit, 
topicPartitions=Set(output-topic-2, __consumer_offsets-47, output-topic-0, 
output-topic-1), txnStartTimestamp=1496211125265, 
txnLastUpdateTimestamp=1496211137906) for my-second-transactional-id failed due 
to org.apache.kafka.common.errors.NotEnoughReplicasException, returning 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transaction.TransactionStateManager)
./info/server.log:1742:[2017-05-31 06:12:17,933] INFO [Transaction Coordinator 
1]: Updating my-second-transactional-id's transaction state to 
TxnTransitMetadata(producerId=1001, producerEpoch=0, txnTimeoutMs=6, 
txnState=PrepareCommit, topicPartitions=Set(output-topic-2, 
__consumer_offsets-47, output-topic-0, output-topic-1), 
txnStartTimestamp=1496211125265, txnLastUpdateTimestamp=1496211137906) with 
coordinator epoch 2 for my-second-transactional-id failed since the transaction 
message cannot be appended to the log. Returning error code 
COORDINATOR_NOT_AVAILABLE to the client 
(kafka.coordinator.transaction.TransactionCoordinator)
{noformat}

[jira] [Commented] (KAFKA-4668) Mirrormaker should default to auto.offset.reset=earliest

2017-05-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030735#comment-16030735
 ] 

Ismael Juma commented on KAFKA-4668:


This probably needs a simple KIP. Also cc [~becket_qin] for review.

> Mirrormaker should default to auto.offset.reset=earliest
> 
>
> Key: KAFKA-4668
> URL: https://issues.apache.org/jira/browse/KAFKA-4668
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jeff Widman
> Fix For: 0.11.0.0
>
>
> Mirrormaker currently inherits the default value for auto.offset.reset, which 
> is latest (new consumer) / largest (old consumer). 
> While for most consumers this is a sensible default, mirrormakers are 
> specifically designed for replication, so they should default to replicating 
> topics from the beginning.
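Until such a default lands, the reset policy can be pinned explicitly in the consumer config file passed to MirrorMaker via --consumer.config. A minimal illustrative fragment:

```properties
# consumer.properties for MirrorMaker: start replicating from the beginning
# of each topic instead of relying on the inherited default (latest/largest)
auto.offset.reset=earliest
```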



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4668) Mirrormaker should default to auto.offset.reset=earliest

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4668:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Mirrormaker should default to auto.offset.reset=earliest
> 
>
> Key: KAFKA-4668
> URL: https://issues.apache.org/jira/browse/KAFKA-4668
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jeff Widman
> Fix For: 0.11.1.0
>
>
> Mirrormaker currently inherits the default value for auto.offset.reset, which 
> is latest (new consumer) / largest (old consumer). 
> While for most consumers this is a sensible default, mirrormakers are 
> specifically designed for replication, so they should default to replicating 
> topics from the beginning.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3689) Exception when attempting to decrease connection count for address with no connections

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3689:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> Exception when attempting to decrease connection count for address with no 
> connections
> --
>
> Key: KAFKA-3689
> URL: https://issues.apache.org/jira/browse/KAFKA-3689
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.1
> Environment: ubuntu 14.04,
> java version "1.7.0_95"
> OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.2)
> OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
> 3 broker cluster (all 3 servers identical - Intel Xeon E5-2670 @2.6GHz, 
> 8 cores, 16 threads, 64 GB RAM & 1 TB disk)
> Kafka Cluster is managed by 3 server ZK cluster (these servers are different 
> from Kafka broker servers). All 6 servers are connected via 10G switch. 
> Producers run from external servers.
>Reporter: Buvaneswari Ramanan
>Assignee: Ryan P
> Fix For: 0.11.0.1
>
> Attachments: kafka-3689-instrumentation.patch, KAFKA-3689.log.redacted
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> As per Ismael Juma's suggestion in email thread to us...@kafka.apache.org 
> with the same subject, I am creating this bug report.
> The following error occurs in one of the brokers in our 3 broker cluster, 
> which serves about 8000 topics. These topics are single-partitioned with a 
> replication factor of 3. Each topic gets data at a low rate – 200 bytes per 
> sec. Leaders are balanced across the topics.
> Producers run from external servers (4 Ubuntu servers with same config as the 
> brokers), each producing to 2000 topics utilizing kafka-python library.
> This error message occurs repeatedly in one of the servers. Between the hours 
> of 10:30am and 1:30pm on 5/9/16, there were about 10 Million such 
> occurrences. This was right after a cluster restart.
> This is not the first time we got this error in this broker. In those 
> instances, error occurred hours / days after cluster restart.
> =
> [2016-05-09 10:38:43,932] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.IllegalArgumentException: Attempted to decrease connection count 
> for address with no connections, address: /X.Y.Z.144 (actual network address 
> masked)
> at 
> kafka.network.ConnectionQuotas$$anonfun$9.apply(SocketServer.scala:565)
> at 
> kafka.network.ConnectionQuotas$$anonfun$9.apply(SocketServer.scala:565)
> at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
> at scala.collection.AbstractMap.getOrElse(Map.scala:59)
> at kafka.network.ConnectionQuotas.dec(SocketServer.scala:564)
> at 
> kafka.network.Processor$$anonfun$run$13.apply(SocketServer.scala:450)
> at 
> kafka.network.Processor$$anonfun$run$13.apply(SocketServer.scala:445)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.network.Processor.run(SocketServer.scala:445)
> at java.lang.Thread.run(Thread.java:745)
> [2016-05-09 10:38:43,932] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.IllegalArgumentException: Attempted to decrease connection count 
> for address with no connections, address: /X.Y.Z.144
> at 
> kafka.network.ConnectionQuotas$$anonfun$9.apply(SocketServer.scala:565)
> at 
> kafka.network.ConnectionQuotas$$anonfun$9.apply(SocketServer.scala:565)
> at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
> at scala.collection.AbstractMap.getOrElse(Map.scala:59)
> at kafka.network.ConnectionQuotas.dec(SocketServer.scala:564)
> at 
> kafka.network.Processor$$anonfun$run$13.apply(SocketServer.scala:450)
> at 
> kafka.network.Processor$$anonfun$run$13.apply(SocketServer.scala:445)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.network.Processor.run(SocketServer.scala:445)
> at java.lang.Thread.run(Thread.java:745)
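The bookkeeping that produces this exception can be sketched as follows. This is a minimal Python model of the inc/dec logic in kafka.network.ConnectionQuotas, not the actual Scala code: the invariant is that dec() must never be called for an address with no recorded connections, and the reported error means that invariant was violated.

```python
class ConnectionQuotas:
    """Per-address connection counting, modeled after the behavior above."""

    def __init__(self):
        self.counts = {}  # address -> open connection count

    def inc(self, addr):
        self.counts[addr] = self.counts.get(addr, 0) + 1

    def dec(self, addr):
        count = self.counts.get(addr)
        if count is None:
            # This is the failure mode reported in the issue.
            raise ValueError(
                "Attempted to decrease connection count for address "
                "with no connections, address: " + addr)
        if count == 1:
            del self.counts[addr]  # last connection from this address closed
        else:
            self.counts[addr] = count - 1
```

A double close (or a close for a connection whose accept was never counted) is enough to trigger the error, which fits the repeated occurrences right after a cluster restart.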



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4637) Update system test(s) to use multiple listeners for the same security protocol

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4637:
---
Labels: newbie  (was: )

> Update system test(s) to use multiple listeners for the same security protocol
> --
>
> Key: KAFKA-4637
> URL: https://issues.apache.org/jira/browse/KAFKA-4637
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>  Labels: newbie
>
> Even though this is tested via the JUnit tests introduced by KAFKA-4565, it 
> would be good to have at least one system test exercising this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4637) Update system test(s) to use multiple listeners for the same security protocol

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4637:
---
Fix Version/s: (was: 0.11.0.0)

> Update system test(s) to use multiple listeners for the same security protocol
> --
>
> Key: KAFKA-4637
> URL: https://issues.apache.org/jira/browse/KAFKA-4637
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>  Labels: newbie
>
> Even though this is tested via the JUnit tests introduced by KAFKA-4565, it 
> would be good to have at least one system test exercising this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4665) Inconsistent handling of non-existing topics in offset fetch handling

2017-05-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030736#comment-16030736
 ] 

Ismael Juma commented on KAFKA-4665:


[~hachikuji], is this still planned for 0.11.0.0?

> Inconsistent handling of non-existing topics in offset fetch handling
> -
>
> Key: KAFKA-4665
> URL: https://issues.apache.org/jira/browse/KAFKA-4665
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
> Fix For: 0.11.0.0
>
>
> For version 0 of the offset fetch API, the broker returns 
> UNKNOWN_TOPIC_OR_PARTITION for any topics/partitions which do not exist at 
> the time of fetching. In later versions, we skip this check. We do, however, 
> continue to return UNKNOWN_TOPIC_OR_PARTITION for authorization errors (i.e. 
> if the principal does not have Describe access to the corresponding topic). 
> We should probably make this behavior consistent across versions.
> Note also that currently the consumer raises {{KafkaException}} when it 
> encounters an UNKNOWN_TOPIC_OR_PARTITION error in the offset fetch response, 
> which is inconsistent with how we usually handle this error. This probably 
> doesn't cause any problems currently only because of the inconsistency 
> mentioned in the first paragraph above.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3986) completedReceives can contain closed channels

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3986:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> completedReceives can contain closed channels 
> --
>
> Key: KAFKA-3986
> URL: https://issues.apache.org/jira/browse/KAFKA-3986
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Reporter: Ryan P
> Fix For: 0.11.1.0
>
>
> I'm not entirely sure why at this point, but it is possible to throw a 
> NullPointerException when processing completedReceives. This happens when a 
> fairly substantial number of connections are initiated with the server 
> simultaneously. 
> The processor thread does carry on, but it may be worth investigating how a 
> channel could be both closed and present in completedReceives. 
> The NPE in question is thrown here:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/network/SocketServer.scala#L490
> It cannot be consistently reproduced either. 
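One possible defensive shape for the processing loop, sketched here in Python as a hypothetical illustration (the names and dict-based receives are stand-ins, not Kafka's actual data structures): skip receives whose channel has already been closed rather than dereferencing a missing channel.

```python
def process_completed_receives(completed_receives, open_channels):
    """Process receives, skipping any whose source channel is gone.

    completed_receives: list of {"source": channel_id, "payload": ...}
    open_channels: dict of channel_id -> channel object
    """
    processed = []
    for receive in completed_receives:
        channel = open_channels.get(receive["source"])
        if channel is None:
            # Channel closed between the read and this processing pass;
            # skipping avoids the NPE, at the cost of dropping the receive.
            continue
        processed.append((channel, receive["payload"]))
    return processed
```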



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3866) KerberosLogin refresh time bug and other improvements

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3866:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> KerberosLogin refresh time bug and other improvements
> -
>
> Key: KAFKA-3866
> URL: https://issues.apache.org/jira/browse/KAFKA-3866
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.11.0.1
>
>
> ZOOKEEPER-2295 describes a bug in the Kerberos refresh time logic that is 
> also present in our KerberosLogin class. While looking at the code, I found a 
> number of things that could be improved. More details in the PR.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5349:
---
Priority: Blocker  (was: Major)

> KafkaConsumer occasionally hits IllegalStateException
> -
>
> Key: KAFKA-5349
> URL: https://issues.apache.org/jira/browse/KAFKA-5349
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> I have noticed the following while debugging system tests. Sometimes a plain 
> old console consumer hits the following exception when reading from a topic:
> {noformat}
> [2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: Invalid attempt to complete a request future 
> which is already complete
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
> at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:58)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {noformat}
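The contract being violated can be modeled with a one-shot future. This is a simplified Python sketch of the completion rule that RequestFuture enforces, not the actual client code: a future may transition to done exactly once, and a second complete() or raise is a programming error, which is the IllegalStateException in the stack trace above.

```python
class OneShotFuture:
    """A future that can be completed exactly once."""

    def __init__(self):
        self.done = False
        self.value = None
        self.error = None

    def _check_not_done(self):
        if self.done:
            raise RuntimeError(
                "Invalid attempt to complete a request future "
                "which is already complete")

    def complete(self, value):
        self._check_not_done()
        self.done = True
        self.value = value

    def raise_error(self, exc):
        # Failing the future also counts as completion.
        self._check_not_done()
        self.done = True
        self.error = exc
```

The bug report suggests some path (here, a list-offset response handler) is reaching completion twice, e.g. once via the success callback and once via an error path.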



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)



[jira] [Updated] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5349:
---
Fix Version/s: 0.11.0.0

> KafkaConsumer occasionally hits IllegalStateException
> -
>
> Key: KAFKA-5349
> URL: https://issues.apache.org/jira/browse/KAFKA-5349
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> I have noticed the following while debugging system tests. Sometimes a plain 
> old console consumer hits the following exception when reading from a topic:
> {noformat}
> [2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: Invalid attempt to complete a request future 
> which is already complete
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
> at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:58)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-5349:
--

Assignee: Jason Gustafson

> KafkaConsumer occasionally hits IllegalStateException
> -
>
> Key: KAFKA-5349
> URL: https://issues.apache.org/jira/browse/KAFKA-5349
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> I have noticed the following while debugging system tests. Sometimes a plain 
> old console consumer hits the following exception when reading from a topic:
> {noformat}
> [2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: Invalid attempt to complete a request future 
> which is already complete
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
> at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:58)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-0.11.0-jdk7 #54

2017-05-31 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5211; Do not skip a corrupted record in consumer

--
[...truncated 905.01 KB...]

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefau

[jira] [Commented] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030742#comment-16030742
 ] 

ASF GitHub Bot commented on KAFKA-5349:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3175

KAFKA-5349: Fix illegal state error in consumer's ListOffset handler



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5349

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3175.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3175


commit 12a38b4506494c76bc315526044dad9ec217ec80
Author: Jason Gustafson 
Date:   2017-05-31T07:05:19Z

KAFKA-5349: Fix illegal state error in consumer's ListOffset handler




> KafkaConsumer occasionally hits IllegalStateException
> -
>
> Key: KAFKA-5349
> URL: https://issues.apache.org/jira/browse/KAFKA-5349
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> I have noticed the following while debugging system tests. Sometimes a plain 
> old console consumer hits the following exception when reading from a topic:
> {noformat}
> [2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: Invalid attempt to complete a request future 
> which is already complete
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
> at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:58)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {noformat}
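The guard that throws here can be illustrated with a minimal sketch (hypothetical `OneShotFuture` and `Demo` names, not the actual Kafka `RequestFuture`): a one-shot future must reject a second `complete()` or `raise()`, which is exactly what fires when a handler tries to complete a future that is already complete.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal one-shot future sketch (hypothetical, simplified from the idea in
// Kafka's RequestFuture): a second completion attempt triggers the same
// IllegalStateException message seen in the trace above.
class OneShotFuture<T> {
    private final AtomicReference<Object> result = new AtomicReference<>(null);

    void complete(T value) {
        // compareAndSet succeeds only for the first completion
        if (!result.compareAndSet(null, value))
            throw new IllegalStateException(
                "Invalid attempt to complete a request future which is already complete");
    }
}

public class Demo {
    // Returns true if the second completion is rejected, as it should be.
    static boolean secondCompletionThrows() {
        OneShotFuture<Long> future = new OneShotFuture<>();
        future.complete(100L);          // first completion succeeds
        try {
            future.complete(200L);      // e.g. a duplicate response callback
            return false;
        } catch (IllegalStateException e) {
            return true;                // the guard fired
        }
    }

    public static void main(String[] args) {
        System.out.println("second completion throws: " + secondCompletionThrows());
    }
}
```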





[GitHub] kafka pull request #3175: KAFKA-5349: Fix illegal state error in consumer's ...

2017-05-31 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3175

KAFKA-5349: Fix illegal state error in consumer's ListOffset handler



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5349

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3175.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3175


commit 12a38b4506494c76bc315526044dad9ec217ec80
Author: Jason Gustafson 
Date:   2017-05-31T07:05:19Z

KAFKA-5349: Fix illegal state error in consumer's ListOffset handler




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #1616

2017-05-31 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5211; Do not skip a corrupted record in consumer

--
[...truncated 3.54 MB...]
org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testSingleOption STARTED

org.apache.kafka.common.security.JaasContextTest > testSingleOption PASSED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes STARTED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes PASSED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
STARTED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue STARTED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback STARTED

org.apache.kafka.co

[jira] [Updated] (KAFKA-2947) AlterTopic - protocol and server side implementation

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2947:
---
Fix Version/s: (was: 0.11.0.0)

> AlterTopic - protocol and server side implementation
> 
>
> Key: KAFKA-2947
> URL: https://issues.apache.org/jira/browse/KAFKA-2947
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Colin P. McCabe
>






[jira] [Updated] (KAFKA-2939) Make AbstractConfig.logUnused() tunable for clients

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2939:
---
Fix Version/s: (was: 0.11.0.0)

> Make AbstractConfig.logUnused() tunable for clients
> ---
>
> Key: KAFKA-2939
> URL: https://issues.apache.org/jira/browse/KAFKA-2939
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: newbie
>
> Today we always log unused configs in the KafkaProducer / KafkaConsumer 
> constructors; however, in cases like Kafka Streams that make use of these 
> clients, other configs may be passed in to configure Partitioner / 
> Serializer classes, etc. It would be better to make this function call 
> optional to avoid printing unnecessary and confusing WARN entries.





[jira] [Updated] (KAFKA-2651) Remove deprecated config alteration from TopicCommand in 0.9.1.0

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2651:
---
Fix Version/s: (was: 0.11.0.0)
   0.12.0.0

> Remove deprecated config alteration from TopicCommand in 0.9.1.0
> 
>
> Key: KAFKA-2651
> URL: https://issues.apache.org/jira/browse/KAFKA-2651
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Manikumar
> Fix For: 0.12.0.0
>
>






[jira] [Commented] (KAFKA-2651) Remove deprecated config alteration from TopicCommand in 0.9.1.0

2017-05-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030768#comment-16030768
 ] 

Ismael Juma commented on KAFKA-2651:


[~omkreddy], yes, I'm aware that it has been deprecated, but no reason has been 
given.

> Remove deprecated config alteration from TopicCommand in 0.9.1.0
> 
>
> Key: KAFKA-2651
> URL: https://issues.apache.org/jira/browse/KAFKA-2651
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Manikumar
> Fix For: 0.12.0.0
>
>






[jira] [Updated] (KAFKA-2334) Prevent HW from going back during leader failover

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2334:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Prevent HW from going back during leader failover 
> --
>
> Key: KAFKA-2334
> URL: https://issues.apache.org/jira/browse/KAFKA-2334
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Guozhang Wang
>Assignee: Neha Narkhede
>  Labels: reliability
> Fix For: 0.11.1.0
>
>
> Consider the following scenario:
> 0. Kafka uses a replication factor of 2, with broker B1 as the leader and B2 
> as the follower. 
> 1. A producer keeps sending to Kafka with ack=-1.
> 2. A consumer repeatedly issues ListOffset requests to Kafka.
> And the following sequence:
> 0. B1's current log-end-offset (LEO) is 0 and its HW offset is 0; the same 
> holds for B2.
> 1. B1 receives a ProduceRequest of 100 messages, appends them to its local 
> log (LEO becomes 100), and holds the request in purgatory.
> 2. B1 receives a FetchRequest starting at offset 0 from follower B2 and 
> returns the 100 messages.
> 3. B2 appends the received messages to its local log (LEO becomes 100).
> 4. B1 receives another FetchRequest starting at offset 100 from B2, learns 
> that B2's LEO has caught up to 100, hence updates its own HW, satisfies the 
> ProduceRequest in purgatory, and sends the FetchResponse 
> with HW 100 back to B2 ASYNCHRONOUSLY.
> 5. B1 successfully sends the ProduceResponse to the producer and then fails, 
> so the FetchResponse does not reach B2, whose HW remains 0.
> From the consumer's point of view, it could first see the latest offset of 
> 100 (from B1), then see the latest offset of 0 (from B2), and then watch the 
> latest offset gradually catch up to 100.
> This is because we use the HW to guard both ListOffset and 
> fetches from ordinary consumers.
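The failover sequence above can be sketched as a small simulation (illustrative `Replica` and `HwRegressionDemo` names, not broker code): after step 5 the surviving follower B2 still has HW 0, so the "latest offset" visible to a consumer goes backwards from 100 to 0.

```java
// Illustrative replay of steps 0-5 above; hypothetical names, not broker code.
class Replica {
    long logEndOffset = 0;   // LEO
    long highWatermark = 0;  // HW
}

public class HwRegressionDemo {
    // Replays the sequence and returns {HW seen from B1, HW seen from B2 after failover}.
    static long[] replay() {
        Replica b1 = new Replica();  // leader
        Replica b2 = new Replica();  // follower
        b1.logEndOffset = 100;       // 1. leader appends 100 messages, holds request
        b2.logEndOffset = 100;       // 2-3. follower fetches and appends them
        b1.highWatermark = 100;      // 4. leader sees follower caught up, advances HW
        // 5. leader fails before the async FetchResponse carrying HW=100
        //    reaches B2, so b2.highWatermark is never updated and stays 0.
        return new long[] { b1.highWatermark, b2.highWatermark };
    }

    public static void main(String[] args) {
        long[] hw = replay();
        System.out.println("latest offset from B1: " + hw[0]);
        System.out.println("latest offset from B2 after failover: " + hw[1]);
    }
}
```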





[jira] [Updated] (KAFKA-2435) More optimally balanced partition assignment strategy

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2435:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> More optimally balanced partition assignment strategy
> -
>
> Key: KAFKA-2435
> URL: https://issues.apache.org/jira/browse/KAFKA-2435
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Fix For: 0.11.1.0
>
> Attachments: KAFKA-2435.patch
>
>
> While the roundrobin partition assignment strategy is an improvement over the 
> range strategy, when the consumer topic subscriptions are not identical 
> (previously disallowed but will be possible as of KAFKA-2172) it can produce 
> heavily skewed assignments. As suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-2172?focusedCommentId=14530767&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14530767]
>  it would be nice to have a strategy that attempts to assign an equal number 
> of partitions to each consumer in a group, regardless of how similar their 
> individual topic subscriptions are. We can accomplish this by tracking the 
> number of partitions assigned to each consumer, and having the partition 
> assignment loop assign each partition to a consumer interested in that topic 
> with the least number of partitions assigned. 
> Additionally, we can optimize the distribution fairness by adjusting the 
> partition assignment order:
> * Topics with fewer consumers are assigned first.
> * In the event of a tie for least consumers, the topic with more partitions 
> is assigned first.
> The general idea behind these two rules is to keep the most flexible 
> assignment choices available as long as possible by starting with the most 
> constrained partitions/consumers.
> This JIRA addresses the original high-level consumer. For the new consumer, 
> see KAFKA-3297.
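The two ordering rules and the greedy loop described above can be sketched as follows (a simplified model with hypothetical names, not the actual Kafka assignor):

```java
import java.util.*;

public class BalancedAssignSketch {
    // Greedy balanced assignment sketch: topicPartitions maps topic -> partition
    // count; subscriptions maps consumer -> subscribed topics. Hypothetical
    // simplification of the strategy described above, not Kafka's implementation.
    static Map<String, List<String>> assign(Map<String, Integer> topicPartitions,
                                            Map<String, Set<String>> subscriptions) {
        Map<String, List<String>> assignment = new HashMap<>();
        subscriptions.keySet().forEach(c -> assignment.put(c, new ArrayList<>()));

        // Rule 1: topics with fewer interested consumers are assigned first;
        // Rule 2: ties are broken in favor of topics with more partitions.
        List<String> topics = new ArrayList<>(topicPartitions.keySet());
        topics.sort(Comparator
            .comparingLong((String t) -> subscriptions.values().stream()
                .filter(s -> s.contains(t)).count())
            .thenComparing(t -> -topicPartitions.get(t)));

        for (String topic : topics) {
            for (int p = 0; p < topicPartitions.get(topic); p++) {
                // Give each partition to the interested consumer that currently
                // has the fewest partitions assigned.
                String target = subscriptions.entrySet().stream()
                    .filter(e -> e.getValue().contains(topic))
                    .map(Map.Entry::getKey)
                    .min(Comparator.comparingInt(c -> assignment.get(c).size()))
                    .orElseThrow(IllegalStateException::new);
                assignment.get(target).add(topic + "-" + p);
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        // c1 subscribes to both topics, c2 only to "b"; plain round-robin over
        // this input can skew, while the greedy loop balances to 3 and 3.
        Map<String, Integer> parts = new LinkedHashMap<>();
        parts.put("a", 3);
        parts.put("b", 3);
        Map<String, Set<String>> subs = new LinkedHashMap<>();
        subs.put("c1", Set.of("a", "b"));
        subs.put("c2", Set.of("b"));
        System.out.println(assign(parts, subs));
    }
}
```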





[jira] [Commented] (KAFKA-2334) Prevent HW from going back during leader failover

2017-05-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030772#comment-16030772
 ] 

Ismael Juma commented on KAFKA-2334:


Is this still an issue?

> Prevent HW from going back during leader failover 
> --
>
> Key: KAFKA-2334
> URL: https://issues.apache.org/jira/browse/KAFKA-2334
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Guozhang Wang
>Assignee: Neha Narkhede
>  Labels: reliability
> Fix For: 0.11.1.0
>
>
> Consider the following scenario:
> 0. Kafka uses a replication factor of 2, with broker B1 as the leader and B2 
> as the follower. 
> 1. A producer keeps sending to Kafka with ack=-1.
> 2. A consumer repeatedly issues ListOffset requests to Kafka.
> And the following sequence:
> 0. B1's current log-end-offset (LEO) is 0 and its HW offset is 0; the same 
> holds for B2.
> 1. B1 receives a ProduceRequest of 100 messages, appends them to its local 
> log (LEO becomes 100), and holds the request in purgatory.
> 2. B1 receives a FetchRequest starting at offset 0 from follower B2 and 
> returns the 100 messages.
> 3. B2 appends the received messages to its local log (LEO becomes 100).
> 4. B1 receives another FetchRequest starting at offset 100 from B2, learns 
> that B2's LEO has caught up to 100, hence updates its own HW, satisfies the 
> ProduceRequest in purgatory, and sends the FetchResponse 
> with HW 100 back to B2 ASYNCHRONOUSLY.
> 5. B1 successfully sends the ProduceResponse to the producer and then fails, 
> so the FetchResponse does not reach B2, whose HW remains 0.
> From the consumer's point of view, it could first see the latest offset of 
> 100 (from B1), then see the latest offset of 0 (from B2), and then watch the 
> latest offset gradually catch up to 100.
> This is because we use the HW to guard both ListOffset and 
> fetches from ordinary consumers.





[jira] [Updated] (KAFKA-1923) Negative offsets in replication-offset-checkpoint file

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1923:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> Negative offsets in replication-offset-checkpoint file
> --
>
> Key: KAFKA-1923
> URL: https://issues.apache.org/jira/browse/KAFKA-1923
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Oleg Golovin
>  Labels: reliability
> Fix For: 0.11.0.1
>
>
> Today was the second time we witnessed negative offsets in the 
> replication-offset-checkpoint file. After a restart, the node stops 
> replicating some of its partitions.
> Unfortunately we can't reproduce it yet, but the two cases we encountered 
> indicate a bug which should be addressed.





[jira] [Updated] (KAFKA-2200) kafkaProducer.send() should not call callback.onCompletion()

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2200:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> kafkaProducer.send() should not call callback.onCompletion()
> 
>
> Key: KAFKA-2200
> URL: https://issues.apache.org/jira/browse/KAFKA-2200
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
>Reporter: Jiangjie Qin
>  Labels: newbie
> Fix For: 0.11.1.0
>
>
> KafkaProducer.send() should not call callback.onCompletion() because this 
> might break the callback firing order.





[jira] [Updated] (KAFKA-3252) compression type for a topic should be used during log compaction

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3252:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> compression type for a topic should be used during log compaction 
> --
>
> Key: KAFKA-3252
> URL: https://issues.apache.org/jira/browse/KAFKA-3252
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ashish Singh
> Fix For: 0.11.1.0
>
>
> Currently, the broker uses the compression type specified for a topic for 
> newly published messages. However, during log compaction, it still uses the 
> compression codec of the original message. To be consistent, it seems that we 
> should use the topic's compression type when copying the messages to new 
> log segments.





[jira] [Updated] (KAFKA-3203) Add UnknownCodecException and UnknownMagicByteException to error mapping

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3203:
---
Fix Version/s: (was: 0.11.0.0)

> Add UnknownCodecException and UnknownMagicByteException to error mapping
> 
>
> Key: KAFKA-3203
> URL: https://issues.apache.org/jira/browse/KAFKA-3203
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Grant Henke
>
> Currently, most exceptions surfaced to the user have an error code. While 
> UnknownCodecException and UnknownMagicByteException can also be thrown to 
> the client, the broker has no error mapping for them, so clients only 
> receive UnknownServerException, which is vague.
> We should create those two exceptions in the client package and add them to 
> the error mapping.





[jira] [Updated] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3356:
---
Fix Version/s: (was: 0.11.0.0)
   0.12.0.0

> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Mickael Maison
>Priority: Blocker
> Fix For: 0.12.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9 and should be removed 
> in 0.11.





[jira] [Commented] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-05-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030792#comment-16030792
 ] 

Ismael Juma commented on KAFKA-3356:


Given the code freeze today and everything else that we still need to do, I'm 
pushing this to the next major release.

> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Mickael Maison
>Priority: Blocker
> Fix For: 0.12.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9 and should be removed 
> in 0.11.





[GitHub] kafka pull request #3102: Follow-up improvements for consumer offset reset t...

2017-05-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3102




[jira] [Resolved] (KAFKA-5266) Follow-up improvements for consumer offset reset tool (KIP-122)

2017-05-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5266.

Resolution: Fixed

Issue resolved by pull request 3102
[https://github.com/apache/kafka/pull/3102]

> Follow-up improvements for consumer offset reset tool (KIP-122)
> ---
>
> Key: KAFKA-5266
> URL: https://issues.apache.org/jira/browse/KAFKA-5266
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Jorge Quilcate
> Fix For: 0.11.0.0
>
>
> 1. We should try to ensure that offsets are in range for the topic partition. 
> We currently only verify this for the shift option.
> 2. If you provide a CSV file, you shouldn't need to specify one of the 
> --all-topics or --topic options.
> 3. We currently support a "reset to current offsets" option if none of the 
> supported reset options are provided. This seems kind of useless. Perhaps we 
> should just enforce that one of the reset options is provided.
> 4. The command fails with an NPE if we cannot find one of the offsets we are 
> trying to reset. It would be better to raise an exception with a friendlier 
> message.





[GitHub] kafka pull request #3176: MINOR: Improve assert in `ControllerFailoverTest`

2017-05-31 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3176

MINOR: Improve assert in `ControllerFailoverTest`

It sometimes fails like:

```text
java.lang.AssertionError: IllegalStateException was not thrown
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.controller.ControllerFailoverTest.testHandleIllegalStateException(ControllerFailoverTest.scala:86)
```
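The flaky pattern above (run the code, then fail if no exception surfaced) is usually tightened by asserting directly that the call raises. A small language-neutral sketch of that idea; all names here are illustrative, not the actual test code:

```python
def expect_raises(exc_type, fn):
    """Return True iff fn() raises exc_type (sketch of an assert-throws helper)."""
    try:
        fn()
    except exc_type:
        return True
    return False

def trigger_illegal_state():
    # stands in for the controller call that is expected to throw
    raise RuntimeError("illegal state")

# the assertion now names the expected exception instead of a bare fail()
assert expect_raises(RuntimeError, trigger_illegal_state)
```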

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka improve-controller-failover-assert

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3176.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3176


commit 22234b89ef75ad5953a8cc7813c295f13c1697b2
Author: Ismael Juma 
Date:   2017-05-31T07:55:42Z

MINOR: Improve assert in `ControllerFailoverTest`




---


[jira] [Commented] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-05-31 Thread Jeff Widman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030804#comment-16030804
 ] 

Jeff Widman commented on KAFKA-3356:


Is there any way to get this to land today?

This is literally just deleting a shell script and a Scala file, and there's 
already a PR ready to go.

Normally I wouldn't care if this were an internal cleanup, but since it exposes 
the shell script, I routinely get questions from folks who don't realize they 
shouldn't be using it. 

So I'd rather it get cleaned up as it improves the new user experience.

> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Mickael Maison
>Priority: Blocker
> Fix For: 0.12.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9, should be removed in 
> 0.11.





Build failed in Jenkins: kafka-trunk-jdk8 #1617

2017-05-31 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAKFA-5334: Allow rocksdb.config.setter to be specified as a String 
or

[wangguoz] KAFKA-5308; TC should handle UNSUPPORTED_FOR_MESSAGE_FORMAT in

--
[...truncated 903.63 KB...]
kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
t

Build failed in Jenkins: kafka-0.11.0-jdk7 #55

2017-05-31 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAKFA-5334: Allow rocksdb.config.setter to be specified as a String 
or

[wangguoz] KAFKA-5308; TC should handle UNSUPPORTED_FOR_MESSAGE_FORMAT in

--
[...truncated 904.02 KB...]
kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefault

Jenkins build is back to normal : kafka-trunk-jdk7 #2295

2017-05-31 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-0.11.0-jdk7 #56

2017-05-31 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3176: MINOR: Improve assert in `ControllerFailoverTest`

2017-05-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3176


---


[GitHub] kafka pull request #3175: KAFKA-5349: Fix illegal state error in consumer's ...

2017-05-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3175


---


[jira] [Commented] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030892#comment-16030892
 ] 

ASF GitHub Bot commented on KAFKA-5349:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3175


> KafkaConsumer occasionally hits IllegalStateException
> -
>
> Key: KAFKA-5349
> URL: https://issues.apache.org/jira/browse/KAFKA-5349
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> I have noticed the following while debugging system tests. Sometimes a plain 
> old console consumer hits the following exception when reading from a topic:
> {noformat}
> [2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: Invalid attempt to complete a request future 
> which is already complete
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
> at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:58)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {noformat}
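The invariant behind the trace, that a request future may be completed or raised at most once, can be sketched like this (illustrative only, not the actual `RequestFuture` code):

```python
class OneShotFuture:
    """Minimal sketch: a future that rejects a second completion, mirroring
    the IllegalStateException in the stack trace above."""

    def __init__(self):
        self._completed = False
        self._value = None

    def complete(self, value):
        if self._completed:
            raise RuntimeError("Invalid attempt to complete a request "
                               "future which is already complete")
        self._completed = True
        self._value = value

    def value(self):
        return self._value
```

Given this invariant, the fix evidently needed to remove a handler path that could complete or raise the same future twice.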





[jira] [Resolved] (KAFKA-5349) KafkaConsumer occasionally hits IllegalStateException

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5349.

Resolution: Fixed

> KafkaConsumer occasionally hits IllegalStateException
> -
>
> Key: KAFKA-5349
> URL: https://issues.apache.org/jira/browse/KAFKA-5349
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> I have noticed the following while debugging system tests. Sometimes a plain 
> old console consumer hits the following exception when reading from a topic:
> {noformat}
> [2017-05-30 22:16:55,686] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: Invalid attempt to complete a request future 
> which is already complete
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:145)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:158)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:744)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2000(Fetcher.java:91)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:688)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$3.onSuccess(Fetcher.java:683)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:451)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:282)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1614)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1055)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1015)
> at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:58)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {noformat}





Build failed in Jenkins: kafka-trunk-jdk7 #2296

2017-05-31 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5266; Follow-up improvements for consumer offset reset tool

--
[...truncated 906.76 KB...]

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType STARTED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues STARTED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities STARTED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount STARTED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist STARTED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions STARTED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas STARTED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign STARTED

kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection STARTED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData STARTED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition STARTED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testGetBrokerMetadatas STARTED

kafka.admin.AdminTest > testGetBrokerMetadatas PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig STARTED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment STARTED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange STARTED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted STARTED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment STARTED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testConcurrentTopicCreation STARTED

kafka.admin.AdminTest > testConcurrentTopicCreation PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > shouldPropagateDynamicBrokerConfigs STARTED

kafka.admin.AdminTest > shouldPropagateDynamicBrokerConfigs PASSED

kafka.admin.AdminTest > testShutdownBroker STARTED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision STARTED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK STARTED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.DeleteTopicTest > testDe

[jira] [Commented] (KAFKA-3733) Avoid long command lines by setting CLASSPATH in environment

2017-05-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030932#comment-16030932
 ] 

Christopher Mårtensson commented on KAFKA-3733:
---

I can add that the long command lines also break scripts like 
`kafka-server-stop.sh` that use `ps` to find the currently running server 
instance.

> Avoid long command lines by setting CLASSPATH in environment
> 
>
> Key: KAFKA-3733
> URL: https://issues.apache.org/jira/browse/KAFKA-3733
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Adrian Muraru
>Assignee: Adrian Muraru
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> {{kafka-run-class.sh}} sets the JVM classpath on the command line via {{-cp}}.
> This generates long command lines that get trimmed by the shell in commands 
> like ps, pgrep, etc.
> An alternative is to set the CLASSPATH in the environment.
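A sketch of the proposed alternative (the paths are illustrative; note that `java` only consults the `CLASSPATH` environment variable when neither `-cp`/`-classpath` nor `-jar` is given):

```shell
# Instead of expanding every jar onto the command line via -cp ...
CLASSPATH="/opt/kafka/libs/*"   # illustrative location of the Kafka jars
export CLASSPATH

# ... the launched command stays short, so `ps`/`pgrep` see the full line:
launch_cmd="java kafka.Kafka config/server.properties"
echo "$launch_cmd"
```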





Jenkins build is back to normal : kafka-trunk-jdk8 #1618

2017-05-31 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-4270) ClassCast for Agregation

2017-05-31 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-4270.
---
Resolution: Cannot Reproduce

Closing this, as the issue could not be reproduced and no further information 
was provided.

> ClassCast for Agregation
> 
>
> Key: KAFKA-4270
> URL: https://issues.apache.org/jira/browse/KAFKA-4270
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Mykola Polonskyi
>Assignee: Damian Guy
>Priority: Critical
>  Labels: architecture
>
> With serdes defined for the intermediate topic in an aggregation, a 
> ClassCastException is thrown when casting the custom class to ByteArray.
> While debugging I saw that the defined serde isn't used when creating the sink 
> node (inside `org.apache.kafka.streams.kstream.internals.KGroupedTableImpl#doAggregate`).
> Instead of the serde supplied in the aggregation call, a default implementation 
> with empty stubs is used.
> {code:kotlin}
> userTable.join(
> skicardsTable.groupBy { key, value -> 
> KeyValue(value.skicardInfo.ownerId, value.skicardInfo) }
> .aggregate(
> { mutableSetOf() }, 
> { ownerId, skicardInfo, accumulator -> 
> accumulator.put(skicardInfo) },
> { ownerId, skicardInfo, accumulator -> 
> accumulator }, 
> skicardByOwnerIdSerde,
> skicardByOwnerIdTopicName
> ),
> { userCreatedOrUpdated, skicardInfoSet -> 
> UserWithSkicardsCreatedOrUpdated(userCreatedOrUpdated.user, skicardInfoSet) }
> ).to(
> userWithSkicardsTable
> )
> {code}
> I think the current behavior of `doAggregate` with respect to serde and/or 
> store setup should be changed, because it is also incorrect in release 0.10.0.1-cp1.





[jira] [Resolved] (KAFKA-4953) Global Store: cast exception when initialising with in-memory logged state store

2017-05-31 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-4953.
---
Resolution: Fixed

Fixed by KAFKA-5045.

> Global Store: cast exception when initialising with in-memory logged state 
> store
> 
>
> Key: KAFKA-4953
> URL: https://issues.apache.org/jira/browse/KAFKA-4953
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Yennick Trevels
>  Labels: user-experience
>
> Currently it is not possible to initialise a global store with an in-memory 
> *logged* store via the TopologyBuilder. This results in the following 
> exception:
> {code}
> java.lang.ClassCastException: 
> org.apache.kafka.streams.processor.internals.GlobalProcessorContextImpl 
> cannot be cast to 
> org.apache.kafka.streams.processor.internals.RecordCollector$Supplier
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.<init>(StoreChangeLogger.java:52)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.<init>(StoreChangeLogger.java:44)
>   at 
> org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStore.init(InMemoryKeyValueLoggedStore.java:56)
>   at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:99)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:130)
>   at 
> org.apache.kafka.streams.processor.internals.GlobalStateManagerImpl.initialize(GlobalStateManagerImpl.java:97)
>   at 
> org.apache.kafka.streams.processor.internals.GlobalStateUpdateTask.initialize(GlobalStateUpdateTask.java:61)
>   at 
> org.apache.kafka.test.ProcessorTopologyTestDriver.<init>(ProcessorTopologyTestDriver.java:215)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorTopologyTest.shouldDriveInMemoryLoggedGlobalStore(ProcessorTopologyTest.java:235)
>   ...
> {code}
> I've created a PR which includes a unit test to verify this behavior.
> If the below PR gets merged, the fixing PR should leverage the provided test 
> {{ProcessorTopologyTest#shouldDriveInMemoryLoggedGlobalStore}} by removing 
> the {{@Ignore}} annotation.
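Generically, the ClassCastException above is the cast-to-capability pattern failing: the global processor context does not implement the record-collector supplier interface that the change logger casts to. A language-neutral sketch of that failure mode; all names below are illustrative stand-ins, not the Streams classes:

```python
class RecordCollectorSupplier:
    """Capability interface the change logger needs (illustrative)."""
    def record_collector(self):
        return "collector"

class GlobalContext:
    """Stands in for a global processor context with no record collector."""

def init_change_logger(context):
    # Guarding the capability instead of casting blindly turns the
    # ClassCastException into a clear error (or a skipped change log).
    if not isinstance(context, RecordCollectorSupplier):
        raise TypeError("context cannot supply a record collector")
    return context.record_collector()
```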





Build failed in Jenkins: kafka-trunk-jdk7 #2297

2017-05-31 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Improve assert in ControllerFailoverTest

[ismael] KAFKA-5349; Fix illegal state error in consumer's ListOffset handler

--
[...truncated 906.03 KB...]
kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType STARTED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues STARTED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities STARTED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount STARTED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist STARTED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions STARTED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas STARTED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign STARTED

kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection STARTED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData STARTED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition STARTED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testGetBrokerMetadatas STARTED

kafka.admin.AdminTest > testGetBrokerMetadatas PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig STARTED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment STARTED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange STARTED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted STARTED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment STARTED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testConcurrentTopicCreation STARTED

kafka.admin.AdminTest > testConcurrentTopicCreation PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas STARTED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > shouldPropagateDynamicBrokerConfigs STARTED

kafka.admin.AdminTest > shouldPropagateDynamicBrokerConfigs PASSED

kafka.admin.AdminTest > testShutdownBroker STARTED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision STARTED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK STARTED

kafka.admin.AdminTest > testT

[jira] [Created] (KAFKA-5352) Support automatic restart of failed tasks

2017-05-31 Thread Jiri Pechanec (JIRA)
Jiri Pechanec created KAFKA-5352:


 Summary: Support automatic restart of failed tasks
 Key: KAFKA-5352
 URL: https://issues.apache.org/jira/browse/KAFKA-5352
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Affects Versions: 0.10.2.1
Reporter: Jiri Pechanec


Sometimes a task fails when a connection to the sink/source is temporarily 
lost. The task then needs to be restarted manually. It would be very useful if 
tasks were automatically restarted by Connect when configured to do so.
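Today that manual restart typically goes through the Connect REST API. As a hedged illustration (the `/status` and task `/restart` endpoints are the commonly documented Connect REST paths; `CONNECT_URL`, `failed_task_ids`, and `restart_failed_tasks` are invented helper names, and this Python sketch is an external watchdog, not part of Connect itself):

```python
import json
import urllib.request

CONNECT_URL = "http://localhost:8083"  # assumed Connect REST endpoint


def failed_task_ids(status):
    """Return ids of tasks reported in FAILED state in a connector status payload."""
    return [t["id"] for t in status.get("tasks", []) if t.get("state") == "FAILED"]


def restart_failed_tasks(connector):
    """Poll /connectors/<name>/status and POST a restart for each FAILED task."""
    with urllib.request.urlopen(f"{CONNECT_URL}/connectors/{connector}/status") as resp:
        status = json.load(resp)
    for task_id in failed_task_ids(status):
        req = urllib.request.Request(
            f"{CONNECT_URL}/connectors/{connector}/tasks/{task_id}/restart",
            method="POST")
        urllib.request.urlopen(req)


# Pure-logic check, no running cluster needed:
sample = {"tasks": [{"id": 0, "state": "RUNNING"},
                    {"id": 1, "state": "FAILED"}]}
print(failed_task_ids(sample))  # → [1]
```

The feature request amounts to Connect running such a restart loop itself, gated by configuration, instead of leaving it to an external script.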





[GitHub] kafka pull request #3177: MINOR: Set baseSequence correctly if log append ti...

2017-05-31 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3177

MINOR: Set baseSequence correctly if log append time and no broker 
recompression

This makes it consistent with the case where there is recompression. Thanks to
@edenhill who found the issue while testing librdkafka.

The reason our tests don’t catch this is that we rely on the maxTimestamp
to compute the record level timestamps if log append time is used.
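Why the tests miss it can be shown with a simplified model of the batch timestamp fields (an illustrative Python sketch, not the actual client code; field names are approximations): under log append time the reader takes every record's timestamp from the batch maxTimestamp, so a stale base timestamp is never consulted.

```python
CREATE_TIME, LOG_APPEND_TIME = 0, 1


def record_timestamp(base_ts, record_delta, max_ts, ts_type):
    """Reconstruct one record's timestamp from batch-level fields.

    Records carry a delta against the batch base timestamp; under log append
    time the broker's append time (the batch maxTimestamp) applies to every
    record, so the base/delta pair is bypassed entirely.
    """
    if ts_type == LOG_APPEND_TIME:
        return max_ts
    return base_ts + record_delta


print(record_timestamp(1000, 7, 2000, CREATE_TIME))      # → 1007
print(record_timestamp(1000, 7, 2000, LOG_APPEND_TIME))  # → 2000 (base/delta ignored)
```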

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
set-base-sequence-for-log-append-time

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3177.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3177


commit 12777a5e0b208bf50bab399f789db56cb3e3dcb3
Author: Ismael Juma 
Date:   2017-05-31T12:05:10Z

MINOR: Set baseSequence correctly if log append time and no broker 
recompression

This makes it consistent with the case where there is recompression. Thanks to
@edenhill who found the issue while testing librdkafka.

The reason our tests don’t catch this is that we rely on the maxTimestamp
to compute the record level timestamps if log append time is used.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3178: MINOR: Use new consumer in ProducerCompressionTest

2017-05-31 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3178

MINOR: Use new consumer in ProducerCompressionTest

Hopefully this will be less flaky. If not, it should be easier
to debug than the old SimpleConsumer. For the record,
the error in Jenkins is usually something like:

```text
java.net.SocketTimeoutException
at 
sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
at 
java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
at 
org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:85)
at 
kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:100)
at 
kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:84)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:133)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:133)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:133)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:132)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:132)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:132)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:131)
at 
kafka.api.test.ProducerCompressionTest.testCompression(ProducerCompressionTest.scala:97)
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka producer-compression-test-flaky

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3178.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3178


commit fe5247e348cf9c7242f2350d105faca2a54dde24
Author: Ismael Juma 
Date:   2017-05-31T12:13:45Z

MINOR: Use new consumer in ProducerCompressionTest

Hopefully this will be less flaky. If not, it should be easier
to debug than the old SimpleConsumer. For the record,
the error in Jenkins is usually something like:

```text
java.net.SocketTimeoutException
at 
sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
at 
java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
at 
org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:85)
at 
kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:100)
at 
kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:84)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:133)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:133)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:133)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:132)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:132)
at 
kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:132)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:131)
at 
kafka.api.test.ProducerCompressionTest.testCompression(ProducerCompressionTest.scala:97)
```






Re: KIP-162: Enable topic deletion by default

2017-05-31 Thread Jim Jagielski
+1
> On May 27, 2017, at 9:27 PM, Vahid S Hashemian  
> wrote:
> 
> Sure, that sounds good.
> 
> I suggested that to keep command line behavior consistent.
> Plus, removal of ACL access is something that can be easily undone, but 
> topic deletion is not reversible.
> So, perhaps a new follow-up JIRA to this KIP to add the confirmation for 
> topic deletion.
> 
> Thanks.
> --Vahid
> 
> 
> 
> From:   Gwen Shapira 
> To: dev@kafka.apache.org, us...@kafka.apache.org
> Date:   05/27/2017 11:04 AM
> Subject:Re: KIP-162: Enable topic deletion by default
> 
> 
> 
> Thanks Vahid,
> 
> Do you mind if we leave the command-line out of scope for this?
> 
> I can see why adding confirmations, options to bypass confirmations, etc
> would be an improvement. However, I've seen no complaints about the 
> current
> behavior of the command-line and the KIP doesn't change it at all. So I'd
> rather address things separately.
> 
> Gwen
> 
> On Fri, May 26, 2017 at 8:10 PM Vahid S Hashemian 
> 
> wrote:
> 
>> Gwen, thanks for the KIP.
>> It looks good to me.
>> 
>> Just a minor suggestion: It would be great if the command asks for a
>> confirmation (y/n) before deleting the topic (similar to how removing 
> ACLs
>> works).
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> From:   Gwen Shapira 
>> To: "dev@kafka.apache.org" , Users
>> 
>> Date:   05/26/2017 07:04 AM
>> Subject:KIP-162: Enable topic deletion by default
>> 
>> 
>> 
>> Hi Kafka developers, users and friends,
>> 
>> I've added a KIP to improve our out-of-the-box usability a bit:
>> KIP-162: Enable topic deletion by default:
>> 
>> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default
> 
>> 
>> 
>> Pretty simple :) Discussion and feedback are welcome.
>> 
>> Gwen
>> 
>> 
>> 
>> 
>> 
> 
> 
> 
> 
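The y/n confirmation suggested earlier in this thread could look like the following hypothetical sketch (Python for brevity, though the actual topic command is Scala; `confirm_delete` and the prompt wording are invented):

```python
def confirm_delete(topic, input_fn=input):
    """Ask for y/n confirmation before deleting a topic; only 'y'/'yes' proceeds."""
    answer = input_fn(f"Are you sure you want to delete topic '{topic}'? (y/n): ")
    return answer.strip().lower() in ("y", "yes")


# Injecting the input function keeps the sketch testable without a terminal:
print(confirm_delete("test-topic", input_fn=lambda prompt: "y"))  # → True
print(confirm_delete("test-topic", input_fn=lambda prompt: "n"))  # → False
```

A real version would also want a --force/--yes flag so scripted invocations can bypass the prompt, which is part of why the thread defers this to a follow-up JIRA.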



Can you give me permission to add a KIP

2017-05-31 Thread Ma Tianchi
Hi,
   I want to add a KIP for KAFKA-5319 
(https://issues.apache.org/jira/browse/KAFKA-5319); please give me permission to 
add a KIP. My username is marktcma.
Thanks, Ma Tianchi

[jira] [Created] (KAFKA-5353) Set baseSequence correctly if log append time and no broker recompression

2017-05-31 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5353:
--

 Summary: Set baseSequence correctly if log append time and no 
broker recompression 
 Key: KAFKA-5353
 URL: https://issues.apache.org/jira/browse/KAFKA-5353
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Blocker
 Fix For: 0.11.0.0


We currently don't update baseSequence in this case, which is inconsistent with 
what we do if there is broker recompression.





[jira] [Updated] (KAFKA-5353) Set baseTimestamp correctly if log append time and no broker recompression

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5353:
---
Description: We currently don't update baseTimestamp in this case (just 
maxTimestamp), which is inconsistent with what we do if there is broker 
recompression.  (was: We currently don't update baseSequence in this case, 
which is inconsistent with what we do if there is broker recompression.)

> Set baseTimestamp correctly if log append time and no broker recompression 
> ---
>
> Key: KAFKA-5353
> URL: https://issues.apache.org/jira/browse/KAFKA-5353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> We currently don't update baseTimestamp in this case (just maxTimestamp), 
> which is inconsistent with what we do if there is broker recompression.





[jira] [Updated] (KAFKA-5353) Set baseTimestamp correctly if log append time and no broker recompression

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5353:
---
Summary: Set baseTimestamp correctly if log append time and no broker 
recompression   (was: Set baseSequence correctly if log append time and no 
broker recompression )

> Set baseTimestamp correctly if log append time and no broker recompression 
> ---
>
> Key: KAFKA-5353
> URL: https://issues.apache.org/jira/browse/KAFKA-5353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> We currently don't update baseSequence in this case, which is inconsistent 
> with what we do if there is broker recompression.





[jira] [Updated] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-31 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-5226:
---
Fix Version/s: (was: 0.11.0.0)
   0.11.0.1

> NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize
> --
>
> Key: KAFKA-5226
> URL: https://issues.apache.org/jira/browse/KAFKA-5226
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
> Environment: 64-bit Amazon Linux, JDK8
>Reporter: Ian Springer
>Assignee: Bill Bejeck
> Fix For: 0.11.0.1
>
> Attachments: kafka.log, streamsAppOne.log, streamsAppThree.log, 
> streamsAppTwo.log
>
>
> I saw the following NPE in our Kafka Streams app, which has 3 nodes running 
> on 3 separate machines. Out of hundreds of messages processed, the NPE only 
> occurred twice. I am not sure of the cause, so I am unable to reproduce it. 
> I'm hoping the Kafka Streams team can guess the cause based on the stack 
> trace. If I can provide any additional details about our app, please let me 
> know.
>  
> {code}
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka version : 0.10.2.1
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka commitId : e89bffd6b2eff799
> INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
> Starting beans in phase 0
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from CREATED to RUNNING.
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started 
> Kafka Stream process
> INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
> Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
> null) for group evergage-app.
> INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [] for group evergage-app
> INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 18
> INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 02:58:27,729 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> state stores
> INFO  2017-05-10 02:58:27,731 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> [14 hours pass...]
> INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [us.app.Trigger-0] for group 
> evergage-app
> INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 19
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 16:21:27,489 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 16:21:27,489 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 16:21:27,493 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> INFO  2017-05-10 16:21:30,584 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previous

[GitHub] kafka pull request #3179: MINOR: update stream docs for kip-134

2017-05-31 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3179

MINOR: update stream docs for kip-134

Add a section in the streams docs about the broker config introduced in 
KIP-134

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kip134-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3179.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3179


commit b9b9e79020e753ca184370ca755334db09505e21
Author: Damian Guy 
Date:   2017-05-31T14:55:09Z

update stream docs for kip-134






[GitHub] kafka pull request #3178: MINOR: Use new consumer in ProducerCompressionTest

2017-05-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3178




Build failed in Jenkins: kafka-trunk-jdk8 #1620

2017-05-31 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use new consumer in ProducerCompressionTest

--
[...truncated 903.56 KB...]
kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

unit.kafka.server.KafkaApisTest > 
sh

[jira] [Updated] (KAFKA-5339) Transactions system test with hard broker bounces fails sporadically

2017-05-31 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5339:

Priority: Major  (was: Blocker)

> Transactions system test with hard broker bounces fails sporadically
> 
>
> Key: KAFKA-5339
> URL: https://issues.apache.org/jira/browse/KAFKA-5339
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The transactions hard bounce test occasionally fails because the 
> transactional message copy just seems to hang. In one of the client logs, I 
> noticed: 
> {noformat}
> [2017-05-27 20:36:12,596] WARN Got error produce response with correlation id 
> 124 on topic-partition output-topic-0, retrying (2147483646 attempts left). 
> Error: NOT_LEADER_FOR_PARTITION 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-27 20:36:15,386] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$1.compare(TransactionManager.java:146)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$1.compare(TransactionManager.java:143)
> at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:721)
> at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
> at java.util.PriorityQueue.poll(PriorityQueue.java:595)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.nextRequestHandler(TransactionManager.java:351)
> at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:303)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:193)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:154)
> at java.lang.Thread.run(Thread.java:748)
> [2017-05-27 20:36:52,007] INFO Closing the Kafka producer with timeoutMillis 
> = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
> [2017-05-27 20:36:52,036] INFO Marking the coordinator knode02:9092 (id: 
> 2147483645 rack: null) dead for group transactions-test-consumer-group 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> root@7dcd60017519:/opt/kafka-dev/results/latest/TransactionsTest/test_transactions/failure_mode=hard_bounce.bounce_target=brokers/1#
> {noformat}
> This suggests that the client has gotten to a bad state which is why it stops 
> processing messages, causing the tests to fail. 
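The stack trace above shows the comparator throwing while the priority queue re-heapifies during poll(), which is a common failure mode when the ordering key dereferences a field that can be null. As an illustrative sketch only (Python, not the actual TransactionManager code; the request names and `priority` field are invented), the ordering has to handle the missing value explicitly:

```python
# Pending requests ordered by priority; one entry has no priority set
# (the Java analogue: a null field dereferenced inside the comparator,
# which surfaces as an NPE in PriorityQueue.siftDown/poll).
requests = [{"name": "EndTxn", "priority": 2},
            {"name": "FindCoordinator", "priority": None},
            {"name": "AddPartitions", "priority": 1}]


def safe_priority(req):
    """Sort key that schedules entries with a missing priority last
    instead of crashing on the missing value."""
    p = req["priority"]
    return (p is None, 0 if p is None else p)


ordered = sorted(requests, key=safe_priority)
print([r["name"] for r in ordered])  # → ['AddPartitions', 'EndTxn', 'FindCoordinator']
```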





Jenkins build is back to normal : kafka-trunk-jdk7 #2298

2017-05-31 Thread Apache Jenkins Server
See 




[jira] [Commented] (KAFKA-5352) Support automatic restart of failed tasks

2017-05-31 Thread Randall Hauch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031472#comment-16031472
 ] 

Randall Hauch commented on KAFKA-5352:
--

Potentially related to KAFKA-3819, and it's possible a single implementation 
will satisfy both.

> Support automatic restart of failed tasks
> -
>
> Key: KAFKA-5352
> URL: https://issues.apache.org/jira/browse/KAFKA-5352
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Jiri Pechanec
>
> Sometimes a task fails when a connection to the sink/source is temporarily 
> lost. The task then needs to be restarted manually. It would be very useful if 
> tasks were automatically restarted by Connect when configured to do so.





[GitHub] kafka pull request #3120: Kafka 5265

2017-05-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3120




[jira] [Resolved] (KAFKA-5265) Move ACLs, Config, NodeVersions classes into org.apache.kafka.common

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5265.

Resolution: Fixed

Issue resolved by pull request 3120
[https://github.com/apache/kafka/pull/3120]

> Move ACLs, Config, NodeVersions classes into org.apache.kafka.common
> 
>
> Key: KAFKA-5265
> URL: https://issues.apache.org/jira/browse/KAFKA-5265
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>
> We should move the `ACLs`, `Config`, and `NodeVersions` classes into 
> `org.apache.kafka.common`. That will make them easier to use in server code 
> as well as in admin client code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3180: MINOR: reuse decompression buffers in log cleaner

2017-05-31 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/3180

MINOR: reuse decompression buffers in log cleaner

follow-up to KAFKA-5150, reuse decompression buffers in the log cleaner 
thread.

@ijuma @hachikuji 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka logcleaner-decompression-buffers

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3180.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3180


commit d37ef9dfe118ec62b4091a4db72558ccbe888fb8
Author: Xavier Léauté 
Date:   2017-05-31T16:24:15Z

MINOR: reuse decompression buffers in log cleaner




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5150) LZ4 decompression is 4-5x slower than Snappy on small batches / messages

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031486#comment-16031486
 ] 

ASF GitHub Bot commented on KAFKA-5150:
---

GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/3180

MINOR: reuse decompression buffers in log cleaner

follow-up to KAFKA-5150, reuse decompression buffers in the log cleaner 
thread.

@ijuma @hachikuji 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka logcleaner-decompression-buffers

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3180.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3180


commit d37ef9dfe118ec62b4091a4db72558ccbe888fb8
Author: Xavier Léauté 
Date:   2017-05-31T16:24:15Z

MINOR: reuse decompression buffers in log cleaner




> LZ4 decompression is 4-5x slower than Snappy on small batches / messages
> 
>
> Key: KAFKA-5150
> URL: https://issues.apache.org/jira/browse/KAFKA-5150
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.11.0.0, 0.10.2.1
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
> Fix For: 0.11.0.0
>
>
> I benchmarked RecordsIterator/DeepRecordsIterator instantiation on small batch 
> sizes with small messages, after observing some performance bottlenecks in the 
> consumer.
> For batch sizes of 1 with messages of 100 bytes, LZ4 heavily underperforms 
> compared to Snappy (see benchmark below). Most of our time is currently spent 
> allocating memory blocks in KafkaLZ4BlockInputStream, due to the fact that we 
> default to larger 64kB block sizes. Some quick testing shows we could improve 
> performance by almost an order of magnitude for small batches and messages if 
> we re-used buffers between instantiations of the input stream.
> [Benchmark 
> Code|https://github.com/xvrl/kafka/blob/small-batch-lz4-benchmark/clients/src/test/java/org/apache/kafka/common/record/DeepRecordsIteratorBenchmark.java#L86]
> {code}
> Benchmark  (compressionType)  
> (messageSize)   Mode  Cnt   Score   Error  Units
> DeepRecordsIteratorBenchmark.measureSingleMessageLZ4  
>   100  thrpt   20   84802.279 ±  1983.847  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage SNAPPY  
>   100  thrpt   20  407585.747 ±  9877.073  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage   NONE  
>   100  thrpt   20  579141.634 ± 18482.093  ops/s
> {code}
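The proposed optimization can be illustrated with a small stand-alone sketch. This is not the actual KafkaLZ4BlockInputStream change, just the reuse idea: each thread caches one block-sized buffer and clears it between uses, instead of allocating a fresh 64 kB block per input-stream instantiation.

```java
import java.nio.ByteBuffer;

// Simplified stand-in for the buffer-reuse idea in the PR: each thread
// keeps one 64 kB buffer and hands it to successive decompressors,
// avoiding a fresh allocation per input-stream instantiation.
public class BufferReuse {
    private static final int BLOCK_SIZE = 64 * 1024;
    private static final ThreadLocal<ByteBuffer> CACHED =
        ThreadLocal.withInitial(() -> ByteBuffer.allocate(BLOCK_SIZE));

    // Returns a cleared, reusable buffer rather than allocating a new one.
    public static ByteBuffer acquire() {
        ByteBuffer buf = CACHED.get();
        buf.clear();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer first = acquire();
        ByteBuffer second = acquire();
        // Same thread, same underlying buffer: no repeated allocation.
        System.out.println(first == second); // true
    }
}
```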



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Request a permission to KIP

2017-05-31 Thread Guozhang Wang
Tianchi,

You should be able to create new wiki pages now.


Guozhang

On Tue, May 30, 2017 at 8:31 PM, Ma Tianchi 
wrote:

> Hi,
> I would like permission to add a KIP. My wiki username is marktcma.
> Thanks.




-- 
-- Guozhang


[GitHub] kafka pull request #3181: KAFKA-5154: Consumer fetches from revoked partitio...

2017-05-31 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3181

KAFKA-5154: Consumer fetches from revoked partitions when SyncGroup fails 
with disconnection [WIP]

Scenario is as follows:
1. Consumer subscribes to topic t1 and begins consuming
2. heartbeat fails as the group is rebalancing
3. ConsumerCoordinator.onJoinGroupPrepare is called
   3.1 onPartitionsRevoked is called
4. consumer becomes the group leader
5. sends sync group request
6. sync group is cancelled due to disconnection
7. fetch request is sent for partitions that have previously been revoked
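The fix under discussion boils down to not fetching from partitions once they have been revoked, even when the SyncGroup response that would deliver the new assignment never arrives. A toy model of that guard (names are illustrative, not Kafka's internal API):

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in (illustrative names, not Kafka's actual API): once
// onPartitionsRevoked has run, the fetcher must not request data for
// partitions that are no longer assigned, even if the SyncGroup that would
// deliver the new assignment failed with a disconnection.
public class FetchGuard {
    private final Set<String> assigned = new HashSet<>();
    private boolean rebalanceInProgress = false;

    public void onPartitionsRevoked() { assigned.clear(); rebalanceInProgress = true; }
    public void onAssignment(Set<String> parts) { assigned.addAll(parts); rebalanceInProgress = false; }

    // A fetch may only be sent for a partition that is currently assigned
    // while no rebalance is pending; a cancelled SyncGroup leaves us
    // mid-rebalance, so fetching stays blocked.
    public boolean canFetch(String partition) {
        return !rebalanceInProgress && assigned.contains(partition);
    }

    public static void main(String[] args) {
        FetchGuard g = new FetchGuard();
        g.onAssignment(Set.of("t1-0"));
        System.out.println(g.canFetch("t1-0")); // true before the rebalance
        g.onPartitionsRevoked();                // step 3.1 in the scenario
        System.out.println(g.canFetch("t1-0")); // false even if SyncGroup fails
    }
}
```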

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-5154

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3181.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3181


commit f84737d30acdb8b49e8e0d4e3da8720083e88354
Author: Damian Guy 
Date:   2017-05-31T16:40:50Z

just a test for discussion




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5154) Kafka Streams throws NPE during rebalance

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031499#comment-16031499
 ] 

ASF GitHub Bot commented on KAFKA-5154:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3181

KAFKA-5154: Consumer fetches from revoked partitions when SyncGroup fails 
with disconnection [WIP]

Scenario is as follows:
1. Consumer subscribes to topic t1 and begins consuming
2. heartbeat fails as the group is rebalancing
3. ConsumerCoordinator.onJoinGroupPrepare is called
   3.1 onPartitionsRevoked is called
4. consumer becomes the group leader
5. sends sync group request
6. sync group is cancelled due to disconnection
7. fetch request is sent for partitions that have previously been revoked

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-5154

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3181.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3181


commit f84737d30acdb8b49e8e0d4e3da8720083e88354
Author: Damian Guy 
Date:   2017-05-31T16:40:50Z

just a test for discussion




> Kafka Streams throws NPE during rebalance
> -
>
> Key: KAFKA-5154
> URL: https://issues.apache.org/jira/browse/KAFKA-5154
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Lukas Gemela
>Assignee: Damian Guy
> Attachments: 5154_problem.log, clio_afa596e9b809.gz, clio_reduced.gz, 
> clio.txt.gz
>
>
> please see attached log, Kafka streams throws NullPointerException during 
> rebalance, which is caught by our custom exception handler
> {noformat}
> 2017-04-30T17:44:17,675 INFO  kafka-coordinator-heartbeat-thread | hades 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.coordinatorDead()
>  @618 - Marking the coordinator 10.210.200.144:9092 (id: 2147483644 rack: 
> null) dead for group hades
> 2017-04-30T17:44:27,395 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onSuccess() 
> @573 - Discovered coordinator 10.210.200.144:9092 (id: 2147483644 rack: null) 
> for group hades.
> 2017-04-30T17:44:27,941 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare()
>  @393 - Revoking previously assigned partitions [poseidonIncidentFeed-27, 
> poseidonIncidentFeed-29, poseidonIncidentFeed-30, poseidonIncidentFeed-18] 
> for group hades
> 2017-04-30T17:44:27,947 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest()
>  @407 - (Re-)joining group hades
> 2017-04-30T17:44:48,468 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest()
>  @407 - (Re-)joining group hades
> 2017-04-30T17:44:53,628 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest()
>  @407 - (Re-)joining group hades
> 2017-04-30T17:45:09,587 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest()
>  @407 - (Re-)joining group hades
> 2017-04-30T17:45:11,961 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onSuccess() 
> @375 - Successfully joined group hades with generation 99
> 2017-04-30T17:45:13,126 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete()
>  @252 - Setting newly assigned partitions [poseidonIncidentFeed-11, 
> poseidonIncidentFeed-27, poseidonIncidentFeed-25, poseidonIncidentFeed-29, 
> poseidonIncidentFeed-19, poseidonIncidentFeed-18] for group hades
> 2017-04-30T17:46:37,254 INFO  kafka-coordinator-heartbeat-thread | hades 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.coordinatorDead()
>  @618 - Marking the coordinator 10.210.200.144:9092 (id: 2147483644 rack: 
> null) dead for group hades
> 2017-04-30T18:04:25,993 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onSuccess() 
> @573 - Discovered coordinator 10.210.200.144:9092 (id: 2147483644 rack: null) 
> for group hades.
> 2017-04-30T18:04:29,401 INFO  StreamThread-1 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare()
>  @393 - Revoking previously assigned partitions [poseidonIncidentFeed-11, 
> poseidonIncidentFeed-27, poseidonIncidentFeed-25, poseidonIncidentFeed-29, 
> poseidonIncidentFeed-19, poseidonIncidentFeed-18] for group hades
> 2017-04-30T18:05:10,877 INFO  StreamThread-1 
> org.apache.kafka.cl

[jira] [Assigned] (KAFKA-5319) Add a tool to make cluster replica and leader balance

2017-05-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-5319:


Assignee: Ma Tianchi

> Add a tool to make cluster replica and leader balance
> -
>
> Key: KAFKA-5319
> URL: https://issues.apache.org/jira/browse/KAFKA-5319
> Project: Kafka
>  Issue Type: New Feature
>  Components: admin
>Affects Versions: 0.10.2.1
>Reporter: Ma Tianchi
>Assignee: Ma Tianchi
>  Labels: patch
> Attachments: KAFKA-5319.patch
>
>
> When a new broker is added to the cluster, it holds no topics. When we use the 
> console command to create a topic without 'replicaAssignmentOpt', Kafka uses 
> AdminUtils.assignReplicasToBrokers to compute the replica assignment. Even if 
> the assignment is balanced at creation time, as more and more brokers are 
> added the replica distribution degrades. We can use 
> 'kafka-reassign-partitions.sh' to rebalance, but the large number of topics 
> makes that tedious. At topic creation time, Kafka also chooses a preferred 
> replica leader, placed at the first position of the AR, to balance leadership, 
> but that balance is destroyed when the cluster changes. Using 
> 'kafka-reassign-partitions.sh' for partition reassignment may also destroy the 
> leader balance, because users can change a partition's AR. The cluster may 
> then be unbalanced even though Kafka considers leader balance healthy as long 
> as every leader is the first replica in its AR.
> So we created a tool that balances the number of replicas and leaders on 
> every broker. It uses an algorithm to compute a balanced replica and leader 
> reassignment, then uses ReassignPartitionsCommand to actually rebalance the 
> cluster.
> It can be used when brokers are added to the cluster or when the cluster 
> becomes unbalanced. It does not handle moving replicas from a dead broker to 
> a live one; it only balances replicas and leaders across live brokers.
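As a rough illustration of the balancing goal (not the attached patch's actual algorithm), a simple round-robin assignment already bounds the per-broker replica skew to one and spreads preferred leaders, taken as the first replica in each AR, evenly:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of the balancing goal (not the patch's algorithm): spread the
// replicas of P partitions across B brokers round-robin so replica counts
// per broker differ by at most one; the first replica in each AR serves as
// the preferred leader.
public class ReplicaBalance {
    static Map<Integer, List<Integer>> assign(int partitions, int replicas, int brokers) {
        Map<Integer, List<Integer>> ar = new LinkedHashMap<>();
        for (int p = 0; p < partitions; p++) {
            List<Integer> brokerList = new ArrayList<>();
            for (int r = 0; r < replicas; r++) {
                brokerList.add((p + r) % brokers); // round-robin placement
            }
            ar.put(p, brokerList);
        }
        return ar;
    }

    public static void main(String[] args) {
        // 6 partitions, replication factor 2, 3 brokers: each broker ends up
        // holding 4 replicas and leading 2 partitions.
        System.out.println(assign(6, 2, 3));
    }
}
```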



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk8 #1621

2017-05-31 Thread Apache Jenkins Server
See 

--
[...truncated 904.32 KB...]
kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

unit.kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRe

[jira] [Updated] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-05-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4064:
-
Fix Version/s: (was: 0.11.0.0)
   0.11.1.0

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.11.1.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods, rangeUntil() and rangeFrom(), to support this.
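The proposed semantics map naturally onto open-ended sorted-map views. A sketch using a plain TreeMap as a stand-in for a Streams KV store (method names loosely follow the ticket's suggestion, not a final API):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch of the proposed open-ended range semantics, using a plain TreeMap
// in place of a Streams KV store. Method names loosely follow the ticket.
public class OpenRanges {
    // From the beginning of the store up to (and including) `to`.
    static <K, V> NavigableMap<K, V> rangeUntil(TreeMap<K, V> store, K to) {
        return store.headMap(to, true);
    }

    // From `from` (inclusive) to the end of the store.
    static <K, V> NavigableMap<K, V> rangeFrom(TreeMap<K, V> store, K from) {
        return store.tailMap(from, true);
    }

    public static void main(String[] args) {
        TreeMap<String, Integer> store = new TreeMap<>();
        store.put("a", 1); store.put("b", 2); store.put("c", 3);
        System.out.println(rangeUntil(store, "b").keySet()); // [a, b]
        System.out.println(rangeFrom(store, "b").keySet());  // [b, c]
    }
}
```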



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-05-31 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031598#comment-16031598
 ] 

Guozhang Wang commented on KAFKA-4064:
--

[~ijuma] I will update the fix version.

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.11.1.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods, rangeUntil() and rangeFrom(), to support this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5354) MirrorMaker not preserving headers

2017-05-31 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-5354:
--

 Summary: MirrorMaker not preserving headers
 Key: KAFKA-5354
 URL: https://issues.apache.org/jira/browse/KAFKA-5354
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Jun Rao


Currently, it doesn't seem that MirrorMaker preserves headers in 
BaseConsumerRecord. So, headers won't be preserved during mirroring.
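A toy model of the hand-off (the record types below are stand-ins for Kafka's BaseConsumerRecord/ProducerRecord, not the real classes): the fix amounts to carrying the headers across when the mirrored record is rebuilt for the producer.

```java
import java.util.Map;

// Toy model of the mirroring hand-off; the record types are stand-ins for
// Kafka's BaseConsumerRecord/ProducerRecord, not the real classes. The fix
// is simply to carry the headers across when rebuilding the record.
public class MirrorHeaders {
    record ConsumerRecord(String key, String value, Map<String, byte[]> headers) { }
    record ProducerRecord(String key, String value, Map<String, byte[]> headers) { }

    static ProducerRecord mirror(ConsumerRecord in) {
        // Passing in.headers() here is what preserves them end to end.
        return new ProducerRecord(in.key(), in.value(), in.headers());
    }

    public static void main(String[] args) {
        ConsumerRecord in =
            new ConsumerRecord("k", "v", Map.of("trace-id", new byte[]{1}));
        System.out.println(mirror(in).headers().containsKey("trace-id")); // true
    }
}
```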



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5354) MirrorMaker not preserving headers

2017-05-31 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031603#comment-16031603
 ] 

Jun Rao commented on KAFKA-5354:


[~michael.andre.pearce], could you confirm if this is indeed an issue?

> MirrorMaker not preserving headers
> --
>
> Key: KAFKA-5354
> URL: https://issues.apache.org/jira/browse/KAFKA-5354
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Jun Rao
>
> Currently, it doesn't seem that MirrorMaker preserves headers in 
> BaseConsumerRecord. So, headers won't be preserved during mirroring.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-05-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031608#comment-16031608
 ] 

Xavier Léauté commented on KAFKA-4064:
--

[~ijuma] [~guozhang] we haven't done the KIP for this.

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.11.1.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods, rangeUntil() and rangeFrom(), to support this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5159) Update Kafka documentation to mention AdminClient

2017-05-31 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-5159.

Resolution: Duplicate

> Update Kafka documentation to mention AdminClient
> -
>
> Key: KAFKA-5159
> URL: https://issues.apache.org/jira/browse/KAFKA-5159
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Colin P. McCabe
>
> We should have a brief section for the AdminClient in the Kafka docs. It's OK 
> to point to the Javadoc for more information (we do the same for the new Java 
> consumer and producer).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Kafka Connect Parquet Support?

2017-05-31 Thread Colin McCabe
Hi Clayton,

It seems like an interesting improvement.  Given that Parquet is
columnar, you would expect some space savings.  I guess the big question
is, would each batch of records become a single parquet file?  And how
does this integrate with the existing logic, which might assume that
each record can be serialized on its own?

best,
Colin


On Sun, May 7, 2017, at 02:36, Clayton Wohl wrote:
> With the Kafka Connect S3 sink, I can choose Avro or JSON output format.
> Is there any chance that Parquet will be supported?
> 
> For record-at-a-time processing, Parquet isn't a good fit. But for
> reading/writing batches of records, which is what the Kafka Connect sink
> is writing, Parquet is generally better than Avro.
> 
> Would attempting to write support for this be worthwhile?


[GitHub] kafka pull request #3182: MINOR: Use `waitUntil` to fix transient failures o...

2017-05-31 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3182

MINOR: Use `waitUntil` to fix transient failures of ControllerFailoverTest

Without it, it's possible that the assertion is checked before the exception
is thrown in the callback.
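The pattern being applied is the usual poll-until-true test utility. A minimal stand-in for what Kafka's test code provides (the real helper lives in TestUtils): polling avoids racing an asynchronous callback that records the exception after the test's first check.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;

// Minimal stand-in for the poll-until-true idea behind the fix: check the
// condition repeatedly until it holds or the timeout elapses, instead of
// asserting once and racing the asynchronous callback.
public class WaitUntil {
    static boolean waitUntil(BooleanSupplier condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                return false; // timed out; the caller's assertion then fails loudly
            }
            Thread.sleep(10);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean thrown = new AtomicBoolean(false);
        // Simulates the exception being recorded asynchronously by a callback.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            thrown.set(true);
        }).start();
        System.out.println(waitUntil(thrown::get, 1000)); // true once the callback runs
    }
}
```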

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka fix-controller-failover-flakiness

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3182.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3182


commit 705f5580ab22d3b3201113af38680bc7631b514f
Author: Ismael Juma 
Date:   2017-05-31T18:23:51Z

MINOR: Use `waitUntil` to fix transient failures of ControllerFailoverTest




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-3623) Make KStreamTestDriver extending from ExternalResource

2017-05-31 Thread Mariam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariam John reassigned KAFKA-3623:
--

Assignee: Mariam John

> Make KStreamTestDriver extending from ExternalResource
> --
>
> Key: KAFKA-3623
> URL: https://issues.apache.org/jira/browse/KAFKA-3623
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Mariam John
>  Labels: newbie, test
>
> In unit test we have lots of duplicate code for closing KStreamTestDriver 
> upon completing the test:
> {code}
> @After
> public void tearDown() {
> if (driver != null) {
> driver.close();
> }
> driver = null;
> }
> {code}
> One way to remove this duplicate code is to make KStreamTestDriver extending 
> from ExternalResource. By doing this we need to move the constructor logic 
> into a setup / init function and leave the construction empty.
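A minimal stand-in for the suggested pattern (real code would extend org.junit.rules.ExternalResource and wrap the actual KStreamTestDriver): construction stays empty, setup moves into before(), and the duplicated tearDown() logic lives once in after().

```java
// Minimal stand-in for the suggested JUnit ExternalResource pattern; real
// code would extend org.junit.rules.ExternalResource and wrap the actual
// KStreamTestDriver. The rule owns the driver's lifecycle, so tests no
// longer need the duplicated null-check-and-close @After block.
public class KStreamTestDriverRule {
    StubDriver driver;

    // Stand-in for KStreamTestDriver.
    static class StubDriver {
        boolean closed = false;
        void close() { closed = true; }
    }

    // Replaces the constructor logic: runs before each test.
    public void before() { driver = new StubDriver(); }

    // Replaces the duplicated tearDown(): runs after each test, even on failure.
    public void after() {
        if (driver != null) {
            driver.close();
            driver = null;
        }
    }

    public static void main(String[] args) {
        KStreamTestDriverRule rule = new KStreamTestDriverRule();
        rule.before();
        StubDriver d = rule.driver;
        rule.after();
        System.out.println(d.closed); // true: cleanup ran without per-test code
    }
}
```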



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Jenkins build is back to normal : kafka-trunk-jdk8 #1622

2017-05-31 Thread Apache Jenkins Server
See 




[jira] [Updated] (KAFKA-5340) Add additional test cases for batch splitting to ensure idempotent/transactional metadata is preserved

2017-05-31 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5340:

  Labels: exactly-once  (was: )
Priority: Blocker  (was: Major)

> Add additional test cases for batch splitting to ensure 
> idempotent/transactional metadata is preserved
> --
>
> Key: KAFKA-5340
> URL: https://issues.apache.org/jira/browse/KAFKA-5340
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> We noticed in the patch for KAFKA-5316 that the transactional flag was not 
> being preserved after batch splitting. We should make sure that we have all 
> the cases covered in unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5283) Update clients and server code to make sure that epoch and sequence numbers wrap around

2017-05-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-5283:
--

Assignee: Jason Gustafson

> Update clients and server code to make sure that epoch and sequence numbers 
> wrap around
> ---
>
> Key: KAFKA-5283
> URL: https://issues.apache.org/jira/browse/KAFKA-5283
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The design doc mentions that the epoch and sequence numbers will wrap around. 
> However, the current client and server code (on the producer, in the 
> `ProducerIdMapping` class, in the transaction coordinator) does not do this.
> Once all the pieces are in place we should go through and make sure that the 
> handling of sequence numbers and epoch is consistent across the board. Would 
> be good to add a system or integration test for this as well, if possible.
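The required arithmetic can be sketched in a few lines (illustrative only, not the actual ProducerIdMapping code): an increment must wrap past Integer.MAX_VALUE back to zero rather than overflow to a negative value.

```java
// Sketch of wrap-around sequence handling (illustrative, not the actual
// ProducerIdMapping code): sequences are 32-bit and must wrap past
// Integer.MAX_VALUE back to 0, so computing the next expected sequence
// uses modular arithmetic rather than a plain increment.
public class SequenceWrap {
    static int incrementSequence(int sequence, int increment) {
        // Wrap instead of overflowing to a negative value.
        if (sequence > Integer.MAX_VALUE - increment) {
            return increment - (Integer.MAX_VALUE - sequence) - 1;
        }
        return sequence + increment;
    }

    public static void main(String[] args) {
        System.out.println(incrementSequence(5, 1));                 // 6
        System.out.println(incrementSequence(Integer.MAX_VALUE, 1)); // 0
    }
}
```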



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-150 - Kafka-Streams Cogroup

2017-05-31 Thread Kyle Winkelman
Hello all,

I have spent some more time on this and the best alternative I have come up
with is:
KGroupedStream has a single cogroup call that takes an initializer and an
aggregator.
CogroupedKStream has a cogroup call that takes additional groupedStream
aggregator pairs.
CogroupedKStream has multiple aggregate methods that create the different
stores.

I plan on updating the KIP, but I want people's input on whether the
initializer should be passed in once at the beginning, or instead be required
for each call to one of the aggregate methods. The first makes more sense to
me, but it doesn't allow the user to specify different initializers for
different tables.

Thanks,
Kyle
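The single-initializer option described above can be modeled in miniature (all names here are hypothetical, not the KIP's final API): one initializer seeds the per-key aggregate, and each co-grouped stream folds into the same shared state with its own aggregator.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.BiFunction;
import java.util.function.Supplier;

// Toy illustration of the cogroup semantics under discussion (all names
// hypothetical, not the KIP's final API): a single initializer creates the
// per-key aggregate, and each co-grouped stream contributes its own
// aggregator into the same shared state.
public class CogroupSketch {
    static <V, A> void apply(Map<String, A> state, Supplier<A> initializer,
                             Map<String, List<V>> stream,
                             BiFunction<A, V, A> aggregator) {
        stream.forEach((key, values) -> {
            for (V v : values) {
                A current = state.containsKey(key) ? state.get(key) : initializer.get();
                state.put(key, aggregator.apply(current, v));
            }
        });
    }

    public static void main(String[] args) {
        Map<String, Integer> state = new TreeMap<>();
        Supplier<Integer> init = () -> 0;
        // Stream 1 contributes counts; stream 2 contributes sums. Both fold
        // into the single shared aggregate for each key.
        apply(state, init, Map.of("k1", List.of(7, 7)), (agg, v) -> agg + 1);
        apply(state, init, Map.of("k1", List.of(5)), (agg, v) -> agg + v);
        System.out.println(state); // {k1=7}: two counted events plus a sum of 5
    }
}
```

With per-aggregate initializers instead, each apply() call would carry its own seed, which is the trade-off being debated.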

On May 24, 2017 7:46 PM, "Kyle Winkelman"  wrote:

> Yeah, I really like that idea. I'll see what I can do to update the KIP and
> my PR when I have some time. I'm not sure how well creating the
> KStreamAggregates will go, though, because at that point I will have thrown
> away the type of the values. It will be type safe; I just may need to do a
> little forcing.
>
> Thanks,
> Kyle
>
> On May 24, 2017 3:28 PM, "Guozhang Wang"  wrote:
>
>> Kyle,
>>
>> Thanks for the explanations, my previous read on the wiki examples was
>> wrong.
>>
>> So I guess my motivation should be "reduced" to: can we move the window
>> specs param from "KGroupedStream#cogroup(..)" to
>> "CogroupedKStream#aggregate(..)", and my motivations are:
>>
>> 1. minor: we can reduce the #.generics in CogroupedKStream from 3 to 2.
>> 2. major: this is for extensibility of the APIs, and since we are removing
>> the "Evolving" annotations on Streams it may be harder to change it again
>> in the future. The extended use cases are that people wanted to have
>> windowed running aggregates on different granularities, e.g. "give me the
>> counts per-minute, per-hour, per-day and per-week", and today in DSL we
>> need to specify that case as multiple aggregate operators, each of which
>> gets its own state store / changelog, etc. It is possible to optimize this
>> into a single state store. The implementation would be tricky, since you
>> would need to hold windows of different lengths within one window store,
>> but from the public API point of view it could be specified as:
>>
>> CogroupedKStream stream = stream1.cogroup(stream2, ...
>> "state-store-name");
>>
>> table1 = stream.aggregate(/*per-minute window*/)
>> table2 = stream.aggregate(/*per-hour window*/)
>> table3 = stream.aggregate(/*per-day window*/)
>>
>> while underlying we are only using a single store "state-store-name" for
>> it.
>>
>>
>> Although this feature is out of the scope of this KIP, I'd like to discuss
>> whether we can "leave the door open" to make such changes without modifying
>> the public APIs.
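
The idea of serving several window granularities from one physical store can be sketched concretely. This is a toy Python model only, not how Kafka Streams window stores actually work: one dict serves per-minute and per-hour counts by keying entries on (window size, window start, record key):

```python
# Toy model of a single shared store backing windowed counts at multiple
# granularities. Illustrative only; the real window-store layout differs.

class SharedWindowStore:
    def __init__(self):
        self.store = {}  # (window_size_ms, window_start_ms, key) -> count

    def count(self, key, timestamp_ms, window_size_ms):
        # Align the timestamp to the start of its window at this granularity.
        window_start = (timestamp_ms // window_size_ms) * window_size_ms
        k = (window_size_ms, window_start, key)
        self.store[k] = self.store.get(k, 0) + 1
        return self.store[k]

MINUTE, HOUR = 60_000, 3_600_000
store = SharedWindowStore()
for ts in (0, 30_000, 90_000):  # three events for key "x"
    for size in (MINUTE, HOUR):
        store.count("x", ts, size)

# Per-minute windows split the events; the per-hour window sees all three,
# yet everything lives in the single underlying dict.
print(store.store[(MINUTE, 0, "x")])       # 2
print(store.store[(MINUTE, 60_000, "x")])  # 1
print(store.store[(HOUR, 0, "x")])         # 3
```

The tricky part Guozhang alludes to is visible even here: entries of different window lengths coexist in one keyspace, so range scans and retention have to account for the window size embedded in the key.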
>>
>> Guozhang
>>
>>
>> On Wed, May 24, 2017 at 3:57 AM, Kyle Winkelman wrote:
>>
>> > I allow defining a single window/sessionwindow one time, when you make
>> > the cogroup call from a KGroupedStream. From then on you are using the
>> > cogroup call from within CogroupedKStream, which doesn't accept any
>> > additional windows/sessionwindows.
>> >
>> > Is this what you meant by your question or did I misunderstand?
>> >
>> > On May 23, 2017 9:33 PM, "Guozhang Wang"  wrote:
>> >
>> > Another question that came to me is on "window alignment": from the KIP
>> > it seems you are allowing users to specify a (potentially different)
>> > window spec on each co-grouped input stream. So if these window specs are
>> > different, how should we "align" them across the different input streams?
>> > I think it is more natural to specify only one window spec, in the
>> >
>> > KTable CogroupedKStream#aggregate(Windows);
>> >
>> >
>> > And remove it from the cogroup() functions. WDYT?
>> >
>> >
>> > Guozhang
>> >
>> > On Tue, May 23, 2017 at 6:22 PM, Guozhang Wang 
>> wrote:
>> >
>> > > Thanks for the proposal Kyle. This is a quite common use case, to
>> > > support such a multi-way table join (i.e. N source tables with N
>> > > aggregate funcs) with a single store and N+1 serdes; I have seen lots
>> > > of people using the low-level PAPI to achieve this goal.
>> > >
>> > >
>> > > On Fri, May 19, 2017 at 10:04 AM, Kyle Winkelman <winkelman.k...@gmail.com> wrote:
>> > >
>> > >> I like your point about not handling other cases such as count and
>> > >> reduce.
>> > >>
>> > >> I think that reduce may not make sense, because reduce assumes that
>> > >> the input values are the same type as the output values. With cogroup
>> > >> there may be multiple different input types, and then your output type
>> > >> can't be multiple different things. In the case where you have all
>> > >> matching value types you can do KStreamBuilder#merge followed by the
>> > >> reduce.
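
The merge-then-reduce workaround for matching value types can be modeled in a few lines. This is a hypothetical Python sketch of the behavior, not the real KStreamBuilder API:

```python
# Toy model of "merge the streams, then reduce by key" for the case where all
# co-grouped streams share one value type. Names are illustrative only.

def merge(*streams):
    # KStreamBuilder#merge analogue: interleave all records into one stream.
    return [rec for s in streams for rec in s]

def reduce_by_key(stream, reducer):
    # Reduce analogue: same input and output type, no initializer needed.
    table = {}
    for key, value in stream:
        table[key] = reducer(table[key], value) if key in table else value
    return table

s1 = [("a", 1), ("b", 2)]
s2 = [("a", 10)]
print(reduce_by_key(merge(s1, s2), lambda acc, v: acc + v))  # {'a': 11, 'b': 2}
```

Note that `reduce_by_key` only typechecks because both streams carry integers; with heterogeneous value types the reducer signature breaks down, which is exactly why reduce does not generalize to cogroup.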
>> > >>
>> > >> As for count, I think it is possible to call count on all the
>> > >> individual grouped streams and then do joins. Otherwise we could maybe
>> > >> make a special call in KGroupedStream for this case. Because

[jira] [Updated] (KAFKA-5351) Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' state permanently

2017-05-31 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5351:

Attachment: kafka-5351.logs.tar.gz

> Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' 
> state permanently
> ---
>
> Key: KAFKA-5351
> URL: https://issues.apache.org/jira/browse/KAFKA-5351
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
> Attachments: kafka-5351.logs.tar.gz
>
>
> In the broker clean bounce test, sometimes the consumer just hangs on a 
> request to the transactional coordinator because it keeps getting a 
> `CONCURRENT_TRANSACTIONS` error. This continues for 30 seconds, until the 
> process is killed. 
> {noformat}
> [2017-05-31 04:54:14,053] DEBUG TransactionalId my-second-transactional-id -- 
> Received FindCoordinator response with error NONE 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,053] DEBUG TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,053] TRACE TransactionalId: my-second-transactional-id 
> -- Waiting 100ms before resending a transactional request 
> (transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,154] TRACE TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) to node 1 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,191] TRACE Got transactional response for 
> request:(transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,191] DEBUG TransactionalId my-second-transactional-id -- 
> Received EndTxn response with error COORDINATOR_NOT_AVAILABLE 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,192] DEBUG TransactionalId: my-second-transactional-id 
> -- Sending transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,192] TRACE TransactionalId: my-second-transactional-id 
> -- Sending transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) to 
> node 3 (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,193] TRACE Got transactional response for 
> request:(type=FindCoordinatorRequest, 
> coordinatorKey=my-second-transactional-id, coordinatorType=TRANSACTION) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,193] DEBUG TransactionalId my-second-transactional-id -- 
> Received FindCoordinator response with error NONE 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,193] DEBUG TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,193] TRACE TransactionalId: my-second-transactional-id 
> -- Waiting 100ms before resending a transactional request 
> (transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,294] TRACE TransactionalId: my-second-transactional-id 
> -- Sending transactional request (transactionalId=my-second-transactional-id, 
> producerId=2000, producerEpoch=0, result=COMMIT) to node 1 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2017-05-31 04:54:14,294] TRACE Got transactional response for 
> request:(transactionalId=my-second-transactional-id, producerId=2000, 
> producerEpoch=0, result=COMMIT) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,295] DEBUG TransactionalId my-second-transactional-id -- 
> Received EndTxn response with error CONCURRENT_TRANSACTIONS 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-05-31 04:54:14,295] DEBUG TransactionalId: my-second-transactional-id 
> 

[jira] [Commented] (KAFKA-5351) Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' state permanently

2017-05-31 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031764#comment-16031764
 ] 

Apurva Mehta commented on KAFKA-5351:
-

[~guozhang] here are all the logs from a pertinent failure: 
https://issues.apache.org/jira/secure/attachment/12870620/kafka-5351.logs.tar.gz

> Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' 
> state permanently
> ---
>
> Key: KAFKA-5351
> URL: https://issues.apache.org/jira/browse/KAFKA-5351
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
> Attachments: kafka-5351.logs.tar.gz
>
>
> In the broker clean bounce test, sometimes the consumer just hangs on a 
> request to the transactional coordinator because it keeps getting a 
> `CONCURRENT_TRANSACTIONS` error. This continues for 30 seconds, until the 
> process is killed. 

[jira] [Updated] (KAFKA-5112) Trunk compatibility tests should test against 0.10.2

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5112:
---
Priority: Blocker  (was: Major)

> Trunk compatibility tests should test against 0.10.2
> 
>
> Key: KAFKA-5112
> URL: https://issues.apache.org/jira/browse/KAFKA-5112
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> Now that 0.10.2 has been released, our trunk compatibility tests should test 
> against it.  This will ensure that 0.11 clients are backwards compatible with 
> 0.10.2 brokers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5112) Trunk compatibility tests should test against 0.10.2

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5112:
---
Fix Version/s: 0.11.0.0

> Trunk compatibility tests should test against 0.10.2
> 
>
> Key: KAFKA-5112
> URL: https://issues.apache.org/jira/browse/KAFKA-5112
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> Now that 0.10.2 has been released, our trunk compatibility tests should test 
> against it.  This will ensure that 0.11 clients are backwards compatible with 
> 0.10.2 brokers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5112) Trunk compatibility tests should test against 0.10.2

2017-05-31 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5112:
---
Status: Patch Available  (was: Open)

> Trunk compatibility tests should test against 0.10.2
> 
>
> Key: KAFKA-5112
> URL: https://issues.apache.org/jira/browse/KAFKA-5112
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Ismael Juma
>
> Now that 0.10.2 has been released, our trunk compatibility tests should test 
> against it.  This will ensure that 0.11 clients are backwards compatible with 
> 0.10.2 brokers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5351) Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' state permanently

2017-05-31 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031807#comment-16031807
 ] 

Apurva Mehta commented on KAFKA-5351:
-

I found the bug. The problem is that the coordinator retains its pending state 
even though it sends a retriable error to the client. So when the client 
retries, the coordinator sees a pending state defined and returns 
`CONCURRENT_TRANSACTIONS`, a state it can never get out of until it is 
bounced.

The solution is to clear the pending state when the coordinator returns a 
retriable error, because in reality there is nothing pending at that point.
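
The failure mode and the fix can be captured in a toy state machine. This is purely illustrative Python, not the actual TransactionCoordinator code: if pending state survives a retriable error, every retry sees "pending" and gets CONCURRENT_TRANSACTIONS forever; clearing it lets the retry proceed once the coordinator is back:

```python
# Toy model of the coordinator bug: pending state retained across a retriable
# error causes a permanent CONCURRENT_TRANSACTIONS loop. Illustrative only.

class Coordinator:
    def __init__(self, clear_pending_on_retriable_error):
        self.clear_on_error = clear_pending_on_retriable_error
        self.pending = None
        self.available = False  # simulates the window during a bounce

    def end_txn(self):
        if self.pending is not None:
            return "CONCURRENT_TRANSACTIONS"
        self.pending = "PrepareCommit"
        if not self.available:
            if self.clear_on_error:
                self.pending = None  # the fix: nothing is really pending
            return "COORDINATOR_NOT_AVAILABLE"  # retriable
        self.pending = None
        return "NONE"

def retry_until_ok(coord, max_tries=5):
    err = None
    for _ in range(max_tries):
        err = coord.end_txn()
        if err == "NONE":
            return err
        coord.available = True  # broker finishes coming back up
    return err

print(retry_until_ok(Coordinator(clear_pending_on_retriable_error=False)))
# stuck forever: CONCURRENT_TRANSACTIONS
print(retry_until_ok(Coordinator(clear_pending_on_retriable_error=True)))
# NONE
```

The buggy variant never recovers even after the broker is available again, matching the 30-second retry loop seen in the producer logs above.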

> Broker clean bounce test puts the broker into a 'CONCURRENT_TRANSACTIONS' 
> state permanently
> ---
>
> Key: KAFKA-5351
> URL: https://issues.apache.org/jira/browse/KAFKA-5351
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
> Attachments: kafka-5351.logs.tar.gz
>
>
> In the broker clean bounce test, sometimes the consumer just hangs on a 
> request to the transactional coordinator because it keeps getting a 
> `CONCURRENT_TRANSACTIONS` error. This continues for 30 seconds, until the 
> process is killed. 

[jira] [Commented] (KAFKA-2526) Console Producer / Consumer's serde config is not working

2017-05-31 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031849#comment-16031849
 ] 

Guozhang Wang commented on KAFKA-2526:
--

[~huxi_2b] are you picking this up?

> Console Producer / Consumer's serde config is not working
> -
>
> Key: KAFKA-2526
> URL: https://issues.apache.org/jira/browse/KAFKA-2526
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>  Labels: newbie
>
> Although in the console producer one can specify the key/value serializers, 
> they are actually not used, since 1) it always serializes the input string as 
> String.getBytes (hence it always pre-assumes the string serializer) and 2) it 
> is actually only passed into the old producer. The same issues exist in the 
> console consumer.
> In addition, the configs in the console producer are messy: we have 1) some 
> config values exposed as cmd parameters, 2) some config values in 
> --producer-property, and 3) some in --property.
> It would be great to clean the configs up in both the console producer and 
> consumer, and put them into a single --property parameter, which could 
> possibly take a file for reading in property values as well, and only leave 
> --new-producer as the other command line parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3183: KAFKA-5283: Handle producer epoch/sequence overflo...

2017-05-31 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3183

KAFKA-5283: Handle producer epoch/sequence overflow by wrapping around



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5283

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3183


commit 1b92d3958a50fff256262bcf6c3cf4b5747fb580
Author: Jason Gustafson 
Date:   2017-05-31T20:16:04Z

KAFKA-5283: Handle producer epoch/sequence overflow by wrapping around




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5283) Update clients and server code to make sure that epoch and sequence numbers wrap around

2017-05-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5283:
---
Status: Patch Available  (was: Open)

> Update clients and server code to make sure that epoch and sequence numbers 
> wrap around
> ---
>
> Key: KAFKA-5283
> URL: https://issues.apache.org/jira/browse/KAFKA-5283
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The design doc mentions that the epoch and sequence numbers will wrap around. 
> However, the current client and server code (on the producer, in the 
> `ProducerIdMapping` class, in the transaction coordinator) does not do this.
> Once all the pieces are in place we should go through and make sure that the 
> handling of sequence numbers and epoch is consistent across the board. Would 
> be good to add a system or integration test for this as well, if possible.
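
The wraparound behavior the design doc calls for is simple modular arithmetic. A minimal sketch, assuming the KIP-98 field widths (sequence as an int32, producer epoch as an int16), which is an assumption about the message format rather than code from the actual `ProducerIdMapping` class:

```python
# Sketch of epoch/sequence wraparound, assuming int32 sequences and int16
# epochs. Wrapping to 0 at the maximum avoids overflow during long-lived
# producer sessions.

INT32_MAX = 2**31 - 1  # assumed sequence bound
INT16_MAX = 2**15 - 1  # assumed producer-epoch bound

def next_sequence(seq):
    return 0 if seq == INT32_MAX else seq + 1

def next_epoch(epoch):
    return 0 if epoch == INT16_MAX else epoch + 1

print(next_sequence(5))          # 6
print(next_sequence(INT32_MAX))  # 0  (wraps instead of overflowing)
print(next_epoch(INT16_MAX))     # 0
```

The consistency concern in the issue is that every component touching these fields (producer, `ProducerIdMapping`, transaction coordinator) must apply the same wrap rule, or sequence validation will reject legitimate records at the boundary.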



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5283) Update clients and server code to make sure that epoch and sequence numbers wrap around

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031875#comment-16031875
 ] 

ASF GitHub Bot commented on KAFKA-5283:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3183

KAFKA-5283: Handle producer epoch/sequence overflow by wrapping around



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5283

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3183


commit 1b92d3958a50fff256262bcf6c3cf4b5747fb580
Author: Jason Gustafson 
Date:   2017-05-31T20:16:04Z

KAFKA-5283: Handle producer epoch/sequence overflow by wrapping around




> Update clients and server code to make sure that epoch and sequence numbers 
> wrap around
> ---
>
> Key: KAFKA-5283
> URL: https://issues.apache.org/jira/browse/KAFKA-5283
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The design doc mentions that the epoch and sequence numbers will wrap around. 
> However, the current client and server code (on the producer, in the 
> `ProducerIdMapping` class, in the transaction coordinator) does not do this.
> Once all the pieces are in place we should go through and make sure that the 
> handling of sequence numbers and epoch is consistent across the board. Would 
> be good to add a system or integration test for this as well, if possible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5093) Load only batch header when rebuilding producer ID map

2017-05-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5093:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3160
[https://github.com/apache/kafka/pull/3160]

> Load only batch header when rebuilding producer ID map
> --
>
> Key: KAFKA-5093
> URL: https://issues.apache.org/jira/browse/KAFKA-5093
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When rebuilding the producer ID map for KIP-98, we unnecessarily load the 
> full record data into memory when scanning through the log. It would be 
> better to only load the batch header since it is all that is needed.
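
The optimization can be sketched at the byte level: read a fixed-size batch header, record what you need, and seek past the payload instead of materializing it. The header layout below (8-byte base offset, 4-byte batch length, 8-byte producer id) is invented for illustration and is not the real Kafka log format:

```python
# Toy sketch of rebuilding a producer ID map by reading batch headers only.
# The on-disk layout here is hypothetical, not Kafka's actual RecordBatch.

import struct

HEADER = struct.Struct(">qiq")  # base_offset, batch_length, producer_id

def write_batch(buf, base_offset, producer_id, payload):
    return buf + HEADER.pack(base_offset, len(payload), producer_id) + payload

def scan_producer_ids(buf):
    """Walk the log reading headers only, skipping each payload entirely."""
    pids, pos = set(), 0
    while pos < len(buf):
        base_offset, length, pid = HEADER.unpack_from(buf, pos)
        pids.add(pid)
        pos += HEADER.size + length  # jump over the record data
    return pids

log = b""
log = write_batch(log, 0, 1001, b"x" * 1000)
log = write_batch(log, 10, 2000, b"y" * 1000)
print(sorted(scan_producer_ids(log)))  # [1001, 2000]
```

The scan touches 20 bytes per batch regardless of payload size, which is the memory saving the issue describes.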



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3160: KAFKA-5093: Avoid loading full batch data when pos...

2017-05-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3160


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5093) Load only batch header when rebuilding producer ID map

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16031975#comment-16031975
 ] 

ASF GitHub Bot commented on KAFKA-5093:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3160


> Load only batch header when rebuilding producer ID map
> --
>
> Key: KAFKA-5093
> URL: https://issues.apache.org/jira/browse/KAFKA-5093
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When rebuilding the producer ID map for KIP-98, we unnecessarily load the 
> full record data into memory when scanning through the log. It would be 
> better to only load the batch header since it is all that is needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5355) Broker returns messages beyond "latest stable offset" to transactional consumer

2017-05-31 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5355:
---
Fix Version/s: 0.11.0.0

> Broker returns messages beyond "latest stable offset" to transactional 
> consumer
> ---
>
> Key: KAFKA-5355
> URL: https://issues.apache.org/jira/browse/KAFKA-5355
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
> Attachments: test.log
>
>
> This issue is exposed by the new (not yet committed) Streams EOS integration 
> test.
> Streams has two tasks (ie, two producers with {{pid}} 0 and 2000) both 
> writing to output topic {{output}} with one partition (replication factor 1).
> The test uses a transactional consumer with {{group.id=readCommitted}} to 
> read the data from the {{output}} topic. When it reads the data, each producer 
> has committed 10 records (one producer writes messages with {{key=0}} and the 
> other with {{key=1}}). Furthermore, each producer has an open transaction with 
> 5 uncommitted records written.
> The test fails, as we expect to see 10 records per key, but we get 15 for 
> key=1:
> {noformat}
> java.lang.AssertionError: 
> Expected: <[KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 6), 
> KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), 
> KeyValue(1, 36), KeyValue(1, 45)]>
>  but: was <[KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 
> 6), KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), 
> KeyValue(1, 36), KeyValue(1, 45), KeyValue(1, 55), KeyValue(1, 66), 
> KeyValue(1, 78), KeyValue(1, 91), KeyValue(1, 105)]>
> {noformat}
> Dumping the segment shows that there are two commit markers (one for each 
> producer) covering the first 10 messages written by each producer. Furthermore, 
> there are 5 pending records. Thus, the "latest stable offset" should be 21 (20 
> messages plus 2 commit markers), and no data should be returned beyond this 
> offset.
> Dumped Log Segment {{output-0}}
> {noformat}
> Starting offset: 0
> baseOffset: 0 lastOffset: 9 baseSequence: 0 lastSequence: 9 producerId: 0 
> producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 0 
> CreateTime: 1496255947332 isvalid: true size: 291 magic: 2 compresscodec: 
> NONE crc: 600535135
> baseOffset: 10 lastOffset: 10 baseSequence: -1 lastSequence: -1 producerId: 0 
> producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 291 
> CreateTime: 1496256005429 isvalid: true size: 78 magic: 2 compresscodec: NONE 
> crc: 3458060752
> baseOffset: 11 lastOffset: 20 baseSequence: 0 lastSequence: 9 producerId: 
> 2000 producerEpoch: 2 partitionLeaderEpoch: 0 isTransactional: true position: 
> 369 CreateTime: 1496255947322 isvalid: true size: 291 magic: 2 compresscodec: 
> NONE crc: 3392915713
> baseOffset: 21 lastOffset: 25 baseSequence: 10 lastSequence: 14 producerId: 0 
> producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 660 
> CreateTime: 1496255947342 isvalid: true size: 176 magic: 2 compresscodec: 
> NONE crc: 3513911368
> baseOffset: 26 lastOffset: 26 baseSequence: -1 lastSequence: -1 producerId: 
> 2000 producerEpoch: 2 partitionLeaderEpoch: 0 isTransactional: true position: 
> 836 CreateTime: 1496256011784 isvalid: true size: 78 magic: 2 compresscodec: 
> NONE crc: 1619151485
> {noformat}
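
The "latest stable offset" rule the report appeals to can be computed from the batch metadata in the dump: the LSO is the first offset of the earliest still-open transaction. A toy Python computation over those batches, illustrative only and not the broker's actual implementation:

```python
# Toy LSO computation over batch metadata from the segment dump: the LSO is
# the base offset of the earliest transaction with no commit/abort marker yet.

def latest_stable_offset(batches, log_end_offset):
    """batches: (base_offset, last_offset, producer_id, is_control) tuples,
    in offset order."""
    open_txn_start = {}  # producer_id -> first offset of its open transaction
    for base, last, pid, is_control in batches:
        if is_control:
            open_txn_start.pop(pid, None)  # a marker closes that txn
        elif pid not in open_txn_start:
            open_txn_start[pid] = base
    return min(open_txn_start.values(), default=log_end_offset)

# Batches from the dump: pid 0 data at 0-9, its marker at 10; pid 2000 data at
# 11-20; pid 0's open txn at 21-25; pid 2000's marker at 26.
batches = [
    (0, 9, 0, False),
    (10, 10, 0, True),
    (11, 20, 2000, False),
    (21, 25, 0, False),
    (26, 26, 2000, True),
]
print(latest_stable_offset(batches, log_end_offset=27))  # 21
```

Pid 0's second transaction (offsets 21-25) has no marker, so the LSO is 21; a read_committed consumer returning the 5 records at 21-25, as the test observed, violates this bound.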
> Dump with {{--deep-iteration}}
> {noformat}
> Starting offset: 0
> offset: 0 position: 0 CreateTime: 1496255947323 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 0 
> headerKeys: [] key: 1 payload: 0
> offset: 1 position: 0 CreateTime: 1496255947324 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 1 
> headerKeys: [] key: 1 payload: 1
> offset: 2 position: 0 CreateTime: 1496255947325 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 2 
> headerKeys: [] key: 1 payload: 3
> offset: 3 position: 0 CreateTime: 1496255947326 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 3 
> headerKeys: [] key: 1 payload: 6
> offset: 4 position: 0 CreateTime: 1496255947327 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 4 
> headerKeys: [] key: 1 payload: 10
> offset: 5 position: 0 CreateTime: 1496255947328 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 5 
> headerKeys: [] key: 1 payload: 15
> offset: 6 position: 0 CreateTime: 1496255947329 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 6 
> hea

[jira] [Created] (KAFKA-5355) Broker returns messages beyond "latest stable offset" to transactional consumer

2017-05-31 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5355:
--

 Summary: Broker returns messages beyond "latest stable offset" to 
transactional consumer
 Key: KAFKA-5355
 URL: https://issues.apache.org/jira/browse/KAFKA-5355
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.0
Reporter: Matthias J. Sax
Priority: Blocker
 Attachments: test.log

This issue is exposed by the new (not yet committed) Streams EOS integration 
test.

Streams has two tasks (i.e., two producers with {{pid}} 0 and 2000), both writing 
to output topic {{output}} with one partition (replication factor 1).

The test uses a transactional consumer with {{group.id=readCommitted}} to read 
the data from the {{output}} topic. When it reads the data, each producer has 
committed 10 records (one producer writes messages with {{key=0}} and the other 
with {{key=1}}). Furthermore, each producer has an open transaction with 5 
uncommitted records written.
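
For context (this config is not shown in the issue, so it is an assumption about the test setup): reading only committed data requires the consumer-side {{isolation.level}} setting introduced in 0.11, roughly:

```
group.id=readCommitted
isolation.level=read_committed
```

With {{isolation.level=read_committed}}, the broker is supposed to cap fetches at the latest stable offset, which is exactly the guarantee that appears to be violated here.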

The test fails: we expect to see 10 records per key, but get 15 for {{key=1}}:
{noformat}
java.lang.AssertionError: 
Expected: <[KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 6), 
KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), KeyValue(1, 
36), KeyValue(1, 45)]>
 but: was <[KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 6), 
KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), KeyValue(1, 
36), KeyValue(1, 45), KeyValue(1, 55), KeyValue(1, 66), KeyValue(1, 78), 
KeyValue(1, 91), KeyValue(1, 105)]>
{noformat}

Dumping the segment shows that there are two commit markers (one for each 
producer) covering the first 10 messages each producer wrote. Furthermore, there 
are 5 pending records. Thus, the "latest stable offset" should be 21 (offsets 
0-20 hold the 20 committed messages plus the first commit marker; the 
still-open transaction starts at offset 21), and no data should be returned 
beyond this offset.

Dumped Log Segment {{output-0}}
{noformat}
Starting offset: 0
baseOffset: 0 lastOffset: 9 baseSequence: 0 lastSequence: 9 producerId: 0 
producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 0 
CreateTime: 1496255947332 isvalid: true size: 291 magic: 2 compresscodec: NONE 
crc: 600535135
baseOffset: 10 lastOffset: 10 baseSequence: -1 lastSequence: -1 producerId: 0 
producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 291 
CreateTime: 1496256005429 isvalid: true size: 78 magic: 2 compresscodec: NONE 
crc: 3458060752
baseOffset: 11 lastOffset: 20 baseSequence: 0 lastSequence: 9 producerId: 2000 
producerEpoch: 2 partitionLeaderEpoch: 0 isTransactional: true position: 369 
CreateTime: 1496255947322 isvalid: true size: 291 magic: 2 compresscodec: NONE 
crc: 3392915713
baseOffset: 21 lastOffset: 25 baseSequence: 10 lastSequence: 14 producerId: 0 
producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 660 
CreateTime: 1496255947342 isvalid: true size: 176 magic: 2 compresscodec: NONE 
crc: 3513911368
baseOffset: 26 lastOffset: 26 baseSequence: -1 lastSequence: -1 producerId: 
2000 producerEpoch: 2 partitionLeaderEpoch: 0 isTransactional: true position: 
836 CreateTime: 1496256011784 isvalid: true size: 78 magic: 2 compresscodec: 
NONE crc: 1619151485
{noformat}
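
A minimal sketch (my own illustration, not Kafka's actual broker code) of how an LSO of 21 follows from the batch metadata dumped above: the LSO is the first offset of the earliest transaction that has no commit/abort marker yet.

```python
# Batch metadata transcribed from the DumpLogSegments output above:
# (baseOffset, lastOffset, producerId, isControlBatch)
batches = [
    (0, 9, 0, False),       # pid 0, first transaction
    (10, 10, 0, True),      # commit marker for pid 0
    (11, 20, 2000, False),  # pid 2000, first transaction
    (21, 25, 0, False),     # pid 0, second (still open) transaction
    (26, 26, 2000, True),   # commit marker for pid 2000
]

def latest_stable_offset(batches):
    open_txn_start = {}  # producerId -> first offset of its open transaction
    log_end = 0
    for base, last, pid, is_control in batches:
        log_end = last + 1
        if is_control:
            # A control batch (commit/abort marker) completes the transaction.
            open_txn_start.pop(pid, None)
        else:
            # First data batch of a transaction marks where it starts.
            open_txn_start.setdefault(pid, base)
    # LSO = first offset of the earliest still-open transaction,
    # or the log end offset if nothing is pending.
    return min(open_txn_start.values(), default=log_end)

print(latest_stable_offset(batches))  # -> 21
```

With these batches, pid 0's second transaction (offsets 21-25) is the only one without a marker, so the fetch should stop at offset 21; offsets 21-25 are exactly the five extra {{key=1}} records the consumer saw.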

Dump with {{--deep-iteration}}
{noformat}
Starting offset: 0
offset: 0 position: 0 CreateTime: 1496255947323 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 0 
headerKeys: [] key: 1 payload: 0
offset: 1 position: 0 CreateTime: 1496255947324 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 1 
headerKeys: [] key: 1 payload: 1
offset: 2 position: 0 CreateTime: 1496255947325 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 2 
headerKeys: [] key: 1 payload: 3
offset: 3 position: 0 CreateTime: 1496255947326 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 3 
headerKeys: [] key: 1 payload: 6
offset: 4 position: 0 CreateTime: 1496255947327 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 4 
headerKeys: [] key: 1 payload: 10
offset: 5 position: 0 CreateTime: 1496255947328 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 5 
headerKeys: [] key: 1 payload: 15
offset: 6 position: 0 CreateTime: 1496255947329 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 6 
headerKeys: [] key: 1 payload: 21
offset: 7 position: 0 CreateTime: 1496255947330 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 7 
headerKeys: [] key: 1 payload: 28
offset: 8 position: 0 CreateTime: 1496255947331 isvalid: true keysize: 8 
valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 8 
headerKeys: [] key: 1 payload: 36
offset: 9 position: 0 CreateTime: 1496255947332 isvalid
