[jira] [Created] (KAFKA-7495) AdminClient thread dies on invalid input

2018-10-10 Thread JIRA
Xavier Léauté created KAFKA-7495:


 Summary: AdminClient thread dies on invalid input
 Key: KAFKA-7495
 URL: https://issues.apache.org/jira/browse/KAFKA-7495
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Xavier Léauté


The following code results in an uncaught IllegalArgumentException in the admin 
client thread, killing that thread and leaving a zombie admin client.

{code}
AclBindingFilter aclFilter = new AclBindingFilter(
new ResourcePatternFilter(ResourceType.UNKNOWN, null, PatternType.ANY),
AccessControlEntryFilter.ANY
);
kafkaAdminClient.describeAcls(aclFilter).values().get();
{code}

See the resulting stack trace below:
{code}
ERROR [kafka-admin-client-thread | adminclient-3] Uncaught exception in thread 
'kafka-admin-client-thread | adminclient-3': 
(org.apache.kafka.common.utils.KafkaThread)
java.lang.IllegalArgumentException: Filter contain UNKNOWN elements
at 
org.apache.kafka.common.requests.DescribeAclsRequest.validate(DescribeAclsRequest.java:140)
at 
org.apache.kafka.common.requests.DescribeAclsRequest.&lt;init&gt;(DescribeAclsRequest.java:92)
at 
org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:77)
at 
org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:67)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:450)
at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:411)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:910)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1107)
at java.base/java.lang.Thread.run(Thread.java:844)
{code}
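The underlying problem is more general than ACLs: an exception thrown while building a request on the I/O thread kills the whole thread instead of failing only the offending call. A minimal, self-contained sketch of the safer pattern (all names here are illustrative, not Kafka's actual internals):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SafeDispatch {
    // A build step that may throw on invalid input (stands in for
    // DescribeAclsRequest.Builder.build()).
    static String buildRequest(String filter) {
        if ("UNKNOWN".equals(filter))
            throw new IllegalArgumentException("Filter contains UNKNOWN elements");
        return "request(" + filter + ")";
    }

    // Instead of letting the worker thread die, complete the caller's
    // future exceptionally and keep the thread alive for other calls.
    static CompletableFuture<String> send(ExecutorService ioThread, String filter) {
        CompletableFuture<String> result = new CompletableFuture<>();
        ioThread.execute(() -> {
            try {
                result.complete(buildRequest(filter));
            } catch (RuntimeException e) {
                result.completeExceptionally(e); // fail this call only; thread survives
            }
        });
        return result;
    }

    public static void main(String[] args) {
        ExecutorService io = Executors.newSingleThreadExecutor();
        System.out.println(send(io, "UNKNOWN")
            .handle((v, e) -> e != null ? "failed: " + e.getMessage() : v).join());
        System.out.println(send(io, "ANY").join()); // thread still usable afterwards
        io.shutdown();
    }
}
```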



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7506) KafkaStreams repartition topic settings not suitable for processing old records

2018-10-15 Thread JIRA
Niklas Lönn created KAFKA-7506:
--

 Summary: KafkaStreams repartition topic settings not suitable for 
processing old records
 Key: KAFKA-7506
 URL: https://issues.apache.org/jira/browse/KAFKA-7506
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.0
Reporter: Niklas Lönn


Hi, we are using Kafka Streams to process a compacted store. When resetting the 
application and processing from scratch, the default topic configuration for 
repartition topics is 50 MB and 10 min segment sizes.

As retention.ms is undefined, the default retention applies, and the log 
cleaner starts competing with the application, effectively causing the streams 
app to skip records.

The application logs the following:
{code}
Fetch offset 213792 is out of range for partition app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-7, resetting offset
Fetch offset 110227 is out of range for partition app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-2, resetting offset
Resetting offset for partition app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-7 to offset 233302.
Resetting offset for partition app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-2 to offset 119914.
{code}

Adding the following configuration to RepartitionTopicConfig.java solves the 
issue:

{{tempTopicDefaultOverrides.put(TopicConfig.RETENTION_MS_CONFIG, "-1"); // 
Infinite}}

 
 My understanding is that this should be safe, as Kafka Streams uses the admin 
API to delete segments.
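For applications that cannot patch RepartitionTopicConfig, Kafka Streams also accepts client-side overrides for the internal topics it creates via "topic."-prefixed properties (cf. StreamsConfig.topicPrefix), in versions where that prefix is applied to repartition topics. A hypothetical sketch with no Kafka dependency and placeholder connection values:

```java
import java.util.Properties;

public class RepartitionRetention {
    // Properties prefixed with "topic." are applied by Kafka Streams to the
    // internal (repartition/changelog) topics it creates. Setting infinite
    // retention, as the reporter suggests, keeps the log cleaner from racing
    // the application while it reprocesses old records.
    static Properties streamsProps() {
        Properties p = new Properties();
        p.setProperty("application.id", "app-id");            // placeholder
        p.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        p.setProperty("topic.retention.ms", "-1");            // infinite retention
        return p;
    }
}
```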





[jira] [Created] (KAFKA-7531) NPE NullPointerException at TransactionCoordinator handleEndTransaction

2018-10-23 Thread JIRA
Sebastian Puzoń created KAFKA-7531:
--

 Summary: NPE NullPointerException at TransactionCoordinator 
handleEndTransaction
 Key: KAFKA-7531
 URL: https://issues.apache.org/jira/browse/KAFKA-7531
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 2.0.0
Reporter: Sebastian Puzoń


Kafka cluster with 4 brokers, 1 topic (20 partitions), and 1 ZooKeeper.

The Streams application runs 4 instances, each with 5 stream threads, 20 stream 
threads in total.

I observe a NullPointerException at the coordinator broker which causes all 
application stream threads to shut down; here is the stack from the broker:
{code:java}
[2018-10-22 21:50:46,755] INFO [GroupCoordinator 2]: Member 
elog_agg-client-sswvlp6802-StreamThread-4-consumer-cbcd4704-a346-45ea-80f9-96f62fc2dabe
 in group elo
g_agg has failed, removing it from the group 
(kafka.coordinator.group.GroupCoordinator)
[2018-10-22 21:50:46,755] INFO [GroupCoordinator 2]: Preparing to rebalance 
group elog_agg with old generation 49 (__consumer_offsets-21) 
(kafka.coordinator.gro
up.GroupCoordinator)

[2018-10-22 21:51:17,519] INFO [GroupCoordinator 2]: Stabilized group elog_agg 
generation 50 (__consumer_offsets-21) (kafka.coordinator.group.GroupCoordinator)
[2018-10-22 21:51:17,524] INFO [GroupCoordinator 2]: Assignment received from 
leader for group elog_agg for generation 50 
(kafka.coordinator.group.GroupCoordina
tor)
[2018-10-22 21:51:27,596] INFO [TransactionCoordinator id=2] Initialized 
transactionalId elog_agg-0_14 with producerId 1001 and producer epoch 20 on 
partition _
_transaction_state-16 (kafka.coordinator.transaction.TransactionCoordinator)
[2018-10-22 21:52:00,920] ERROR [KafkaApi-2] Error when handling request 
{transactional_id=elog_agg-0_3,producer_id=1004,producer_epoch=16,transaction_result=tr
ue} (kafka.server.KafkaApis)
java.lang.NullPointerException
 at 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$7(TransactionCoordinator.scala:393)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
 at 
kafka.coordinator.transaction.TransactionMetadata.inLock(TransactionMetadata.scala:172)
 at 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$5(TransactionCoordinator.scala:383)
 at scala.util.Either$RightProjection.flatMap(Either.scala:702)
 at 
kafka.coordinator.transaction.TransactionCoordinator.sendTxnMarkersCallback$1(TransactionCoordinator.scala:372)
 at 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$12(TransactionCoordinator.scala:437)
 at 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$12$adapted(TransactionCoordinator.scala:437)
 at 
kafka.coordinator.transaction.TransactionStateManager.updateCacheCallback$1(TransactionStateManager.scala:581)
 at 
kafka.coordinator.transaction.TransactionStateManager.$anonfun$appendTransactionToLog$15(TransactionStateManager.scala:619)
 at 
kafka.coordinator.transaction.TransactionStateManager.$anonfun$appendTransactionToLog$15$adapted(TransactionStateManager.scala:619)
 at kafka.server.DelayedProduce.onComplete(DelayedProduce.scala:129)
 at kafka.server.DelayedOperation.forceComplete(DelayedOperation.scala:70)
 at kafka.server.DelayedProduce.tryComplete(DelayedProduce.scala:110)
 at 
kafka.server.DelayedOperationPurgatory.tryCompleteElseWatch(DelayedOperation.scala:232)
 at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:495)
 at 
kafka.coordinator.transaction.TransactionStateManager.$anonfun$appendTransactionToLog$13(TransactionStateManager.scala:613)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
 at 
kafka.coordinator.transaction.TransactionStateManager.appendTransactionToLog(TransactionStateManager.scala:590)
 at 
kafka.coordinator.transaction.TransactionCoordinator.handleEndTransaction(TransactionCoordinator.scala:437)
 at kafka.server.KafkaApis.handleEndTxnRequest(KafkaApis.scala:1653)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:132)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
 at java.lang.Thread.run(Thread.java:745)
[2018-10-22 21:52:15,958] ERROR [KafkaApi-2] Error when handling request 
{transactional_id=elog_agg-0_9,producer_id=1005,producer_epoch=8,transaction_result=true}
 (kafka.server.KafkaApis)
java.lang.NullPointerException
 at 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$7(TransactionCoordinator.scala:393)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
 at 
kafka.coordinator.transaction.TransactionMetadata.inLock(TransactionMetadata.scala:172)
 at 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$5(TransactionCoordinator.scala:383)
 at scala.util.E

[jira] [Resolved] (KAFKA-395) kafka.tools.MirrorMaker black/white list improvement

2018-10-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-395.

Resolution: Workaround

As this issue has not been touched in more than 6 years I think it is fairly 
safe to assume that we can close this.

Any discussions around mirroring functionality are better addressed in the 
MirrorMaker 2.0 KIP discussion.

Regarding the specifics of this issue, it can be worked around by placing the 
whitelist topics into a file and using paste to splice that file into the 
command as shown below. I believe that this should be sufficient as a workaround.
{code}
./kafka-mirror-maker.sh --consumer.config ../config/consumer.properties 
--producer.config ../config/producer.properties --whitelist "$(paste 
whitelist.topics -d'|' -s)" --blacklist "$(paste blacklist.topics -d'|' -s)"
{code}
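The workaround relies on paste joining a one-topic-per-line file into the single '|'-separated alternation that --whitelist/--blacklist expect. For example (file name and topics are illustrative):

```shell
# Each line of the file holds one topic name (or regex); paste -s joins all
# lines into one string, and -d'|' separates them with the regex alternation
# operator that MirrorMaker's whitelist/blacklist options expect.
printf 'topic-a\ntopic-b\ntopic-c\n' > whitelist.topics
paste -s -d'|' whitelist.topics
# -> topic-a|topic-b|topic-c
```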

> kafka.tools.MirrorMaker black/white list improvement
> 
>
> Key: KAFKA-395
> URL: https://issues.apache.org/jira/browse/KAFKA-395
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: Dave DeMaagd
>Priority: Minor
>
> Current black/white list topics are specified directly on the command line; 
> while functional, this has two drawbacks:
> 1) Changes become unwieldy if there are a large number of running instances - 
> potentially many instances to restart, which can have implications for data 
> stream lag
> 2) Maintaining the list itself can become increasingly complex if there are a 
> large number of elements in the list (particularly if they are complex 
> expressions)
> Suggest extending the way that black/white lists can be fed to the mirror 
> maker application, in particular, being able to specify the black/white list 
> as a file (or possibly a generic URI).  Thinking that this could be 
> accomplished either by adding '--whitelistfile' and '--blacklistfile' command 
> line parameters, or modifying the existing '--blacklist' and '--whitelist' 
> parameters to include a 'is this a valid file?' test and decide how to handle 
> it based on that (if it is a file, read it, if not, use current behavior). 
> Follow up suggestion would be to have the mirror maker process check for 
> updates to the list file, and on change, validate and reload it, and run from 
> that point with the new list information. 





[jira] [Created] (KAFKA-7616) MockConsumer can return ConsumerRecords objects with a non-empty map but no records

2018-11-11 Thread JIRA
Stig Rohde Døssing created KAFKA-7616:
-

 Summary: MockConsumer can return ConsumerRecords objects with a 
non-empty map but no records
 Key: KAFKA-7616
 URL: https://issues.apache.org/jira/browse/KAFKA-7616
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.0.1
Reporter: Stig Rohde Døssing
Assignee: Stig Rohde Døssing


The ConsumerRecords returned from MockConsumer.poll can return false for 
isEmpty while not containing any records. This happens because 
MockConsumer.poll eagerly adds entries to the returned Map<TopicPartition, 
List<ConsumerRecord<K, V>>> based on which partitions have been added. If no 
records are returned for a partition, e.g. because the position was too far 
ahead, the empty entry for that partition will still be there.

 

The MockConsumer should lazily add entries to the map as they are needed, since 
that is more in line with how the real consumer behaves.
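The lazy-population fix can be illustrated without Kafka classes: only create a partition's list when a record for it actually arrives. A minimal sketch (string keys stand in for TopicPartition):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LazyRecordsMap {
    // Eagerly pre-seeding one entry per assigned partition leaves empty lists
    // behind when a partition returns nothing, so isEmpty() on the result
    // lies. Adding entries only when a record actually arrives keeps the map
    // consistent with the record count, like the real consumer's fetch result.
    static Map<String, List<String>> collect(List<String[]> fetched) {
        Map<String, List<String>> byPartition = new HashMap<>();
        for (String[] rec : fetched) { // rec = {partition, value}
            byPartition.computeIfAbsent(rec[0], p -> new ArrayList<>()).add(rec[1]);
        }
        return byPartition;
    }
}
```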





[jira] [Created] (KAFKA-7625) Kafka Broker node JVM crash - kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction

2018-11-14 Thread JIRA
Sebastian Puzoń created KAFKA-7625:
--

 Summary: Kafka Broker node JVM crash - 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction
 Key: KAFKA-7625
 URL: https://issues.apache.org/jira/browse/KAFKA-7625
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.0.0
 Environment:  environment:os.version=2.6.32-754.2.1.el6.x86_64 
java.version=1.8.0_92 
environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, 
built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
Kafka commitId : 3402a8361b734732 
Reporter: Sebastian Puzoń


I observe broker node JVM crashes, always with the same problematic frame:
{code:java}
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7ff4a2588261, pid=24681, tid=0x7ff3b9bb1700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_92-b14) (build 1.8.0_92-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.92-b14 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# J 9736 C1 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$7(Lkafka/coordinator/transaction/TransactionCoordinator;Ljava/lang/String;JSLorg/apache/kafka/common/requests/TransactionResult;Lkafka/coordinator/transaction/TransactionMetadata;)Lscala/util/Either;
 (518 bytes) @ 0x7ff4a2588261 [0x7ff4a25871a0+0x10c1]
#
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#

---  T H R E A D  ---

Current thread (0x7ff4b356f800):  JavaThread "kafka-request-handler-3" 
daemon [_thread_in_Java, id=24781, stack(0x7ff3b9ab1000,0x7ff3b9bb2000)]
{code}
{code:java}
Stack: [0x7ff3b9ab1000,0x7ff3b9bb2000],  sp=0x7ff3b9bafca0,  free 
space=1019k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 9736 C1 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$7(Lkafka/coordinator/transaction/TransactionCoordinator;Ljava/lang/String;JSLorg/apache/kafka/common/requests/TransactionResult;Lkafka/coordinator/transaction/TransactionMetadata;)Lscala/util/Either;
 (518 bytes) @ 0x7ff4a2588261 [0x7ff4a25871a0+0x10c1]
J 10456 C2 
kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$5(Lkafka/coordinator/transaction/TransactionCoordinator;Ljava/lang/String;JSLorg/apache/kafka/common/requests/TransactionResult;ILscala/Option;)Lscala/util/Either;
 (192 bytes) @ 0x7ff4a1d413f0 [0x7ff4a1d41240+0x1b0]
J 9303 C1 
kafka.coordinator.transaction.TransactionCoordinator$$Lambda$1107.apply(Ljava/lang/Object;)Ljava/lang/Object;
 (32 bytes) @ 0x7ff4a245f55c [0x7ff4a245f3c0+0x19c]
J 10018 C2 
scala.util.Either$RightProjection.flatMap(Lscala/Function1;)Lscala/util/Either; 
(43 bytes) @ 0x7ff4a1f242c4 [0x7ff4a1f24260+0x64]
J 9644 C1 
kafka.coordinator.transaction.TransactionCoordinator.sendTxnMarkersCallback$1(Lorg/apache/kafka/common/protocol/Errors;Ljava/lang/String;JSLorg/apache/kafka/common/requests/TransactionResult;Lscala/Function1;ILkafka/coordinator/transaction/TxnTransitMetadata;)V
 (251 bytes) @ 0x7ff4a1ef6254 [0x7ff4a1ef5120+0x1134]
J 9302 C1 
kafka.coordinator.transaction.TransactionCoordinator$$Lambda$1106.apply(Ljava/lang/Object;)Ljava/lang/Object;
 (40 bytes) @ 0x7ff4a24747ec [0x7ff4a24745a0+0x24c]
J 10125 C2 
kafka.coordinator.transaction.TransactionStateManager.updateCacheCallback$1(Lscala/collection/Map;Ljava/lang/String;ILkafka/coordinator/transaction/TxnTransitMetadata;Lscala/Function1;Lscala/Function1;Lorg/apache/kafka/common/TopicPartition;)V
 (892 bytes) @ 0x7ff4a27045ec [0x7ff4a2703c60+0x98c]
J 10051 C2 
kafka.coordinator.transaction.TransactionStateManager$$Lambda$814.apply(Ljava/lang/Object;)Ljava/lang/Object;
 (36 bytes) @ 0x7ff4a1a9cd08 [0x7ff4a1a9cc80+0x88]
J 9349 C2 kafka.server.DelayedProduce.tryComplete()Z (52 bytes) @ 
0x7ff4a1e46e5c [0x7ff4a1e46980+0x4dc]
J 10111 C2 
kafka.server.DelayedOperationPurgatory.tryCompleteElseWatch(Lkafka/server/DelayedOperation;Lscala/collection/Seq;)Z
 (147 bytes) @ 0x7ff4a1c6e000 [0x7ff4a1c6df20+0xe0]
J 10448 C2 
kafka.server.ReplicaManager.appendRecords(JSZZLscala/collection/Map;Lscala/Function1;Lscala/Option;Lscala/Function1;)V
 (237 bytes) @ 0x7ff4a2340b6c [0x7ff4a233f3e0+0x178c]
J 10050 C2 
kafka.coordinator.transaction.TransactionStateManager.$anonfun$appendTransactionToLog$13(Lkafka/coordinator/transaction/TransactionStateManager;Ljava/lang/String;ILkafka/coordinator/transaction/TxnTransitMetadata;Lscala/Function1;Lscala/Function1;Lorg/apache/kafka/common/T

[jira] [Resolved] (KAFKA-6901) Kafka crashes when trying to delete segment when retention time is reached

2018-11-22 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grégory R. resolved KAFKA-6901.
---
Resolution: Not A Bug

> Kafka crashes when trying to delete segment when retention time is reached 
> -
>
> Key: KAFKA-6901
> URL: https://issues.apache.org/jira/browse/KAFKA-6901
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 1.0.0, 1.1.0
> Environment: OS: Windows Server 2012 R2
>Reporter: Grégory R.
>Priority: Major
>  Labels: windows
> Attachments: 20180517 - ProcessExplorer after the crash.png, 20180517 
> - ProcessExplorer before the crash.png
>
>
> Following the parameter
> {code:java}
> log.retention.hours = 16{code}
> Kafka tries to delete segments.
> This action crashes the server with the following log:
>  
> {code:java}
> [2018-05-11 15:17:58,036] INFO Found deletable segments with base offsets [0] 
> due to retention time 60480ms breach (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Rolled new log segment for 'event-0' in 12 ms. 
> (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Scheduling log segment 0 for log event-0 for 
> deletion. (kafka.log.Log)
> [2018-05-11 15:17:58,068] ERROR Error while deleting segments for event-0 in 
> dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.L
> ogDirFailureChannel)
> java.nio.file.FileSystemException: 
> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log 
> -> C:\App\VISBridge\kafka_2.
> 12-1.0.0\kafka-log\event-0\.log.deleted: Le processus ne 
> peut pas accéder au fichier car ce fichier est utilisé par un a
> utre processus.
>     at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>     at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>     at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>     at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>     at java.nio.file.Files.move(Files.java:1395)
>     at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
>     at 
> org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>     at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
>     at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
>     at kafka.log.Log.deleteSegment(Log.scala:1579)
>     at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
>     at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
>     at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>     at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
>     at kafka.log.Log.deleteSegments(Log.scala:1143)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1138)
>     at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1204)
>     at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
>     at 
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
>     at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>     at scala.collection.Iterator.foreach(Iterator.scala:929)
>     at scala.collection.Iterator.foreach$(Iterator.scala:929)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>     at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
>     at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
>     at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>     at 
> java.util.concur

[jira] [Created] (KAFKA-7733) MockConsumer doesn't respect reset strategy

2018-12-13 Thread JIRA
Stig Rohde Døssing created KAFKA-7733:
-

 Summary: MockConsumer doesn't respect reset strategy
 Key: KAFKA-7733
 URL: https://issues.apache.org/jira/browse/KAFKA-7733
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Stig Rohde Døssing
Assignee: Stig Rohde Døssing


The MockConsumer throws OffsetOutOfRangeException if a record is behind the 
beginning offset. This is unlike the real consumer, which uses 
auto.offset.reset to decide whether to seek to the beginning, seek to the end, 
or throw an exception.

It would be convenient if the poll method performed the offset reset properly, 
since that allows testing cases like a consumer requesting offsets from a 
truncated log.
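The real consumer's behavior that the mock should mimic can be sketched as a small policy function (illustrative, not Kafka's code):

```java
public class ResetPolicy {
    enum AutoOffsetReset { EARLIEST, LATEST, NONE }

    // What the real consumer does when the fetch position is out of range:
    // earliest/latest move the position to a valid end of the log, while
    // NONE surfaces the out-of-range condition as an exception.
    static long resolve(long position, long beginning, long end, AutoOffsetReset policy) {
        if (position >= beginning && position <= end)
            return position; // in range, nothing to reset
        switch (policy) {
            case EARLIEST: return beginning;
            case LATEST:   return end;
            default: throw new IllegalStateException(
                "Offset " + position + " out of range [" + beginning + "," + end + "]");
        }
    }
}
```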





[jira] [Created] (KAFKA-7755) Kubernetes - Kafka clients are resolving DNS entries only one time

2018-12-18 Thread JIRA
Loïc Monney created KAFKA-7755:
--

 Summary: Kubernetes - Kafka clients are resolving DNS entries only 
one time
 Key: KAFKA-7755
 URL: https://issues.apache.org/jira/browse/KAFKA-7755
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.1.0, 2.2.0, 2.1.1
 Environment: Kubernetes
Reporter: Loïc Monney


*Introduction*
 Since 2.1.0, Kafka clients support multiple DNS-resolved IP addresses and fail 
over to the next one if the first one fails. This change was introduced by 
https://issues.apache.org/jira/browse/KAFKA-6863. However, this DNS resolution 
is now performed only once by the clients. This is not a problem if all brokers 
have fixed IP addresses, but it is definitely an issue when Kafka brokers run 
on top of Kubernetes: restarted Kubernetes pods receive new IP addresses, so as 
soon as all brokers have been restarted, clients won't be able to reconnect to 
any broker.

*Impact*
 Everyone running Kafka 2.1 or later on top of Kubernetes is impacted when a 
rolling restart is performed.

*Root cause*
 Since https://issues.apache.org/jira/browse/KAFKA-6863 Kafka clients are 
resolving DNS entries only once.

*Proposed solution*
 In 
[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java#L368]
 Kafka clients should perform the DNS resolution again when all IP addresses 
have been "used" (when _index_ is back to 0)
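The proposed fix can be sketched independently of Kafka: cache the resolved addresses, and once the index wraps past the end of the cached list, ask the resolver again. All names here are hypothetical; the injected supplier stands in for the DNS lookup:

```java
import java.util.List;
import java.util.function.Supplier;

public class ReresolvingAddressList {
    private final Supplier<List<String>> resolver; // stands in for a DNS lookup
    private List<String> addresses;
    private int index = 0;

    ReresolvingAddressList(Supplier<List<String>> resolver) {
        this.resolver = resolver;
        this.addresses = resolver.get(); // initial resolution
    }

    // Return the next address to try. Once every cached address has been
    // used (the index wraps back to 0), re-run the resolution so freshly
    // assigned pod IPs become visible instead of retrying stale ones.
    String next() {
        if (index >= addresses.size()) {
            addresses = resolver.get(); // refresh after exhausting the list
            index = 0;
        }
        return addresses.get(index++);
    }
}
```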





[jira] [Created] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread JIRA
Matthias Weßendorf created KAFKA-7762:
-

 Summary: KafkaConsumer uses old API in the javadocs
 Key: KAFKA-7762
 URL: https://issues.apache.org/jira/browse/KAFKA-7762
 Project: Kafka
  Issue Type: Improvement
Reporter: Matthias Weßendorf


The `poll(ms)` API is deprecated, hence the javadocs should not use it.





[jira] [Created] (KAFKA-7784) Producer & message format: better management when inter.broker.protocol.version is overridden

2019-01-03 Thread JIRA
Hervé RIVIERE created KAFKA-7784:


 Summary: Producer & message format: better management when 
inter.broker.protocol.version is overridden
 Key: KAFKA-7784
 URL: https://issues.apache.org/jira/browse/KAFKA-7784
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 2.0.1
Reporter: Hervé RIVIERE


With the following setup:
 * Java producer v2.0.1
 * Brokers 2.0.1 with *message format V1* (i.e. with 
log.message.format.version=0.10.2 & inter.broker.protocol.version=0.10.2)

the producer sends messages in message format V2, so a useless down-conversion 
to the V1 message format is triggered on the broker side.

An improvement to avoid wasting CPU resources on the broker side would be for 
the broker to advertise its current message format version to clients 
(currently the broker advertises only the available APIs).

With the message format version info, the producer would be able to prefer a 
specific message format and avoid down-conversion on the broker side.
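If brokers did advertise their active message format, the producer's choice would reduce to serializing in the older of the two versions. A deliberately tiny, hypothetical sketch (magic numbers follow Kafka's convention of V0/V1/V2 record batch formats):

```java
public class FormatChooser {
    // Hypothetical: pick the record format ("magic") a producer should use
    // so the broker never has to down-convert. The producer can write any
    // format up to its own maximum, so it takes the minimum of what it
    // supports and what the broker's log is configured for.
    static int chooseMagic(int producerMaxMagic, int brokerFormatMagic) {
        return Math.min(producerMaxMagic, brokerFormatMagic);
    }
}
```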





[jira] [Created] (KAFKA-6868) BufferUnderflowException in client when querying consumer group information

2018-05-04 Thread JIRA
Xavier Léauté created KAFKA-6868:


 Summary: BufferUnderflowException in client when querying consumer 
group information
 Key: KAFKA-6868
 URL: https://issues.apache.org/jira/browse/KAFKA-6868
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Xavier Léauté


Exceptions get thrown when describing a consumer group or querying group offsets.

{code}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'version': java.nio.BufferUnderflowException
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at
[snip]
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
reading field 'version': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
at 
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
... 1 more
{code}
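On the client side, this failure mode can be guarded by checking the buffer before reading the version field instead of letting BufferUnderflowException escape; a minimal sketch (not Kafka's Schema code):

```java
import java.nio.ByteBuffer;
import java.util.Optional;

public class SafeVersionRead {
    // The schema reader above throws BufferUnderflowException when the
    // assignment payload is shorter than the expected 'version' short.
    // A defensive reader checks the remaining bytes first and reports
    // malformed/empty data explicitly rather than via an exception.
    static Optional<Short> readVersion(ByteBuffer buf) {
        if (buf.remaining() < Short.BYTES)
            return Optional.empty(); // malformed or empty assignment data
        return Optional.of(buf.getShort());
    }
}
```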





[jira] [Created] (KAFKA-6901) Kafka crashes when trying to delete segment when retention time is reached

2018-05-14 Thread JIRA
Grégory R. created KAFKA-6901:
-

 Summary: Kafka crashes when trying to delete segment when 
retention time is reached 
 Key: KAFKA-6901
 URL: https://issues.apache.org/jira/browse/KAFKA-6901
 Project: Kafka
  Issue Type: Bug
  Components: core, log
Affects Versions: 1.0.0
 Environment: OS: Windows Server 2012 R2
Reporter: Grégory R.


Following the parameter
{code:java}
log.retention.hours = 16{code}
Kafka tries to delete segments.

This action crashes the server with the following log:

 
{code:java}

[2018-05-11 15:17:58,036] INFO Found deletable segments with base offsets [0] 
due to retention time 60480ms breach (kafka.log.Log)
[2018-05-11 15:17:58,068] INFO Rolled new log segment for 'event-0' in 12 ms. 
(kafka.log.Log)
[2018-05-11 15:17:58,068] INFO Scheduling log segment 0 for log event-0 for 
deletion. (kafka.log.Log)
[2018-05-11 15:17:58,068] ERROR Error while deleting segments for event-0 in 
dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.L
ogDirFailureChannel)
java.nio.file.FileSystemException: 
C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log -> 
C:\App\VISBridge\kafka_2.
12-1.0.0\kafka-log\event-0\.log.deleted: Le processus ne 
peut pas accéder au fichier car ce fichier est utilisé par un a
utre processus.

    at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
    at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
    at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
    at java.nio.file.Files.move(Files.java:1395)
    at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
    at 
org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
    at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
    at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
    at kafka.log.Log.deleteSegment(Log.scala:1579)
    at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
    at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
    at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
    at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
    at kafka.log.Log.deleteSegments(Log.scala:1143)
    at kafka.log.Log.deleteOldSegments(Log.scala:1138)
    at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
    at kafka.log.Log.deleteOldSegments(Log.scala:1204)
    at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
    at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
    at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
    at scala.collection.Iterator.foreach(Iterator.scala:929)
    at scala.collection.Iterator.foreach$(Iterator.scala:929)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
    at scala.collection.IterableLike.foreach(IterableLike.scala:71)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
    at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
    at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
    at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    Suppressed: java.nio.file.FileSystemException: C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log -> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\.log.deleted: Le p

[jira] [Created] (KAFKA-7059) Offer new constructor on ProducerRecord

2018-06-14 Thread JIRA
Matthias Weßendorf created KAFKA-7059:
-

 Summary: Offer new constructor on ProducerRecord 
 Key: KAFKA-7059
 URL: https://issues.apache.org/jira/browse/KAFKA-7059
 Project: Kafka
  Issue Type: Improvement
Reporter: Matthias Weßendorf
 Fix For: 2.0.1


Creating a ProducerRecord with custom headers requires using a constructor with 
a slightly longer argument list.

 

It would be handy and more convenient if there was a constructor like:

{code}
public ProducerRecord(String topic, K key, V value, Iterable<Header> headers)
{code}
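A minimal sketch of how such a convenience constructor could delegate to the existing long-form constructor. This uses a simplified stand-in class, not the real org.apache.kafka.clients.producer.ProducerRecord, and replaces the Header type with String for brevity:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for ProducerRecord, illustrating the proposed
// (topic, key, value, headers) constructor delegating to the long-form one.
class ProducerRecordSketch<K, V> {
    private final String topic;
    private final K key;
    private final V value;
    private final List<String> headers; // stand-in for Iterable<Header>

    // Existing long-form constructor (simplified)
    ProducerRecordSketch(String topic, Integer partition, Long timestamp,
                         K key, V value, Iterable<String> headers) {
        this.topic = topic;
        this.key = key;
        this.value = value;
        this.headers = new ArrayList<>();
        if (headers != null)
            headers.forEach(this.headers::add);
    }

    // Proposed convenience constructor: partition and timestamp elided to null
    ProducerRecordSketch(String topic, K key, V value, Iterable<String> headers) {
        this(topic, null, null, key, value, headers);
    }

    String topic() { return topic; }
    List<String> headers() { return headers; }
}
```

The delegation keeps all construction logic in one place, which is how the existing shorter ProducerRecord constructors are typically layered.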
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7070) KafkaConsumer#committed might unexpectedly shift consumer offset

2018-06-18 Thread JIRA
Jan Lukavský created KAFKA-7070:
---

 Summary: KafkaConsumer#committed might unexpectedly shift consumer 
offset
 Key: KAFKA-7070
 URL: https://issues.apache.org/jira/browse/KAFKA-7070
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.1.0
Reporter: Jan Lukavský


When a client uses manual partition assignment (e.g. {{KafkaConsumer#assign}}) 
but then accidentally calls {{KafkaConsumer#committed}} (for whatever reason, 
most probably a bug in user code), the offset gets shifted to latest, possibly 
skipping any unconsumed messages or producing duplicates. The reason is that 
the call to {{KafkaConsumer#committed}} invokes the AbstractCoordinator, which 
tries to fetch the committed offset but doesn't find the {{group.id}} (it will 
probably even be empty). This can cause the Fetcher to receive an invalid 
offset for the partition and reset it to the latest offset.

Although this is primarily a bug in user code, it is very hard to track down. 
The call to {{KafkaConsumer#committed}} should probably throw an exception when 
called on a client without automatic partition assignment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7197) Release a milestone build for Scala 2.13.0 M3

2018-07-24 Thread JIRA
Martynas Mickevičius created KAFKA-7197:
---

 Summary: Release a milestone build for Scala 2.13.0 M3
 Key: KAFKA-7197
 URL: https://issues.apache.org/jira/browse/KAFKA-7197
 Project: Kafka
  Issue Type: Improvement
Reporter: Martynas Mickevičius


Releasing a milestone version for Scala 2.13.0-M3 (and maybe even for 
2.13.0-M4, which has the new collections) would be helpful to kickstart Kafka 
ecosystem adoption of 2.13.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6475) ConfigException on the broker results in UnknownServerException in the admin client

2018-01-23 Thread JIRA
Xavier Léauté created KAFKA-6475:


 Summary: ConfigException on the broker results in 
UnknownServerException in the admin client
 Key: KAFKA-6475
 URL: https://issues.apache.org/jira/browse/KAFKA-6475
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Xavier Léauté
Assignee: Colin P. McCabe


Calling AdminClient.alterConfigs with an invalid configuration may cause 
ConfigException to be thrown on the broker side, which results in an 
UnknownServerException thrown by the admin client. It would probably make more 
sense for the admin client to throw InvalidConfigurationException in that case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6515) Add toString() method to kafka connect Field class

2018-02-01 Thread JIRA
Bartłomiej Tartanus created KAFKA-6515:
--

 Summary: Add toString() method to kafka connect Field class
 Key: KAFKA-6515
 URL: https://issues.apache.org/jira/browse/KAFKA-6515
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Bartłomiej Tartanus


Currently testing is really painful:
{code:java}
org.apache.kafka.connect.data.Field@1d51df1f was not equal to 
org.apache.kafka.connect.data.Field@c0d62cd8{code}
 

A toString() method would fix this, so please add one. :)
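A hedged sketch of what such a toString() could look like, on a simplified stand-in class (the real org.apache.kafka.connect.data.Field holds a name, an index, and a schema reference):

```java
// Simplified stand-in for org.apache.kafka.connect.data.Field, showing how a
// toString() turns the opaque Field@1d51df1f output into a readable description.
class FieldSketch {
    private final String name;
    private final int index;
    private final String schemaType; // stand-in for the Schema reference

    FieldSketch(String name, int index, String schemaType) {
        this.name = name;
        this.index = index;
        this.schemaType = schemaType;
    }

    @Override
    public String toString() {
        return "Field{name=" + name + ", index=" + index + ", schema=" + schemaType + "}";
    }
}
```

With this in place, a failed equality assertion prints the field contents instead of identity hash codes, making test failures immediately diagnosable.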



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6548) Migrate committed offsets from ZooKeeper to Kafka

2018-02-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6548.
-
Resolution: Not A Problem

Hi [~ppmanikandan...@gmail.com],

It sounds like in principle you know what you want to do and are on the right 
track. I don't think this should be filed in Jira; it is rather something that 
would be well placed on the Kafka users mailing list or Stack Overflow. 
Actually, I just found that you also posted this to 
[stackoverflow|https://stackoverflow.com/questions/48696705/migrate-zookeeper-offset-details-to-kafka]
 and received an answer there, so I'll close this issue, as I think further 
discussion is better placed on SO.

> Migrate committed offsets from ZooKeeper to Kafka
> -
>
> Key: KAFKA-6548
> URL: https://issues.apache.org/jira/browse/KAFKA-6548
> Project: Kafka
>  Issue Type: Improvement
>  Components: offset manager
>Affects Versions: 0.10.0.0
> Environment: Windows
>Reporter: Manikandan P
>Priority: Minor
>
> We were using a previous version of Kafka (0.8.x) where all the offset details 
> were stored in ZooKeeper. 
> Now we moved to a new version of Kafka (0.10.x) where all the topic offset 
> details are stored in Kafka itself. 
> We have to move all the topic offset details from ZooKeeper to Kafka for an 
> existing application in production.
> Kafka is installed on a Windows machine. We can't run kafka-consumer-groups.sh 
> from Windows.
> Please advise how to migrate committed offsets from ZooKeeper to Kafka.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6561) Change visibility of aclMatch in SimpleAclAuthorizer to protected to allow overriding in subclasses

2018-02-13 Thread JIRA
Sönke Liebau created KAFKA-6561:
---

 Summary: Change visibility of aclMatch in SimpleAclAuthorizer to 
protected to allow overriding in subclasses
 Key: KAFKA-6561
 URL: https://issues.apache.org/jira/browse/KAFKA-6561
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau


Currently the visibility of the 
[aclMatch|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/security/auth/SimpleAclAuthorizer.scala#L146]
 function in the SimpleAclAuthorizer class is set to private, thus prohibiting 
subclasses from overriding this method. I think this was originally done because 
this function is not supposed to be part of the public API of this class, which 
makes sense.
However, when creating a custom authorizer this would be a very useful method to 
override, as it allows reusing a large amount of boilerplate code around 
loading and applying ACLs while simply changing the way that ACLs are matched.

Could we change the visibility of this method to protected, thus still keeping 
it out of the public interface but allowing subclasses to override it?
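The requested change follows a common pattern: a private method cannot be overridden, while a protected one can be, without becoming part of the public API. A minimal Java illustration with hypothetical class names (the real SimpleAclAuthorizer is Scala, and its actual signatures differ):

```java
// Hypothetical base authorizer: the framework code (authorize) is reused,
// while subclasses may replace only the matching logic via protected aclMatch.
abstract class BaseAuthorizer {
    public final boolean authorize(String principal, String operation) {
        // boilerplate around loading/applying ACLs would live here
        return aclMatch(principal, operation);
    }

    // protected rather than private: overridable, but not in the public API
    protected boolean aclMatch(String principal, String operation) {
        return false; // default: deny everything
    }
}

// A custom authorizer only swaps out the matching rule.
class PrefixAuthorizer extends BaseAuthorizer {
    @Override
    protected boolean aclMatch(String principal, String operation) {
        return principal.startsWith("admin-");
    }
}
```

The subclass inherits all the surrounding machinery and changes nothing but the match predicate, which is exactly the reuse the issue asks for.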



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-1457) Nginx Kafka Integration

2018-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1457.
-
Resolution: Not A Problem

I think this issue can safely be closed; there are numerous solutions out there 
for pushing log files into Kafka:
* Kafka Connect Filesource
* Filebeat
* Logstash
* Heka
* Collectd

We could actually consider closing this as "fixed", since with Kafka Connect one 
might argue that Kafka now offers a component for what [~darion] wants to do :)

> Nginx Kafka Integration
> ---
>
> Key: KAFKA-1457
> URL: https://issues.apache.org/jira/browse/KAFKA-1457
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Reporter: darion yaphet
>Priority: Minor
>  Labels: DataSource
>
> Some use Kafka as a log data collector, and nginx is the web server. We need to 
> push the nginx log into Kafka, but I don't know how to write the plugin.
> Would Apache Kafka provide an nginx module to push the nginx log into Kafka? 
> Thanks a lot   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-1689) automatic migration of log dirs to new locations

2018-02-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1689.
-
   Resolution: Fixed
Fix Version/s: 1.1.0

The requested functionality is available as part of KAFKA-5163

> automatic migration of log dirs to new locations
> 
>
> Key: KAFKA-1689
> URL: https://issues.apache.org/jira/browse/KAFKA-1689
> Project: Kafka
>  Issue Type: New Feature
>  Components: config, core
>Affects Versions: 0.8.1.1
>Reporter: Javier Alba
>Priority: Minor
>  Labels: newbie++
> Fix For: 1.1.0
>
>
> There is no automated way in Kafka 0.8.1.1 to migrate log data if we want to 
> reconfigure our cluster nodes to use several data directories, where we have 
> mounted new disks, instead of our original data directory.
> For example, say we have our brokers configured with:
>   log.dirs = /tmp/kafka-logs
> And we added 3 new disks and now we want our brokers to use them as log.dirs:
>   logs.dirs = /srv/data/1,/srv/data/2,/srv/data/3
> It would be great to have an automated way of doing such a migration, of 
> course without losing current data in the cluster.
> It would be ideal to be able to do this migration without losing service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6591) Move check for super user in SimpleAclProvider before ACL evaluation

2018-02-25 Thread JIRA
Sönke Liebau created KAFKA-6591:
---

 Summary: Move check for super user in SimpleAclProvider before ACL 
evaluation
 Key: KAFKA-6591
 URL: https://issues.apache.org/jira/browse/KAFKA-6591
 Project: Kafka
  Issue Type: Improvement
  Components: core, security
Affects Versions: 1.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau


Currently the check whether a user is a super user in SimpleAclAuthorizer is 
[performed only after all other ACLs have been 
evaluated|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/security/auth/SimpleAclAuthorizer.scala#L124].
 Since all requests from a super user are granted, we don't really need to apply 
the ACLs.

I believe this is unnecessary effort that could easily be avoided. I've rigged 
up a small test that created 1000 ACLs for a topic and performed a million 
authorize calls with a principal that was a super user but didn't match any 
ACLs.

The implementation from trunk took 43 seconds, whereas a version with the super 
user check moved up took only half a second. Granted, this is a constructed 
case, but the effect will be the same, if less pronounced, for setups with 
fewer rules.
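The proposed reordering amounts to a short-circuit before the ACL scan. A hedged Java sketch with hypothetical types (the real implementation is Scala and matches structured ACL objects, not strings):

```java
import java.util.List;
import java.util.Set;

// Hypothetical illustration: checking super users first avoids scanning the
// (possibly large) ACL list for principals that are always granted anyway.
class SuperUserFirstAuthorizer {
    private final Set<String> superUsers;
    private final List<String> acls; // stand-in for real ACL entries

    SuperUserFirstAuthorizer(Set<String> superUsers, List<String> acls) {
        this.superUsers = superUsers;
        this.acls = acls;
    }

    boolean authorize(String principal) {
        if (superUsers.contains(principal))
            return true; // short-circuit: no ACL evaluation needed
        return acls.stream().anyMatch(acl -> acl.equals(principal));
    }
}
```

The super-user check is a set lookup, so moving it ahead of the linear ACL scan turns the common super-user case from O(number of ACLs) into O(1), which matches the measured speedup.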





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6594) On the second startup, Kafka reports an error: 00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.

2018-02-26 Thread JIRA
徐兴强 created KAFKA-6594:
--

 Summary: On the second startup, Kafka reports an error: .timeindex: 
The process cannot access the file because it is being used by another process.
 Key: KAFKA-6594
 URL: https://issues.apache.org/jira/browse/KAFKA-6594
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 1.0.0
 Environment: Windows 10 1607 x64

kafka:1.0.0(kafka_2.12-1.0.0)

zookeeper:3.5.2
Reporter: 徐兴强
 Attachments: kafka报错.png

 

When I ran Kafka for the first time there was no problem, but after I shut Kafka 
down (Ctrl+C) and started it a second time, it failed with: .timeindex: The 
process cannot access the file because it is being used by another process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6601) Kafka manager does not provide consumer offset producer rate with kafka v2.10-0.10.2.0

2018-02-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6601.
-
Resolution: Cannot Reproduce

Hi [~raj6329], 

I've just tested this with kafka_2.10-0.10.2.0 and there doesn't seem to be an 
issue with that metric; I can clearly see it with jconsole. Do you have an 
active consumer that is committing offsets to your cluster? After a server 
restart, the metric will only show up once someone actually produces to that 
topic.

If that doesn't fix it, then I'm afraid you should open an issue for this on 
the KafkaManager bug tracker, as I cannot see anything wrong on the Kafka side.

> Kafka manager does not provide consumer offset producer rate with kafka 
> v2.10-0.10.2.0
> --
>
> Key: KAFKA-6601
>     URL: https://issues.apache.org/jira/browse/KAFKA-6601
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajendra Jangir
>Priority: Major
>
> I am using kafka-manager and Kafka version 2.10-0.10.2.
> And I am not able to see the producer rate for the _consumer_offset topic_.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6626) Performance bottleneck in Kafka Connect sendRecords

2018-03-07 Thread JIRA
Maciej Bryński created KAFKA-6626:
-

 Summary: Performance bottleneck in Kafka Connect sendRecords
 Key: KAFKA-6626
 URL: https://issues.apache.org/jira/browse/KAFKA-6626
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Maciej Bryński
 Attachments: MapPerf.java, image-2018-03-08-08-35-19-247.png

Kafka Connect is using an IdentityHashMap for storing records.

[https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L239]

Unfortunately this solution is very slow (2 times slower than a normal HashMap / 
HashSet).

Benchmark results (code in attachment):
{code:java}
Identity 3977
Set 2442
Map 2207
Fast Set 2067
{code}
This problem is greatly slowing down Kafka Connect.

!image-2018-03-08-08-35-19-247.png!
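The comparison can be reproduced with a rough micro-benchmark along these lines. This is not the attached MapPerf.java; absolute numbers depend on the JVM and hardware, and a proper harness such as JMH would give more reliable figures:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// Rough timing of put+get over the same keys on IdentityHashMap vs HashMap.
class MapCompare {
    static long timeMillis(Map<Object, Object> map, Object[] keys) {
        long start = System.nanoTime();
        for (Object k : keys) map.put(k, k);
        for (Object k : keys) map.get(k);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        Object[] keys = new Object[200_000];
        for (int i = 0; i < keys.length; i++) keys[i] = new Object();
        System.out.println("Identity " + timeMillis(new IdentityHashMap<>(), keys));
        System.out.println("HashMap  " + timeMillis(new HashMap<>(), keys));
    }
}
```

Note that IdentityHashMap compares keys with `==` rather than `equals`, so it cannot simply be swapped for a HashMap where identity semantics are required; the issue is about whether those semantics are worth the cost here.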

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6632) Very slow hashCode methods in Kafka Connect types

2018-03-09 Thread JIRA
Maciej Bryński created KAFKA-6632:
-

 Summary: Very slow hashCode methods in Kafka Connect types
 Key: KAFKA-6632
 URL: https://issues.apache.org/jira/browse/KAFKA-6632
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Maciej Bryński


The hashCode method of ConnectSchema (and Field) is used a lot in SMTs.

Example:

[https://github.com/apache/kafka/blob/e5d6c9a79a4ca9b82502b8e7f503d86ddaddb7fb/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/InsertField.java#L164]

Unfortunately it uses Objects.hash, which is very slow.

I rewrote this with my own implementation and gained a 6x speedup.

Microbenchmarks give:
 * Original ConnectSchema hashCode: 2995ms
 * My implementation: 517ms

(1 iterations of calculating hashCode on a new 
ConnectSchema(Schema.Type.STRING))
{code:java}
@Override
public int hashCode() {
    int result = 5;
    result = 31 * result + type.hashCode();
    result = 31 * result + (optional ? 1 : 0);
    result = 31 * result + (defaultValue == null ? 0 : defaultValue.hashCode());
    if (fields != null) {
        for (Field f : fields) {
            result = 31 * result + f.hashCode();
        }
    }
    result = 31 * result + (keySchema == null ? 0 : keySchema.hashCode());
    result = 31 * result + (valueSchema == null ? 0 : valueSchema.hashCode());
    result = 31 * result + (name == null ? 0 : name.hashCode());
    result = 31 * result + (version == null ? 0 : version);
    result = 31 * result + (doc == null ? 0 : doc.hashCode());
    if (parameters != null) {
        for (String s : parameters.keySet()) {
            result = 31 * result + s.hashCode() + parameters.get(s).hashCode();
        }
    }
    return result;
}{code}
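For context on why the rewrite helps: Objects.hash is a varargs method, so every call allocates an Object[] and boxes primitive fields, while manual accumulation does neither. A small sketch of the two styles on a reduced, illustrative set of fields:

```java
import java.util.Objects;

// Two hashing styles over the same fields; the manual version avoids the
// Object[] allocation and primitive boxing that Objects.hash incurs per call.
class SchemaHashSketch {
    final String name;
    final int version;
    final boolean optional;

    SchemaHashSketch(String name, int version, boolean optional) {
        this.name = name;
        this.version = version;
        this.optional = optional;
    }

    int viaObjectsHash() {
        // allocates a varargs array and boxes version/optional on every call
        return Objects.hash(name, version, optional);
    }

    int viaManualHash() {
        int result = 5;
        result = 31 * result + (name == null ? 0 : name.hashCode());
        result = 31 * result + version;
        result = 31 * result + (optional ? 1 : 0);
        return result;
    }
}
```

On a hot path like SMT record processing, eliminating a per-call allocation also reduces GC pressure, which can matter as much as the raw CPU cost.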



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6682) Kafka reconnection after broker restart

2018-03-19 Thread JIRA
Tomasz Gąska created KAFKA-6682:
---

 Summary: Kafka reconnection after broker restart
 Key: KAFKA-6682
 URL: https://issues.apache.org/jira/browse/KAFKA-6682
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.0.0
Reporter: Tomasz Gąska


I am using the Kafka producer plugin for logback (danielwegener) with the clients 
library 1.0.0, and after a restart of the broker all my JVMs connected to it get 
tons of these exceptions:
{code:java}
11:22:48.738 [kafka-producer-network-thread | app-logback-relaxed] cid: 
clid: E [    @] a: o.a.k.c.p.internals.Sender - [Producer 
clientId=id-id-logback-relaxed] Uncaught error in kafka producer I/O 
thread:  ex:java.lang.NullPointerException: null
    at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:399)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
    at java.lang.Thread.run(Thread.java:798){code}
During the restart there are still other brokers available behind the LB.

It doesn't matter that Kafka is up again; only restarting the JVM helps.
{code:xml}
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
        <pattern>%date{"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"} ${HOSTNAME} [%thread] %logger{32} - %message ex:%exf%n</pattern>
    </encoder>
    <topic>mytopichere</topic>
    <producerConfig>bootstrap.servers=10.99.99.1:9092</producerConfig>
    <producerConfig>acks=0</producerConfig>
    <producerConfig>block.on.buffer.full=false</producerConfig>
    <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
    <producerConfig>compression.type=none</producerConfig>
    <producerConfig>max.block.ms=0</producerConfig>
</appender>
{code}

I provide the load balancer address in bootstrap.servers here. There are three 
Kafka brokers behind it.
{code:java}
java version "1.7.0"
Java(TM) SE Runtime Environment (build pap6470sr9fp60ifix-20161110_01(SR9 
FP60)+IV90630+IV90578))
IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 
20161005_321282 (JIT enabled, AOT enabled)
J9VM - R26_Java726_SR9_20161005_1259_B321282
JIT  - tr.r11_20161001_125404
GC   - R26_Java726_SR9_20161005_1259_B321282_CMPRSS
J9CL - 20161005_321282)
JCL - 20161021_01 based on Oracle jdk7u121-b15{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6711) Checkpoint of InMemoryStore along with GlobalKTable

2018-03-24 Thread JIRA
Cemalettin Koç created KAFKA-6711:
-

 Summary: Checkpoint of InMemoryStore along with GlobalKTable 
 Key: KAFKA-6711
 URL: https://issues.apache.org/jira/browse/KAFKA-6711
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.1
Reporter: Cemalettin Koç


We are using an InMemoryStore along with a GlobalKTable, and I noticed that after 
each restart I lose all my data. When I debugged it, the 
`/tmp/kafka-streams/category-client-1/global/.checkpoint` file contained the 
offset for my GlobalKTable topic. I checked the GlobalStateManagerImpl 
implementation and noticed that it is not guarded for cases similar to mine. I am 
fairly new to Kafka land, and there might well be another way to fix the issue. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6713) Provide an easy way replace store with a custom one on High-Level Streams DSL

2018-03-25 Thread JIRA
Cemalettin Koç created KAFKA-6713:
-

 Summary: Provide an easy way replace store with a custom one on 
High-Level Streams DSL
 Key: KAFKA-6713
 URL: https://issues.apache.org/jira/browse/KAFKA-6713
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 1.0.1
Reporter: Cemalettin Koç


I am trying to use GlobalKTable with a custom store implementation. In my store, 
I would like to keep my `Category` entities and query them by name as well. My 
custom store has capabilities beyond `get`, such as lookup by `name`. I also 
want to get all entries in a hierarchical way, in a lazy fashion. I have other 
use cases as well.

 

In order to accomplish my task I had to implement a custom 
`KeyValueBytesStoreSupplier`, a `BytesTypeConverter`, and a delegating store:

 
{code:java}
public class DelegatingByteStore<K, V> implements KeyValueStore<Bytes, byte[]> {

  private BytesTypeConverter<K, V> converter;

  private KeyValueStore<K, V> delegated;

  public DelegatingByteStore(KeyValueStore<K, V> delegated,
                             BytesTypeConverter<K, V> converter) {
    this.converter = converter;
    this.delegated = delegated;
  }

  @Override
  public void put(Bytes key, byte[] value) {
    delegated.put(converter.outerKey(key),
                  converter.outerValue(value));
  }

  @Override
  public byte[] putIfAbsent(Bytes key, byte[] value) {
    V v = delegated.putIfAbsent(converter.outerKey(key),
                                converter.outerValue(value));
    return v == null ? null : value;
  }
  ..

{code}
 

 Type Converter:
{code:java}
public interface TypeConverter<K, V, IK, IV> {

  IK innerKey(final K key);

  IV innerValue(final V value);

  List<KeyValue<IK, IV>> innerEntries(final List<KeyValue<K, V>> from);

  List<KeyValue<K, V>> outerEntries(final List<KeyValue<IK, IV>> from);

  V outerValue(final IV value);

  KeyValue<K, V> outerKeyValue(final KeyValue<IK, IV> from);

  KeyValue<IK, IV> innerKeyValue(final KeyValue<K, V> entry);

  K outerKey(final IK ik);
}
{code}
 

This is unfortunately too cumbersome and hard to maintain.  

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6744) MockProducer with transaction enabled doesn't fail on commit if a record was failed

2018-04-03 Thread JIRA
Pascal Gélinas created KAFKA-6744:
-

 Summary: MockProducer with transaction enabled doesn't fail on 
commit if a record was failed
 Key: KAFKA-6744
 URL: https://issues.apache.org/jira/browse/KAFKA-6744
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 1.0.0
Reporter: Pascal Gélinas


The KafkaProducer#send documentation states the following:

When used as part of a transaction, it is not necessary to define a callback or 
check the result of the future in order to detect errors from send. If any of 
the send calls failed with an irrecoverable error, the final 
commitTransaction() call will fail and throw the exception from the last failed 
send.

So I was expecting the following to throw an exception:

{code:java}
MockProducer<String, byte[]> producer = new MockProducer<>(false,
    new StringSerializer(), new ByteArraySerializer());
producer.initTransactions();
producer.beginTransaction();
producer.send(new ProducerRecord<>("foo", new byte[]{}));
producer.errorNext(new RuntimeException());
producer.commitTransaction(); // Expecting this to throw
{code}

Unfortunately, the commitTransaction() call returns successfully.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6806) Unable to validate sink connectors without "topics" component which is not required

2018-04-19 Thread JIRA
Ivan Majnarić created KAFKA-6806:


 Summary: Unable to validate sink connectors without "topics" 
component which is not required
 Key: KAFKA-6806
 URL: https://issues.apache.org/jira/browse/KAFKA-6806
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.0
 Environment: CP4.1., Centos7
Reporter: Ivan Majnarić


The bug happens when you try to create a new connector through, for example, 
kafka-connect-ui.

Previously, both source and sink connectors could be validated through REST 
with only "connector.class" and without the "topics" add-on, like this:
{code:java}
PUT 
http://connect-url:8083/connector-plugins/com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector/config/validate
{
    "connector.class": 
"com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector"
}{code}
In the new version of CP4.1 you can still validate *source connectors* but not 
*sink connectors*. To validate sink connectors you need to add the "topics" 
config to the request, like:
{code:java}
PUT 
http://connect-url:8083/connector-plugins/com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector/config/validate
{
    "connector.class": 
"com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector",
    "topics": "test-topic"
}{code}
So there is a little mismatch in the ways connectors are validated, which I 
think happened accidentally.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6826) Avoid range scans when forwarding values in window store aggregations

2018-04-25 Thread JIRA
Xavier Léauté created KAFKA-6826:


 Summary: Avoid range scans when forwarding values in window store 
aggregations
 Key: KAFKA-6826
 URL: https://issues.apache.org/jira/browse/KAFKA-6826
 Project: Kafka
  Issue Type: Bug
Reporter: Xavier Léauté
Assignee: Xavier Léauté
 Attachments: Screen Shot 2018-04-25 at 11.14.25 AM.png

This is a follow-up to KAFKA-6560, where we missed at least one case that 
should be using single point queries instead of range-scans when forwarding 
values during aggregation.

Since a single range scan can sometimes account for 75% of aggregation CPU 
time, fixing this should provide some significant speedups (see the attached 
flamegraph).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7374) Tiered Storage

2018-09-04 Thread JIRA
Maciej Bryński created KAFKA-7374:
-

 Summary: Tiered Storage
 Key: KAFKA-7374
 URL: https://issues.apache.org/jira/browse/KAFKA-7374
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.0.0
Reporter: Maciej Bryński


Both Pravega and Pulsar offer tiered storage: old messages can be stored on a 
different filesystem such as S3 or HDFS.

Kafka should offer a similar capability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7421) Deadlock in Kafka Connect

2018-09-19 Thread JIRA
Maciej Bryński created KAFKA-7421:
-

 Summary: Deadlock in Kafka Connect
 Key: KAFKA-7421
 URL: https://issues.apache.org/jira/browse/KAFKA-7421
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.0.0
Reporter: Maciej Bryński


I'm getting this deadlock on half of Kafka Connect runs.
Thread 1:
{code}
"pool-22-thread-2@4748" prio=5 tid=0x4d nid=NA waiting for monitor entry
  java.lang.Thread.State: BLOCKED
 waiting for pool-22-thread-1@4747 to release lock on <0x1423> (a org.apache.kafka.connect.runtime.isolation.PluginClassLoader)
  at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:91)
  at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:367)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at java.lang.Class.forName0(Class.java:-1)
  at java.lang.Class.forName(Class.java:348)
  at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:715)
  at org.apache.kafka.connect.runtime.ConnectorConfig.enrich(ConnectorConfig.java:295)
  at org.apache.kafka.connect.runtime.ConnectorConfig.<init>(ConnectorConfig.java:200)
  at org.apache.kafka.connect.runtime.ConnectorConfig.<init>(ConnectorConfig.java:194)
  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:233)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:916)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:111)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:932)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:928)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{code}

Thread 2:
{code}
"pool-22-thread-1@4747" prio=5 tid=0x4c nid=NA waiting for monitor entry
  java.lang.Thread.State: BLOCKED
 blocks pool-22-thread-2@4748
 waiting for pool-22-thread-2@4748 to release lock on <0x1421> (a org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:406)
  at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:358)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
  - locked <0x1424> (a java.lang.Object)
  at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
  - locked <0x1423> (a org.apache.kafka.connect.runtime.isolation.PluginClassLoader)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at io.debezium.transforms.ByLogicalTableRouter.<clinit>(ByLogicalTableRouter.java:57)
  at java.lang.Class.forName0(Class.java:-1)
  at java.lang.Class.forName(Class.java:348)
  at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:715)
  at org.apache.kafka.connect.runtime.ConnectorConfig.enrich(ConnectorConfig.java:295)
  at org.apache.kafka.connect.runtime.ConnectorConfig.<init>(ConnectorConfig.java:200)
  at org.apache.kafka.connect.runtime.ConnectorConfig.<init>(ConnectorConfig.java:194)
  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:233)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:916)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:111)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:932)
  at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:928)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{code}

I'm using official Confluent Docker images.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7442) forceUnmap mmap on linux

2018-09-26 Thread JIRA
翟玉勇 created KAFKA-7442:
--

 Summary: forceUnmap mmap on linux
 Key: KAFKA-7442
 URL: https://issues.apache.org/jira/browse/KAFKA-7442
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.10.1.1
Reporter: 翟玉勇


When resizing OffsetIndex or TimeIndex, we should force-unmap the mmap on the 
Linux platform as well (currently this is only done on Windows):

{code}
def resize(newSize: Int) {
  inLock(lock) {
    val raf = new RandomAccessFile(_file, "rw")
    val roundedNewSize = roundDownToExactMultiple(newSize, entrySize)
    val position = mmap.position

    /* Windows won't let us modify the file length while the file is mmapped :-( */
    if (Os.isWindows)
      forceUnmap(mmap)
    try {
      raf.setLength(roundedNewSize)
      mmap = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, roundedNewSize)
      _maxEntries = mmap.limit / entrySize
      mmap.position(position)
    } finally {
      CoreUtils.swallow(raf.close())
    }
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6626) Performance bottleneck in Kafka Connect sendRecords

2018-10-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maciej Bryński resolved KAFKA-6626.
---
Resolution: Unresolved

> Performance bottleneck in Kafka Connect sendRecords
> ---
>
> Key: KAFKA-6626
> URL: https://issues.apache.org/jira/browse/KAFKA-6626
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Maciej Bryński
>Priority: Major
> Attachments: MapPerf.java, image-2018-03-08-08-35-19-247.png
>
>
> Kafka Connect is using IdentityHashMap for storing records.
> [https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L239]
> Unfortunately this solution is very slow (2-4 times slower than normal 
> HashMap / HashSet).
> Benchmark result (code in attachment). 
> {code:java}
> Identity 4220
> Set 2115
> Map 1941
> Fast Set 2121
> {code}
> Things are even worse when using default GC configuration 
>  (-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35  -Djava.awt.headless=true)
> {code:java}
> Identity 7885
> Set 2364
> Map 1548
> Fast Set 1520
> {code}
> Java version
> {code:java}
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
> {code}
> This problem is greatly slowing Kafka Connect.
> !image-2018-03-08-08-35-19-247.png!
>  
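Worth noting: {{WorkerSourceTask}} presumably uses {{IdentityHashMap}} for its 
reference-identity semantics, which a plain {{HashMap}} does not provide; the 
report above concerns the hashing cost, not the semantics. A small sketch of 
the semantic difference (illustrative only):

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityVsHashMap {
    public static void main(String[] args) {
        // Two distinct record objects that are equal by value.
        String a = new String("record");
        String b = new String("record");

        Map<String, Integer> byEquality = new HashMap<>();
        byEquality.put(a, 1);
        byEquality.put(b, 2);   // overwrites: a.equals(b) is true

        Map<String, Integer> byIdentity = new IdentityHashMap<>();
        byIdentity.put(a, 1);
        byIdentity.put(b, 2);   // kept separate: a != b

        System.out.println(byEquality.size());  // 1
        System.out.println(byIdentity.size());  // 2
    }
}
```

Any replacement data structure would have to preserve the identity-based 
lookups before the speedup could be applied.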



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6632) Very slow hashCode methods in Kafka Connect types

2018-10-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maciej Bryński resolved KAFKA-6632.
---
Resolution: Fixed

> Very slow hashCode methods in Kafka Connect types
> -
>
> Key: KAFKA-6632
> URL: https://issues.apache.org/jira/browse/KAFKA-6632
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Maciej Bryński
>Priority: Major
>
> hashCode method of ConnectSchema (and Field) is used a lot in SMT and 
> fromConnect.
> Example:
> [https://github.com/apache/kafka/blob/e5d6c9a79a4ca9b82502b8e7f503d86ddaddb7fb/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/InsertField.java#L164]
> Unfortunately it's using Objects.hash, which is very slow.
> I rewrote this with my own implementation and gained a 6x speedup.
> Microbenchmarks give:
>  * Original ConnectSchema hashCode: 2995ms
>  * My implementation: 517ms
> (many iterations of calculating hashCode on a new 
> ConnectSchema(Schema.Type.STRING))
> {code:java}
> @Override
> public int hashCode() {
>     int result = 5;
>     result = 31 * result + type.hashCode();
>     result = 31 * result + (optional ? 1 : 0);
>     result = 31 * result + (defaultValue == null ? 0 : defaultValue.hashCode());
>     if (fields != null) {
>         for (Field f : fields) {
>             result = 31 * result + f.hashCode();
>         }
>     }
>     result = 31 * result + (keySchema == null ? 0 : keySchema.hashCode());
>     result = 31 * result + (valueSchema == null ? 0 : valueSchema.hashCode());
>     result = 31 * result + (name == null ? 0 : name.hashCode());
>     result = 31 * result + (version == null ? 0 : version);
>     result = 31 * result + (doc == null ? 0 : doc.hashCode());
>     if (parameters != null) {
>         for (Map.Entry e : parameters.entrySet()) {
>             result = 31 * result + e.getKey().hashCode() + e.getValue().hashCode();
>         }
>     }
>     return result;
> }{code}
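For reference, the main cost of {{Objects.hash}} is that every call allocates a 
varargs {{Object[]}} and boxes any primitive arguments. A minimal sketch of the 
difference (hypothetical fields, not the actual ConnectSchema code):

```java
import java.util.Objects;

public class HashCost {
    // Varargs version: allocates an Object[] and boxes the primitives
    // on every single call.
    static int viaObjectsHash(String name, int version, boolean optional) {
        return Objects.hash(name, version, optional);
    }

    // Hand-rolled version: same 31-based mixing, no allocation.
    static int manual(String name, int version, boolean optional) {
        int result = 1;
        result = 31 * result + (name == null ? 0 : name.hashCode());
        result = 31 * result + version;
        result = 31 * result + (optional ? 1 : 0);
        return result;
    }

    public static void main(String[] args) {
        // Both are deterministic; they need not produce the same value,
        // only a stable one for equal inputs.
        System.out.println(viaObjectsHash("s", 1, true) == viaObjectsHash("s", 1, true)); // true
        System.out.println(manual("s", 1, true) == manual("s", 1, true));                 // true
    }
}
```

On a hot path like SMT transforms, avoiding the per-call allocation is where 
the reported speedup plausibly comes from.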



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-5959) NPE in NetworkClient

2017-09-21 Thread JIRA
Xavier Léauté created KAFKA-5959:


 Summary: NPE in NetworkClient
 Key: KAFKA-5959
 URL: https://issues.apache.org/jira/browse/KAFKA-5959
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 1.0.0
Reporter: Xavier Léauté
Assignee: Jason Gustafson


I'm experiencing the following error when running trunk clients against a 
0.11.0 cluster configured with SASL_PLAINTEXT

{code}
[2017-09-21 23:07:09,072] ERROR [kafka-producer-network-thread | xxx] [Producer 
clientId=xxx] Uncaught error in request completion: 
(org.apache.kafka.clients.NetworkClient)
java.lang.NullPointerException
at 
org.apache.kafka.clients.producer.internals.Sender.canRetry(Sender.java:639)
at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:522)
at 
org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:473)
at 
org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:76)
at 
org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:693)
at 
org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:101)
at 
org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:453)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:241)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:166)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5966) Support ByteBuffer serialization in Kafka Streams

2017-09-22 Thread JIRA
Xavier Léauté created KAFKA-5966:


 Summary: Support ByteBuffer serialization in Kafka Streams
 Key: KAFKA-5966
 URL: https://issues.apache.org/jira/browse/KAFKA-5966
 Project: Kafka
  Issue Type: Improvement
Reporter: Xavier Léauté


Currently Kafka Streams only supports serialization using byte arrays. This 
means we generate a lot of garbage and spend unnecessary time copying bytes, 
especially when working with windowed state stores that rely on composite keys. 
In many places in the code we have to extract parts of the composite key to 
deserialize either the timestamp or the message key from the state store key 
(e.g. the methods in WindowStoreUtils).

Having support for serdes into/from ByteBuffers would allow us to reuse the 
underlying byte arrays and just pass around slices of the underlying buffers to 
avoid the unnecessary copying.
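As an illustration of the copying this would avoid, assume a composite key laid 
out as [key bytes][8-byte big-endian timestamp] (an approximation of the 
windowed-store layout, not the exact Kafka Streams format):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompositeKeyDemo {
    // Hypothetical layout: [key bytes][8-byte big-endian timestamp].
    static byte[] compose(String key, long timestamp) {
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(k.length + 8).put(k).putLong(timestamp).array();
    }

    // Reading the timestamp is already cheap with an absolute get...
    static long timestampOf(byte[] composite) {
        return ByteBuffer.wrap(composite).getLong(composite.length - 8);
    }

    // ...but with ByteBuffer-aware serdes, a zero-copy view of the key
    // region could be handed to the deserializer instead of a fresh byte[].
    static ByteBuffer keySlice(byte[] composite) {
        ByteBuffer slice = ByteBuffer.wrap(composite);
        slice.limit(composite.length - 8);   // view of the key bytes only
        return slice;
    }

    public static void main(String[] args) {
        byte[] composite = compose("user-1", 1506038400000L);
        System.out.println(timestampOf(composite));
        System.out.println(StandardCharsets.UTF_8.decode(keySlice(composite)));
    }
}
```

The slice shares the backing array with the composite key, so no intermediate 
copy is allocated per lookup.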



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5981) Update the affected versions on bug KAFKA-1194

2017-09-27 Thread JIRA
Amund Gjersøe created KAFKA-5981:


 Summary: Update the affected versions on bug KAFKA-1194
 Key: KAFKA-5981
 URL: https://issues.apache.org/jira/browse/KAFKA-5981
 Project: Kafka
  Issue Type: Sub-task
  Components: log
Affects Versions: 0.11.0.0, 0.10.2.0, 0.10.1.1, 0.10.1.0, 0.10.0.1, 
0.10.0.0, 0.9.0.0, 0.8.1
 Environment: Windows
Reporter: Amund Gjersøe
Priority: Critical


KAFKA-1194 is a bug that has been around for close to 4 years.
The original poster hasn't updated the affected versions, so I did, based on my 
own testing and on what has been reported in the comments.

If someone with the rights to change bug reports could update the original one, 
that would be better than my approach. I just want to make sure that it is not 
seen as a "v0.8 only" bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6104) Add unit tests for ClusterConnectionStates

2017-10-22 Thread JIRA
Sönke Liebau created KAFKA-6104:
---

 Summary: Add unit tests for ClusterConnectionStates
 Key: KAFKA-6104
 URL: https://issues.apache.org/jira/browse/KAFKA-6104
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 0.11.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau
Priority: Trivial


There are no unit tests for the ClusterConnectionStates object.

I've created tests to:
* Cycle through connection states and check correct properties during the 
process
* Check authentication exception is correctly stored
* Check that max reconnect backoff is not exceeded during reconnects
* Check that removed connections are correctly reset

There is currently no test that checks whether the reconnect times are 
correctly increased, as that behavior is still being fixed in KAFKA-6101.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6114) kafka Java API Consumer and producer Offset value comparison?

2017-10-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6114.
-
Resolution: Invalid
  Assignee: Sönke Liebau

> kafka Java API Consumer and producer Offset value comparison?
> -
>
> Key: KAFKA-6114
> URL: https://issues.apache.org/jira/browse/KAFKA-6114
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer, offset manager, producer 
>Affects Versions: 0.11.0.0
> Environment: Linux 
>Reporter: veerendra nath jasthi
>Assignee: Sönke Liebau
>
> I have a requirement to match the Kafka producer offset value to the consumer 
> offset using the Java API.
> I am new to Kafka. Could anyone suggest how to proceed with this?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6112) SSL + ACL does not seem to work

2017-10-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6112.
-
Resolution: Cannot Reproduce

As stated in an earlier comment, this is most probably a configuration issue; 
ACLs with SSL authentication work in general.

> SSL + ACL does not seem to work
> ---
>
> Key: KAFKA-6112
> URL: https://issues.apache.org/jira/browse/KAFKA-6112
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Jagadish Prasath Ramu
>Assignee: Sönke Liebau
>
> I'm trying to enable ACLs for a cluster that has SSL-based authentication 
> set up.
> Similar issues (or exceptions) have been reported in the following JIRA:
> https://issues.apache.org/jira/browse/KAFKA-3687 (refer to the last 2 
> exceptions that were posted after the issue was closed).
> error messages seen in Producer:
> {noformat}
> [2017-10-24 18:32:25,254] WARN Error while fetching metadata with correlation 
> id 349 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2017-10-24 18:32:25,362] WARN Error while fetching metadata with correlation 
> id 350 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2017-10-24 18:32:25,470] WARN Error while fetching metadata with correlation 
> id 351 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2017-10-24 18:32:25,575] WARN Error while fetching metadata with correlation 
> id 352 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> {noformat}
> security related kafka config.properties:
> {noformat}
> ssl.keystore.location=kafka.server.keystore.jks
> ssl.keystore.password=abc123
> ssl.key.password=abc123
> ssl.truststore.location=kafka.server.truststore.jks
> ssl.truststore.password=abc123
> ssl.client.auth=required
> ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
> ssl.keystore.type = JKS
> ssl.truststore.type = JKS
> security.inter.broker.protocol = SSL
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> allow.everyone.if.no.acl.found=false
> super.users=User:Bob;User:"CN=localhost,OU=XXX,O=,L=XXX,ST=XX,C=XX"
> listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
> {noformat}
> client configuration file:
> {noformat}
> security.protocol=SSL
> ssl.truststore.location=kafka.client.truststore.jks
> ssl.truststore.password=abc123
> ssl.keystore.location=kafka.client.keystore.jks
> ssl.keystore.password=abc123
> ssl.key.password=abc123
> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
> ssl.truststore.type=JKS
> ssl.keystore.type=JKS
> group.id=group-1
> {noformat}
> The debug messages of authorizer log does not show any "DENY" messages.
> {noformat}
> [2017-10-24 18:32:26,319] DEBUG operation = Create on resource = 
> Cluster:kafka-cluster from host = 127.0.0.1 is Allow based on acl = 
> User:CN=localhost,OU=XXX,O=,L=XXX,ST=XX,C=XX has Allow permission for 
> operations: Create from hosts: 127.0.0.1 (kafka.authorizer.logger)
> [2017-10-24 18:32:26,319] DEBUG Principal = 
> User:CN=localhost,OU=XXX,O=,L=XXX,ST=XX,C=XX is Allowed Operation = 
> Create from host = 127.0.0.1 on resource = Cluster:kafka-cluster 
> (kafka.authorizer.logger)
> {noformat}
> I have followed the scripts stated in the thread:
> http://comments.gmane.org/gmane.comp.apache.kafka.user/12619



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6159) Link to upgrade docs in 100 release notes is broken

2017-11-02 Thread JIRA
Martin Schröder created KAFKA-6159:
--

 Summary: Link to upgrade docs in 100 release notes is broken
 Key: KAFKA-6159
 URL: https://issues.apache.org/jira/browse/KAFKA-6159
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0
Reporter: Martin Schröder


The release notes for 1.0.0 point to 
http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
documentation", but that gives a 404.

Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6163) Broker should fail fast on startup if an error occurs while loading logs

2017-11-02 Thread JIRA
Xavier Léauté created KAFKA-6163:


 Summary: Broker should fail fast on startup if an error occurs 
while loading logs
 Key: KAFKA-6163
 URL: https://issues.apache.org/jira/browse/KAFKA-6163
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Xavier Léauté
Priority: Normal


If the broker fails to load one of the logs during startup, we currently don't 
fail fast. The {{LogManager}} will log an error and initiate the shutdown 
sequence, but continues loading all the remaining logs before shutting down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6164) ClientQuotaManager threads prevent shutdown when encountering an error loading logs

2017-11-02 Thread JIRA
Xavier Léauté created KAFKA-6164:


 Summary: ClientQuotaManager threads prevent shutdown when 
encountering an error loading logs
 Key: KAFKA-6164
 URL: https://issues.apache.org/jira/browse/KAFKA-6164
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0, 1.0.0
Reporter: Xavier Léauté
Priority: Major


While diagnosing KAFKA-6163, we noticed that when the broker initiates a 
shutdown sequence in response to an error loading the logs, the process never 
exits. The JVM appears to be waiting indefinitely for several non-daemon 
threads to terminate.

The threads in question are {{ThrottledRequestReaper-Request}}, 
{{ThrottledRequestReaper-Produce}}, and {{ThrottledRequestReaper-Fetch}}, so it 
appears we don't properly shutdown {{ClientQuotaManager}} in this situation.

QuotaManager shutdown is currently handled by KafkaApis; however, KafkaApis 
will never be instantiated in the first place if we encounter an error loading 
the logs, so the quota managers are left dangling in that case.
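For background, the JVM exits only once every non-daemon thread has terminated, 
so a reaper thread that is neither interrupted nor marked as a daemon will hang 
the shutdown exactly as described. A minimal illustration (not the Kafka code; 
the thread name is borrowed from the report):

```java
public class DaemonDemo {
    public static void main(String[] args) throws Exception {
        Thread reaper = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(60_000);   // pretend to reap throttled requests
                } catch (InterruptedException e) {
                    return;                 // a proper shutdown() would interrupt us
                }
            }
        }, "ThrottledRequestReaper-Demo");

        // With setDaemon(false) (the default), main() returning would NOT
        // terminate the JVM: it would wait on this thread forever, which is
        // the hang described above. Marking it a daemon (or interrupting it
        // during shutdown) lets the process exit.
        reaper.setDaemon(true);
        reaper.start();
        System.out.println(reaper.isDaemon());  // true
    }
}
```

The fix sketched in the issue is the interrupt path: ensure the quota managers' 
shutdown is invoked even when KafkaApis was never constructed.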



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6177) kafka-mirror-maker.sh RecordTooLargeException

2017-11-06 Thread JIRA
Rémi REY created KAFKA-6177:
---

 Summary: kafka-mirror-maker.sh RecordTooLargeException
 Key: KAFKA-6177
 URL: https://issues.apache.org/jira/browse/KAFKA-6177
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.10.1.1
 Environment: centos 7
Reporter: Rémi REY
Priority: Minor
 Attachments: consumer.config, producer.config

Hi all,

I am facing an issue with kafka-mirror-maker.sh.
We have 2 kafka clusters with the same configuration and mirror maker instances 
in charge of the mirroring between the clusters.

We haven't changed the default configuration for the message size, so the 
default limitation is expected on both clusters.

We are facing the following error on the mirroring side:

Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
1000272 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
org.apache.kafka.common.errors.RecordTooLargeException: The request included a 
message larger than the max message size the server will accept.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] ERROR 
Error when sending message to topic my_topic_name with key: 81 bytes, value: 
13846 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Producer is closed forcefully.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
java.lang.Thread.run(Thread.java:745)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] FATAL 
[mirrormaker-thread-0] Mirror maker thread failure due to  
(kafka.tools.MirrorMaker$MirrorMakerThread)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
java.lang.IllegalStateException: Cannot send after the producer is closed.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.Iterator$class.foreach(Iterator.scala:893)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)

Why am I getting this error?
I expect that messages that could enter the first cluster can be replicated to 
the second cluster without raising any error on the message size.
Is there any configuration adjustment required on the mirror maker side to have 
it support the default message size of the brokers?

Find the mirrormaker consumer and producer config files attached.

Thanks for your inputs.
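For anyone hitting this, the size limits generally have to be aligned across 
the destination brokers and the mirror maker's producer and consumer. A sketch 
of the knobs involved (property names are standard Kafka settings; the values 
are illustrative only, not a recommendation):

```properties
# Destination brokers: broker-side cap on a single (post-compression) message
message.max.bytes=1000012
# ... or per topic via the topic-level override:
# max.message.bytes=1000012

# Mirror maker producer.config: a produce request must fit the broker cap
max.request.size=1048576

# Mirror maker consumer.config: must be able to fetch large source messages
max.partition.fetch.bytes=1048576
```

In the log above the failing record is 1000272 bytes, i.e. just over the 
default broker cap, so raising the destination broker limit (or the producer 
request size, if that is the binding constraint) is the usual remedy.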



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4) Confusing Error message from producer when no kafka brokers are available

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-4.
--
Resolution: Fixed  (was: Unresolved)

> Confusing Error message from producer when no kafka brokers are available
> 
>
> Key: KAFKA-4
> URL: https://issues.apache.org/jira/browse/KAFKA-4
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.6
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> If no kafka brokers are available the producer gives the following error: 
> Exception in thread "main" kafka.common.InvalidPartitionException: Invalid 
> number of partitions: 0 
> Valid values are > 0 
> at 
> kafka.producer.Producer.kafka$producer$Producer$$getPartition(Producer.scala:144)
>  
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:112) 
> at kafka.producer.Producer$$anonfun$3.apply(Producer.scala:102) 
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>  
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>  
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32) 
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:206) 
> at scala.collection.mutable.WrappedArray.map(WrappedArray.scala:32) 
> at kafka.producer.Producer.send(Producer.scala:102) 
> at kafka.javaapi.producer.Producer.send(Producer.scala:101) 
> at com.linkedin.nusviewer.PublishTestMessage.main(PublishTestMessage.java:45) 
> This is confusing. The problem is that no brokers are available; we should 
> make this clearer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-381) Changes made by a request do not affect following requests in the same packet.

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-381.

Resolution: Not A Bug

I think we can safely close this issue; the behavior was sufficiently 
investigated and explained.
The behavior today would still be the same, and it is the expected behavior.

> Changes made by a request do not affect following requests in the same packet.
> --
>
> Key: KAFKA-381
> URL: https://issues.apache.org/jira/browse/KAFKA-381
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.7
>Reporter: Samir Jindel
>Priority: Minor
>
> If a packet contains a produce request followed immediately by a fetch 
> request, the fetch request will not have access to the data produced by the 
> prior request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1130) "log.dirs" is a confusing property name

2017-11-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1130.
-
Resolution: Won't Fix

Given the age of the last comment on this issue and the fact that there has 
not been much discussion around the naming of this parameter recently, I 
believe we can close this issue.

> "log.dirs" is a confusing property name
> ---
>
> Key: KAFKA-1130
> URL: https://issues.apache.org/jira/browse/KAFKA-1130
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: David Arthur
>Priority: Minor
> Attachments: KAFKA-1130.diff
>
>
> "log.dirs" is a somewhat misleading config name. The term "log" comes from an 
> internal Kafka class name, and shouldn't leak out into the public API (in 
> this case, the config).
> Something like "data.dirs" would be less confusing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6238) Issues with protocol version when applying a rolling upgrade to 1.0.0

2017-11-20 Thread JIRA
Diego Louzán created KAFKA-6238:
---

 Summary: Issues with protocol version when applying a rolling 
upgrade to 1.0.0
 Key: KAFKA-6238
 URL: https://issues.apache.org/jira/browse/KAFKA-6238
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0
Reporter: Diego Louzán


Hello,

I am trying to perform a rolling upgrade from 0.10.0.1 to 1.0.0, and according 
to the instructions in the documentation, I should only have to upgrade the 
"inter.broker.protocol.version" parameter in the first step. But after setting 
the value to "0.10.0" or "0.10.0.1" (tried both), the broker refuses to start 
with the following error:

{code}
[2017-11-20 08:28:46,620] FATAL  (kafka.Kafka$)
java.lang.IllegalArgumentException: requirement failed: 
log.message.format.version 1.0-IV0 cannot be used when 
inter.broker.protocol.version is set to 0.10.0.1
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1205)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1170)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:881)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:878)
at 
kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
{code}

I checked the instructions for rolling upgrades to previous versions (namely 
0.11.0.0), and there it is stated that the "log.message.format.version" 
parameter also needs to be upgraded in two stages. I tried that and the 
upgrade worked. It seems this still applies to version 1.0.0, so I'm not sure 
whether this is wrong documentation or an actual issue with Kafka, since it 
should work as stated in the docs.
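For reference, the two-stage procedure described here can be sketched as the 
following broker-property changes (versions shown are illustrative for a 
0.10.0.1 to 1.0.0 upgrade; each stage is followed by a rolling restart):

```properties
# Stage 1: pin both versions to the old release on every broker,
# then roll the cluster onto the 1.0.0 binaries.
inter.broker.protocol.version=0.10.0
log.message.format.version=0.10.0

# Stage 2: once all brokers run 1.0.0, bump the protocol version
# and perform a second rolling restart.
# inter.broker.protocol.version=1.0

# Finally, optionally after all clients are upgraded, bump the format:
# log.message.format.version=1.0
```

Setting only inter.broker.protocol.version while log.message.format.version 
defaults to the new release triggers exactly the validation error shown above.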

Regards,
Diego Louzán



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6257) KafkaConsumer was hung when bootstrap servers was not existed

2017-11-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6257.
-
Resolution: Duplicate

Closing this as a duplicate since no contradicting information was added.

> KafkaConsumer was hung when bootstrap servers was not existed
> -
>
> Key: KAFKA-6257
> URL: https://issues.apache.org/jira/browse/KAFKA-6257
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Brian Clark
>Priority: Minor
>
> Could anyone help me on this?
> We have an issue if we enter a non-existent host:port for the 
> bootstrap.servers property on KafkaConsumer. The created KafkaConsumer hangs 
> forever.
> the debug message:
> java.net.ConnectException: Connection timed out: no further information
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
> at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:95)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:359)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:199)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:223)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:200)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
> [2017-08-28 09:20:56,400] DEBUG Node -1 disconnected. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-28 09:20:56,400] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-28 09:20:56,400] DEBUG Give up sending metadata request since no 
> node is available (org.apache.kafka.clients.NetworkClient)
> [2017-08-28 09:20:56,450] DEBUG Initialize connection to node -1 for sending 
> metadata request (org.apache.kafka.clients.NetworkClient)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6311) Expose Kafka cluster ID in Connect REST API

2017-12-05 Thread JIRA
Xavier Léauté created KAFKA-6311:


 Summary: Expose Kafka cluster ID in Connect REST API
 Key: KAFKA-6311
 URL: https://issues.apache.org/jira/browse/KAFKA-6311
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Xavier Léauté
Assignee: Ewen Cheslack-Postava


Connect currently does not expose any information about the Kafka cluster it is 
connected to.
In an environment with multiple Kafka clusters it would be useful to know which 
cluster Connect is talking to. Exposing this information enables programmatic 
discovery of Kafka cluster metadata for the purpose of configuring connectors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6389) Expose transaction metrics via JMX

2017-12-20 Thread JIRA
Florent Ramière created KAFKA-6389:
--

 Summary: Expose transaction metrics via JMX
 Key: KAFKA-6389
 URL: https://issues.apache.org/jira/browse/KAFKA-6389
 Project: Kafka
  Issue Type: Improvement
  Components: metrics
Affects Versions: 1.0.0
Reporter: Florent Ramière


Expose various metrics from 
https://cwiki.apache.org/confluence/display/KAFKA/Transactional+Messaging+in+Kafka
Especially 
* number of transactions
* number of current transactions
* timeout



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-7878) Connect Task already exists in this worker when failed to create consumer

2019-01-28 Thread JIRA
Loïc Monney created KAFKA-7878:
--

 Summary: Connect Task already exists in this worker when failed to 
create consumer
 Key: KAFKA-7878
 URL: https://issues.apache.org/jira/browse/KAFKA-7878
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.1, 1.0.1
Reporter: Loïc Monney


*Assumption*
1. DNS is not available for a few minutes
2. Consumer group rebalances
3. Client can no longer resolve DNS entries and fails
4. The task appears to remain registered, so at the next rebalance the task 
will fail with *Task already exists in this worker*, and the only way to 
recover is to restart the connect process

*Real log entries*
* Distributed cluster running one connector on top of Kubernetes
* Connect 2.0.1
* kafka-connect-hdfs 5.0.1
{noformat}
[2019-01-28 13:31:25,914] WARN Removing server kafka.xxx.net:9093 from 
bootstrap.servers as DNS resolution failed for kafka.xxx.net 
(org.apache.kafka.clients.ClientUtils:56)
[2019-01-28 13:31:25,915] ERROR WorkerSinkTask\{id=xxx-22} Task failed 
initialization and will not be started. 
(org.apache.kafka.connect.runtime.WorkerSinkTask:142)
org.apache.kafka.connect.errors.ConnectException: Failed to create consumer
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.createConsumer(WorkerSinkTask.java:476)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.initialize(WorkerSinkTask.java:139)
 at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:452)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:873)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1600(DistributedHerder.java:111)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:888)
 at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:884)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka 
consumer
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:799)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:615)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:596)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.createConsumer(WorkerSinkTask.java:474)
 ... 10 more
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable 
bootstrap urls given in bootstrap.servers
 at 
org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:66)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:709)
 ... 13 more
[2019-01-28 13:31:25,925] INFO Finished starting connectors and tasks 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:868)
[2019-01-28 13:31:25,926] INFO Rebalance started 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:1239)
[2019-01-28 13:31:25,927] INFO Stopping task xxx-22 
(org.apache.kafka.connect.runtime.Worker:555)
[2019-01-28 13:31:26,021] INFO Finished stopping tasks in preparation for 
rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1269)
[2019-01-28 13:31:26,021] INFO [Worker clientId=connect-1, groupId=xxx-cluster] 
(Re-)joining group 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:509)
[2019-01-28 13:31:30,746] INFO [Worker clientId=connect-1, groupId=xxx-cluster] 
Successfully joined group with generation 29 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:473)
[2019-01-28 13:31:30,746] INFO Joined group and got assignment: 
Assignment\{error=0, leader='connect-1-05961f03-52a7-4c02-acc2-0f1fb021692e', 
leaderUrl='http://192.168.46.59:8083/', offset=32, connectorIds=[], 
taskIds=[xxx-22]} 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:1217)
[2019-01-28 13:31:30,747] INFO Starting connectors and tasks using config 
offset 32 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:858)
[2019-01-28 13:31:30,747] INFO Starting task xxx-22 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:872)
[2019-01-28 13:31:30,747] INFO Creating task xxx-22 
(org.apache.kafka.connect.runtime.Worker:396)
[2019-01-28 13:31:30,748] ERROR Couldn't instantiate task xxx-22 because it has 
an invalid task configuration. This task will not execute until reconfigured. 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:890)
org.apache.kafka.connect.errors.ConnectException: Task already exists in this 
worker: xxx-22
 at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:399)
 at 
org.apache.kafka.connect.runtime.distributed.Di

[jira] [Created] (KAFKA-7883) Add schema.namespace support to SetSchemaMetadata SMT in Kafka Connect

2019-01-29 Thread JIRA
Jérémy Thulliez created KAFKA-7883:
--

 Summary: Add schema.namespace support to SetSchemaMetadata SMT in 
Kafka Connect
 Key: KAFKA-7883
 URL: https://issues.apache.org/jira/browse/KAFKA-7883
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Affects Versions: 2.1.0
Reporter: Jérémy Thulliez


When using a connector with AvroConverter & SchemaRegistry, users should be 
able to specify the namespace in the SMT.

Currently, only "schema.version" and "schema.name" can be specified.

This is needed because, if the namespace is not specified, the classes 
generated from the Avro schema end up in the default package and are not 
accessible.

Currently, the workaround is to add a custom Transformation implementation to 
the Connect classpath.

This should be supported natively.
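For reference, the two knobs the SMT exposes today could be wired up like this (a hypothetical sink connector fragment; the transform alias and values are illustrative):

```properties
transforms=SetSchema
transforms.SetSchema.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value
# Only the name and version can be set today; there is no schema.namespace.
transforms.SetSchema.schema.name=MyRecord
transforms.SetSchema.schema.version=2
```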

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7886) Some partitions are fully truncated during recovery when log.message.format = 0.10.2 & inter.broker.protocol >= 0.11

2019-01-31 Thread JIRA
Hervé RIVIERE created KAFKA-7886:


 Summary: Some partitions are fully truncated during recovery when 
log.message.format = 0.10.2 & inter.broker.protocol >= 0.11
 Key: KAFKA-7886
 URL: https://issues.apache.org/jira/browse/KAFKA-7886
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.0, 2.0.1, 0.11.0.0
 Environment: centos 7 
Reporter: Hervé RIVIERE
 Attachments: broker.log

On a cluster of Kafka 2.0.1, with brokers configured with 
* inter.broker.protocol.version = 2.0
* log.message.format.version = 0.10.2

 

In this configuration, when a broker is restarted (clean shutdown), the 
recovery process for some partitions does not take the high watermark into 
account and truncates and re-downloads the full partition.


Typically, for brokers with 500 partitions each and 5 TB of disk usage, the 
recovery process with this configuration lasts up to 1 hour, whereas it 
usually takes less than 10 minutes on the same broker when 
inter.broker.protocol.version = log.message.format.version.
Which partitions are re-downloaded seems unpredictable: after several restarts 
of the same broker, the re-downloaded partitions are not always the same.
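The triggering combination, expressed as a broker configuration fragment (property values taken from the report above):

```properties
# Brokers speak the 2.0 inter-broker protocol while messages stay on the
# old 0.10.2 on-disk format; recovery then ignores the high watermark for
# some partitions.
inter.broker.protocol.version=2.0
log.message.format.version=0.10.2
```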


Broker log filtered for one specific partition that was re-downloaded (the 
truncation offset 12878451349 corresponds to the log start offset):

 
{code:java}
2019-01-31 09:23:34,703 INFO [ProducerStateManager partition=my_topic-11] 
Writing producer snapshot at offset 13132373966 (kafka.log.ProducerStateManager)
2019-01-31 09:25:15,245 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
Loading producer state till offset 13132373966 with message format version 1 
(kafka.log.Log)
2019-01-31 09:25:15,245 INFO [ProducerStateManager partition=my_topic-11] 
Writing producer snapshot at offset 13130789408 (kafka.log.ProducerStateManager)
2019-01-31 09:25:15,249 INFO [ProducerStateManager partition=my_topic-11] 
Writing producer snapshot at offset 13131829288 (kafka.log.ProducerStateManager)
2019-01-31 09:25:15,388 INFO [ProducerStateManager partition=my_topic-11] 
Writing producer snapshot at offset 13132373966 (kafka.log.ProducerStateManager)

2019-01-31 09:25:15,388 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
Completed load of log with 243 segments, log start offset 12878451349 and log 
end offset 13132373966 in 46273 ms (kafka.log.Log)

2019-01-31 09:28:38,226 INFO Replica loaded for partition my_topic-11 with 
initial high watermark 13132373966 (kafka.cluster.Replica)
2019-01-31 09:28:38,226 INFO Replica loaded for partition my_topic-11 with 
initial high watermark 0 (kafka.cluster.Replica)
2019-01-31 09:28:38,226 INFO Replica loaded for partition my_topic-11 with 
initial high watermark 0 (kafka.cluster.Replica)
2019-01-31 09:28:42,132 INFO The cleaning for partition my_topic-11 is aborted 
and paused (kafka.log.LogCleaner)

2019-01-31 09:28:42,133 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
Truncating to offset 12878451349 (kafka.log.Log)

2019-01-31 09:28:42,135 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
Scheduling log segment [baseOffset 12879521312, size 536869342] for deletion. 
(kafka.log.Log)
(...)
2019-01-31 09:28:42,521 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
Scheduling log segment [baseOffset 13131829288, size 280543535] for deletion. 
(kafka.log.Log)
2019-01-31 09:28:43,870 WARN [ReplicaFetcher replicaId=11, leaderId=13, 
fetcherId=1] Truncating my_topic-11 to offset 12878451349 below high watermark 
13132373966 (kafka.server.ReplicaFetcherThread)
2019-01-31 09:29:03,703 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
Found deletable segments with base offsets [12878451349] due to retention time 
25920ms breach (kafka.log.Log)
2019-01-31 09:28:42,550 INFO Compaction for partition my_topic-11 is resumed 
(kafka.log.LogManager)
{code}
 

We successfully reproduced the same bug with Kafka 0.11, 2.0.1 & 2.1.0.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-217) Client test suite

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-217.

Resolution: Won't Fix

I'll close this for now since no-one objected here or on the mailing list. If 
we decide to do something like this later on we can always reopen or create a 
new issue.

> Client test suite
> -
>
> Key: KAFKA-217
> URL: https://issues.apache.org/jira/browse/KAFKA-217
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Priority: Major
>
> It would be great to get a comprehensive test suite that we could run against 
> clients to certify them.
> The first step here would be work out a design approach that makes it easy to 
> certify the correctness of a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-659) Support request pipelining in the network server

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-659.

Resolution: Fixed

I'll close this for now, as no-one objected here or on the mailing list; I'll 
assume my understanding that this has since been fixed is correct.

> Support request pipelining in the network server
> 
>
> Key: KAFKA-659
> URL: https://issues.apache.org/jira/browse/KAFKA-659
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Priority: Major
>
> Currently the network layer in kafka will only process a single request at a 
> time from a given connection. The protocol is designed to allow pipelining of 
> requests which would improve latency.
> There are two changes that would have to made for this to work, in my 
> understanding:
> 1. Currently once a completed request is read from a socket the server does 
> not register for "read interest" again until a response is sent. The server 
> would have to register for read interest immediately to allow reading more 
> requests.
> 2. Currently the socket server adds all requests to a single "request 
> channel" that serves as a work queue for all the background i/o threads. One 
> requirement for Kafka is to do in order processing of requests from a given 
> socket. This is currently achieved by not reading any new requests from a 
> socket until the currently outstanding request is processed. To maintain this 
> guarantee we would have to guarantee that all requests from a particular 
> socket went to the same I/O thread. A simple way to do this would be to have 
> work queue per I/O thread. One downside of this is that pinning requests to 
> I/O threads will add latency variance--if that thread stalls due to a slow 
> I/O no other thread can pick up the slack. So perhaps there is a better way 
> that isn't overly complex?
> Would be good to nail down the design for this as a first step.
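The per-I/O-thread work queue idea in point 2 can be illustrated in plain Java (a hypothetical sketch, not Kafka's actual network layer; class and method names are invented):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: pin every request from a given connection to the same
// I/O-thread queue, preserving per-socket ordering while allowing pipelining.
public class PinnedDispatcher {
    private final BlockingQueue<String>[] queues;

    @SuppressWarnings("unchecked")
    public PinnedDispatcher(int ioThreads) {
        queues = new BlockingQueue[ioThreads];
        for (int i = 0; i < ioThreads; i++) {
            queues[i] = new ArrayBlockingQueue<>(1024);
        }
    }

    // Stable mapping from connection id to queue index.
    public int queueFor(String connectionId) {
        return (connectionId.hashCode() & 0x7fffffff) % queues.length;
    }

    // All requests from one connection land in one queue, so a single I/O
    // thread processes them in arrival order.
    public void dispatch(String connectionId, String request) throws InterruptedException {
        queues[queueFor(connectionId)].put(request);
    }
}
```

As the ticket notes, the trade-off is latency variance: if the thread that owns a queue stalls on slow I/O, no other thread can pick up its requests.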



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-1015) documentation for inbuilt offset management

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1015.
-
Resolution: Fixed

I'll close this as there was no conflicting feedback here or on the mailing 
list. Not sure what the correct Resolution here would be, so I'll keep it 
simple and go with "fixed".

> documentation for inbuilt offset management
> ---
>
> Key: KAFKA-1015
> URL: https://issues.apache.org/jira/browse/KAFKA-1015
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Tejas Patil
>Assignee: Tejas Patil
>Priority: Minor
>
> Add documentation for inbuilt offset management and update existing documents 
> if needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-817) Implement a zookeeper path-based controlled shutdown tool

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-817.

Resolution: Won't Fix

I'll close this for now as I've received no feedback to the contrary.

> Implement a zookeeper path-based controlled shutdown tool
> -
>
> Key: KAFKA-817
> URL: https://issues.apache.org/jira/browse/KAFKA-817
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, tools
>Affects Versions: 0.8.1
>Reporter: Joel Koshy
>Assignee: Neha Narkhede
>Priority: Major
>
> The controlled shutdown tool currently depends on jmxremote.port being 
> exposed. Apparently, this is often not exposed in production environments and 
> makes the script unusable. We can move to a zk-based approach in which the 
> controller watches a path that lists shutting down brokers. This will also 
> make it consistent with the pattern used in some of the other 
> replication-related tools.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7883) Add schema.namespace support to SetSchemaMetadata SMT in Kafka Connect

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Thulliez resolved KAFKA-7883.

Resolution: Workaround

> Add schema.namespace support to SetSchemaMetadata SMT in Kafka Connect
> --
>
> Key: KAFKA-7883
> URL: https://issues.apache.org/jira/browse/KAFKA-7883
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 2.1.0
>Reporter: Jérémy Thulliez
>Priority: Minor
>  Labels: features
>
> When using a connector with AvroConverter & SchemaRegistry, users should be 
> able to specify the namespace in the SMT.
> Currently, only "schema.version" and "schema.name" can be specified.
> This is needed because if not specified, generated classes (from avro schema) 
>  are in the default package and not accessible.
> Currently, the workaround is to add a Transformation implementation to the 
> connect classpath.
> It should be native.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8009) Upgrade Jenkins job Gradle version from 4.10.2 to 5.x

2019-02-27 Thread JIRA
Dejan Stojadinović created KAFKA-8009:
-

 Summary: Upgrade Jenkins job Gradle version from 4.10.2 to 5.x
 Key: KAFKA-8009
 URL: https://issues.apache.org/jira/browse/KAFKA-8009
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Dejan Stojadinović
Assignee: Ismael Juma


*Rationale:* 
 * the Kafka project already uses Gradle 5.x (5.1.1 at the moment)
 * the recently released *spotbugs-gradle-plugin* version 1.6.10 drops support 
for Gradle 4: 
[https://github.com/spotbugs/spotbugs-gradle-plugin/blob/1.6.10/CHANGELOG.md#1610---2019-02-18]

 

*Note:* a related GitHub pull request contains the spotbugs plugin version 
bump (among other things): 
https://github.com/apache/kafka/pull/6332#issuecomment-467631246



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8131) Add --version parameter to command line help outputs & docs

2019-03-20 Thread JIRA
Sönke Liebau created KAFKA-8131:
---

 Summary: Add --version parameter to command line help outputs & 
docs
 Key: KAFKA-8131
 URL: https://issues.apache.org/jira/browse/KAFKA-8131
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Sönke Liebau
Assignee: Sönke Liebau


KAFKA-2061 added the --version flag to kafka-run-class.sh, which prints the 
Kafka version to the command line.

As this lives in kafka-run-class.sh, it effectively works for all command-line 
tools that use this file to run a class, so it should be added to the help 
output of these scripts as well. A quick grep leads me to these suspects:
 * connect-distributed.sh
 * connect-standalone.sh
 * kafka-acls.sh
 * kafka-broker-api-versions.sh
 * kafka-configs.sh
 * kafka-console-consumer.sh
 * kafka-console-producer.sh
 * kafka-consumer-groups.sh
 * kafka-consumer-perf-test.sh
 * kafka-delegation-tokens.sh
 * kafka-delete-records.sh
 * kafka-dump-log.sh
 * kafka-log-dirs.sh
 * kafka-mirror-maker.sh
 * kafka-preferred-replica-election.sh
 * kafka-producer-perf-test.sh
 * kafka-reassign-partitions.sh
 * kafka-replica-verification.sh
 * kafka-server-start.sh
 * kafka-streams-application-reset.sh
 * kafka-topics.sh
 * kafka-verifiable-consumer.sh
 * kafka-verifiable-producer.sh
 * trogdor.sh
 * zookeeper-security-migration.sh
 * zookeeper-server-start.sh
 * zookeeper-shell.sh

Currently this parameter is not documented at all, neither in the help output 
nor in the official docs.

I propose to add it to the docs as part of this issue as well; I'll look for a 
suitable place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8160) To add ACL with SSL authentication

2019-03-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-8160.
-
Resolution: Information Provided

Hi [~suseem...@gmail.com]

 

you can absolutely use SSL-based authentication with ACLs; please refer to the 
docs [here|https://kafka.apache.org/documentation/#security_ssl] and 
[here|https://kafka.apache.org/documentation/#security_authz] for more 
information.

For your specific question: you will have to use a custom PrincipalBuilder to 
ensure that the principals extracted from certificates match the usernames you 
set for your SCRAM users.

 

As this is more of a support request than a new feature, I'll close this 
ticket. If you have any further questions, please don't hesitate to reach out 
on the users mailing list!

> To add ACL with SSL authentication
> --
>
> Key: KAFKA-8160
> URL: https://issues.apache.org/jira/browse/KAFKA-8160
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer, producer 
>Affects Versions: 1.1.0
>Reporter: suseendramani
>Priority: Major
>
> We want to set up SSL-based authentication along with ACLs in place. Is 
> that doable, and can it be added as a feature?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8218) IllegalStateException while accessing context in Transformer

2019-04-11 Thread JIRA
Bartłomiej Kępa created KAFKA-8218:
--

 Summary: IllegalStateException while accessing context in 
Transformer
 Key: KAFKA-8218
 URL: https://issues.apache.org/jira/browse/KAFKA-8218
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.1
Reporter: Bartłomiej Kępa


A custom Kotlin implementation of Transformer throws 
{code}
java.lang.IllegalStateException: This should not happen as headers() should 
only be called while a record is processed
{code}

while plugged into a stream topology that otherwise works. The transform() 
method is invoked with valid arguments (key and GenericRecord).

The exception is thrown because our implementation of transform needs to 
access the headers from the context.


{code:java}
 override fun transform(key: String?, value: GenericRecord): KeyValue<String, GenericRecord> {
  val headers = context.headers()
  ...
}
 {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8226) New MirrorMaker option partition.to.partition

2019-04-12 Thread JIRA
Ernestas Vaiciukevičius created KAFKA-8226:
--

 Summary: New MirrorMaker option partition.to.partition
 Key: KAFKA-8226
 URL: https://issues.apache.org/jira/browse/KAFKA-8226
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Ernestas Vaiciukevičius


Currently, when MirrorMaker moves data between topics and the records have 
null keys, it shuffles the records across the destination topic's partitions. 
Sometimes it is desirable to preserve the original partition.

The related PR adds a new command line option to do that:

When partition.to.partition=true, MirrorMaker retains the partition number 
when mirroring records, even without keys.
When using this option, the source and destination topics are assumed to have 
the same number of partitions.
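The routing rule described above can be sketched in plain Java (a hypothetical helper, not the actual MirrorMaker code; it assumes equal partition counts, as the option requires):

```java
import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of partition.to.partition routing for mirrored records.
public class PartitionRouter {
    public static int route(boolean preservePartition, Integer sourcePartition,
                            byte[] key, int destPartitions) {
        if (key == null) {
            // With partition.to.partition=true, a null-keyed record keeps its
            // source partition (source and destination counts must match).
            if (preservePartition && sourcePartition != null) {
                return sourcePartition;
            }
            // Default behaviour: null-keyed records are shuffled.
            return ThreadLocalRandom.current().nextInt(destPartitions);
        }
        // Keyed records hash to a deterministic partition as usual.
        return (Arrays.hashCode(key) & 0x7fffffff) % destPartitions;
    }
}
```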



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8247) Duplicate error handling in kafka-server-start.sh and actual Kafka class

2019-04-17 Thread JIRA
Sönke Liebau created KAFKA-8247:
---

 Summary: Duplicate error handling in kafka-server-start.sh and 
actual Kafka class
 Key: KAFKA-8247
 URL: https://issues.apache.org/jira/browse/KAFKA-8247
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.1
Reporter: Sönke Liebau


There is some duplication of error handling for command line parameters that 
are passed into kafka-server-start.sh

 

The shell script prints an error if no arguments are passed in, effectively 
preventing the same check in 
[Kafka|https://github.com/apache/kafka/blob/92db08cba582668d77160b0c2853efd45a1b809b/core/src/main/scala/kafka/Kafka.scala#L43]
 from ever being triggered, unless the only option specified is -daemon, which 
is removed before the arguments are passed to the Java class.

 

While not in any way critical, I don't think this is intended behavior. I 
think we should remove the extra check in kafka-server-start.sh and leave 
argument handling to the Kafka class.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8292) Add support for --version parameter to command line tools

2019-04-25 Thread JIRA
Sönke Liebau created KAFKA-8292:
---

 Summary: Add support for --version parameter to command line tools
 Key: KAFKA-8292
 URL: https://issues.apache.org/jira/browse/KAFKA-8292
 Project: Kafka
  Issue Type: Improvement
Reporter: Sönke Liebau


During the implementation of 
[KAFKA-8131|https://issues.apache.org/jira/browse/KAFKA-8131] we noticed that 
command line tools implement parameter parsing in different ways.
For most of the tools, the --version parameter was correctly implemented in 
that issue; for the following, this still remains to be done:
* ConnectDistributed
* ConnectStandalone
* ProducerPerformance
* VerifiableConsumer
* VerifiableProducer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8393) Kafka Connect: could not get type for name org.osgi.framework.BundleListener on Windows

2019-05-20 Thread JIRA
Loïc created KAFKA-8393:
---

 Summary: Kafka Connect: could not get type for name 
org.osgi.framework.BundleListener on Windows
 Key: KAFKA-8393
 URL: https://issues.apache.org/jira/browse/KAFKA-8393
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.2.0
 Environment: Windows 10
Reporter: Loïc


Hi guys,

according to the documentation 
[https://kafka.apache.org/quickstart#quickstart_kafkaconnect]

I've tried the command

`c:\dev\Tools\servers\kafka_2.12-2.2.0>bin\windows\connect-standalone.bat 
config\connect-standalone.properties config\connect-file-source.properties 
config\connect-file-sink.properties`

and got this error:

c:\dev\Tools\servers\kafka_2.12-2.2.0>bin\windows\connect-standalone.bat 
config\connect-standalone.properties config\connect-file-source.properties 
config\connect-file-sink.properties
[2019-05-17 10:21:25,049] WARN could not get type for name 
org.osgi.framework.BundleListener from any class loader 
(org.reflections.Reflections)
org.reflections.ReflectionsException: could not get type for name 
org.osgi.framework.BundleListener
 at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
 at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
 at org.reflections.Reflections.&lt;init&gt;(Reflections.java:126)
 at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.&lt;init&gt;(DelegatingClassLoader.java:400)
 at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:299)
 at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:237)
 at 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:185)

 

Environment:  Windows 10, Kafka 2.12-2.2.0 [current]

 

Many thanks for your help.

Regards

Loïc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8466) Remove 'jackson-module-scala' dependency (and replace it with some code)

2019-06-03 Thread JIRA
Dejan Stojadinović created KAFKA-8466:
-

 Summary: Remove 'jackson-module-scala' dependency (and replace it 
with some code)
 Key: KAFKA-8466
 URL: https://issues.apache.org/jira/browse/KAFKA-8466
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Dejan Stojadinović
Assignee: Dejan Stojadinović


*Prologue:* 
 * [https://github.com/apache/kafka/pull/5454#issuecomment-497323889]
 * [https://github.com/apache/kafka/pull/5726/files#r289078080]

*Rationale:* one dependency less is always a good thing.

 

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8517) A lot of WARN messages in kafka log "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch:

2019-06-10 Thread JIRA
Jacek Żoch created KAFKA-8517:
-

 Summary: A lot of WARN messages in kafka log "Received a 
PartitionLeaderEpoch assignment for an epoch < latestEpoch: 
 Key: KAFKA-8517
 URL: https://issues.apache.org/jira/browse/KAFKA-8517
 Project: Kafka
  Issue Type: Bug
  Components: logging
Affects Versions: 0.11.0.1
 Environment: PRD
Reporter: Jacek Żoch


We are on version 2.0, but it was already happening in version 0.11.

The Kafka log contains a lot of messages:

"Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This 
implies messages have arrived out of order."

On 23.05 we had:

Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This 
implies messages have arrived out of order. New: \{epoch:181, 
offset:23562380995}, Current: \{epoch:362, offset:10365488611} for Partition: 
__consumer_offsets-30 (kafka.server.epoch.LeaderEpochFileCache)

Currently we have:

Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This 
implies messages have arrived out of order. New: \{epoch:199, 
offset:24588072027}, Current: \{epoch:362, offset:10365488611} for Partition: 
__consumer_offsets-30 (kafka.server.epoch.LeaderEpochFileCache)

I think Kafka should either fix this under the hood or document how to fix it.

There is no information on how dangerous this is or how to fix it.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8568) MirrorMaker 2.0 resource leak

2019-06-19 Thread JIRA
Péter Gergő Barna created KAFKA-8568:


 Summary: MirrorMaker 2.0 resource leak
 Key: KAFKA-8568
 URL: https://issues.apache.org/jira/browse/KAFKA-8568
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, mirrormaker
Affects Versions: 2.2.2
Reporter: Péter Gergő Barna


This issue was produced on the KIP-382 branch (I am not sure which versions 
are affected).

While MirrorMaker 2.0 is running, the following command returns a number that 
is getting larger and larger. 

 
{noformat}
lsof -p  | grep ESTABLISHED | wc -l{noformat}
 

In the error log, NullPointerExceptions pop up from MirrorSourceTask.cleanup, 
because either the consumer or the producer is null when the cleanup method 
tries to close it.

 
{noformat}
Exception in thread "Thread-790" java.lang.NullPointerException
 at 
org.apache.kafka.connect.mirror.MirrorSourceTask.cleanup(MirrorSourceTask.java:110)
 at java.lang.Thread.run(Thread.java:748)
Exception in thread "Thread-792" java.lang.NullPointerException
 at 
org.apache.kafka.connect.mirror.MirrorSourceTask.cleanup(MirrorSourceTask.java:110)
 at java.lang.Thread.run(Thread.java:748)
Exception in thread "Thread-791" java.lang.NullPointerException
 at 
org.apache.kafka.connect.mirror.MirrorSourceTask.cleanup(MirrorSourceTask.java:116)
 at java.lang.Thread.run(Thread.java:748)
Exception in thread "Thread-793" java.lang.NullPointerException
 at 
org.apache.kafka.connect.mirror.MirrorSourceTask.cleanup(MirrorSourceTask.java:110)
 at java.lang.Thread.run(Thread.java:748){noformat}
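The stack traces point at an unconditional close in MirrorSourceTask.cleanup; a null-guarded variant, sketched here in plain Java (hypothetical code, not the actual implementation), would avoid the NullPointerExceptions:

```java
// Hypothetical sketch: close each client only if it was actually created, and
// never let one client's close failure prevent closing the other.
public class SafeCleanup {
    public static void cleanup(AutoCloseable consumer, AutoCloseable producer) {
        closeQuietly(consumer);
        closeQuietly(producer);
    }

    private static void closeQuietly(AutoCloseable client) {
        if (client == null) {
            return; // the task failed before this client was constructed
        }
        try {
            client.close();
        } catch (Exception e) {
            // log and continue; cleanup itself must not throw
        }
    }
}
```

A guard like this fixes the NullPointerExceptions; the growing ESTABLISHED count suggests the leak itself may come from clients that are created but never reach cleanup, which would need a separate fix.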
When the number of established connections (returned by lsof) reaches a 
certain limit, new exceptions start to pop up in the logs: Too many open files

 

 
{noformat}
[2019-06-19 12:56:43,949] ERROR WorkerSourceTask{id=MirrorHeartbeatConnector-0} 
failed to send record to heartbeats: {} 
(org.apache.kafka.connect.runtime.WorkerSourceTask)
org.apache.kafka.common.errors.SaslAuthenticationException: An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Too many open 
files)]) occurred when evaluating SASL token received from the Kafka Broker. 
Kafka Client will go to AUTHENTICATION_FAILED state.
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Too many open 
files)]
        at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.lambda$createSaslToken$1(SaslClientAuthenticator.java:461)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:461)
        at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:370)
        at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendInitialToken(SaslClientAuthenticator.java:290)
        at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:230)
        at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
        at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:536)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:472)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
        at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311)
        at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
        at java.lang.Thread.run(Thread.java:748)
Caused by: GSSException: No valid credentials provided (Mechanism level: Too 
many open files)
        at 
sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:775)
        at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
        at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
        ... 14 more
Caused by: java.net.SocketException: Too many open files
        at java.net.Socket.createImpl(Socket.java:460)
        at java.net.Socket.connect(Socket.java:587)
        at sun.security.krb5.internal.TCPClient.&lt;init&gt;(NetClient.java:63)
        at sun.security.krb5.internal.NetClient.getInstance(NetClient.java:43)
        at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:393)
        at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:364)
        at java.secur

[jira] [Created] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2019-07-12 Thread JIRA
Marc Löhe created KAFKA-8659:


 Summary: SetSchemaMetadata SMT fails on records with null value 
and schema
 Key: KAFKA-8659
 URL: https://issues.apache.org/jira/browse/KAFKA-8659
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Marc Löhe


If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
value and the corresponding schema are {{null}} (i.e. tombstone records from 
[Debezium|https://debezium.io/]), the transform will fail.
{code:java}
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
handler
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at 
org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
[updating schema metadata]
at 
org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
at 
org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
at 
org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more

{code}
 

I don't see any problem in passing those records through as-is instead of 
failing, and will shortly add this in a PR.
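As a sketch of the proposed behavior (hypothetical stand-in classes, not the actual Kafka Connect API), the transform could forward tombstones untouched instead of requiring a schema:

```java
// Hypothetical sketch, not the actual Kafka Connect API: a transform that
// forwards tombstone records (null schema and null value) untouched instead
// of failing with "Schema required".
public class TombstoneSafeTransform {
    // Illustrative stand-in for a Connect record.
    static final class Record {
        final Object schema;
        final Object value;
        Record(Object schema, Object value) {
            this.schema = schema;
            this.value = value;
        }
    }

    public Record apply(Record record) {
        if (record.schema == null && record.value == null) {
            return record; // tombstone: nothing to update, pass through as-is
        }
        return updateSchemaMetadata(record);
    }

    // Placeholder for the real metadata rewrite.
    private Record updateSchemaMetadata(Record record) {
        return record;
    }
}
```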



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8660) Make ValueToKey SMT work only on a whitelist of topics

2019-07-12 Thread JIRA
Marc Löhe created KAFKA-8660:


 Summary: Make ValueToKey SMT work only on a whitelist of topics
 Key: KAFKA-8660
 URL: https://issues.apache.org/jira/browse/KAFKA-8660
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Marc Löhe


For source connectors that publish on multiple topics it is essential to be 
able to configure transforms to be active only for certain topics. I'll add a 
PR to implement this on the example of the ValueToKey SMT.

I'm also interested in whether this would make sense to add as a configurable 
option to all packaged SMTs, or even as a capability for SMTs in general.
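One way such an option could behave, as a hypothetical sketch (class and method names are illustrative, not Kafka Connect's API): wrap the transform so it only runs for whitelisted topics and is a no-op everywhere else.

```java
import java.util.Set;

// Hypothetical sketch of the proposed option: apply an SMT only to records
// whose topic is in a configured whitelist, passing all other records
// through unchanged. Names are illustrative.
public class WhitelistedTransform {
    private final Set<String> topics;

    public WhitelistedTransform(Set<String> topics) {
        this.topics = topics;
    }

    public String apply(String topic, String record) {
        if (!topics.contains(topic)) {
            return record;        // topic not whitelisted: no-op
        }
        return transform(record); // placeholder for the real SMT logic
    }

    // Stand-in transformation so the sketch is runnable.
    private String transform(String record) {
        return record.toUpperCase();
    }
}
```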



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8698) ListOffsets Response protocol documentation

2019-07-23 Thread JIRA
Fábio Silva created KAFKA-8698:
--

 Summary: ListOffsets Response protocol documentation
 Key: KAFKA-8698
 URL: https://issues.apache.org/jira/browse/KAFKA-8698
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Fábio Silva


The documentation of ListOffsets Response (Version: 0) appears to have an error 
in the offsets field name, which is suffixed with a stray `'`:
{code:java}
[offsets']{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-822) Reassignment of partitions needs a cleanup

2019-07-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-822.

Resolution: Abandoned

Closing this as abandoned after asking for feedback on the dev list. It is 
probably also fixed, but I am not absolutely sure of that.

> Reassignment of partitions needs a cleanup
> --
>
> Key: KAFKA-822
> URL: https://issues.apache.org/jira/browse/KAFKA-822
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, tools
>Affects Versions: 0.8.0
>Reporter: Swapnil Ghike
>Assignee: Swapnil Ghike
>Priority: Major
>  Labels: bugs
>
> 1. This is probably a left-over from when the ReassignPartitionsCommand used 
> to be blocking: 
> Currently, for each partition that is reassigned, controller deletes the 
> /admin/reassign_partitions zk path, and populates it with a new list with the 
> reassigned partition removed from the original list. This is probably an 
> overkill, and we can delete the zk path completely once the reassignment of 
> all partitions has completed successfully or in error. 
> 2. It will help to clarify that there could be no replicas that have started 
> and are not in the ISR when KafkaController.onPartitionReassignment() is 
> called.
> 3. We should batch the requests in 
> KafkaController.StopOldReplicasOfReassignedPartition()
> 4. Update controllerContext.partitionReplicaAssignment only once in 
> KafkaController.updateAssignedReplicasForPartition().
> 5. Need to thoroughly test.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-1016) Broker should limit purgatory size

2019-07-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1016.
-
Resolution: Not A Problem

Closing this as "not a problem", I believe the Purgatory redesign should help 
with the issue described here to a large extent.


> Broker should limit purgatory size
> --
>
> Key: KAFKA-1016
> URL: https://issues.apache.org/jira/browse/KAFKA-1016
> Project: Kafka
>  Issue Type: Bug
>  Components: purgatory
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>Assignee: Joel Koshy
>Priority: Major
>
> I recently ran into a case where a poorly configured Kafka consumer was able 
> to trigger out of memory exceptions in multiple Kafka brokers. The consumer 
> was configured to have a fetcher.max.wait of Int.MaxInt.
> For low volume topics, this configuration causes the consumer to block 
> frequently, and for long periods of time. [~junrao] informs me that the fetch 
> request will time out after the socket timeout is reached. In our case, this 
> was set to 30s.
> With several thousand consumer threads, the fetch request purgatory got into 
> the 100,000-400,000 range, which we believe triggered the out of memory 
> exception. [~nehanarkhede] claims to have seen similar behavior in other high 
> volume clusters.
> It kind of seems like a bad thing that a poorly configured consumer can 
> trigger out of memory exceptions in the broker. I was thinking maybe it makes 
> sense to have the broker try and protect itself from this situation. Here are 
> some potential solutions:
> 1. Have a broker-side max wait config for fetch requests.
> 2. Threshold the purgatory size, and either drop the oldest connections in 
> purgatory, or reject the newest fetch requests when purgatory is full.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-1099) StopReplicaRequest and StopReplicaResponse should also carry the replica ids

2019-07-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-1099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1099.
-
Resolution: Abandoned

Closing this as abandoned after asking for feedback on the dev list and 
receiving no objections.

> StopReplicaRequest and StopReplicaResponse should also carry the replica ids
> 
>
> Key: KAFKA-1099
> URL: https://issues.apache.org/jira/browse/KAFKA-1099
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>Priority: Major
>
> The stop replica request and response only contain a list of partitions for 
> which a replica should be moved to offline/nonexistent state. But the replica 
> id information is implicit in the network layer as the receiving broker. This 
> complicates stop replica response handling on the controller. This blocks the 
> right fix for KAFKA-1097 since it requires invoking callback for processing a 
> StopReplicaResponse and it requires to know the replica id from the 
> StopReplicaResponse.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-1111) Broker prematurely accepts TopicMetadataRequests on startup

2019-07-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1111.
-
Resolution: Abandoned

Closing as abandoned after no objections on the dev list. If this is indeed 
still an issue, we can always reopen it.

> Broker prematurely accepts TopicMetadataRequests on startup
> ---
>
> Key: KAFKA-1111
> URL: https://issues.apache.org/jira/browse/KAFKA-1111
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Rosenberg
>Assignee: Neha Narkhede
>Priority: Major
>
> I have an issue where on startup, the broker starts accepting 
> TopicMetadataRequests before it has had metadata sync'd from the controller.  
> This results in a bunch of log entries that look like this:
> 2013-11-01 03:26:01,577  INFO [kafka-request-handler-0] admin.AdminUtils$ - 
> Topic creation { "partitions":{ "0":[ 9, 10 ] }, "version":1 }
> 2013-11-01 03:26:07,767  INFO [kafka-request-handler-1] admin.AdminUtils$ - 
> Topic creation { "partitions":{ "0":[ 9, 11 ] }, "version":1 }
> 2013-11-01 03:26:07,823  INFO [kafka-request-handler-1] admin.AdminUtils$ - 
> Topic creation { "partitions":{ "0":[ 10, 11 ] }, "version":1 }
> 2013-11-01 03:26:11,183  INFO [kafka-request-handler-2] admin.AdminUtils$ - 
> Topic creation { "partitions":{ "0":[ 10, 11 ] }, "version":1 }
> From an email thread, Neha remarks:
> Before a broker receives the first
> LeaderAndIsrRequest/UpdateMetadataRequest, it is technically not ready to
> start serving any request. But it still ends up serving
> TopicMetadataRequest which can re-create topics accidentally. It shouldn't
> succeed, but this is still a problem.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8754) Connect API: Expose the list of available transformations

2019-08-05 Thread JIRA
Stéphane Derosiaux created KAFKA-8754:
-

 Summary: Connect API: Expose the list of available transformations
 Key: KAFKA-8754
 URL: https://issues.apache.org/jira/browse/KAFKA-8754
 Project: Kafka
  Issue Type: Wish
  Components: KafkaConnect
Reporter: Stéphane Derosiaux


The API of Kafka Connect exposes the available connectors through: 
/connector-plugins/.

It would be useful to have another API method to expose the list of available 
transformations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8758) Know your KafkaPrincipal

2019-08-06 Thread JIRA
Stéphane Derosiaux created KAFKA-8758:
-

 Summary: Know your KafkaPrincipal
 Key: KAFKA-8758
 URL: https://issues.apache.org/jira/browse/KAFKA-8758
 Project: Kafka
  Issue Type: Wish
  Components: admin
Reporter: Stéphane Derosiaux


Hi,

In order to manage some tools around ACLs, it seems it's not possible to know 
"who you are" through the Admin API (to prevent deleting your own permissions, 
for instance, but more globally, to know who you are).

The KafkaPrincipal is determined in the broker according to the channel and the 
principalBuilder, so a client alone can't determine its own identity.

Is it feasible to expose this info through a new KafkaAdmin API?

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8786) Deprecated Gradle features making it incompatible with Gradle 6.0.

2019-08-10 Thread JIRA
Aljoscha Pörtner created KAFKA-8786:
---

 Summary: Deprecated Gradle features making it incompatible with 
Gradle 6.0.
 Key: KAFKA-8786
 URL: https://issues.apache.org/jira/browse/KAFKA-8786
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.2.1
Reporter: Aljoscha Pörtner
 Attachments: Unbenannt.PNG

Lines 549-552 of build.gradle make it incompatible with Gradle 6.0. It can be 
fixed by changing
{code:java}
additionalSourceDirs = files(javaProjects.sourceSets.main.allSource.srcDirs)
sourceDirectories = files(javaProjects.sourceSets.main.allSource.srcDirs)
classDirectories = files(javaProjects.sourceSets.main.output)
executionData = files(javaProjects.jacocoTestReport.executionData)
{code}
to
{code:java}
  additionalSourceDirs.from = javaProjects.sourceSets.main.allSource.srcDirs
  sourceDirectories.from = javaProjects.sourceSets.main.allSource.srcDirs
  classDirectories.from = javaProjects.sourceSets.main.output
  executionData.from = javaProjects.jacocoTestReport.executionData
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8806) Kafka.poll spends significant amount of time in KafkaConsumer.updateAssignmentMetadataIfNeeded

2019-08-14 Thread JIRA
Xavier Léauté created KAFKA-8806:


 Summary: Kafka.poll spends significant amount of time in 
KafkaConsumer.updateAssignmentMetadataIfNeeded
 Key: KAFKA-8806
 URL: https://issues.apache.org/jira/browse/KAFKA-8806
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Xavier Léauté


Comparing the performance profiles of 2.2.0 and 2.3.0, we are seeing significant 
performance differences in the 
{{KafkaConsumer.updateAssignmentMetadataIfNeeded()}} method.

The call to {{KafkaConsumer.updateAssignmentMetadataIfNeeded()}} now represents 
roughly 40% of CPU time spent in {{KafkaConsumer.poll()}}, when before it only 
represented less than 2%.

Most of the extra time appears to be spent in 
{{KafkaConsumer.validateOffsetsIfNeeded()}}, which did not show up in previous 
profiles.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-5546) Lost data when the leader is disconnected.

2017-06-30 Thread JIRA
Björn Eriksson created KAFKA-5546:
-

 Summary: Lost data when the leader is disconnected.
 Key: KAFKA-5546
 URL: https://issues.apache.org/jira/browse/KAFKA-5546
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.10.2.1
Reporter: Björn Eriksson
 Attachments: kafka-failure-log.txt

We've noticed that if the leader's networking is deconfigured (with {{ifconfig 
eth0 down}}), the producer won't notice this and doesn't immediately connect to 
the newly elected leader.

{{docker-compose.yml}} and test runner are at 
https://github.com/owbear/kafka-network-failure-tests with sample test output 
at 
https://github.com/owbear/kafka-network-failure-tests/blob/master/README.md#sample-results

I was expecting a transparent failover to the new leader.

The attached log shows that while the producer produced values between 
{{12:37:33}} and {{12:37:54}}, there's a gap between {{12:37:41}} and 
{{12:37:50}} where no values were stored in the log after the network was taken 
down at {{12:37:42}}.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5563) Clarify handling of connector name in config

2017-07-06 Thread JIRA
Sönke Liebau created KAFKA-5563:
---

 Summary: Clarify handling of connector name in config 
 Key: KAFKA-5563
 URL: https://issues.apache.org/jira/browse/KAFKA-5563
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Sönke Liebau
Priority: Minor


The connector name is currently being stored in two places, once at the root 
level of the connector and once in the config:
{code:java}
{
  "name": "test",
  "config": {
    "connector.class": "org.apache.kafka.connect.tools.MockSinkConnector",
    "tasks.max": "3",
    "topics": "test-topic",
    "name": "test"
  },
  "tasks": [
    {
      "connector": "test",
      "task": 0
    }
  ]
}
{code}

If no name is provided in the "config" element, then the name from the root 
level is [copied there when the connector is being 
created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
 If, however, a name is provided in the config, it is not touched, which 
means it is possible to create a connector with a different name at the root 
level than in the config, like this:
{code:java}
{
  "name": "test1",
  "config": {
    "connector.class": "org.apache.kafka.connect.tools.MockSinkConnector",
    "tasks.max": "3",
    "topics": "test-topic",
    "name": "differentname"
  },
  "tasks": [
    {
      "connector": "test1",
      "task": 0
    }
  ]
}
{code}

I am not aware of any issues that this currently causes, but it is at least 
confusing, probably not intended behavior, and definitely bears potential for 
bugs if different functions take the name from different places.

Would it make sense to add a check to reject requests that provide different 
names in the request and the config section?
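The suggested check could look something like the following sketch (hypothetical class and method names; the real code would live in Connect's REST layer):

```java
import java.util.Map;

// Hypothetical sketch of the suggested check: reject connector creation
// requests whose root-level name differs from the "name" key in the config.
public class ConnectorNameCheck {
    public static void validate(String rootName, Map<String, String> config) {
        String configName = config.get("name");
        if (configName != null && !configName.equals(rootName)) {
            throw new IllegalArgumentException(
                "Connector name '" + rootName
                + "' does not match name '" + configName + "' in config");
        }
    }
}
```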




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5591) Infinite loop during failed Handshake

2017-07-14 Thread JIRA
Marcin Łuczyński created KAFKA-5591:
---

 Summary: Infinite loop during failed Handshake
 Key: KAFKA-5591
 URL: https://issues.apache.org/jira/browse/KAFKA-5591
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
 Environment: Linux x86_64 x86_64 x86_64 GNU/Linux
Reporter: Marcin Łuczyński
 Attachments: client.truststore.jks

For testing purposes of a connection from my client app to my secured Kafka 
broker (via SSL) I followed the preparation procedure described in this section 
[http://kafka.apache.org/090/documentation.html#security_ssl]. There is a flaw 
in the description of certificate generation there. I was able to find a proper 
sequence for generating certs and keys on Confluent.io 
[https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/],
 but before that, when I used the first trust store I generated, it caused the 
handshake exception shown below:

{quote}[2017-07-14 05:24:48,958] DEBUG Accepted connection from 
/10.20.40.20:55609 on /10.20.40.12:9093 and assigned it to processor 3, 
sendBufferSize [actual|requested]: [102400|102400] recvBufferSize 
[actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2017-07-14 05:24:48,959] DEBUG Processor 3 listening to new connection from 
/10.20.40.20:55609 (kafka.network.Processor)
[2017-07-14 05:24:48,971] DEBUG SSLEngine.closeInBound() raised an exception. 
(org.apache.kafka.common.network.SslTransportLayer)
javax.net.ssl.SSLException: Inbound closed before receiving peer's 
close_notify: possible truncation attack?
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1634)
at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1561)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeFailure(SslTransportLayer.java:730)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:313)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:74)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:374)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at kafka.network.Processor.poll(SocketServer.scala:499)
at kafka.network.Processor.run(SocketServer.scala:435)
at java.lang.Thread.run(Thread.java:748)
[2017-07-14 05:24:48,971] DEBUG Connection with /10.20.40.20 disconnected 
(org.apache.kafka.common.network.Selector)
javax.net.ssl.SSLProtocolException: Handshake message sequence violation, state 
= 1, type = 1
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1487)
at 
sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:813)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:411)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:74)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:374)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at kafka.network.Processor.poll(SocketServer.scala:499)
at kafka.network.Processor.run(SocketServer.scala:435)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLProtocolException: Handshake message sequence 
violation, state = 1, type = 1
at 
sun.security.ssl.ServerHandshaker.processMessage(ServerHandshaker.java:213)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1026)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:966)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:963)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1416)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:416)
... 7 more
{quote}

Which is obviously expected for the broken trust store case. However, my client 
app did not receive any exception or error message back. It did not stop 
either. Instead it fell into an infinite loop of retries, generating a huge log 
with exceptions as shown above. I tried to check if there is any client app 
property that controls the number 

[jira] [Created] (KAFKA-5592) Connection with plain client to SSL-secured broker causes OOM

2017-07-14 Thread JIRA
Marcin Łuczyński created KAFKA-5592:
---

 Summary: Connection with plain client to SSL-secured broker causes 
OOM
 Key: KAFKA-5592
 URL: https://issues.apache.org/jira/browse/KAFKA-5592
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
 Environment: Linux x86_64 x86_64 x86_64 GNU/Linux
Reporter: Marcin Łuczyński
 Attachments: heapdump.20170713.100129.14207.0002.phd, Heap.PNG, 
javacore.20170713.100129.14207.0003.txt, Snap.20170713.100129.14207.0004.trc, 
Stack.PNG

While testing a connection from a client app that does not have a truststore 
configured to a Kafka broker secured by SSL, my JVM crashes with 
OutOfMemoryError. I have seen it mixed with StackOverflowError. I attach dump 
files.

The stack trace to start with is here:

{quote}at java/nio/HeapByteBuffer.<init>(HeapByteBuffer.java:57) 
at java/nio/ByteBuffer.allocate(ByteBuffer.java:331) 
at 
org/apache/kafka/common/network/NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
 
at 
org/apache/kafka/common/network/NetworkReceive.readFrom(NetworkReceive.java:71) 
at org/apache/kafka/common/network/KafkaChannel.receive(KafkaChannel.java:169) 
at org/apache/kafka/common/network/KafkaChannel.read(KafkaChannel.java:150) 
at 
org/apache/kafka/common/network/Selector.pollSelectionKeys(Selector.java:355) 
at org/apache/kafka/common/network/Selector.poll(Selector.java:303) 
at org/apache/kafka/clients/NetworkClient.poll(NetworkClient.java:349) 
at 
org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
 
at 
org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
 
at 
org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:207)
 
at 
org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
 
at 
org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
 
at 
org/apache/kafka/clients/consumer/KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
 
at org/apache/kafka/clients/consumer/KafkaConsumer.poll(KafkaConsumer.java:995) 
at 
com/ibm/is/cc/kafka/runtime/KafkaConsumerProcessor.process(KafkaConsumerProcessor.java:237)
 
at com/ibm/is/cc/kafka/runtime/KafkaProcessor.process(KafkaProcessor.java:173) 
at 
com/ibm/is/cc/javastage/connector/CC_JavaAdapter.run(CC_JavaAdapter.java:443){quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5625) Invalid subscription data may cause streams app to throw BufferUnderflowException

2017-07-21 Thread JIRA
Xavier Léauté created KAFKA-5625:


 Summary: Invalid subscription data may cause streams app to throw 
BufferUnderflowException
 Key: KAFKA-5625
 URL: https://issues.apache.org/jira/browse/KAFKA-5625
 Project: Kafka
  Issue Type: Bug
Reporter: Xavier Léauté
Priority: Minor


I was able to cause my streams app to crash with the following error when a 
rogue client attempted to join the same consumer group.

At the very least I would expect streams to throw a {{TaskAssignmentException}} 
to indicate invalid subscription data.

{code}
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361)
at 
org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.decode(SubscriptionInfo.java:97)
at 
org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:302)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:365)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:512)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:93)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:462)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:445)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:798)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:778)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:168)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:358)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:574)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:545)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:519)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5629) Console Consumer overrides auto.offset.reset property when provided on the command line without warning about it.

2017-07-23 Thread JIRA
Sönke Liebau created KAFKA-5629:
---

 Summary: Console Consumer overrides auto.offset.reset property 
when provided on the command line without warning about it.
 Key: KAFKA-5629
 URL: https://issues.apache.org/jira/browse/KAFKA-5629
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.11.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau
Priority: Trivial


The console consumer allows specifying consumer options on the command line 
with the --consumer-property parameter.

In the case of auto.offset.reset, this parameter will always be silently 
ignored, because the behavior is controlled via the --from-beginning parameter.
I believe that behavior is fine; however, we should log a warning when 
auto.offset.reset is specified on the command line and overridden to something 
else in the code, to avoid potential confusion.
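A minimal sketch of the proposed warning logic (class and method names are illustrative, not the console consumer's actual code):

```java
import java.util.Properties;

// Hypothetical sketch of the proposed warning: if auto.offset.reset was
// passed via --consumer-property, report that it is being overridden instead
// of silently ignoring it.
public class OffsetResetWarning {
    public static String checkOverride(Properties userProps, boolean fromBeginning) {
        String requested = userProps.getProperty("auto.offset.reset");
        String effective = fromBeginning ? "earliest" : "latest";
        if (requested != null && !requested.equals(effective)) {
            return "WARN: auto.offset.reset=" + requested
                + " is overridden to '" + effective + "' by the console consumer";
        }
        return null; // nothing overridden, no warning needed
    }
}
```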




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5668) queryable state window store range scan only returns results from one store

2017-07-27 Thread JIRA
Xavier Léauté created KAFKA-5668:


 Summary: queryable state window store range scan only returns 
results from one store
 Key: KAFKA-5668
 URL: https://issues.apache.org/jira/browse/KAFKA-5668
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Xavier Léauté
Assignee: Xavier Léauté


Queryable state range scans return results from all local stores for simple 
key-value stores.
For windowed stores however it only returns results from a single local store.

The expected behavior is that windowed store range scans would return results 
from all local stores as it does for simple key-value stores.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5669) Define allowed characters for connector names

2017-07-27 Thread JIRA
Sönke Liebau created KAFKA-5669:
---

 Summary: Define allowed characters for connector names
 Key: KAFKA-5669
 URL: https://issues.apache.org/jira/browse/KAFKA-5669
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Sönke Liebau
Assignee: Sönke Liebau
Priority: Minor


There are currently no real checks performed against connector names, so a lot 
of characters are allowed that create issues later when trying to query 
connector state or delete connectors.
KAFKA-4930 blacklists a few of the obvious culprits, but is far from a final 
solution.

We should come up with a definitive set of rules for connector names that can 
then be enforced.

Kafka is very restrictive where topic names are concerned; it only allows the 
following characters: a-zA-Z0-9._-
For Connect we may want to be a bit more forgiving, but probably not too much, 
to avoid having to go back and fix things that were missed multiple times.
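One possible rule set, as a hypothetical sketch (the exact charset is an assumption, not a decided rule): Kafka's topic-name characters plus spaces.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of a connector-name rule: Kafka's topic-name charset
// (a-zA-Z0-9._-) plus spaces, as a slightly more forgiving Connect rule.
// The chosen charset here is illustrative only.
public class ConnectorNameValidator {
    private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9._\\- ]+");

    public static boolean isValid(String name) {
        return name != null && !name.isEmpty() && VALID.matcher(name).matches();
    }
}
```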



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5708) Update Jackson dependencies (from 2.8.5 to 2.9.x)

2017-08-07 Thread JIRA
Dejan Stojadinović created KAFKA-5708:
-

 Summary: Update Jackson dependencies (from 2.8.5 to 2.9.x)
 Key: KAFKA-5708
 URL: https://issues.apache.org/jira/browse/KAFKA-5708
 Project: Kafka
  Issue Type: Task
  Components: build
Reporter: Dejan Stojadinović
Priority: Blocker
 Fix For: 1.0.0


In addition to the update: remove the deprecated version forcing for 
'jackson-annotations'.

*_Notes:_*
* wait until [Jackson 
2.9.1|https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.1] is 
released (expected in September 2017)
* inspired by pull request: https://github.com/apache/kafka/pull/3631








--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5742) Support passing ZK chroot in system tests

2017-08-16 Thread JIRA
Xavier Léauté created KAFKA-5742:


 Summary: Support passing ZK chroot in system tests
 Key: KAFKA-5742
 URL: https://issues.apache.org/jira/browse/KAFKA-5742
 Project: Kafka
  Issue Type: Test
  Components: system tests
Reporter: Xavier Léauté


Currently, spinning up multiple Kafka clusters in system tests requires at 
least one ZK node per Kafka cluster, which wastes a lot of resources. We 
currently also don't test anything outside of the ZK root path.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5772) Improve Util classes

2017-08-23 Thread JIRA
Matthias Weßendorf created KAFKA-5772:
-

 Summary: Improve Util classes
 Key: KAFKA-5772
 URL: https://issues.apache.org/jira/browse/KAFKA-5772
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Matthias Weßendorf


Utility classes that only expose static methods should never be instantiated, 
so we should mark them final and give them a private constructor.

This would also be consistent with several of the existing util classes, such 
as ByteUtils, see:
https://github.com/apache/kafka/blob/d345d53e4e5e4f74707e2521aa635b93ba3f1e7b/clients/src/main/java/org/apache/kafka/common/utils/ByteUtils.java#L29-L31
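The pattern described above can be sketched in a few lines. The class and method here are illustrative examples, not actual Kafka code:

```java
// A static-only utility class: final so it cannot be subclassed, with a
// private constructor so it cannot be instantiated (even via reflection,
// the AssertionError makes the intent explicit).
public final class StringUtils {

    private StringUtils() {
        throw new AssertionError("StringUtils should not be instantiated");
    }

    // Example static helper: true if the string is null or whitespace-only.
    public static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}
```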





[jira] [Created] (KAFKA-5836) Kafka Streams - API for specifying internal stream name on join

2017-09-05 Thread JIRA
Lovro Pandžić created KAFKA-5836:


 Summary: Kafka Streams - API for specifying internal stream name 
on join
 Key: KAFKA-5836
 URL: https://issues.apache.org/jira/browse/KAFKA-5836
 Project: Kafka
  Issue Type: New Feature
Reporter: Lovro Pandžić


Automatically generated topic names can be problematic when a streams 
operation is changed or migrated.
I'd like to be able to specify the name of an internal topic so I can avoid 
the creation of a new stream and data "loss" when changing how the stream is 
built.





[jira] [Created] (KAFKA-5895) Update readme to reflect that Gradle 2 is no longer good enough

2017-09-14 Thread JIRA
Matthias Weßendorf created KAFKA-5895:
-

 Summary: Update readme to reflect that Gradle 2 is no longer good 
enough
 Key: KAFKA-5895
 URL: https://issues.apache.org/jira/browse/KAFKA-5895
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.11.0.2
Reporter: Matthias Weßendorf
Priority: Trivial


The README says:

Kafka requires Gradle 2.0 or higher.

but when running with "2.13", I get an ERROR message saying that 3.0+ is 
needed:

{code}
> Failed to apply plugin [class 
> 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
   > This version of Shadow supports Gradle 3.0+ only. Please upgrade.

{code}

Full log here:

{code}
➜  kafka git:(utils_improvment) ✗ gradle 
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.pom
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.pom
Download 
https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/4.5.2.201704071617-r/org.eclipse.jgit-parent-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.9/jsch.agentproxy-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.pom
Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.54/jsch-0.1.54.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.pom
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.9.3/grgit-1.9.3.jar
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.15.0/gradle-versions-plugin-0.15.0.jar
Download 
https://repo1.maven.org/maven2/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.jar
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.1/shadow-2.0.1.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/4.5.2.201704071617-r/org.eclipse.jgit-4.5.2.201704071617-r.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/4.5.2.201704071617-r/org.eclipse.jgit.ui-4.5.2.201704071617-r.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.9/jsch.agentproxy.jsch-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.9/jsch.agentproxy.pageant-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.9/jsch.agentproxy.sshagent-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.9/jsch.agentproxy.usocket-jna-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.9/jsch.agentproxy.usocket-nc-0.0.9.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.9/jsch.agentproxy.core-0.0.9.jar
Building project 'core' with Scala version 2.11.11

FAILURE: Build failed with an exception.

* Where:
Build file '/home/Matthias/Work/Apache/kafka/build.gradle' line: 978

* What went wrong:
A problem occurred evaluating root project 'kafka'.
> Failed to apply plugin [class 
> 'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
   > This version of Shadow supports Gradle 3.0+ only. Please upgrade.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.637 secs
➜  kafka git:(utils_improvment) ✗ gradle --version


Gradle 2.13
{code} 





[jira] [Commented] (KAFKA-3560) Kafka is not working on Solaris

2016-06-02 Thread JIRA

[ https://issues.apache.org/jira/browse/KAFKA-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15312230#comment-15312230 ]

Marcus Gründler commented on KAFKA-3560:


With Kafka 0.10.0.0 everything works fine on Solaris with the quick start 
tutorial!

> Kafka is not working on Solaris
> ---
>
> Key: KAFKA-3560
> URL: https://issues.apache.org/jira/browse/KAFKA-3560
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
> Environment: Solaris 5.10, i386
>Reporter: Marcus Gründler
>Priority: Blocker
> Attachments: controller.log, log-cleaner.log, server.log, 
> state-change.log
>
>
> We are trying to run Kafka 0.9.0.x on Solaris 5.10 but Kafka fails to connect 
> controller to broker at startup. We have no problem running kafka 0.8.2.x on 
> the very same machine. Due to this bug kafka is completely unusable on 
> Solaris.
> We use 1 Broker with the default configuration just as described in the 
> quickstart guide.
> The problem can easily be reproduced by the following steps:
> 1. Download kafka_2.11-0.9.0.1.tgz and unpack.
> 2. Start zookeeper: 
> {noformat}
> bin/zookeeper-server-start.sh config/zookeeper.properties
> {noformat}
> 3. Start kafka: 
> {noformat}
> bin/kafka-server-start.sh config/server.properties
> {noformat}
> 4. Wait 30 seconds
> 5. See timeouts in logs/controller.log
> {noformat}
> [2016-04-14 17:01:42,752] WARN [Controller-0-to-broker-0-send-thread], 
> Controller 0's connection to broker Node(0, srvs010.ac.aixigo.de, 9092) was 
> unsuccessful (kafka.controller.RequestSendThread)
> java.net.SocketTimeoutException: Failed to connect within 3 ms
> at 
> kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:228)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:172)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {noformat}
> We can create topics with:
> {noformat}
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 
> 1 --partitions 1 --topic test
> {noformat}
> And we can list topics with:
> {noformat}
> bin/kafka-topics.sh --list --zookeeper localhost:2181
> {noformat}
> But we can *not* write data into the topic:
> {noformat}
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> {noformat}
> Our environment is:
> {noformat}
> > uname -a
> SunOS serverXXX 5.10 Generic_147441-16 i86pc i386 i86pc Solaris
> > java -version
> java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
> {noformat}
> Filesystems we tested are
> * Local file system UFS
> * Network filesystem NFS
> I will provide log files in a minute.





[jira] [Created] (KAFKA-3783) Race condition on last ACL removal for a resource fails with a ZkBadVersionException

2016-06-02 Thread JIRA
Sébastien Launay created KAFKA-3783:
---

 Summary: Race condition on last ACL removal for a resource fails 
with a ZkBadVersionException
 Key: KAFKA-3783
 URL: https://issues.apache.org/jira/browse/KAFKA-3783
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.0, 0.9.0.1
Reporter: Sébastien Launay
Priority: Minor


When removing the last ACL for a given resource, the znode storing the ACLs 
will get removed.
The version number of the znode is used for optimistic locking in a loop to 
provide atomic changes across brokers.

Unfortunately, when the operation fails because of a different version number, 
the exception thrown is the wrong one ({{KeeperException.BadVersionException}} 
instead of ZkClient's {{ZkBadVersionException}}), so it does not get caught, 
resulting in the following stack trace:
{noformat}
org.I0Itec.zkclient.exception.ZkBadVersionException: 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion for /kafka-acl/Topic/e6df8028-f268-408c-814e-d418e943b2fa
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:51)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1000)
at org.I0Itec.zkclient.ZkClient.delete(ZkClient.java:1047)
at kafka.utils.ZkUtils.conditionalDeletePath(ZkUtils.scala:522)
at 
kafka.security.auth.SimpleAclAuthorizer.kafka$security$auth$SimpleAclAuthorizer$$updateResourceAcls(SimpleAclAuthorizer.scala:282)
at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$removeAcls$1.apply$mcZ$sp(SimpleAclAuthorizer.scala:187)
at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$removeAcls$1.apply(SimpleAclAuthorizer.scala:187)
at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$removeAcls$1.apply(SimpleAclAuthorizer.scala:187)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:239)
at 
kafka.security.auth.SimpleAclAuthorizer.removeAcls(SimpleAclAuthorizer.scala:186)
...
Caused by: org.apache.zookeeper.KeeperException$BadVersionException: 
KeeperErrorCode = BadVersion for 
/kafka-acl/Topic/e6df8028-f268-408c-814e-d418e943b2fa
at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.I0Itec.zkclient.ZkConnection.delete(ZkConnection.java:109)
at org.I0Itec.zkclient.ZkClient$11.call(ZkClient.java:1051)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:990)
... 18 more
{noformat}

I noticed this behaviour while running the {{SimpleAclAuthorizerTest}} unit 
tests while working on another fix, but it can also happen in rare cases when 
running the {{kafka-acls.sh}} command simultaneously on different brokers.
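The intended behavior, retrying the conditional delete when the znode version changed underneath us, can be sketched as below. This is a self-contained illustration using an in-memory stand-in for a versioned znode; the hypothetical names ({{VersionedNode}}, {{deleteWithRetry}}) are not the real Kafka/ZkClient API, where the actual logic lives in {{SimpleAclAuthorizer.updateResourceAcls}}:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal in-memory stand-in for a versioned znode, illustrating optimistic
// locking: a conditional delete succeeds only if the caller's expected
// version matches the node's current version.
class VersionedNode {
    static class BadVersionException extends RuntimeException {}

    private final AtomicInteger version = new AtomicInteger(0);
    private boolean exists = true;

    synchronized int currentVersion() { return version.get(); }

    synchronized boolean exists() { return exists; }

    // Simulates a concurrent writer bumping the version (e.g. another broker
    // modifying the ACLs for the same resource).
    synchronized void touch() { version.incrementAndGet(); }

    synchronized void conditionalDelete(int expectedVersion) {
        if (!exists) return; // already gone, nothing to do
        if (expectedVersion != version.get())
            throw new BadVersionException();
        exists = false;
    }

    // The retry loop the bug breaks: catch the version-conflict exception,
    // re-read the current version, and try again until the node is gone.
    static boolean deleteWithRetry(VersionedNode node) {
        while (node.exists()) {
            try {
                node.conditionalDelete(node.currentVersion());
            } catch (BadVersionException e) {
                // Version changed under us; loop and retry with a fresh read.
            }
        }
        return true;
    }
}
```

The reported bug is exactly the catch clause above matching the wrong exception type, so the conflict escapes the loop instead of triggering a retry.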




