[jira] [Resolved] (KAFKA-5462) Add a configuration for users to specify a template for building a custom principal name

2018-10-25 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5462.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

> Add a configuration for users to specify a template for building a custom 
> principal name
> 
>
> Key: KAFKA-5462
> URL: https://issues.apache.org/jira/browse/KAFKA-5462
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Koelli Mungee
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.2.0
>
>
> Add a configuration for users to specify a template for building a custom 
> principal name.
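The configuration that resolved this is the {{ssl.principal.mapping.rules}} broker property (KIP-371, shipped in 2.2.0). A minimal sketch, with a hypothetical certificate DN pattern:

{code}
# server.properties: map an SSL DN like "CN=kafka-client,OU=eng,O=example" to "kafka-client"
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=.*$/$1/,DEFAULT
{code}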



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7535) KafkaConsumer doesn't report records-lag if isolation.level is read_committed

2018-10-25 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7535.
--
Resolution: Fixed

> KafkaConsumer doesn't report records-lag if isolation.level is read_committed
> -
>
> Key: KAFKA-7535
> URL: https://issues.apache.org/jira/browse/KAFKA-7535
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Alexey Vakhrenev
>Assignee: lambdaliu
>Priority: Major
>  Labels: regression
> Fix For: 2.0.1, 2.1.0
>
>
> Starting from 2.0.0, {{KafkaConsumer}} doesn't report {{records-lag}} if 
> {{isolation.level}} is {{read_committed}}. The last version, where it works 
> is {{1.1.1}}.
> The issue can be easily reproduced in {{kafka.api.PlaintextConsumerTest}} by 
> adding {{consumerConfig.setProperty("isolation.level", "read_committed")}} 
> within the related tests:
>  - {{testPerPartitionLagMetricsCleanUpWithAssign}}
>  - {{testPerPartitionLagMetricsCleanUpWithSubscribe}}
>  - {{testPerPartitionLagWithMaxPollRecords}}
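For reference, the consumer setting under which the lag metrics went missing; a minimal fragment:

{code}
# consumer.properties
isolation.level=read_committed
{code}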



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7579.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

This is fixed via  KAFKA-7561.

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Blocker
> Fix For: 2.1.0, 2.0.2
>
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test fails when run with security_protocol=SSL:
> {code:java}
> SESSION REPORT (ALL TESTS)
> ducktape version: 0.7.1
> session_id: 2018-10-31--002
> run time: 2 minutes 12.452 seconds
> tests run: 2  passed: 1  failed: 1  ignored: 0
>
> test_id: kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL
> status: FAIL
> run time: 1 minute 2.149 seconds
> Node ducker@ducker05: did not stop within the specified timeout of 15 seconds
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 132, in run
>     data = self.run_test()
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 185, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py", line 324, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/security_test.py", line 114, in test_client_ssl_endpoint_validation_failure
>     self.consumer.stop()
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py", line 80, in stop
>     super(BackgroundThreadService, self).stop()
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/services/service.py", line 278, in stop
>     self.stop_node(node)
>   File "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 254, in stop_node
>     (str(node.account), str(self.stop_timeout_sec))
> AssertionError: Node ducker@ducker05: did not stop within the specified timeout of 15 seconds
>
> test_id: kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=SSL.security_protocol=PLAINTEXT
> status: PASS
> run time: 1 minute 10.144 seconds
> ducker-ak test failed
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7554) zookeeper.session.timeout.ms Value

2018-11-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7554.
--
Resolution: Not A Problem

The default ZooKeeper session timeout is 6 seconds. If the broker fails to 
heartbeat to ZooKeeper within this period, it is considered dead. If you set 
this too low, a server may be falsely considered dead; if you set it too high, 
it may take too long to recognize a truly dead server.
This parameter can be tuned as required.
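For reference, the relevant broker settings at their defaults; a minimal sketch:

{code}
# server.properties
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
{code}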

> zookeeper.session.timeout.ms Value
> --
>
> Key: KAFKA-7554
> URL: https://issues.apache.org/jira/browse/KAFKA-7554
> Project: Kafka
>  Issue Type: Improvement
>  Components: zkclient
>Reporter: BELUGA BEHR
>Priority: Major
>
> {quote}
> zookeeper.session.timeout.ms = 6000 (6s)
> zookeeper.connection.timeout.ms = 6000 (6s)
> {quote}
> - https://kafka.apache.org/documentation/#configuration
> Kind of an odd value?  Was it supposed to be 60000 (60s)?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7323) add replication factor doesn't work

2018-11-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7323.
--
Resolution: Not A Problem

Closing as per the above comment; heap memory is probably insufficient. Please 
reopen if you think the issue still exists.

> add replication factor doesn't work
> ---
>
> Key: KAFKA-7323
> URL: https://issues.apache.org/jira/browse/KAFKA-7323
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.11.0.2
>Reporter: superheizai
>Priority: Major
>
> I have a topic with 256 partitions.
> First, I generated the topic partition assignment with broker IDs using 
> kafka-reassign-partitions --generate.
> Second, I added a broker ID to each partition's replica list.
> Then I ran kafka-reassign-partitions; some partitions increased their 
> replication factor, but the others stopped.
> Reading controller.log, some partitions' replication factors increased. I then 
> removed the partitions whose replication factor had been increased and ran 
> kafka-reassign-partitions again, but nothing was logged in controller.log, all 
> partitions were "still in progress", and no change in network traffic was 
> visible in Zabbix.
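The reassignment workflow described above looks roughly like this (paths, JSON file contents, and the broker list are illustrative, not from the report):

{code}
# 1. generate a candidate assignment
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics.json --broker-list "1,2,3" --generate
# 2. edit the proposed JSON, adding one broker id to each partition's replica list
# 3. execute the new assignment, then check its progress
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --execute
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --verify
{code}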



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-764) Race Condition in Broker Registration after ZooKeeper disconnect

2018-11-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-764.
-
Resolution: Duplicate

This old issue is similar to KAFKA-7165. Closing this as a duplicate of KAFKA-7165.

> Race Condition in Broker Registration after ZooKeeper disconnect
> 
>
> Key: KAFKA-764
> URL: https://issues.apache.org/jira/browse/KAFKA-764
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.7.1
>Reporter: Bob Cotton
>Priority: Major
> Attachments: BPPF_2900-Broker_Logs.tbz2
>
>
> When running our ZooKeepers in VMware, occasionally all the keepers 
> simultaneously pause long enough for the Kafka clients to time out and then 
> the keepers simultaneously un-pause.
> When this happens, the zk clients disconnect from ZooKeeper. When ZooKeeper 
> comes back ZkUtils.createEphemeralPathExpectConflict discovers the node id of 
> itself and does not re-register the broker id node and the function call 
> succeeds. Then ZooKeeper figures out the broker disconnected from the keeper 
> and deletes the ephemeral node *after* allowing the consumer to read the data 
> in the /brokers/ids/x node.  The broker then goes on to register all the 
> topics, etc.  When consumers connect, they see topic nodes associated with 
> the broker but they can't find the broker node to get connection information 
> for the broker, sending them into a rebalance loop until they reach 
> rebalance.retries.max and fail.
> This might also be a ZooKeeper issue, but the desired behavior in the 
> disconnect case might be: if the broker node is found to already exist, 
> explicitly delete and recreate it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7659) dummy test

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7659.
--
Resolution: Invalid

> dummy test
> --
>
> Key: KAFKA-7659
> URL: https://issues.apache.org/jira/browse/KAFKA-7659
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: kaushik srinivas
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7616) MockConsumer can return ConsumerRecords objects with a non-empty map but no records

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7616.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5901
[https://github.com/apache/kafka/pull/5901]

> MockConsumer can return ConsumerRecords objects with a non-empty map but no 
> records
> ---
>
> Key: KAFKA-7616
> URL: https://issues.apache.org/jira/browse/KAFKA-7616
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.0.1
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Trivial
> Fix For: 2.2.0
>
>
> The ConsumerRecords returned from MockConsumer.poll can return false for 
> isEmpty while not containing any records. This happens because 
> MockConsumer.poll eagerly adds entries to the returned 
> Map<TopicPartition, List<ConsumerRecord<K, V>>>, based on which partitions 
> have been added. If no records are returned for a partition, e.g. because the 
> position was too far ahead, the entry for that partition will still be there.
>  
> The MockConsumer should lazily add entries to the map as they are needed, 
> since it is more in line with how the real consumer behaves.
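The eager-vs-lazy distinction can be sketched with plain collections (an illustration of the mechanism only, not the actual MockConsumer code; all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-collections illustration of the bug's mechanism (not Kafka code):
// an eagerly pre-filled per-partition map reports "not empty" even when no
// records were fetched; lazy insertion avoids the misleading entry.
public class EagerVsLazyMap {

    // Eager: create an entry for every assigned partition up front.
    static Map<String, List<String>> eagerPoll(List<String> assignedPartitions) {
        Map<String, List<String>> result = new HashMap<>();
        for (String tp : assignedPartitions) {
            result.put(tp, new ArrayList<>()); // entry exists even with zero records
        }
        return result;
    }

    // Lazy: only create an entry once records were actually fetched.
    static Map<String, List<String>> lazyPoll(Map<String, List<String>> fetched) {
        Map<String, List<String>> result = new HashMap<>();
        for (Map.Entry<String, List<String>> e : fetched.entrySet()) {
            if (!e.getValue().isEmpty()) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // One partition assigned, nothing fetched for it:
        System.out.println(eagerPoll(List.of("topic-0")).isEmpty()); // false: misleading
        System.out.println(lazyPoll(Map.of("topic-0", List.of())).isEmpty()); // true
    }
}
```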



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6971) Passing in help flag to kafka-console-producer should print arg options

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6971.
--
Resolution: Duplicate

> Passing in help flag to kafka-console-producer should print arg options
> ---
>
> Key: KAFKA-6971
> URL: https://issues.apache.org/jira/browse/KAFKA-6971
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, producer 
>Affects Versions: 1.1.0
>Reporter: Yeva Byzek
>Priority: Minor
>  Labels: newbie
>
> {{kafka-console-consumer --help}} prints "help is not a recognized option" and 
> then prints the available options.
> {{kafka-console-producer --help}} prints "help is not a recognized option" but 
> does not print the options.
> Possible solutions:
> (a) Enhance {{kafka-console-producer}} to also print out all options when a 
> user passes in an unrecognized option
> (b) Enhance both {{kafka-console-producer}} and {{kafka-console-consumer}} to 
> legitimately accept the {{--help}} flag



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6149) LogCleanerManager should include topic partition name when warning of invalid cleaner offset

2018-11-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6149.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.1
   1.0.0

This was fixed in the 1.0.0 and 0.11.0.1+ releases.

> LogCleanerManager should include topic partition name when warning of invalid 
> cleaner offset 
> -
>
> Key: KAFKA-6149
> URL: https://issues.apache.org/jira/browse/KAFKA-6149
> Project: Kafka
>  Issue Type: Improvement
>  Components: log, logging
>Reporter: Ryan P
>Priority: Major
> Fix For: 1.0.0, 0.11.0.1
>
>
> The following message would be a lot more helpful if the topic partition name 
> were included.
> if (!isCompactAndDelete(log))
>   warn(s"Resetting first dirty offset to log start offset 
> $logStartOffset since the checkpointed offset $offset is invalid.")
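A sketch of the suggested improvement, assuming a {{topicPartition}} value is in scope at the call site:

{code}
if (!isCompactAndDelete(log))
  warn(s"Resetting first dirty offset of $topicPartition to log start offset " +
    s"$logStartOffset since the checkpointed offset $offset is invalid.")
{code}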



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7259) Remove deprecated ZKUtils usage from ZkSecurityMigrator

2018-11-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7259.
--
Resolution: Fixed

Issue resolved by pull request 5480
[https://github.com/apache/kafka/pull/5480]

> Remove deprecated ZKUtils usage from ZkSecurityMigrator
> ---
>
> Key: KAFKA-7259
> URL: https://issues.apache.org/jira/browse/KAFKA-7259
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
> Fix For: 2.2.0
>
>
> ZkSecurityMigrator code currently uses ZKUtils.  We can replace ZKUtils usage 
> with KafkaZkClient. Also remove usage of ZKUtils from various tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7617) Document security primitives

2018-11-30 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7617.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5906
[https://github.com/apache/kafka/pull/5906]

> Document security primitives
> 
>
> Key: KAFKA-7617
> URL: https://issues.apache.org/jira/browse/KAFKA-7617
> Project: Kafka
>  Issue Type: Task
>Reporter: Viktor Somogyi
>Assignee: Viktor Somogyi
>Priority: Minor
> Fix For: 2.2.0
>
>
> Although the documentation gives help on configuring authentication and 
> authorization, it does not list the security primitives (operations and 
> resources) that can be used, which makes it hard for users to set up 
> thorough authorization rules.
> This task would cover adding these to the security page of the Kafka 
> documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7694) Support ZooKeeper based master/secret key management for delegation tokens

2018-12-02 Thread Manikumar (JIRA)
Manikumar created KAFKA-7694:


 Summary:  Support ZooKeeper based master/secret key management for 
delegation tokens
 Key: KAFKA-7694
 URL: https://issues.apache.org/jira/browse/KAFKA-7694
 Project: Kafka
  Issue Type: Sub-task
Reporter: Manikumar


A master/secret key is used to generate and verify delegation tokens. 
Currently, the master key/secret is stored as plain text in the 
server.properties config file, and the same key must be configured across all 
brokers. A redeployment is required whenever the secret needs to be rotated.

This JIRA is to explore and implement a ZooKeeper based master/secret key 
management to automate secret key generation and expiration.
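For context, the static configuration this would replace; the secret value is a placeholder:

{code}
# server.properties (today): the same plain-text secret on every broker,
# and rotation requires a rolling redeployment
delegation.token.master.key=changeme-shared-secret
delegation.token.expiry.time.ms=86400000
delegation.token.max.lifetime.ms=604800000
{code}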



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7742) DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed using removeToken(String tokenId) API.

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7742.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6037
[https://github.com/apache/kafka/pull/6037]

> DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed 
> using removeToken(String tokenId) API.
> 
>
> Key: KAFKA-7742
> URL: https://issues.apache.org/jira/browse/KAFKA-7742
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Satish Duggana
>Assignee: Satish Duggana
>Priority: Major
> Fix For: 2.2.0
>
>
> DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed 
> using `removeToken(String tokenId)`[1] API.
> 1) 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java#L84
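The stale-index mechanism can be sketched with plain maps (an illustration only, not the real DelegationTokenCache; the names are simplified):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-maps illustration of the bug's mechanism (not the real
// DelegationTokenCache): a token is indexed both by tokenId and by its HMAC,
// so removal must clear BOTH maps or the HMAC index keeps a stale entry.
public class TokenCacheSketch {
    final Map<String, String> tokenCache = new HashMap<>();  // tokenId -> token data
    final Map<String, String> hmacIdCache = new HashMap<>(); // hmac -> tokenId

    void addToken(String tokenId, String hmac) {
        tokenCache.put(tokenId, "token-" + tokenId);
        hmacIdCache.put(hmac, tokenId);
    }

    // Fixed removal: drop the HMAC index entry along with the token itself.
    void removeToken(String tokenId) {
        tokenCache.remove(tokenId);
        hmacIdCache.values().removeIf(id -> id.equals(tokenId));
    }

    public static void main(String[] args) {
        TokenCacheSketch cache = new TokenCacheSketch();
        cache.addToken("t1", "hmac-1");
        cache.removeToken("t1");
        // Both indexes are empty after removal:
        System.out.println(cache.tokenCache.isEmpty() + " " + cache.hmacIdCache.isEmpty());
    }
}
```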



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7762.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6052
[https://github.com/apache/kafka/pull/6052]

> KafkaConsumer uses old API in the javadocs
> --
>
> Key: KAFKA-7762
> URL: https://issues.apache.org/jira/browse/KAFKA-7762
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias Weßendorf
>Priority: Major
> Fix For: 2.2.0
>
>
> The `poll(ms)` API is deprecated, hence the javadocs should not use it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7054) Kafka describe command should throw topic doesn't exist exception.

2018-12-21 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7054.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5211
[https://github.com/apache/kafka/pull/5211]

> Kafka describe command should throw topic doesn't exist exception.
> --
>
> Key: KAFKA-7054
> URL: https://issues.apache.org/jira/browse/KAFKA-7054
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Manohar Vanam
>Priority: Minor
> Fix For: 2.2.0
>
>
> If the topic doesn't exist, the Kafka describe command should throw a "topic 
> does not exist" exception, like the alter and delete commands do:
> {code:java}
> local:bin mvanam$ ./kafka-topics.sh --zookeeper localhost:2181 --delete 
> --topic manu
> Error while executing topic command : Topic manu does not exist on ZK path 
> localhost:2181
> [2018-06-13 15:08:13,111] ERROR java.lang.IllegalArgumentException: Topic 
> manu does not exist on ZK path localhost:2181
>  at kafka.admin.TopicCommand$.getTopics(TopicCommand.scala:91)
>  at kafka.admin.TopicCommand$.deleteTopic(TopicCommand.scala:184)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:71)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> local:bin mvanam$ ./kafka-topics.sh --zookeeper localhost:2181 --alter 
> --topic manu
> Error while executing topic command : Topic manu does not exist on ZK path 
> localhost:2181
> [2018-06-13 15:08:43,663] ERROR java.lang.IllegalArgumentException: Topic 
> manu does not exist on ZK path localhost:2181
>  at kafka.admin.TopicCommand$.getTopics(TopicCommand.scala:91)
>  at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:125)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:65)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7752) zookeeper-security-migration.sh does not remove ACL on kafka-acl-extended

2019-01-04 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7752.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6072
[https://github.com/apache/kafka/pull/6072]

> zookeeper-security-migration.sh does not remove ACL on kafka-acl-extended
> -
>
> Key: KAFKA-7752
> URL: https://issues.apache.org/jira/browse/KAFKA-7752
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.0.0
>Reporter: Attila Sasvari
>Assignee: Attila Sasvari
>Priority: Major
> Fix For: 2.2.0
>
>
> Executed {{zookeeper-security-migration.sh --zookeeper.connect $(hostname 
> -f):2181/kafka --zookeeper.acl secure}} to secure the Kafka znodes, and then 
> {{zookeeper-security-migration.sh --zookeeper.connect $(hostname 
> -f):2181/kafka --zookeeper.acl unsecure}} to unsecure them.
> I noticed that the tool did not remove ACLs on certain nodes:
> {code}
> ] getAcl /kafka/kafka-acl-extended
> 'world,'anyone
> : r
> 'sasl,'kafka
> : cdrwa
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5994) Improve transparency of broker user ACL misconfigurations

2019-01-10 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5994.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5021
[https://github.com/apache/kafka/pull/5021]

> Improve transparency of broker user ACL misconfigurations
> -
>
> Key: KAFKA-5994
> URL: https://issues.apache.org/jira/browse/KAFKA-5994
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Dustin Cote
>Priority: Major
> Fix For: 2.2.0
>
>
> When the user for inter broker communication is not a super user and ACLs are 
> configured with allow.everyone.if.no.acl.found=false, the cluster will not 
> serve data. This is extremely confusing to debug because there is no security 
> negotiation problem or indication of an error other than no data can make it 
> in or out of the broker. If one knew to look in the authorizer log, it would 
> be more clear, but that didn't make it into my workflow at least. Here's an 
> example of a problematic debugging scenario
> SASL_SSL, SSL, SASL_PLAINTEXT ports on the brokers
> SASL user specified in `super.users`
> SSL specified as the inter broker protocol
> The only way I could figure out ACLs were an issue without gleaning it 
> through configuration inspection was that controlled shutdown indicated that 
> a cluster action had failed. 
> It would be good if we could be more transparent about the failure here. 
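The scenario described reduces to a configuration like the following (values are illustrative, not from the report):

{code}
# server.properties
listeners=SASL_SSL://:9093,SSL://:9092,SASL_PLAINTEXT://:9094
security.inter.broker.protocol=SSL
allow.everyone.if.no.acl.found=false
super.users=User:admin   # a SASL principal only
# The SSL inter-broker principal (the certificate DN) is neither a super user
# nor covered by any ACL, so broker-to-broker requests are silently denied.
{code}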



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7781) Add validation check for Topic retention.ms property

2019-01-13 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7781.
--
   Resolution: Fixed
 Assignee: Kamal Chandraprakash
Fix Version/s: 2.2.0

> Add validation check for Topic retention.ms property
> 
>
> Key: KAFKA-7781
> URL: https://issues.apache.org/jira/browse/KAFKA-7781
> Project: Kafka
>  Issue Type: Bug
>Reporter: Kamal Chandraprakash
>Assignee: Kamal Chandraprakash
>Priority: Major
> Fix For: 2.2.0
>
>
> Using AdminClient#alterConfigs, topic _retention.ms_ property can be assigned 
> to a value less than -1. This leads to inconsistency while describing the 
> topic configuration. We should not allow values less than -1. In 
> server.properties, if _log.retention.ms_ configured to a value less than 
> zero, it's 
> [set|https://github.com/apache/kafka/blob/9295444d48eb057900ef09f1176e34b37331f60b/core/src/main/scala/kafka/server/KafkaConfig.scala#L1320]
>  as -1.
> This doesn't create any issue in log segment deletion, as the 
> [condition|https://github.com/apache/kafka/blob/9295444d48eb057900ef09f1176e34b37331f60b/core/src/main/scala/kafka/log/Log.scala#L1466]
>  for infinite log retention checks for value less than zero. To maintain 
> consistency, we should add the validation check.
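The intended semantics, as a topic-config sketch:

{code}
# topic-level retention.ms:
#   -1   => infinite retention (the only valid negative value after this fix)
#   >= 0 => retention period in milliseconds
retention.ms=-1
{code}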



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6837) Apache kafka broker got stopped.

2018-05-06 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6837.
--
Resolution: Information Provided

> Apache kafka broker got stopped.
> 
>
> Key: KAFKA-6837
> URL: https://issues.apache.org/jira/browse/KAFKA-6837
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.2
>Reporter: Rajendra Jangir
>Priority: Major
>
> We are using Kafka version 2.11-0.11.0.2 and ZooKeeper version 3.3.6, running 
> on Ubuntu 16.04.
> We are producing data at a rate of 3k messages per second, and each message is 
> around 150 bytes. We have 7 brokers and 1 ZooKeeper node, and sometimes some 
> brokers go down.
> How can we find out why a particular broker went down? Where are the broker 
> shutdown logs stored, so that we can check them and find out why it stopped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3921) Periodic refresh of metadata causes spurious log messages

2018-05-06 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3921.
--
Resolution: Auto Closed

Closing inactive issue. The old producer is no longer supported. Please upgrade 
to the Java producer whenever possible.


> Periodic refresh of metadata causes spurious log messages
> -
>
> Key: KAFKA-3921
> URL: https://issues.apache.org/jira/browse/KAFKA-3921
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Steven Schlansker
>Priority: Major
>
> Kafka cluster metadata has a configurable expiry period.  (I don't understand 
> why this is -- cluster updates can happen at any time, and we have to pick 
> those up quicker than every 10 minutes?  But this ticket isn't about that.)
> When this interval expires, the ClientUtils class spins up a SyncProducer, 
> which sends a special message to retrieve metadata.  The producer is then 
> closed immediately after processing this message.
> This causes the SyncProducer to log both a connection open and close at INFO 
> level:
> {code}
> 2016-06-30T17:50:19.408Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.client.ClientUtils$ - 
> Fetching metadata from broker BrokerEndPoint(2,broker-3.mycorp.com,9092) with 
> correlation id 17188 for 1 topic(s) Set(logstash)
> 2016-06-30T17:50:19.410Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Connected to broker-3.mycorp.com:9092 for producing
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-3.mycorp.com:9092
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-14.mycorp.com:9092
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-logkafka-13.mycorp.com:9092
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-12.mycorp.com:9092
> 2016-06-30T17:50:19.413Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Connected to broker-12.mycorp.com:9092 for producing
> {code}
> When you are reading the logs, this appears periodically.  We've had more 
> than one administrator then think that the cluster is unhealthy, and client 
> connections are getting dropped -- it's disconnecting from the broker so 
> frequently!  What is wrong???  But in reality, it is just this harmless / 
> expected metadata update.
> Can we tweak the log levels so that the periodic background refresh does not 
> log unless something goes wrong?  The log messages are misleading and easy to 
> misinterpret.  I had to read the code pretty thoroughly to convince myself 
> that these messages are actually harmless.
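One possible workaround (an assumption, not something proposed in the ticket) is to raise the log threshold for the two chatty classes shown in the log excerpt:

{code}
# log4j.properties on the client
log4j.logger.kafka.producer.SyncProducer=WARN
log4j.logger.kafka.client.ClientUtils=WARN
{code}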



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6825) DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG is private

2018-05-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6825.
--
   Resolution: Fixed
 Assignee: Guozhang Wang
Fix Version/s: 2.0.0

> DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG is private
> 
>
> Key: KAFKA-6825
> URL: https://issues.apache.org/jira/browse/KAFKA-6825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Anna O
>Assignee: Guozhang Wang
>Priority: Minor
> Fix For: 2.0.0
>
>
> org.apache.kafka.streams.StreamsConfig 
> DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG constant is private.
> Should be public.
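For context, the constant's value is the {{default.production.exception.handler}} config key; a usage sketch with a hypothetical handler class:

{code}
# streams application config
default.production.exception.handler=com.example.MyProductionExceptionHandler
{code}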



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6141) Errors logs when running integration/kafka/tools/MirrorMakerIntegrationTest

2018-05-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6141.
--
Resolution: Fixed

Closing this, as the log level was changed to debug in ZOOKEEPER-2795 / ZooKeeper 3.4.12.

> Errors logs when running integration/kafka/tools/MirrorMakerIntegrationTest
> ---
>
> Key: KAFKA-6141
> URL: https://issues.apache.org/jira/browse/KAFKA-6141
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Pavel
>Priority: Trivial
>
> There are some error logs when running Tests extended from 
> ZooKeeperTestHarness, for example 
> integration/kafka/tools/MirrorMakerIntegrationTest:
> [2017-10-27 18:28:02,557] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-10-27 18:28:09,110] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> And these logs have no impact on test results. I think it would be great to 
> eliminate these logs from output by providing a ZKShutdownHandler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6682) Kafka reconnection after broker restart

2018-05-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6682.
--
Resolution: Duplicate

Resolving this as a duplicate of KAFKA-6260. Please reopen if the issue still 
exists.

> Kafka reconnection after broker restart
> ---
>
> Key: KAFKA-6682
> URL: https://issues.apache.org/jira/browse/KAFKA-6682
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.0.0
>Reporter: madi
>Priority: Major
>
> I am using the Kafka producer plugin for logback (danielwegener) with the 
> clients library 1.0.0, and after a broker restart all JVMs connected to it get 
> tons of these exceptions:
> {code:java}
> 11:22:48.738 [kafka-producer-network-thread | app-logback-relaxed] cid: 
> clid: E [    @] a: o.a.k.c.p.internals.Sender - [Producer 
> clientId=id-id-logback-relaxed] Uncaught error in kafka producer I/O 
> thread:  ex:java.lang.NullPointerException: null
>     at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:399)
>     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
>     at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
>     at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
>     at java.lang.Thread.run(Thread.java:798){code}
> During the restart there are still other brokers available behind the LB.
> It doesn't matter that Kafka is up again; only restarting the JVM helps.
> {code:java}
> <appender class="com.github.danielwegener.logback.kafka.KafkaAppender">
>     <encoder>
>         <pattern>%date{"-MM-dd'T'HH:mm:ss.SSS'Z'"} ${HOSTNAME} [%thread] %logger{32} - %message ex:%exf%n</pattern>
>     </encoder>
>     <topic>mytopichere</topic>
>     <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
>     <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
>     <producerConfig>bootstrap.servers=10.99.99.1:9092</producerConfig>
>     <producerConfig>acks=0</producerConfig>
>     <producerConfig>block.on.buffer.full=false</producerConfig>
>     <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
>     <producerConfig>compression.type=none</producerConfig>
>     <producerConfig>max.block.ms=0</producerConfig>
> </appender>
> {code}
> I provide the load balancer address in the bootstrap servers here. There are 
> three Kafka brokers behind it.
> {code:java}
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pap6470sr9fp60ifix-20161110_01(SR9 
> FP60)+IV90630+IV90578))
> IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 
> 20161005_321282 (JIT enabled, AOT enabled)
> J9VM - R26_Java726_SR9_20161005_1259_B321282
> JIT  - tr.r11_20161001_125404
> GC   - R26_Java726_SR9_20161005_1259_B321282_CMPRSS
> J9CL - 20161005_321282)
> JCL - 20161021_01 based on Oracle jdk7u121-b15{code}





[jira] [Resolved] (KAFKA-6426) Kafka SASL/SCRAM authentication does not fail for incorrect username or password.

2018-05-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6426.
--
Resolution: Cannot Reproduce

This looks like a configuration issue. Please reopen if you think the issue 
still exists.

> Kafka SASL/SCRAM authentication does not fail for incorrect username or 
> password.
> -
>
> Key: KAFKA-6426
> URL: https://issues.apache.org/jira/browse/KAFKA-6426
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.1
> Environment: Ubuntu 16.04, JDK 1.8, Kafka_2.10-0.10.2.1
>Reporter: Menaka Madushanka
>Priority: Major
> Attachments: broker-jaas.conf, client-jaas.conf, consumer.properties, 
> producer.properties, server.properties
>
>
> Hi,
> I configured Kafka 0.10.2.1 for SASL/SCRAM by following the documentation 
> [1].
> But authentication still succeeds even when I use an incorrect username or 
> password in the client.
> I have attached the server.properties, consumer.properties, 
> producer.properties, and JAAS config files for the broker and client.
> Also, in my producer, I have set
>  {{props.put("sasl.mechanism", "SCRAM-SHA-256");}}
> but when running, it shows,
> {{kafka.utils.VerifiableProperties  - Property sasl.mechanism is not valid}}
> [1] https://kafka.apache.org/documentation/#security_sasl_scram
> Thanks and Regards,
> Menaka





[jira] [Resolved] (KAFKA-1977) Make logEndOffset available in the Zookeeper consumer

2018-05-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1977.
--
Resolution: Auto Closed

Closing this as the Scala consumers have been deprecated and no further work is 
planned. This requirement will be tracked in KAFKA-2500 for the Java consumer.

> Make logEndOffset available in the Zookeeper consumer
> -
>
> Key: KAFKA-1977
> URL: https://issues.apache.org/jira/browse/KAFKA-1977
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Will Funnell
>Priority: Minor
> Attachments: 
> Make_logEndOffset_available_in_the_Zookeeper_consumer.patch
>
>
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71





[jira] [Resolved] (KAFKA-3180) issue for extracting JSON from https web page

2018-05-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3180.
--
Resolution: Not A Problem

I suggest posting these kinds of queries to the us...@kafka.apache.org mailing 
list (http://kafka.apache.org/contact) for more visibility.

> issue for extracting JSON from https web page
> -
>
> Key: KAFKA-3180
> URL: https://issues.apache.org/jira/browse/KAFKA-3180
> Project: Kafka
>  Issue Type: Test
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: cloudera 5.4.2.0
>Reporter: swayam
>Priority: Critical
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Hi Team,
> Could you help me with how to extract JSON from an HTTPS web page into HDFS 
> with the help of Kafka?
> Here is the URL where the JSON is available: 
> https://affiliate-api.flipkart.net/affiliate/api/8924b177d4c64fcab4db860b94fbcea2.json
> Please help me to get the info.





[jira] [Resolved] (KAFKA-3620) Clean up Protocol class.

2018-05-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3620.
--
Resolution: Fixed

Closing this as some cleanup was done in newer versions. Please reopen if you 
think otherwise.

> Clean up Protocol class.
> 
>
> Key: KAFKA-3620
> URL: https://issues.apache.org/jira/browse/KAFKA-3620
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Major
>
> This came up on PR of KAFKA-3307. Below is excerpt.
> {quote}
> With the versioning getting a little more complex in Protocol class, it makes 
> sense to try and encapsulate some of its logic a little better. For example, 
> rather than using raw arrays for each request type, we could have something 
> like this:
> {code}
> class KafkaApi {
>   private ApiKey api;
>   private Schema[] requests;
>   private Schema[] responses;
>   Schema currentSchema();
>   Schema schemaFor(int version);
>   int minVersion();
>   int currentVersion();
> }
> {code}
> {quote}
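The {{KafkaApi}} encapsulation proposed in the excerpt can be sketched as a 
small standalone class. The {{Schema}} stub and the version-bound checks below 
are illustrative assumptions, not the actual Kafka internals:

```java
// Stub standing in for Kafka's Schema type; only the version-lookup
// logic from the proposal is sketched here.
class Schema {
    final int version;
    Schema(int version) { this.version = version; }
}

class KafkaApi {
    private final Schema[] requests;

    KafkaApi(Schema[] requests) { this.requests = requests; }

    int minVersion() { return 0; }

    int currentVersion() { return requests.length - 1; }

    // Reject versions outside [minVersion, currentVersion] instead of
    // letting a raw array access fail with an obscure exception.
    Schema schemaFor(int version) {
        if (version < minVersion() || version > currentVersion())
            throw new IllegalArgumentException("Unknown version " + version);
        return requests[version];
    }

    Schema currentSchema() { return schemaFor(currentVersion()); }
}

public class KafkaApiSketch {
    public static void main(String[] args) {
        KafkaApi api = new KafkaApi(
            new Schema[] { new Schema(0), new Schema(1), new Schema(2) });
        System.out.println(api.currentVersion());        // 2
        System.out.println(api.currentSchema().version); // 2
    }
}
```

The point of the encapsulation is that version bounds and schema lookup live in 
one place instead of being repeated around raw arrays.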





[jira] [Resolved] (KAFKA-3843) Endless consuming messages in Kafka cluster

2018-05-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3843.
--
Resolution: Auto Closed

The Scala consumers have been deprecated and no further work is planned; please 
upgrade to the Java consumer whenever possible.

> Endless consuming messages in Kafka cluster
> ---
>
> Key: KAFKA-3843
> URL: https://issues.apache.org/jira/browse/KAFKA-3843
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Tomas Benc
>Assignee: Neha Narkhede
>Priority: Major
>
> We are running Kafka in a cluster (3 virtual machines). Kafka is configured 
> with min.insync.replicas = 2 and topics are configured with replication 
> factor = 3. This configuration means there must be at least 2 of the 3 
> brokers in the cluster up and running to receive any messages. This works as 
> expected.
> Our consumers are high level consumers and offsets are committed manually 
> (auto.commit disabled) and stored in Kafka.
> Reproducing the issue:
> 1. Kafka cluster up and running and receives messages
> 2. Consumers are disabled (messages in Kafka are in lag)
> 3. Disable 2 Kafka brokers in cluster
> 4. Enable consumers
> Consumers are consuming messages in batches and committing offsets after 
> processing. But committing offsets fails in Kafka because of 
> NotEnoughReplicasException. That is correct. What is not correct: the high 
> level consumer has no idea that the offsets were not committed, so it 
> consumes the same messages again and again.
> It would be helpful if the method commitOffsets() in the interface 
> kafka.javaapi.consumer.ConsumerConnector returned some information (a boolean 
> return value or an exception) about this operation.





[jira] [Resolved] (KAFKA-4091) Unable to produce or consume on any topic

2018-05-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4091.
--
Resolution: Cannot Reproduce

Closing inactive issue. Please reopen if you think the issue still exists.

> Unable to produce or consume on any topic
> -
>
> Key: KAFKA-4091
> URL: https://issues.apache.org/jira/browse/KAFKA-4091
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
> Environment: Amazon Linux, t2.micro
>Reporter: Avi Chopra
>Priority: Critical
>
> While trying to set up Kafka on 2 slave and 1 master boxes, I hit a weird 
> condition where I was not able to consume from or produce to a topic.
> Using MirrorMaker to sync data between slave <--> master. Getting the 
> following logs endlessly:
> [2016-08-26 14:28:33,897] WARN Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:43,515] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:45,118] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:46,721] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:48,324] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:49,927] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient) [2016-08-26 14:28:53,029] WARN 
> Bootstrap broker localhost:9092 disconnected 
> (org.apache.kafka.clients.NetworkClient)
> The only way I could recover was by restarting Kafka, which produced logs 
> like these:
> [2016-08-26 14:30:54,856] WARN Found a corrupted index file, 
> /tmp/kafka-logs/__consumer_offsets-43/.index, deleting 
> and rebuilding index... (kafka.log.Log) [2016-08-26 14:30:54,856] INFO 
> Recovering unflushed segment 0 in log __consumer_offsets-43. (kafka.log.Log) 
> [2016-08-26 14:30:54,857] INFO Completed load of log __consumer_offsets-43 
> with log end offset 0 (kafka.log.Log) [2016-08-26 14:30:54,860] WARN Found a 
> corrupted index file, 
> /tmp/kafka-logs/__consumer_offsets-26/.index, deleting 
> and rebuilding index... (kafka.log.Log) [2016-08-26 14:30:54,860] INFO 
> Recovering unflushed segment 0 in log __consumer_offsets-26. (kafka.log.Log) 
> [2016-08-26 14:30:54,861] INFO Completed load of log __consumer_offsets-26 
> with log end offset 0 (kafka.log.Log) [2016-08-26 14:30:54,864] WARN Found a 
> corrupted index file, 
> /tmp/kafka-logs/__consumer_offsets-35/.index, deleting 
> and rebuilding index... (kafka.log.Log)
> ERROR Error when sending message to topic dr_ubr_analytics_limits with key: 
> null, value: 1 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback) 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 6 ms.
> The consumer group command was showing a major lag.
> This is my test phase, so I was able to restart and recover from the master 
> box, but I want to know what caused this issue and how it can be avoided. Is 
> there a way to debug this issue?





[jira] [Resolved] (KAFKA-5728) Stopping consumer thread cause loosing message in the partition

2018-05-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5728.
--
Resolution: Auto Closed

This looks like a Spring Kafka configuration issue, most likely related to 
committing offsets. Please take a look at the Spring Kafka docs.

> Stopping consumer thread cause loosing message in the partition
> ---
>
> Key: KAFKA-5728
> URL: https://issues.apache.org/jira/browse/KAFKA-5728
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
>Reporter: Vasudevan Karnan
>Priority: Major
>
> Currently using Spring boot Kafka listener thread to consume the message from 
> partition.
> Having 10 partitions and concurrency to 10 in the consumer group.
> In testing, I have 2 messages in the single partition (say for ex: partition 
> 4). Created listener to read the message and post to service. During normal 
> days, read the message and post to service, and working as expected. No 
> issues on that.
> Suppose the service is down; then I use the Spring RetryTemplate to retry 
> posting the message to the service repeatedly, for a configured number of 
> retries and backoff time in ms. If I stop the listener, then I get 
> org.springframework.retry.backoff.BackOffInterruptedException: Thread 
> interrupted while sleeping; nested exception is 
> java.lang.InterruptedException: sleep interrupted
>   at 
> org.springframework.retry.backoff.FixedBackOffPolicy.doBackOff(FixedBackOffPolicy.java:86)
>  ~[spring-retry-1.1.4.RELEASE.jar:na]
>   at 
> org.springframework.retry.backoff.StatelessBackOffPolicy.backOff(StatelessBackOffPolicy.java:36)
>  ~[spring-retry-1.1.4.RELEASE.jar:na]
> After that I am losing messages from that particular partition (messages 
> that were being retried are lost in the middle) and the lag is reduced. (This 
> happens at the end of stopping the listener.)
> Is there any way to avoid losing messages even when I get the sleep 
> interrupted exception?
> Even if I don't hit the sleep interrupt exception while stopping the server, 
> the next time the listener starts up I face the same issue and lose messages 
> again.





[jira] [Resolved] (KAFKA-3246) Transient Failure in kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3246.
--
Resolution: Auto Closed

> Transient Failure in 
> kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse
> -
>
> Key: KAFKA-3246
> URL: https://issues.apache.org/jira/browse/KAFKA-3246
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: transient-unit-test-failure
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse(SyncProducerTest.scala:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 

[jira] [Resolved] (KAFKA-5039) Logging in BlockingChannel and SyncProducer connect

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5039.
--
Resolution: Auto Closed

Closing as the Scala producer has been removed from the codebase.

> Logging in BlockingChannel and SyncProducer connect
> ---
>
> Key: KAFKA-5039
> URL: https://issues.apache.org/jira/browse/KAFKA-5039
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Minor
>
> When an exception is thrown in BlockingChannel::connect, the connection is 
> disconnected but the actual exception is not logged. This later manifests as 
> ClosedChannelException when trying to send. Also the SyncProducer wrongfully 
> logs "Connected to host:port for producing" even in case of exceptions.





[jira] [Resolved] (KAFKA-3345) ProducerResponse could gracefully handle no throttle time provided

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3345.
--
Resolution: Auto Closed

Closing as the Scala producer has been removed from the codebase.

> ProducerResponse could gracefully handle no throttle time provided
> --
>
> Key: KAFKA-3345
> URL: https://issues.apache.org/jira/browse/KAFKA-3345
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bryan Baugher
>Priority: Minor
>
> When doing some compatibility testing between kafka 0.8 and 0.9 I found that 
> the old producer using 0.9 libraries could write to a cluster running 0.8 if 
> 'request.required.acks' was set to 0. If it was set to anything else it would 
> fail with,
> {code}
> java.nio.BufferUnderflowException
>   at java.nio.Buffer.nextGetIndex(Buffer.java:506) 
>   at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
>   at kafka.api.ProducerResponse$.readFrom(ProducerResponse.scala:41) 
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:109) 
> {code}
> In 0.9 there was a one line change to the response here[1] to look for a 
> throttle time value in the response. It seems if the 0.9 code gracefully 
> handled throttle time not being provided this would work. Would you be open 
> to this change?
> [1] - 
> https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/api/ProducerResponse.scala#L41
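The graceful fallback the reporter asks for can be sketched with a plain 
ByteBuffer length check; {{readThrottleTimeMs}} is a hypothetical helper, not 
the actual ProducerResponse code:

```java
import java.nio.ByteBuffer;

public class ThrottleTimeParser {
    // Hypothetical helper sketching the requested fallback: read the optional
    // throttle-time field only if the buffer still holds 4 bytes, otherwise
    // default to 0 instead of throwing BufferUnderflowException.
    static int readThrottleTimeMs(ByteBuffer buffer) {
        return buffer.remaining() >= 4 ? buffer.getInt() : 0;
    }

    public static void main(String[] args) {
        ByteBuffer absent = ByteBuffer.allocate(0);        // 0.8-style response: no field
        ByteBuffer present = ByteBuffer.allocate(4).putInt(100);
        present.flip();                                    // 0.9-style response: field set
        System.out.println(readThrottleTimeMs(absent));    // 0
        System.out.println(readThrottleTimeMs(present));   // 100
    }
}
```

With a check like this, a 0.9 client parsing a 0.8 response would simply treat 
the missing field as zero throttle time.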





[jira] [Resolved] (KAFKA-2174) Wrong TopicMetadata deserialization

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2174.
--
Resolution: Auto Closed

Closing inactive issue.

> Wrong TopicMetadata deserialization
> ---
>
> Key: KAFKA-2174
> URL: https://issues.apache.org/jira/browse/KAFKA-2174
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Alexey Ozeritskiy
>Priority: Major
> Attachments: KAFKA-2174.patch
>
>
> TopicMetadata.readFrom assumes that the ByteBuffer always contains the full 
> set of partitions, but this is not true. On incomplete metadata we get a 
> java.lang.ArrayIndexOutOfBoundsException:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 47
> at 
> kafka.api.TopicMetadata$$anonfun$readFrom$1.apply$mcVI$sp(TopicMetadata.scala:38)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at kafka.api.TopicMetadata$.readFrom(TopicMetadata.scala:36)
> at 
> kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
> at 
> kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Range.foreach(Range.scala:141)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.api.TopicMetadataResponse$.readFrom(TopicMetadataResponse.scala:31)
> {code}
> We sometimes get this exception on any broker restart (kill -TERM, 
> controlled.shutdown.enable=false).





[jira] [Resolved] (KAFKA-1460) NoReplicaOnlineException: No replica for partition

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1460.
--
Resolution: Auto Closed

Closing inactive issue.  Please reopen if you think the issue still exists in 
newer versions.

> NoReplicaOnlineException: No replica for partition
> --
>
> Key: KAFKA-1460
> URL: https://issues.apache.org/jira/browse/KAFKA-1460
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Artur Denysenko
>Priority: Critical
> Attachments: state-change.log
>
>
> We have a standalone kafka server.
> After several days of running we get:
> {noformat}
> kafka.common.NoReplicaOnlineException: No replica for partition 
> [gk.q.module,1] is alive. Live brokers are: [Set()], Assigned replicas are: 
> [List(0)]
>   at 
> kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:61)
>   at 
> kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:336)
>   at 
> kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:185)
>   at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:99)
>   at 
> kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:96)
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>   at 
> scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>   at 
> kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:96)
>   at 
> kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:68)
>   at 
> kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:312)
>   at 
> kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:162)
>   at 
> kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:63)
>   at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1068)
>   at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1066)
>   at 
> kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1066)
>   at kafka.utils.Utils$.inLock(Utils.scala:538)
>   at 
> kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1066)
>   at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {noformat}
> Please see attached [state-change.log]
> You can find all server logs (450mb) here: 
> http://46.4.114.35:/deploy/kafka-logs.2014-05-14-16.tgz
> On client we get:
> {noformat}
> 16:28:36,843 [ool-12-thread-2] WARN  ZookeeperConsumerConnector - 
> [dev_dev-1400257716132-e7b8240c], no brokers found when trying to rebalance.
> {noformat}
> If we try to send message using 'kafka-console-producer.sh':
> {noformat}
> [root@dev kafka]# /srv/kafka/bin/kafka-console-producer.sh --broker-list 
> localhost:9092 --topic test
> message
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> [2014-05-16 19:45:30,950] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(test)] from broker [id:0,host:localhost,port:9092] failed 
> (kafka.client.ClientUtils$)
> java.net.SocketTimeoutException
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
> at kafka.utils.Utils$.read(Utils.scala:375)
> at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)

[jira] [Resolved] (KAFKA-4419) Unable to GetOffset when the ACL of topic is defined

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4419.
--
Resolution: Duplicate

Resolving as duplicate of  KAFKA-3355.

> Unable to GetOffset when the ACL of topic is defined 
> -
>
> Key: KAFKA-4419
> URL: https://issues.apache.org/jira/browse/KAFKA-4419
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, security
>Affects Versions: 0.9.0.1
> Environment: kafka 0.9.0.1 
> centos 7
> kafka server with kerberos (zokeeper also with kerberos)
> listeners=PLAINTEXT://:9092,SASL_PLAINTEXT://:9093
>Reporter: Mohammed amine GARMES
>Priority: Critical
>  Labels: security
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> I have a Kafka server with Kerberos enabled 
> (listeners=PLAINTEXT://:9092,SASL_PLAINTEXT://:9093). I created a test topic 
> and pushed some data to it. I ran GetOffsetShell to get the topic offsets:
> [root@kafka1 ~]#  /opt/kafka/bin/kafka-run-class.sh 
> kafka.tools.GetOffsetShell --broker-list kafka1:9092 --topic test-topic 
> --time -1 [2016-11-17 16:52:02,471] INFO 
> Verifying properties (kafka.utils.VerifiableProperties)
> [2016-11-17 16:52:02,479] INFO Property client.id is overridden to 
> GetOffsetShell (kafka.utils.VerifiableProperties)
> [2016-11-17 16:52:02,479] INFO Property metadata.broker.list is overridden to 
> kafka1:9092 (kafka.utils.VerifiableProperties)
> [2016-11-17 16:52:02,480] INFO Property request.timeout.ms is overridden to 
> 1000 (kafka.utils.VerifiableProperties)
> [2016-11-17 16:52:02,513] INFO Fetching metadata from broker 
> BrokerEndPoint(0,kafka1,9092) with correlation id 0 for 1 topic(s) 
> Set(test-topic) (kafka.client.ClientUtils$)
> [2016-11-17 16:52:02,561] INFO Connected to kafka1:9092 for producing 
> (kafka.producer.SyncProducer)
> [2016-11-17 16:52:02,573] INFO Disconnecting from kafka1:9092 
> (kafka.producer.SyncProducer)
> test-topic:2:773
> test-topic:1:773
> test-topic:0:772
> I added a user to the ACL for my test topic:
> [root@kafka1 ~]#   $KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties 
> zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181   --add 
> --allow-principal User:garmes  --operation All  --topic test-topic
> I pushed some data again and ran GetOffsetShell to get the topic offsets, 
> but this time I don't get any offsets:
> [root@kafka1 ~]#  /opt/kafka/bin/kafka-run-class.sh 
> kafka.tools.GetOffsetShell --broker-list kafka1:9092 --topic test-topic 
> --time -1
> [2016-11-17 16:43:31,289] INFO Verifying properties 
> (kafka.utils.VerifiableProperties)
> [2016-11-17 16:43:31,305] INFO Property client.id is overridden to 
> GetOffsetShell (kafka.utils.VerifiableProperties)
> [2016-11-17 16:43:31,305] INFO Property metadata.broker.list is overridden to 
> kafka1:9092 (kafka.utils.VerifiableProperties)
> [2016-11-17 16:43:31,305] INFO Property request.timeout.ms is overridden to 
> 1000 (kafka.utils.VerifiableProperties)
> [2016-11-17 16:43:31,339] INFO Fetching metadata from broker 
> BrokerEndPoint(0,kafka1,9092) with correlation id 0 for 1 topic(s) 
> Set(test-topic) (kafka.client.ClientUtils$)
> [2016-11-17 16:43:31,382] INFO Connected to kafka1:9092 for producing 
> (kafka.producer.SyncProducer)
> [2016-11-17 16:43:31,394] INFO Disconnecting from kafka1:9092 
> (kafka.producer.SyncProducer)
> [root@kafka1 ~]# 
> I changed the broker port from 9092 to 9093, but I get the error below:
> [root@kafka1 ~]# /opt/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell 
> --broker-list kafka1:9093 --topic test-topic --time -1
>  [2016-11-17 16:59:18,112] INFO Verifying properties 
> (kafka.utils.VerifiableProperties)
> [2016-11-17 16:59:18,129] INFO Property client.id is overridden to 
> GetOffsetShell (kafka.utils.VerifiableProperties)
> [2016-11-17 16:59:18,129] INFO Property metadata.broker.list is overridden to 
> kafka1:9093 (kafka.utils.VerifiableProperties)
> [2016-11-17 16:59:18,129] INFO Property request.timeout.ms is overridden to 
> 1000 (kafka.utils.VerifiableProperties)
> [2016-11-17 16:59:18,162] INFO Fetching metadata from broker 
> BrokerEndPoint(0,kafka1,9093) with correlation id 0 for 1 topic(s) 
> Set(test-topic) (kafka.client.ClientUtils$)
> [2016-11-17 16:59:18,206] INFO Connected to kafka1:9093 for producing 
> (kafka.producer.SyncProducer)
> [2016-11-17 16:59:18,210] INFO Disconnecting from kafka1:9093 
> (kafka.producer.SyncProducer)
> [2016-11-17 16:59:18,212] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(test-topic)] from broker [BrokerEndPoint(0,kafka1,9093)] 
> failed (kafka.client.ClientUtils$)
> java.io.EOFException
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.jav

[jira] [Resolved] (KAFKA-5432) producer and consumer SocketTimeoutException

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5432.
--
Resolution: Auto Closed

Closing inactive issue. The exceptions were observed with the deprecated Scala 
clients. Please reopen if you think the issue still exists in newer versions.

> producer and consumer  SocketTimeoutException
> -
>
> Key: KAFKA-5432
> URL: https://issues.apache.org/jira/browse/KAFKA-5432
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.10.2.0
> Environment: os:Red Hat 4.4.7-17
> java:
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Jian Lin
>Priority: Major
> Attachments: server.properties
>
>
> Hey all, I hit a strange problem.
> The program ran normally for a week, and I did not change anything, but today 
> it suddenly reported an error.
> Producer error log:
> {code:java}
> 2017-06-12 10:46:01[qtp958382397-80:591423838]-[WARN] Failed to send producer 
> request with correlation id 234645 to broker 176 with data for partitions 
> [sms,3]
> java.net.SocketTimeoutException
>   at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
>   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
>   at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
>   at kafka.utils.Utils$.read(Utils.scala:380)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
>   at 
> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>   at 
> scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
>   at 
> kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
>   at kafka.producer.Producer.send(Producer.scala:77)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33)
> {code}
> Consumer error log:
> {code:java}
> 2017-06-12 
> 10:46:52[ConsumerFetcherThread-sms-consumer-group1_zw_78_64-1496632739724-69516149-0-176:602874523]-[INFO]
>  Reconnect due to socket error: java.net.SocketTimeoutException
> 2017-06-12 
> 10:47:22[ConsumerFetcherThread-sms-consumer-group1_zw_78_64-1496632739724-69516149-0-176:602904544]-[WARN]
>  
> [ConsumerFetcherThread-sms-consumer-group1_zw_78_64-1496632739724-69516149-0-176],
>  Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 6060231; 
> ClientId: sms-consumer-group1; ReplicaId: -1; MaxWait: 100 ms; MinBytes: 1 
> bytes; RequestInfo: [sms,0] -> PartitionFetchInfo(81132,1048576),[sms,3] -> 
> PartitionFetch

[jira] [Resolved] (KAFKA-2943) Transient Failure in kafka.producer.SyncProducerTest.testReachableServer

2018-05-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2943.
--
Resolution: Auto Closed

> Transient Failure in kafka.producer.SyncProducerTest.testReachableServer
> 
>
> Key: KAFKA-2943
> URL: https://issues.apache.org/jira/browse/KAFKA-2943
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: transient-unit-test-failure
>
> {code}
> Stacktrace
> java.lang.AssertionError: Unexpected failure sending message to broker. null
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> kafka.producer.SyncProducerTest.testReachableServer(SyncProducerTest.scala:58)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Standard Output
> [2015-12-03 07:10:17,494] ERROR [Replica Manager on Broker 0]: Error 
> processing append operation on partition [minisrtest,0] 

[jira] [Resolved] (KAFKA-6356) UnknownTopicOrPartitionException & NotLeaderForPartitionException and log deletion happening with retention bytes kept at -1.

2018-05-23 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6356.
--
Resolution: Not A Problem

Old data is discarded after the log retention period elapses or when the log 
reaches the retention size limit. In this case, you may need to increase the 
retention period. The replication errors look normal. Please reopen if you 
think the issue still exists.
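The retention interplay can be made concrete with a configuration sketch (values are illustrative, matching the report's 96-hour setting): `retention.bytes=-1` disables size-based deletion only; time-based deletion still applies.

```properties
# Size-based retention disabled, but segments older than retention.ms
# (96 hours = 345600000 ms here) are still eligible for deletion.
retention.bytes=-1
retention.ms=345600000
```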

Post these kinds of queries to the us...@kafka.apache.org mailing list 
(http://kafka.apache.org/contact) for a quicker response.

> UnknownTopicOrPartitionException & NotLeaderForPartitionException and log 
> deletion happening with retention bytes kept at -1.
> -
>
> Key: KAFKA-6356
> URL: https://issues.apache.org/jira/browse/KAFKA-6356
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
> Environment: Cent OS 7.2,
> HDD : 2Tb,
> CPUs: 56 cores,
> RAM : 256GB
>Reporter: kaushik srinivas
>Priority: Major
> Attachments: configs.txt, stderr_b0, stderr_b1, stderr_b2, stdout_b0, 
> stdout_b1, stdout_b2, topic_description, topic_offsets
>
>
> Facing issues in kafka topic with partitions and replication factor of 3.
> Config used :
> No of partitions : 20
> replication factor : 3
> No of brokers : 3
> Memory for broker : 32GB
> Heap for broker : 12GB
> Producer is run to produce data for 20 partitions of a single topic.
> But we observed that, for partitions whose leader is one particular 
> broker (broker-1), the offsets are never incremented, and we also see 0 MB 
> log files on that broker's disk.
> Seeing below error in the brokers :
> error 1:
> 2017-12-13 07:11:11,191] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [test2,5] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> error 2:
> [2017-12-11 12:19:41,599] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [test1,13] to broker 
> 2:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server 
> is not the leader for that topic-partition. 
> (kafka.server.ReplicaFetcherThread)
> Attaching,
> 1. error and std out files of all the brokers.
> 2. kafka config used.
> 3. offsets and topic description.
> Retention bytes was kept to -1 and retention period 96 hours.
> But still observing some of the log files deleting at the broker,
> from logs :
> [2017-12-11 12:20:20,586] INFO Deleting index 
> /var/lib/mesos/slave/slaves/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-S7/frameworks/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-0006/executors/ckafka__5f085d0c-e296-40f0-a686-8953dd14e4c6/runs/506a1ce7-23d1-45ea-bb7c-84e015405285/kafka-broker-data/broker-1/test1-12/.timeindex
>  (kafka.log.TimeIndex)
> [2017-12-11 12:20:20,587] INFO Deleted log for partition [test1,12] in 
> /var/lib/mesos/slave/slaves/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-S7/frameworks/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-0006/executors/ckafka__5f085d0c-e296-40f0-a686-8953dd14e4c6/runs/506a1ce7-23d1-45ea-bb7c-84e015405285/kafka-broker-data/broker-1/test1-12.
>  (kafka.log.LogManager)
> We expect the logs to never be deleted if retention.bytes is set to -1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3793) Kafka Python Consumer library messages gets truncated

2018-05-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3793.
--
Resolution: Cannot Reproduce

Closing inactive issue. Please reopen if you think the issue still exists.

> Kafka Python Consumer library messages gets truncated
> -
>
> Key: KAFKA-3793
> URL: https://issues.apache.org/jira/browse/KAFKA-3793
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rahul
>Priority: Major
>
> Snippet code is below:
> import json
> from kafka import KafkaConsumer
> consumer = KafkaConsumer('eventdetails_ingestion',
> group_id='1', bootstrap_servers=':9092',
> max_partition_fetch_bytes=1055)
> fileErr = open('consumer_errors.log', 'a')  # hypothetical sink for unparseable messages
> for msg in consumer:
>     try:
>         jValue = json.loads(str(msg.value))
>     except ValueError:
>         fileErr.write(str(msg.value) + "\n")
> Steps:
> We send/produce large sets of messages to Kafka, around 20 to 30 KB each in 
> JSON format, at around 200 messages/sec for a 1 hour duration. We have 3 
> Kafka brokers running, and I am trying to consume the messages from these 3 
> brokers from the same topic using the above code. The problem is that 
> sometimes some of the messages get truncated, and I am not sure why that 
> happens.
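The truncation symptom can be reproduced in miniature with a stdlib-only sketch (an illustration of the arithmetic, not the Kafka wire protocol): a 20-30 KB JSON message cannot survive a fetch capped at the report's `max_partition_fetch_bytes=1055`.

```python
import json

MAX_PARTITION_FETCH_BYTES = 1055  # value from the snippet above

# A message in the 20-30 KB range, as described in the report.
message = json.dumps({"event": "details", "payload": "x" * 25000}).encode()

# Simulate a fetch that returns at most the configured number of bytes.
fetched = message[:MAX_PARTITION_FETCH_BYTES]

try:
    json.loads(fetched)
    truncated = False
except ValueError:  # json.JSONDecodeError is a ValueError subclass
    truncated = True

print(truncated)  # True: the truncated payload is no longer valid JSON
```

If this sketch matches the failure mode, raising `max_partition_fetch_bytes` well above the largest expected message would be the first thing to try.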





[jira] [Resolved] (KAFKA-2036) Consumer and broker have different networks

2018-05-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2036.
--
Resolution: Duplicate

Resolving this as a duplicate of KIP-235/KIP-302.

> Consumer and broker have different networks
> ---
>
> Key: KAFKA-2036
> URL: https://issues.apache.org/jira/browse/KAFKA-2036
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.8.2.1
> Environment: oracle java {7,8}, ipv6 only consumer, ipv4 + ipv6 broker
>Reporter: Arsenii Krasikov
>Assignee: Jun Rao
>Priority: Major
> Attachments: patch, patch2
>
>
> If the broker is connected to several networks (for example, IPv6 and IPv4) and 
> not all of them are reachable from the consumer, then 
> {{kafka.network.BlockingChannel}} gives up connecting after the first 
> "Network is unreachable" error, without trying the remaining networks.
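The concept behind the eventual fix (KIP-302) can be sketched with a small, hypothetical helper — `resolve` and `connect_any` are illustrative names, not Kafka APIs — that tries every resolved address instead of giving up after the first:

```python
import socket

def resolve(host, port):
    """Return every (host, port) address the resolver knows for this host."""
    return [info[4][:2] for info in
            socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)]

def connect_any(addresses, connect):
    """Try each address in turn; return the first successful connection."""
    errors = []
    for addr in addresses:
        try:
            return connect(addr)
        except OSError as exc:
            errors.append((addr, exc))
    raise OSError("all addresses unreachable: %r" % errors)

# Demo with a fake connector: pretend the IPv6 address is unreachable,
# as in this report, and fall through to the IPv4 one.
def fake_connect(addr):
    if ":" in addr[0]:
        raise OSError("Network is unreachable")
    return addr

conn = connect_any([("fd00::1", 9092), ("10.0.0.5", 9092)], fake_connect)
print(conn)  # the IPv4 address succeeds after the IPv6 attempt fails
```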





[jira] [Resolved] (KAFKA-4505) Cannot get topic lag since kafka upgrade from 0.8.1.0 to 0.10.1.0

2018-05-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4505.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if you think the issue still exists.

> Cannot get topic lag since kafka upgrade from 0.8.1.0 to 0.10.1.0
> -
>
> Key: KAFKA-4505
> URL: https://issues.apache.org/jira/browse/KAFKA-4505
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, metrics, offset manager
>Affects Versions: 0.10.1.0
>Reporter: Romaric Parmentier
>Priority: Critical
>
> We were using Kafka 0.8.1.1 and just migrated to version 0.10.1.0.
> Since the migration we have been using the new script kafka-consumer-groups.sh 
> to retrieve topic lags, but it doesn't seem to work anymore. 
> Because the application is using the 0.8 driver we have added the following 
> conf to each kafka servers:
> log.message.format.version=0.8.2
> inter.broker.protocol.version=0.10.0.0
> When I'm using the option --list with kafka-consumer-groups.sh I can see 
> every consumer groups I'm using but the --describe is not working:
> /usr/share/kafka$ bin/kafka-consumer-groups.sh --zookeeper ip:2181 --describe 
> --group group_name
> No topic available for consumer group provided
> GROUP  TOPIC  PARTITION  
> CURRENT-OFFSET  LOG-END-OFFSET  LAG OWNER
> When I look into ZooKeeper I can see the offset increasing for this 
> consumer group.
> Any ideas?





[jira] [Resolved] (KAFKA-6862) test toolchain

2018-05-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6862.
--
Resolution: Invalid

Please reopen the Jira with more details.

> test toolchain
> --
>
> Key: KAFKA-6862
> URL: https://issues.apache.org/jira/browse/KAFKA-6862
> Project: Kafka
>  Issue Type: Test
>  Components: build
>Reporter: ravi
>Priority: Major
>
> test toolchain





[jira] [Created] (KAFKA-6945) Add support to allow users to acquire delegation tokens for other users

2018-05-24 Thread Manikumar (JIRA)
Manikumar created KAFKA-6945:


 Summary: Add support to allow users to acquire delegation tokens 
for other users
 Key: KAFKA-6945
 URL: https://issues.apache.org/jira/browse/KAFKA-6945
 Project: Kafka
  Issue Type: Sub-task
Reporter: Manikumar
Assignee: Manikumar


Currently, a user can create a delegation token only for themselves. We should 
allow users to acquire delegation tokens on behalf of other users.





[jira] [Resolved] (KAFKA-6921) Remove old Scala producer and all related code, tests, and tools

2018-05-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6921.
--
Resolution: Fixed

> Remove old Scala producer and all related code, tests, and tools
> 
>
> Key: KAFKA-6921
> URL: https://issues.apache.org/jira/browse/KAFKA-6921
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.0.0
>
>






[jira] [Resolved] (KAFKA-3649) Add capability to query broker process for configuration properties

2018-05-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3649.
--
Resolution: Fixed

The requested features were added in KIP-133 (Describe and Alter Configs Admin 
APIs) and KIP-226 (Dynamic Broker Configuration).

> Add capability to query broker process for configuration properties
> ---
>
> Key: KAFKA-3649
> URL: https://issues.apache.org/jira/browse/KAFKA-3649
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, config, core
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: David Tucker
>Assignee: Liquan Pei
>Priority: Major
>
> Developing an API by which running brokers could be queried for their various 
> configuration settings is an important feature for managing a Kafka cluster.
> Long term, the API could be enhanced to allow updates for those properties 
> that can be changed at run time, but this involves a more thorough 
> evaluation of configuration properties (which ones can be modified in a 
> running broker and which require a restart of individual nodes or the entire 
> cluster).





[jira] [Resolved] (KAFKA-6929) ZkData - Consumers offsets Zookeeper path is not correct

2018-05-26 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6929.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Fixed via https://github.com/apache/kafka/pull/5060

> ZkData - Consumers offsets Zookeeper path is not correct
> 
>
> Key: KAFKA-6929
> URL: https://issues.apache.org/jira/browse/KAFKA-6929
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1.0
>Reporter: maytal
>Priority: Major
>  Labels: patch-available
> Fix For: 2.0.0
>
>
> ZkData.scala contains ConsumerOffset.path, which is wrong: it should contain 
> the word `offsets` instead of `offset`.
> [https://github.com/maytals/kafka-1/blob/1.1/core/src/main/scala/kafka/zk/ZkData.scala#L411]
>  
> Already created patch:
> [https://github.com/apache/kafka/pull/5060]
>  





[jira] [Resolved] (KAFKA-5346) Kafka Producer Failure After Idle Time

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5346.
--
Resolution: Not A Problem

Closing as per the above comment. Please reopen if you think the issue still exists.

> Kafka Producer Failure After Idle Time
> --
>
> Key: KAFKA-5346
> URL: https://issues.apache.org/jira/browse/KAFKA-5346
> Project: Kafka
>  Issue Type: Bug
> Environment: 0.9.0.1 , windows
>Reporter: Manikandan P
>Priority: Major
>  Labels: windows
>
> We are using Kafka (2.11-0.9.0.1) on Windows and the .NET Kafka SDK 
> (kafka-net) to connect to the Kafka server.
> When we produce data to the Kafka server after 15 minutes of idle time on the 
> .NET client, we get the exception below in the Kafka SDK logs.
> TcpClient Socket [http://10.X.X.100:9092//10.X.X.211:50290] Socket.Poll(S): 
> Data was not available, may be connection was closed. 
> TcpClient Socket [http://10.X.X.100:9092//10.X.X.211:50290] has been closed 
> successfully.
> It seems that the Kafka server accepts the socket request but does not 
> respond to it, so we are unable to produce messages to Kafka even though the 
> Kafka server is online.
> We also tried increasing the thread counts and decreasing the idle time in 
> server.properties on the Kafka server, as below, and we still see the above logs.
> num.network.threads=6
> num.io.threads=16
> connections.max.idle.ms =12
> Please help us resolve the above issue, as it is breaking the functional 
> flow and we are going live next week.





[jira] [Resolved] (KAFKA-6769) Upgrade jetty library version

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6769.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Jetty version upgraded to 9.4.10.v20180503 in trunk.

> Upgrade jetty library version
> -
>
> Key: KAFKA-6769
> URL: https://issues.apache.org/jira/browse/KAFKA-6769
> Project: Kafka
>  Issue Type: Task
>  Components: core, security
>Affects Versions: 1.1.0
>Reporter: Di Shang
>Priority: Critical
> Fix For: 2.0.0
>
>
> jetty 9.2 has reached end of life as of Jan 2018
> [http://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Current version used in Kafka 1.1.0: 9.2.24.v20180105
> For security reasons, please upgrade to a later version. 





[jira] [Resolved] (KAFKA-4368) Unclean shutdown breaks Kafka cluster

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4368.
--
Resolution: Auto Closed

Closing inactive issue.

> Unclean shutdown breaks Kafka cluster
> -
>
> Key: KAFKA-4368
> URL: https://issues.apache.org/jira/browse/KAFKA-4368
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Anukool Rattana
>Priority: Critical
>
> My team has observed that if a broker process dies uncleanly, it blocks 
> producers from sending messages to the Kafka topic.
> Here is how to reproduce the problem:
> 1) Create a Kafka 0.10 with three brokers (A, B and C). 
> 2) Create topic with replication_factor = 2 
> 3) Set the producer to send messages with "acks=all", meaning all in-sync 
> replicas must acknowledge each message before proceeding to the next. 
> 4) Force IEM (IBM Endpoint Manager) to send patch to broker A and force 
> server to reboot after patches installed.
> Note: min.insync.replicas = 1
> Result: producers are not able to send messages to the Kafka topic after the 
> broker rebooted and rejoined the cluster, with the following error messages. 
> [2016-09-28 09:32:41,823] WARN Error while fetching metadata with correlation 
> id 0 : {logstash=LEADER_NOT_AVAILABLE} 
> (org.apache.kafka.clients.NetworkClient)
> We suspected that a replication factor of 2 is not sufficient for our Kafka 
> environment, but we really need an explanation of what happens when a broker 
> faces an unclean shutdown. 
> The same issue occurred when setting up a cluster with 2 brokers and 
> replication_factor = 1.
> The workaround I used to recover the service was to clean up both the Kafka 
> topic log files and the ZooKeeper data (rmr /brokers/topics/XXX and rmr 
> /consumers/XXX).
> Note:
> Topic list after broker A came back from the reboot.
> Topic:logstash  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: logstash Partition: 0Leader: 1   Replicas: 1,3   Isr: 
> 1,3
> Topic: logstash Partition: 1Leader: 2   Replicas: 2,1   Isr: 
> 2,1
> Topic: logstash Partition: 2Leader: 3   Replicas: 3,2   Isr: 
> 2,3
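A hedged configuration sketch (values are illustrative, not a prescription for the reporter's cluster): with `acks=all`, a write waits only for the current in-sync replicas, so whether producers survive an unclean broker shutdown hinges on `min.insync.replicas` relative to the replication factor.

```properties
# Illustrative settings for a topic created with replication factor 3:
min.insync.replicas=2                  # acks=all succeeds while any 2 replicas are in sync
unclean.leader.election.enable=false   # do not elect an out-of-sync replica as leader
```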





[jira] [Resolved] (KAFKA-6427) Inconsistent exception type from KafkaConsumer.position

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6427.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Fixed in https://github.com/apache/kafka/pull/5005

> Inconsistent exception type from KafkaConsumer.position
> ---
>
> Key: KAFKA-6427
> URL: https://issues.apache.org/jira/browse/KAFKA-6427
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jay Kahrman
>Priority: Trivial
> Fix For: 2.0.0
>
>
> If KafkaConsumer.position is called with a partition that isn't assigned to 
> the consumer, it throws an IllegalArgumentException. All other APIs throw an 
> IllegalStateException when the consumer tries to act on a partition that is 
> not assigned to it. 
> Looking at the implementation, if it weren't for the subscription test and 
> the IllegalArgumentException thrown at the beginning of KafkaConsumer.position, 
> the very next line would throw an IllegalStateException anyway.
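A toy model of the inconsistency (hypothetical class, not the Kafka client; `ValueError` stands in for `IllegalArgumentException` and `RuntimeError` for `IllegalStateException`):

```python
class ToyConsumer:
    """Mimics the described pre-fix behavior of KafkaConsumer."""

    def __init__(self, assigned):
        self.assigned = set(assigned)

    def position(self, tp):
        # Odd one out: argument-style exception for an unassigned partition.
        if tp not in self.assigned:
            raise ValueError("partition %s is not assigned" % (tp,))
        return 0

    def committed(self, tp):
        # Every other API: state-style exception for the same condition.
        if tp not in self.assigned:
            raise RuntimeError("partition %s is not assigned" % (tp,))
        return 0

consumer = ToyConsumer({("topic", 0)})
caught = []
for call in (consumer.position, consumer.committed):
    try:
        call(("topic", 1))
    except (ValueError, RuntimeError) as exc:
        caught.append(type(exc).__name__)

print(caught)  # two exception types for the same mistake
```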





[jira] [Resolved] (KAFKA-3293) Consumers are not able to get messages.

2018-05-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3293.
--
Resolution: Cannot Reproduce

Closing inactive issue. Please reopen if the issue still exists in newer 
versions.

> Consumers are not able to get messages.
> ---
>
> Key: KAFKA-3293
> URL: https://issues.apache.org/jira/browse/KAFKA-3293
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, offset manager
>Affects Versions: 0.9.0.1
> Environment: kafka: kafka_2.11-0.9.0.1
> java: jdk1.8.0_65
> OS: Linux stephen-T450s 3.19.0-51-generic #57~14.04.1-Ubuntu SMP Fri Feb 19 
> 14:36:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Stephen Wong
>Assignee: Neha Narkhede
>Priority: Major
>
> Overview
> ===
> The results of the test are not consistent.
> Something is preventing the consumer from receiving the messages.
> Configuration
> ==
> Server (only num.partitions is changed)
> diff config/server.properties config.backup/server.properties
> 65c65
> < num.partitions=8
> ---
> > num.partitions=1
> Producer
> properties.put("bootstrap.servers", "localhost:9092");
> properties.put("acks", "all");
> properties.put("key.serializer", 
> "org.apache.kafka.common.serialization.LongSerializer");
> properties.put("value.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
> properties.put("partitioner.class", 
> "kafkatest.sample2.SimplePartitioner");
> Consumer
> properties.put("bootstrap.servers", "localhost:9092");
> properties.put("group.id", "testGroup");
> properties.put("key.deserializer", 
> "org.apache.kafka.common.serialization.LongDeserializer");
> properties.put("value.deserializer", 
> "org.apache.kafka.common.serialization.StringDeserializer");
> properties.put("enable.auto.commit", "false");
> Steps to reproduce:
> ===
> 1. started the zookeeper
> 2. started the kafka server
> 3. created topic
> $ bin/kafka-topics.sh --zookeeper localhost:2181 --create 
> --replication-factor 1 --partition 8 --topic testTopic4
> 4. Ran SimpleProducerDriver with 5 producers, and the amount of messages 
> produced is 50
> 5. Offset Status
> $ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 
> localhost:9092 --topic testTopic4 --time -1
> testTopic4:2:1
> testTopic4:5:27
> testTopic4:4:1
> testTopic4:7:2
> testTopic4:1:8
> testTopic4:3:0
> testTopic4:6:11
> testTopic4:0:0
> 6. waited until the producer driver completed; it takes no more than a few
> seconds
> 7. ran the SimpleConsumerDriver a couple of times, and no message is 
> received. Following DEBUG information is found:
> 2016-02-25 22:42:19 DEBUG [pool-1-thread-2] Fetcher: - Ignoring fetched 
> records for partition testTopic4-3 since it is no longer fetchable
> 8. altered the properties of consumer, had the auto commit disabled:
> //properties.put("enable.auto.commit", "false");
> 9. ran the SimpleConsumerDriver a couple of times, still, no message is 
> received.
> Following DEBUG information is found:
> 2016-02-25 22:47:23 DEBUG [pool-1-thread-2] ConsumerCoordinator: - Committed 
> offset 8 for partition testTopic4-1
> seems like the offset was updated?
> 10. re-enabled the auto commit, nothing changed.
> Following DEBUG information is found:
> 2016-02-25 22:49:38 DEBUG [pool-1-thread-7] Fetcher: - Resetting offset for 
> partition testTopic4-6 to the committed offset 11
> 11. ran the SimpleProducerDriver again, another 50 messages are published
> 12. ran the SimpleConsumerDriver again, 100 messages were consumed.
> 13. ran the SimpleConsumerDriver again, 50 messages were consumed.
> As auto commit is disabled, all 100 messages should have been consumed.
> The results of the test are not consistent; something is preventing the
> consumer from receiving the messages.
> Sometimes it was necessary to run the producer while the consumers were still
> active in order to get around the problem, and once the consumers started to
> consume messages, the problem did not occur any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2556) Corrupted index on log.dir updates

2018-05-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2556.
--
Resolution: Not A Problem

Closing inactive issue. Kafka log.dir contains topic data and metadata
information; we cannot arbitrarily replace the log.dir contents. Please reopen
if you think the issue still exists.

> Corrupted index on log.dir updates
> --
>
> Key: KAFKA-2556
> URL: https://issues.apache.org/jira/browse/KAFKA-2556
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Denis
>Priority: Major
>
> A partition is corrupted when the user updates the server configuration.
> A topic is corrupted if two or more `log.dir` directories contain segments
> for the same topic after configuration changes.
> Steps to reproduce:
> * Start Kafka service with several `log.dir` directories
> * Stop Kafka service
> * Update configuration. Remove one of `log.dir` directories
> * Start Kafka service. Kafka creates a directory for the partition under
> another log.dir
> * Stop Kafka service
> * Update configuration. Restore the folder we removed on the third step
> * Start Kafka process
> Result:
> {code}
> [2015-09-18 10:28:48,764] INFO Verifying properties 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,801] INFO Property auto.create.topics.enable is 
> overridden to false (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,801] INFO Property auto.leader.rebalance.enable is 
> overridden to false (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,801] INFO Property broker.id is overridden to 6 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,802] INFO Property default.replication.factor is 
> overridden to 1 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,802] INFO Property host.name is overridden to 
> inferno07.chi.net (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.group is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.host is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.port is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.regex is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.polling.interval.secs 
> is not valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] WARN Property kafka.metrics.reporters is not valid 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] WARN Property log.cleanup.interval.mins is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] INFO Property log.dirs is overridden to 
> /mnt/21e57274-0ba1-41e0-9ea4-befef32ec62c/data/kafka,/mnt/f81fc52a-7784-47c3-8b6d-be95cc935697/data/kafka,/mnt/75928c58-289d-4954-8640-0b82a990b3bc/data/kafka,/mnt/6bcb0c8e-da33-43e6-b878-703600f622d1/data/kafka,/mnt/afd7361c-dc16-4b78-8608-adda119559d0/data/kafka,/mnt/96a93811-6d91-41aa-ab1a-3804f43a5b05/data/kafka,/mnt/c085ee04-65ff-493d-b236-fa3dd462595c/data/kafka,/mnt/fbec6169-ca70-4f07-a7e9-af85424a97c5/data/kafka,/mnt/363d4473-531f-446c-b483-49593587f965/data/kafka,/mnt/d47fd346-1b77-41a8-a0a8-1c517c394868/data/kafka,/mnt/2c97da3c-4d5d-49bf-ae5a-a46c2839b18a/data/kafka,/mnt/fc6fd370-a213-4b02-9a7c-07f7a5ad505f/data/kafka
>  (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] INFO Property log.retention.hours is overridden to 
> 168 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] INFO Property log.segment.bytes is overridden to 
> 536870912 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property message.max.bytes is overridden to 
> 41943040 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property num.io.threads is overridden to 45 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property num.network.threads is overridden to 
> 150 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property num.partitions is overridden to 1 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property num.replica.fetchers is overridden to 
> 10 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property port is overridden to 9093 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property replica.fetch.max.bytes is overridden 
> to 41943040 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property socket.receive.buffer.bytes is 
> overridden to 1048576 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property socket.request.max.bytes is 
> overridden to 104857600 (kafka.u

[jira] [Resolved] (KAFKA-3743) kafka-server-start.sh: Unhelpful error message

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3743.
--
Resolution: Duplicate

> kafka-server-start.sh: Unhelpful error message
> --
>
> Key: KAFKA-3743
> URL: https://issues.apache.org/jira/browse/KAFKA-3743
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.0.0
>Reporter: Magnus Edenhill
>Priority: Minor
>
> When trying to start Kafka from an uncompiled source tarball rather than the
> binary, the kafka-server-start.sh command gives a cryptic error message:
> {code}
> $ bin/kafka-server-start.sh config/server.properties
> Error: Could not find or load main class config.server.properties
> {code}
> This could probably be improved to say something closer to the truth.
> This is on the 0.10.0.0-rc6 tarball from GitHub.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6581) ConsumerGroupCommand hangs if even one of the partitions is unavailable

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6581.
--
   Resolution: Fixed
Fix Version/s: (was: 0.10.0.2)
   2.0.0

This is addressed by KIP-266.

> ConsumerGroupCommand hangs if even one of the partitions is unavailable
> --
>
> Key: KAFKA-6581
> URL: https://issues.apache.org/jira/browse/KAFKA-6581
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core, tools
>Affects Versions: 0.10.0.0
>Reporter: Sahil Aggarwal
>Priority: Minor
> Fix For: 2.0.0
>
>
> ConsumerGroupCommand.scala internally uses a consumer to get the position for
> each partition, but if a partition is unavailable the call
> consumer.position(topicPartition) will block indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3177.
--
Resolution: Fixed

This is addressed by KIP-266.

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 2.0.0
>
>
> This can be easily reproduced as follows:
> {code}
> consumer.assign(Collections.singletonList(someNonExistingTopicPartition));
> consumer.position(someNonExistingTopicPartition);
> {code}
> It seems that when position() is called we try to do the following:
> 1. Fetch the committed offsets.
> 2. If there are no committed offsets, reset the offset using the reset
> strategy. In sendListOffsetRequest(), if the consumer does not know the
> TopicPartition, it will refresh its metadata and retry. In this case, because
> the partition does not exist, we fall into an infinite loop of refreshing
> topic metadata.
> An orthogonal issue is that if the topic in the snippet above does not exist,
> the position() call will actually create it, because a topic metadata request
> can currently auto-create topics. This is a known separate issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3503) Throw exception on missing/non-existent partition

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3503.
--
Resolution: Duplicate

> Throw exception on missing/non-existent  partition 
> ---
>
> Key: KAFKA-3503
> URL: https://issues.apache.org/jira/browse/KAFKA-3503
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
> Environment: Java 1.8.0_60. 
> Linux  centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC
>Reporter: Navin Markandeya
>Priority: Minor
>
> I would expect some exception to be thrown when a consumer tries to access a
> non-existent partition. I did not see anyone reporting it. If it is already
> known, please link and close this.
> {code}
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
> {code}
> Linux centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {code}
> {{Kafka release - kafka_2.11-0.9.0.1}}
> Created a topic with 3 partitions
> {code}
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic
> Topic:mytopic PartitionCount:3ReplicationFactor:1 Configs:
>   Topic: mytopic  Partition: 0Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 1Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 2Leader: 0   Replicas: 0 Isr: 0
> {code}
> The consumer application does not terminate. Throwing an exception indicating
> that there is no {{mytopic-3}} partition would help it terminate gracefully.
> {code}
> 14:08:02.885 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - Fetching 
> committed offsets for partitions: [mytopic-3, mytopic-0, mytopic-1, mytopic-2]
> 14:08:02.887 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-sent
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-received
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.latency
> 14:08:02.888 [main] DEBUG o.apache.kafka.clients.NetworkClient - Completed 
> connection to node 2147483647
> 14:08:02.891 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - No committed 
> offset for partition mytopic-3
> 14:08:02.891 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting 
> offset for partition mytopic-3 to latest offset.
> 14:08:02.892 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:02.965 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=4,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804082965, sendTimeMs=0) to node 0
> 14:08:02.968 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 3 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:02.968 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:03.071 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=5,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804083071, sendTimeMs=0) to node 0
> 14:08:03.073 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 4 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:03.073 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3899) Consumer.poll() stuck in loop if wrong credentials are supplied

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3899.
--
   Resolution: Fixed
 Assignee: (was: Edoardo Comar)
Fix Version/s: 2.0.0

This is addressed by KIP-266. 

> Consumer.poll() stuck in loop if wrong credentials are supplied
> ---
>
> Key: KAFKA-3899
> URL: https://issues.apache.org/jira/browse/KAFKA-3899
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.1.0
>Reporter: Edoardo Comar
>Priority: Major
> Fix For: 2.0.0
>
>
> With the broker configured to use SASL PLAIN, if the client supplies wrong
> credentials, a consumer calling poll() is stuck forever, and only inspection
> of DEBUG-level logging can tell what is wrong.
> [2016-06-24 12:15:16,455] DEBUG Connection with localhost/127.0.0.1 
> disconnected (org.apache.kafka.common.network.Selector)
> java.io.EOFException
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:239)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:182)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:64)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:318)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:183)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:973)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3727) Consumer.poll() stuck in loop on non-existent topic manually assigned

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3727.
--
Resolution: Fixed

This is addressed by KIP-266.

> Consumer.poll() stuck in loop on non-existent topic manually assigned
> -
>
> Key: KAFKA-3727
> URL: https://issues.apache.org/jira/browse/KAFKA-3727
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>Priority: Critical
>
> The behavior of a consumer calling poll() on a non-existent topic is
> surprisingly inconsistent between a consumer that subscribed to the topic and
> one that had the topic-partition manually assigned.
> The "subscribed" consumer will return an empty collection.
> The "assigned" consumer will *loop forever* - this feels like a bug to me.
> Sample snippet to reproduce:
> {code}
> KafkaConsumer<String, String> assignKc = new KafkaConsumer<>(props1);
> KafkaConsumer<String, String> subsKc = new KafkaConsumer<>(props2);
> List<TopicPartition> tps = new ArrayList<>();
> tps.add(new TopicPartition("topic-not-exists", 0));
> assignKc.assign(tps);
> subsKc.subscribe(Arrays.asList("topic-not-exists"));
> System.out.println("* subscribe k consumer ");
> ConsumerRecords<String, String> crs2 = subsKc.poll(1000L);
> print("subscribeKc", crs2); // returns empty
> System.out.println("* assign k consumer ");
> ConsumerRecords<String, String> crs1 = assignKc.poll(1000L); // will loop forever!
> print("assignKc", crs1);
> {code}
> the logs for the "assigned" consumer show:
> [2016-05-18 17:33:09,907] DEBUG Updated cluster metadata version 8 to 
> Cluster(nodes = [192.168.10.18:9093 (id: 0 rack: null)], partitions = []) 
> (org.apache.kafka.clients.Metadata)
> [2016-05-18 17:33:09,908] DEBUG Partition topic-not-exists-0 is unknown for 
> fetching offset, wait for metadata refresh 
> (org.apache.kafka.clients.consumer.internals.Fetcher)
> [2016-05-18 17:33:10,010] DEBUG Sending metadata request 
> {topics=[topic-not-exists]} to node 0 (org.apache.kafka.clients.NetworkClient)
> [2016-05-18 17:33:10,011] WARN Error while fetching metadata with correlation 
> id 9 : {topic-not-exists=UNKNOWN_TOPIC_OR_PARTITION} 
> (org.apache.kafka.clients.NetworkClient)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3011) Consumer.poll(0) blocks if Kafka not accessible

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3011.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Consumer.poll(0) blocks if Kafka not accessible
> ---
>
> Key: KAFKA-3011
> URL: https://issues.apache.org/jira/browse/KAFKA-3011
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: all
>Reporter: Eric Bowman
>Priority: Major
>
> Because of this loop in ConsumerNetworkClient:
> {code:java}
> public void awaitMetadataUpdate() {
> int version = this.metadata.requestUpdate();
> do {
> poll(Long.MAX_VALUE);
> } while (this.metadata.version() == version);
> }
> {code}
> ...if Kafka is not reachable (perhaps not running, or due to other network
> issues), then KafkaConsumer.poll(0) will block until it becomes available.
> I suspect that better behavior would be to throw an exception.
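The bounded behavior that KIP-266 eventually introduced can be sketched as a deadline-aware variant of the loop quoted above. This is a plain-JDK illustration with hypothetical names (`BoundedAwait`, `awaitNewVersion`, a simulated metadata version supplier), not the actual Kafka client internals:

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.function.IntSupplier;

public class BoundedAwait {

    // Wait for the observed metadata version to move past startVersion,
    // but give up once the deadline passes instead of spinning forever.
    public static int awaitNewVersion(IntSupplier versionSource,
                                      int startVersion,
                                      Duration timeout)
            throws TimeoutException, InterruptedException {
        long deadlineNanos = System.nanoTime() + timeout.toNanos();
        while (true) {
            int current = versionSource.getAsInt();
            if (current != startVersion) {
                return current;              // metadata was refreshed
            }
            if (System.nanoTime() >= deadlineNanos) {
                throw new TimeoutException("metadata not refreshed within " + timeout);
            }
            Thread.sleep(5);                 // stand-in for one short network poll
        }
    }
}
```

In the real client the same idea surfaces as the timeout-accepting overloads such as `poll(Duration)`, which give up after the supplied duration instead of blocking indefinitely when the broker is unreachable.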



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3822) Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3822.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while 
> connected
> --
>
> Key: KAFKA-3822
> URL: https://issues.apache.org/jira/browse/KAFKA-3822
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.0
> Environment: x86 Red Hat 6 (1 broker running zookeeper locally, 
> client running on a separate server)
>Reporter: Alexander Cook
>Assignee: Ashish Singh
>Priority: Major
>
> I am using the KafkaConsumer java client to consume messages. My application 
> shuts down smoothly if I am connected to a Kafka broker, or if I never 
> succeed at connecting to a Kafka broker, but if the broker is shut down while 
> my consumer is connected to it, consumer.close() hangs indefinitely. 
> Here is how I reproduce it: 
> 1. Start 0.9.0.1 Kafka Broker
> 2. Start consumer application and consume messages
> 3. Stop 0.9.0.1 Kafka Broker (ctrl-c or stop script)
> 4. Try to stop application...hangs at consumer.close() indefinitely. 
> I also see this same behavior using 0.10 broker and client. 
> This is my first bug reported to Kafka, so please let me know if I should be 
> following a different format. Thanks! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3457) KafkaConsumer.committed(...) hangs forever if port number is wrong

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3457.
--
Resolution: Fixed

This is addressed by KIP-266. 

> KafkaConsumer.committed(...) hangs forever if port number is wrong
> --
>
> Key: KAFKA-3457
> URL: https://issues.apache.org/jira/browse/KAFKA-3457
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Harald Kirsch
>Assignee: Liquan Pei
>Priority: Major
>
> Create a KafkaConsumer with default settings but with a wrong host:port 
> setting for bootstrap.servers. Have it in some consumer group, do not 
> subscribe or assign partitions.
> Then call .committed(...) for a topic/partition combination a few times. It
> will hang forever on the second or third call. In the debug log you will see
> that it retries connections over and over again. I waited many minutes and it
> never came back to throw an exception.
> The connection problems should at least surface at the WARNING log level,
> and should likely throw an exception eventually.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4571) Consumer fails to retrieve messages if started before producer

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4571.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Consumer fails to retrieve messages if started before producer
> --
>
> Key: KAFKA-4571
> URL: https://issues.apache.org/jira/browse/KAFKA-4571
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.1
> Environment: Ubuntu Desktop 16.04 LTS, Oracle Java 8 1.8.0_101, Core 
> i7 4770K
>Reporter: Sergiu Hlihor
>Priority: Major
>
> In a configuration where the topic was never created before, starting the
> consumer before the producer leads to no messages being consumed
> (KafkaConsumer.poll() always returns a ConsumerRecords instance with count 0).
> Starting another consumer in the same group on the same topic after messages
> were produced still does not consume them. Starting another consumer with a
> different groupId appears to work.
> In the consumer logs I see: WARN NetworkClient - Error while fetching
> metadata with correlation id 1 : {measurements021=LEADER_NOT_AVAILABLE}
> Both producer and consumer were launched from inside the same JVM.
> The configuration used is the standard one found in the Kafka distribution.
> If this is a configuration issue, please suggest any change that I should
> make.
> Thank you



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4751) kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4751.
--
Resolution: Fixed

This is addressed by KIP-266. 

> kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.
> 
>
> Key: KAFKA-4751
> URL: https://issues.apache.org/jira/browse/KAFKA-4751
> Project: Kafka
>  Issue Type: Bug
> Environment: kafka-clients-0.9.0.2.4.2.11-1 java based client
>Reporter: Avinash Kumar Gaur
>Priority: Major
>
> When running a consumer with kafka-clients-0.9.0.2.4.2.11-1.jar and
> connecting directly to the broker, the Kafka consumer does not throw any
> exception if the broker is down.
> 1) Create a client with kafka-clients-0.9.0.2.4.2.11-1.jar.
> 2) Do not start the Kafka broker.
> 3) Start the Kafka consumer with the required properties.
> Observation - The consumer does not throw any exception even though the
> broker is down.
> Expected - It should throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5304) Kafka Producer throwing infinite NullPointerExceptions

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5304.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Kafka Producer throwing infinite NullPointerExceptions
> --
>
> Key: KAFKA-5304
> URL: https://issues.apache.org/jira/browse/KAFKA-5304
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1
> Environment: RedHat Enterprise Linux 6.8
>Reporter: Pranay Kumar Chaudhary
>Priority: Major
>
> 2017-05-22 11:38:56,918 LL="ERROR" TR="kafka-producer-network-thread | 
> application-name.hostname.com" LN="o.a.k.c.p.i.Sender"  Uncaught error in 
> kafka producer I/O thread:
> java.lang.NullPointerException: null
> We continuously get this error in the logs, which is filling up the disk.
> We are not able to get a stack trace to pinpoint the source of the error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5424) KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in cluster

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5424.
--
Resolution: Fixed

This has been fixed via KAFKA-3396

> KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in 
> cluster
> -
>
> Key: KAFKA-5424
> URL: https://issues.apache.org/jira/browse/KAFKA-5424
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Mike Fagan
>Assignee: Mickael Maison
>Priority: Major
>
> KafkaConsumer.listTopics() internally calls
> Fetcher.getAllTopicMetadata(timeout), and this method throws a
> TopicAuthorizationException when an unauthorized topic exists in the
> cluster.
> This behavior runs counter to the API docs and makes listTopics() unusable
> unless the consumer is authorized for every single topic in the cluster.
> A potentially better approach is to have Fetcher implement a new method,
> getAuthorizedTopicMetadata(timeout), and have KafkaConsumer call it instead
> of getAllTopicMetadata(timeout) from within KafkaConsumer.listTopics().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5940) kafka-delete-records.sh doesn't give any feedback when the JSON offset configuration file is invalid

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5940.
--
Resolution: Fixed

Fixed in KAFKA-5919

> kafka-delete-records.sh doesn't give any feedback when the JSON offset 
> configuration file is invalid
> 
>
> Key: KAFKA-5940
> URL: https://issues.apache.org/jira/browse/KAFKA-5940
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jakub Scholz
>Assignee: Jakub Scholz
>Priority: Major
>
> When deleting records using {{bin/kafka-delete-records.sh}}, the user has to
> pass a JSON file with the list of topics/partitions and the offset up to
> which the records should be deleted. However, when such a file is invalid,
> the utility currently doesn't print any visible error:
> {code}
> $ bin/kafka-delete-records.sh --bootstrap-server localhost:9092 
> --offset-json-file offset.json
> Executing records delete operation
> Records delete operation completed:
> $
> {code}
> Instead, I would suggest that it throw an exception to make it clear that
> the problem is the invalid JSON file:
> {code}
> $ bin/kafka-delete-records.sh --bootstrap-server localhost:9092 
> --offset-json-file offset.json
> Exception in thread "main" kafka.common.AdminCommandFailedException: Offset 
> json file doesn't contain valid JSON data.
>   at 
> kafka.admin.DeleteRecordsCommand$.parseOffsetJsonStringWithoutDedup(DeleteRecordsCommand.scala:54)
>   at 
> kafka.admin.DeleteRecordsCommand$.execute(DeleteRecordsCommand.scala:62)
>   at kafka.admin.DeleteRecordsCommand$.main(DeleteRecordsCommand.scala:37)
>   at kafka.admin.DeleteRecordsCommand.main(DeleteRecordsCommand.scala)
> $
> {code}
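The suggested fail-fast behavior can be sketched as follows. This is a plain-JDK illustration with a deliberately simplified stand-in for JSON parsing (the class and method names `OffsetFileCheck`, `parse`, and `validateOrThrow` are hypothetical; the real tool parses with Kafka's own JSON utilities):

```java
import java.util.Optional;

public class OffsetFileCheck {

    // Grossly simplified stand-in for real JSON parsing: accept only a
    // non-empty string that at least looks like a JSON object.
    static Optional<String> parse(String text) {
        String t = (text == null) ? "" : text.trim();
        return (t.startsWith("{") && t.endsWith("}")) ? Optional.of(t) : Optional.empty();
    }

    // Throw a descriptive error instead of silently "completing" the
    // delete operation with no effect.
    public static String validateOrThrow(String text) {
        return parse(text).orElseThrow(() -> new IllegalArgumentException(
                "Offset json file doesn't contain valid JSON data."));
    }
}
```

Wired into the command's entry point, the exception would surface on stderr much like the output the report suggests.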



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6972) Kafka ACL does not work expected with wildcard

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6972.
--
Resolution: Information Provided

> Kafka ACL does not work expected with wildcard
> --
>
> Key: KAFKA-6972
> URL: https://issues.apache.org/jira/browse/KAFKA-6972
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
> Environment: OS : CentOS 7, 64bit.
> Confluent : 3.3, Kafka 0.11.
>Reporter: Soyee Deng
>Assignee: Sönke Liebau
>Priority: Major
>
> I just started with the Confluent 3.3 platform and Kafka 0.11, using SSL as 
> transport security and Kerberos to restrict access control based on the 
> principals held. To make life easier, wildcards are used extensively in my 
> environment, but it turned out they do not work as expected. 
> My issue is that when I run the _kafka-acls_ command in a directory containing 
> some files, the command picks up the name of the first file as the topic name 
> or group name. E.g. in my case, abcd.txt was chosen while I was giving my 
> principal connect-consumer permission to consume messages from any topic with 
> any group id.
> [quality@data-pipeline-1 test_dir]$ 
> KAFKA_OPTS=-Djava.security.auth.login.config='/etc/security/jaas/broker-jaas.conf'
>  kafka-acls --authorizer-properties 
> zookeeper.connect=data-pipeline-1.orion.com:2181 --add --allow-principal 
> User:connect-consumer --consumer --topic * --group *
>  Adding ACLs for resource `Topic:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Describe from 
> hosts: *
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
> Adding ACLs for resource `Group:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
> Current ACLs for resource `Topic:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Describe from 
> hosts: *
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
>  User:connect-consumer has Allow permission for operations: Write from hosts: 
> *
> Current ACLs for resource `Group:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
>  
> My current workaround is to change to an empty directory and run the above 
> command there; then it works as expected. 
>  
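The behaviour described above is consistent with ordinary shell globbing: an unquoted `*` is expanded by the shell into matching filenames before kafka-acls ever sees its arguments, which is why the first file in the directory (abcd.txt) shows up as the resource name. A quick sketch of the effect, and of quoting as the fix (the scratch directory and filename are illustrative):

```shell
# Work in a scratch directory containing one file, mimicking the report.
cd "$(mktemp -d)"
touch abcd.txt
echo unquoted: *      # the shell rewrites * into abcd.txt before the tool runs
echo quoted: '*'      # quoting passes the literal * through to the tool
```

Quoting the wildcards (`--topic '*' --group '*'`) keeps the shell from expanding them, so kafka-acls receives the literal `*` resource name even in a non-empty directory.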



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5523) ReplayLogProducer not using the new Kafka consumer

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5523.
--
   Resolution: Fixed
 Assignee: Manikumar
Fix Version/s: 2.0.0

> ReplayLogProducer not using the new Kafka consumer
> --
>
> Key: KAFKA-5523
> URL: https://issues.apache.org/jira/browse/KAFKA-5523
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Manikumar
>Priority: Minor
> Fix For: 2.0.0
>
>
> Hi,
> the ReplayLogProducer is using the latest Kafka producer but not the latest 
> Kafka consumer. Is this tool deprecated today? I see that something like 
> that could be done using the MirrorMaker. [~ijuma] Does it make sense to 
> update the ReplayLogProducer to the latest Kafka consumer?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6982.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> java.lang.ArithmeticException: / by zero
> 
>
> Key: KAFKA-6982
> URL: https://issues.apache.org/jira/browse/KAFKA-6982
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 1.1.0
> Environment: Windows 10. 
>Reporter: wade wu
>Priority: Major
> Fix For: 2.0.0
>
>
> Producer keeps sending messages to Kafka, Kafka is down. 
> Server.log shows: 
> ..
> [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
> ..
>  
> This line of code in SocketServer.scala is causing the error: 
>   currentProcessor = currentProcessor % processors.size
>  
>  
>  
>  
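The quoted expression fails whenever `processors.size` is zero, i.e. when every network processor has already been shut down: integer modulo by zero throws. A guard sketch in shell arithmetic (the variable names are stand-ins; the real fix lives in SocketServer.scala's Scala code):

```shell
# 'size' stands in for processors.size; 0 reproduces the crash condition.
next_processor() {
  if [ "$size" -gt 0 ]; then
    echo $(( counter % size ))       # safe: the pool is non-empty
  else
    echo "no live processors"        # guard instead of '/ by zero'
  fi
}
counter=7
size=0
next_processor    # prints: no live processors
size=3
next_processor    # prints: 1  (7 % 3)
```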



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6984) Question

2018-06-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6984.
--
Resolution: Information Provided

Please post these kinds of queries to the 
[us...@kafka.apache.org|mailto:us...@kafka.apache.org] mailing list 
([http://kafka.apache.org/contact]) for a quicker response.

> Question
> 
>
> Key: KAFKA-6984
> URL: https://issues.apache.org/jira/browse/KAFKA-6984
> Project: Kafka
>  Issue Type: Bug
>Reporter: Remil
>Priority: Major
>
> hadoopuser@sherin-VirtualBox:~$ sudo su -p - zookeeper -c 
> "/usr/local/zookeeper/zookeeper-3.4.12/bin/zkServer.sh start" ZooKeeper JMX 
> enabled by default
> ZooKeeper JMX enabled by default
> Using config: /usr/local/zookeeper/zookeeper-3.4.12/bin/../conf/zoo.cfg
> Starting zookeeper ... STARTED
> hadoopuser@sherin-VirtualBox:~$ sudo 
> /opt/kafka/kafka-1.1.0-src/bin/kafka-server-start.sh 
> /opt/kafka/kafka-1.1.0-src/config/server.properties
> Classpath is empty. Please build the project first e.g. by running './gradlew 
> jar -PscalaVersion=2.11.12'
> hadoopuser@sherin-VirtualBox:~$ ./gradlew jar -PscalaVersion=2.11.12
> bash: ./gradlew: No such file or directory
> hadoopuser@sherin-VirtualBox:~$ sudo updatedb
> hadoopuser@sherin-VirtualBox:~$ locate gradlew
> hadoopuser@sherin-VirtualBox:~$ ./gradlew jar -PscalaVersion=2.11.12
> bash: ./gradlew: No such file or directory
> hadoopuser@sherin-VirtualBox:~$



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2445) Failed test: kafka.producer.ProducerTest > testSendWithDeadBroker FAILED

2018-06-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2445.
--
Resolution: Auto Closed

> Failed test: kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
> 
>
> Key: KAFKA-2445
> URL: https://issues.apache.org/jira/browse/KAFKA-2445
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Priority: Major
>
> This test failed on Jenkins build: 
> https://builds.apache.org/job/Kafka-trunk/590/console
> kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
> java.lang.AssertionError: Message set should have 1 message
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:44)
> at 
> kafka.producer.ProducerTest.testSendWithDeadBroker(ProducerTest.scala:260)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6351) libs directory has duplicate javassist jars

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6351.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Closing this, as the javassist jar resolves to a single version in the latest code.

> libs directory has duplicate javassist jars
> ---
>
> Key: KAFKA-6351
> URL: https://issues.apache.org/jira/browse/KAFKA-6351
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0
>Reporter: pre sto
>Priority: Minor
> Fix For: 2.0.0
>
>
> Downloaded kafka_2.11-1.0.0 and noticed duplicate jars under libs
> javassist-3.20.0-GA.jar
> javassist-3.21.0-GA.jar
> I assume that's a mistake



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2572) zk connection instability

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2572.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists in newer 
versions.

> zk connection instability
> -
>
> Key: KAFKA-2572
> URL: https://issues.apache.org/jira/browse/KAFKA-2572
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.8.2.1
> Environment: zk version 3.4.6,
> CentOS 6, 2.6.32-504.1.3.el6.x86_64
>Reporter: John Firth
>Priority: Major
> Attachments: 090815-digest.log, 090815-full.log, 091115-digest.log, 
> 091115-full.log.zip
>
>
> On several occasions, we've seen our process enter a cycle of: zk session 
> expiry; new session creation; rebalancing activity; pause during which 
> nothing is heard from the zk server. Sometimes, the reconnections are 
> successful, elements are pulled from Kafka, but then disconnection and 
> reconnection occurs shortly thereafter, causing OOMs when new elements are 
> pulled in (although OOMs were not seen in the two cases attached as 
> examples). Restarting the process that uses the zk client resolved the 
> problems in both cases.
> This behavior was seen on 09/08 and 09/11 -- the attached 'full' logs show 
> all logs entries minus entries particular to our application. For 09/08, the 
> time span is 2015-09-08T12:52:06.069-04:00 to 2015-09-08T13:14:48.250-04:00; 
> for 11/08, the time span is between 2015-09-11T01:38:17.000-04:00 to 
> 2015-09-11T07:44:47.124-04:00. The digest logs are the result of retaining 
> only error and warning entries, and entries containing any of: "begin 
> rebalancing", "end rebalancing", "timed", and "zookeeper state". For the 
> 09/11 digest logs, entries from the kafka.network.Processor logger are also 
> excised for clarity. Unfortunately, debug logging was not enabled during 
> these events.
> The 09/11 case shows repeated cycles of session expiry, followed by 
> rebalancing activity, followed by a pause during which nothing is heard from 
> the zk server, followed by a session timeout. A stable session seems to have 
> been established at 2015-09-11T04:13:47.140-04:00, but messages of the form 
> "I wrote this conflicted ephemeral node 
> [{"version":1,"subscription":{"binlogs_mailchimp_us2":100},"pattern":"static","timestamp":"1441959227564"}]
>  at 
> /consumers/prologue-second-stage_prod_us2/ids/prologue-second-stage_prod_us2_app01.c1.prologue.prod.atl01.rsglab.com-1441812334972-b967b718
>  a while back in a different session, hence I will backoff for this node to 
> be deleted by Zookeeper and retry" were logged out repeatedly until we 
> restarted the process after 2015-09-11T07:44:47.124-04:00, which marks the 
> final entry in the log.
> The 09/08 case is a little more straightforward than the 09/11 case, in that 
> a stable session was not established prior to our restarting the process.
> It's perhaps also noteworthy that in the 09/08 case, two timeouts for the 
> same session are seen during a single rebalance, at 
> 2015-09-08T12:52:19.107-04:00 and 2015-09-08T12:52:31.639-04:00. The 
> rebalance in question begins at 2015-09-08T12:52:06.667-04:00.
> The connection to ZK expires and is restablished multiple times before the 
> process is killed after 2015-09-08T13:13:41.655-04:00, which marks the last 
> entry in the logs for this day.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3379) Update tools relying on old producer to use new producer.

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3379.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> Update tools relying on old producer to use new producer.
> -
>
> Key: KAFKA-3379
> URL: https://issues.apache.org/jira/browse/KAFKA-3379
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Major
> Fix For: 2.0.0
>
>
> Following tools are using old producer.
> * ReplicationVerificationTool
> * SimpleConsumerShell
> * GetOffsetShell
> The old producer is being marked as deprecated in 0.10. These tools should be 
> updated to use the new producer. To make sure that this update does not break 
> existing behavior, below is the action plan.
> For each tool that uses old producer.
> * Add ducktape test to establish current behavior.
> * Once the tests are committed and run fine, add patch for modification of 
> these tools. The ducktape tests added in previous step should confirm that 
> existing behavior is still intact.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-1181) Consolidate brokerList and topicPartitionInfo in BrokerPartitionInfo

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1181.
--
Resolution: Auto Closed

Closing this, as the BrokerPartitionInfo class was removed in KAFKA-6921.

> Consolidate brokerList and topicPartitionInfo in BrokerPartitionInfo
> 
>
> Key: KAFKA-1181
> URL: https://issues.apache.org/jira/browse/KAFKA-1181
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Priority: Major
>
> brokerList in BrokerConfig is used to send the TopicMetadataRequest to known 
> brokers; the broker id always starts at 0 and increases incrementally, AND 
> it is never updated in BrokerPartitionInfo even after the topic metadata 
> response has been received.
> The real broker id info is actually stored in topicPartitionInfo: 
> HashMap[String, TopicMetadata], which is refreshed with the topic metadata 
> response. Therefore we could see different broker ids in logging entries 
> reporting failures of metadata requests and failures of produce requests.
> The solution here is to consolidate these two: read the initial broker 
> list but keep it refreshed with topic metadata responses.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2550) [Kafka][0.8.2.1][Performance]When there are a lot of partition under a Topic, there are serious performance degradation.

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2550.
--
Resolution: Auto Closed

Closing inactive issue. Old clients are deprecated. Please 
reopen if you think the issue still exists in newer versions.
 

> [Kafka][0.8.2.1][Performance]When there are a lot of partition under a Topic, 
> there are serious performance degradation.
> 
>
> Key: KAFKA-2550
> URL: https://issues.apache.org/jira/browse/KAFKA-2550
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: yanwei
>Assignee: Neha Narkhede
>Priority: Major
>
> Because our business needs to create a large number of partitions, I tested 
> how many partitions are supported.
> But I find that when there are a lot of partitions under a topic, there is 
> serious performance degradation.
> Through the analysis, in addition to the hard disk being a bottleneck, the 
> client is also a bottleneck.
> I used JProfiler, with the producer and consumer handling 100 messages 
> (msg size: 500 bytes).
> 1. Consumer high-level API (I find I can't upload a picture?):
>  ZookeeperConsumerConnector.scala-->rebalance
> -->val assignmentContext = new AssignmentContext(group, consumerIdString, 
> config.excludeInternalTopics, zkClient)
> -->ZkUtils.getPartitionsForTopics(zkClient, myTopicThreadIds.keySet.toSeq)
> -->getPartitionAssignmentForTopics
> -->Json.parseFull(jsonPartitionMap) 
>  1) one topic, 400 partitions:
>  JProfiler: 48.6% cpu run time
>  2) one topic, 3000 partitions:
>  JProfiler: 97.8% cpu run time
>   Maybe the jsonPartitionMap is very big, which makes parsing very slow.
>   But this function is executed only once, so the problem should not be too 
> big.
> 2. Producer Scala API:
> BrokerPartitionInfo.scala--->getBrokerPartitionInfo:
> partitionMetadata.map { m =>
>   m.leader match {
> case Some(leader) =>
>   //y00163442 delete log print
>   debug("Partition [%s,%d] has leader %d".format(topic, 
> m.partitionId, leader.id))
>   new PartitionAndLeader(topic, m.partitionId, Some(leader.id))
> case None =>
>   //y00163442 delete log print
>   //debug("Partition [%s,%d] does not have a leader 
> yet".format(topic, m.partitionId))
>   new PartitionAndLeader(topic, m.partitionId, None)
>   }
> }.sortWith((s, t) => s.partitionId < t.partitionId) 
>  
>   When the partition number is >25, the 'format' function's cpu run time is 
> 44.8%.
>   Nearly half of the time is consumed in the format function. Whether or not 
> log printing is enabled, this format call is executed, leading to a five-fold 
> decrease in TPS (25000--->5000).
>   
> 3. Producer Java client (clients module):
>   function: org.apache.kafka.clients.producer.KafkaProducer.send
>   I find the 'send' function's cpu run time rises with the rising number of 
> partitions; when the partition count is 5000, the cpu run time is 60.8.
>   Because the Kafka broker side's CPU, memory, disk and network didn't 
> reach a bottleneck, and no matter whether request.required.acks is set to 0 
> or 1 the results are similar, I suspect send may have some bottlenecks.
>   
> Unfortunately the picture uploads don't succeed, so you can't see the results.
> In my test results, for a single server, a single hard disk can support 1000 
> partitions, and 7 hard disks can support 3000 partitions. If the client 
> bottleneck can be solved, then I estimate that seven hard disks can support 
> more partitions.
> An actual production configuration could spread more partitions across 
> more than one topic, so things could be better.
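The format cost noted in point 2 comes from building the debug string eagerly: the `.format(...)` call runs whether or not debug logging is enabled. A sketch of the usual remedy, guarding the formatting behind the level check (a shell stand-in for the Scala logging call; the function and variable names are illustrative):

```shell
log_level=INFO
debug() {
  # Pay for the formatting only when debug output is actually wanted
  if [ "$log_level" = "DEBUG" ]; then
    printf 'Partition [%s,%d] has leader %d\n' "$@"
  fi
}
debug my-topic 0 1     # INFO level: no output, and no formatting cost
log_level=DEBUG
debug my-topic 0 1     # prints: Partition [my-topic,0] has leader 1
```

Lazy or by-name log arguments achieve the same effect without sprinkling level checks at every call site.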



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4812) We are facing the same issue as SAMZA-590 for kafka

2018-06-14 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4812.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. Please upgrade 
to the Java consumer whenever possible.

> We are facing the same issue as SAMZA-590 for kafka
> ---
>
> Key: KAFKA-4812
> URL: https://issues.apache.org/jira/browse/KAFKA-4812
> Project: Kafka
>  Issue Type: Bug
>Reporter: Manjeer Srujan. Y
>Priority: Critical
>
> Dead Kafka broker ignores new leader.
> We are facing the same issue as the Samza issue below, but we couldn't find 
> any fix for this in Kafka. The log is pasted below for reference.
> The kafka client that we are using is below.
> group: 'org.apache.kafka', name: 'kafka_2.10', version: '0.8.2.1'
> https://issues.apache.org/jira/browse/SAMZA-590
> 2017-02-28 09:50:53.189 29708 [Thread-11-vendor-index-spout-executor[35 35]] 
> ERROR org.apache.storm.daemon.executor -  - java.lang.RuntimeException: 
> java.nio.channels.ClosedChannelException
> at 
> org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
> at 
> org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
> at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129)
> at 
> org.apache.storm.daemon.executor$fn__7990$fn__8005$fn__8036.invoke(executor.clj:648)
> at org.apache.storm.util$async_loop$fn__624.invoke(util.clj:484)
> at clojure.lang.AFn.run(AFn.java:22)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at 
> kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
> at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
> at 
> kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
> at 
> kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
> at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75)
> at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65)
> at 
> org.apache.storm.kafka.PartitionManager.(PartitionManager.java:94)
> at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
> ... 6 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2546) CPU utilization very high when no kafka node alive

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2546.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. Please upgrade 
to the Java consumer whenever possible.

> CPU utilization very high when no kafka node alive
> --
>
> Key: KAFKA-2546
> URL: https://issues.apache.org/jira/browse/KAFKA-2546
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.1
>Reporter: Diego Erdody
>Assignee: Neha Narkhede
>Priority: Minor
>
> If you call kafka.consumer.Consumer.createJavaConsumerConnector, and no 
> broker is found in ZK, you end up in 
> kafka.client.ClientUtils.channelToAnyBroker looping continuously with no wait 
> at all.
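The missing piece in the reported loop is any delay between attempts. A bounded exponential backoff is the usual shape; the retry count and delays below are illustrative, not what channelToAnyBroker actually uses:

```shell
# Exponential backoff sketch: double the wait after each failed attempt.
delay=1
for attempt in 1 2 3 4; do
  echo "attempt $attempt: no broker available, backing off ${delay}s"
  # sleep "$delay"    # a real client would actually wait here instead of spinning
  delay=$(( delay * 2 ))
done
```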



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-966) Allow high level consumer to 'nak' a message and force Kafka to close the KafkaStream without losing that message

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-966.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. 

> Allow high level consumer to 'nak' a message and force Kafka to close the 
> KafkaStream without losing that message
> -
>
> Key: KAFKA-966
> URL: https://issues.apache.org/jira/browse/KAFKA-966
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.0
>Reporter: Chris Curtin
>Assignee: Neha Narkhede
>Priority: Minor
>
> Enhancement request.
> The high level consumer is very close to handling a lot of situations a 
> 'typical' client would need. Except for when the message received from Kafka 
> is valid, but the business logic that wants to consume it has a problem.
> For example if I want to write the value to a MongoDB or Cassandra database 
> and the database is not available. I won't know until I go to do the write 
> that the database isn't available, but by then it is too late to NOT read the 
> message from Kafka. Thus if I call shutdown() to stop reading, that message 
> is lost since the offset Kafka writes to ZooKeeper is the next offset.
> Ideally I'd like to be able to tell Kafka: close the KafkaStream but set the 
> next offset to read for this partition to this message when I start up again. 
> And if there are any messages in the BlockingQueue for other partitions, find 
> the lowest # and use it for that partitions offset since I haven't consumed 
> them yet.
> Thus I can cleanly shutdown my processing, resolve whatever the issue is and 
> restart the process.
> Another idea might be to allow a 'peek' into the next message and if I 
> succeed in writing to the database call 'next' to remove it from the queue. 
> I understand this won't deal with a 'kill -9' or hard failure of the JVM 
> leading to the latest offsets not being written to ZooKeeper but it addresses 
> a likely common scenario for consumers. Nor will it add true transactional 
> support since the ZK update could fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4967) java.io.EOFException Error while committing offsets

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4967.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. Please upgrade 
to the Java consumer whenever possible.

> java.io.EOFException Error while committing offsets
> ---
>
> Key: KAFKA-4967
> URL: https://issues.apache.org/jira/browse/KAFKA-4967
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
> Environment: OS : CentOS
>Reporter: Upendra Yadav
>Priority: Major
>
> kafka server and client : 0.10.0.1
> The consumer and producer sides use the latest kafka jars as mentioned above, 
> but still use the old consumer APIs in code. 
> kafka server side configuration :
> listeners=PLAINTEXT://:9092
> # below configuration is for old clients that existed before; by now all 
> clients have already moved to the latest kafka client - 0.10.0.1
> log.message.format.version=0.8.2.1
> broker.id.generation.enable=false
> unclean.leader.election.enable=false
> Some of configurations for kafka consumer :
> auto.commit.enable is overridden to false
> auto.offset.reset is overridden to smallest
> consumer.timeout.ms is overridden to 100
> dual.commit.enabled is overridden to true
> fetch.message.max.bytes is overridden to 209715200
> group.id is overridden to crm_topic1_hadoop_tables
> offsets.storage is overridden to kafka
> rebalance.backoff.ms is overridden to 6000
> zookeeper.session.timeout.ms is overridden to 23000
> zookeeper.sync.time.ms is overridden to 2000
> below exception I'm getting on commit offset.
> The consumer process is still running after this exception,
> but when I check the offset position through the kafka shell scripts it shows 
> the old position (Could not fetch offset from topic1_group1 partition 
> [topic1,0] due to missing offset data in zookeeper). After some time, when the 
> 2nd commit comes, it gets updated.
> Because dual commit is enabled, I think the kafka-side position gets updated 
> successfully both times.
> ERROR kafka.consumer.ZookeeperConsumerConnector: [], Error while 
> committing offsets.
> java.io.EOFException
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
> at 
> kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
> at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
> at 
> kafka.consumer.ZookeeperConsumerConnector.liftedTree2$1(ZookeeperConsumerConnector.scala:354)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:351)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:331)
> at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:111)
> at 
> com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.commitOffset(KafkaHLConsumer.java:173)
> at 
> com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.run(KafkaHLConsumer.java:271)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3798) Kafka Consumer 0.10.0.0 killed after rebalancing exception

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3798.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. Please upgrade 
to the Java consumer whenever possible.

> Kafka Consumer 0.10.0.0 killed after rebalancing exception
> --
>
> Key: KAFKA-3798
> URL: https://issues.apache.org/jira/browse/KAFKA-3798
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.10.0.0
> Environment: Production
>Reporter: Sahitya Agrawal
>Assignee: Neha Narkhede
>Priority: Major
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Hi, 
> I have a topic with 100 partitions and 25 consumers. Consumers were working 
> fine for some time. 
> After a while I see a kafka rebalancing exception in the logs. CPU usage is 
> also 100% at that time. The consumer process got killed after that. 
> Kafka version: 0.10.0.0
> Some error prints from the logs follow:
> kafka.common.ConsumerRebalanceFailedException: prod_ip- can't rebalance 
> after 10 retries
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:670)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$2.run(ZookeeperConsumerConnector.scala:589)
> exception during rebalance
> org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /consumers/prod/ids/prod_ip-***
> at 
> org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
> at 
> org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1000)
> at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1099)
> at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1094)
> at kafka.utils.ZkUtils.readData(ZkUtils.scala:542)
> at kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:61)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:674)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:646)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:637)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:637)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:637)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:636)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKSessionExpireListener.handleNewSession(ZookeeperConsumerConnector.scala:522)
> at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:735)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
> KeeperErrorCode = NoNode for /consumers/prod/ids/prod_ip-**
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
> at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:124)
> at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1103)
> at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1099)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:990)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-796) Kafka Scala classes should declare thrown checked exceptions to be Java friendly

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-796.
-
Resolution: Auto Closed

Closing inactive issue. The scala clients are no longer supported.

> Kafka Scala classes should declare thrown checked exceptions to be Java 
> friendly
> 
>
> Key: KAFKA-796
> URL: https://issues.apache.org/jira/browse/KAFKA-796
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Darren Sargent
>Priority: Minor
>
> For example, ConsumerIterator makeNext() method calls BlockingQueue.take() 
> which declares it throws InterruptedException. However, since makeNext() 
> fails to redeclare this exception, Java client code will be unable to catch 
> it -- javac will complain that InterruptedException cannot be thrown.
> Workaround - in the Java client code, catch Exception then check if 
> instanceof InterruptedException and respond accordingly. But really the Scala 
> method should redeclare checked exceptions for Java's benefit, even though 
> it's not required for Scala since there are no checked exceptions.
> There may be other classes where this needs to be done as well.
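The workaround described in the report can be sketched as a stand-alone Java example. This is illustrative only, not Kafka code: makeNext here is a hypothetical stand-in for a Scala method (such as ConsumerIterator.makeNext) that throws InterruptedException without declaring it.

```java
public class InterruptWorkaround {
    // Hypothetical stand-in for a Scala method that calls BlockingQueue.take()
    // but does not redeclare InterruptedException for Java callers.
    static void makeNext() throws Exception {
        throw new InterruptedException("simulated interrupt");
    }

    public static void main(String[] args) {
        try {
            makeNext();
        } catch (Exception e) {
            // The report's workaround: since javac will not allow catching
            // InterruptedException from a callee that does not declare it,
            // catch the broad Exception and test the runtime type instead.
            if (e instanceof InterruptedException) {
                Thread.currentThread().interrupt(); // restore the interrupt flag
                System.out.println("caught InterruptedException");
            }
        }
    }
}
```

Restoring the interrupt flag after the instanceof check preserves the thread's interrupted status for code further up the stack.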





[jira] [Resolved] (KAFKA-857) kafka.admin.ListTopicCommand, 'by broker' display/filter

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-857.
-
Resolution: Auto Closed

Closing inactive issue. This functionality can be achieved by using 
KafkaAdminClient methods.

> kafka.admin.ListTopicCommand, 'by broker' display/filter
> 
>
> Key: KAFKA-857
> URL: https://issues.apache.org/jira/browse/KAFKA-857
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dave DeMaagd
>Priority: Minor
>
> Would be nice if there were an option for kafka.admin.ListTopicCommand that 
> would filter results by broker (either ID or hostname). Could be helpful in 
> troubleshooting some cases of broker problems (e.g. look at the 
> topic/partition ownership information for a particular broker, maybe for 
> underreplication, maybe to see what the impact is for messing with a 
> particular broker). 





[jira] [Resolved] (KAFKA-1942) ConsumerGroupCommand does not show offset information in ZK for deleted topic

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1942.
--
Resolution: Auto Closed

Closing as old consumer support was removed from ConsumerGroupCommand.

> ConsumerGroupCommand does not show offset information in ZK for deleted topic
> -
>
> Key: KAFKA-1942
> URL: https://issues.apache.org/jira/browse/KAFKA-1942
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Priority: Major
> Attachments: delete-topic-describe-group-2-10-2015.txt
>
>
> Let's say group g consumes from topic t. If we delete topic t using 
> kafka-topics.sh and then try to describe group g using ConsumerGroupCommand, 
> it won't show any of the partition rows for topic t even though the group has 
> offset information for t still in zk.





[jira] [Resolved] (KAFKA-2537) Mirrormaker defaults to localhost with no sanity checks

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2537.
--
Resolution: Auto Closed

The old consumer is no longer supported. All the usages are removed in 
KAFKA-2983

> Mirrormaker defaults to localhost with no sanity checks
> ---
>
> Key: KAFKA-2537
> URL: https://issues.apache.org/jira/browse/KAFKA-2537
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, replication, zkclient
>Affects Versions: 0.8.2.0
>Reporter: Evan Huus
>Assignee: Neha Narkhede
>Priority: Major
>
> Short version: Like many other tools, mirror-maker's consumer defaults to 
> using the localhost zookeeper instance when no specific zookeeper source is 
> specified. It shouldn't do this. MM should also have a sanity check that the 
> source and destination clusters are different.
> Long version: We run multiple clusters, all using mirrormaker to replicate to 
> the master cluster. The kafka, zookeeper, and mirrormaker instances all run 
> on the same nodes in the master cluster since the hardware can more than 
> handle the load. We were doing some zookeeper maintenance on one of our 
> remote clusters recently which accidentally caused our configuration manager 
> (chef) to generate empty zkConnect strings for some mirrormaker instances. 
> These instances defaulted to localhost and started mirroring from the master 
> cluster back to itself, an infinite replication loop that caused all sorts of 
> havoc.
> We were able to recover gracefully and we've added additional safeguards on 
> our end, but mirror-maker is at least partially at fault here as well. There 
> is no reason for it to treat an empty string as anything but an error - 
> especially not localhost, which is typically the target cluster, not the 
> source. Additionally, it should be trivial and very useful for mirrormaker to 
> verify it is not consuming and producing from the same cluster; I can think 
> of no legitimate use case for this kind of cycle.
> If you need any clarification or additional information, please let me know.





[jira] [Resolved] (KAFKA-2584) SecurityProtocol enum validation should be removed or relaxed for non-config usages

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2584.
--
Resolution: Auto Closed

Closing this as the Scala clients are deprecated and will be removed in the 
2.0.0 release. Please reopen if the issue still exists.

> SecurityProtocol enum validation should be removed or relaxed for non-config 
> usages
> ---
>
> Key: KAFKA-2584
> URL: https://issues.apache.org/jira/browse/KAFKA-2584
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Joel Koshy
>Priority: Major
>
> While deploying SSL to our clusters, we had to roll back due to another 
> compatibility issue similar to what we mentioned in passing in other 
> threads/KIP hangouts. i.e., picking up jars between official releases. 
> Fortunately, there is an easy server-side hot-fix we can do internally to 
> work around it. However, I would classify the issue below as a bug since 
> there is little point in doing endpoint type validation (except for config 
> validation).
> What happened here is that some (old) consumers (that do not care about SSL) 
> picked up a Kafka jar that understood multiple endpoints but did not have the 
> SSL feature. The rebalance fails because while creating the Broker objects we 
> are forced to validate all the endpoints.
> Yes the old consumer is going away, but this would affect tools as well. The 
> same issue could also happen on the brokers if we were to upgrade them to 
> include (say) a Kerberos endpoint. So the old brokers would not be able to 
> read the registration of newly upgraded brokers. Well you could get around 
> that by doing two rounds of deployment (one to get the new code, and another 
> to expose the Kerberos endpoint) but that’s inconvenient and I think 
> unnecessary. Although validation makes sense for configs, I think the current 
> validate everywhere is overkill. (i.e., an old consumer, tool or broker 
> should not complain because another broker can talk more protocols.)
> {noformat}
> kafka.common.KafkaException: Failed to parse the broker info from zookeeper: 
> {"jmx_port":-1,"timestamp":"1442952770627","endpoints":["PLAINTEXT://:","SSL://:"],"host”:”","version":2,"port”:}
> at kafka.cluster.Broker$.createBroker(Broker.scala:61)
> at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:520)
> at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:518)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.utils.ZkUtils$.getCluster(ZkUtils.scala:518)
> at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener
> ...
> Caused by: java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.common.protocol.SecurityProtocol.SSL
> at java.lang.Enum.valueOf(Enum.java:238)
> at 
> org.apache.kafka.common.protocol.SecurityProtocol.valueOf(SecurityProtocol.java:24)
> at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:48)
> at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:74)
> at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:73)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.List.foreach(List.scala:318)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.cluster.Broker$.createBroker(Broker.scala:73)
> ... 70 more
> {noformat}
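The relaxation argued for above could look like the following sketch; the enum and method names are illustrative, not actual Kafka classes. Endpoints with unrecognized protocols are skipped with a warning instead of Enum.valueOf throwing and failing the whole broker entry.

```java
import java.util.ArrayList;
import java.util.List;

public class LenientEndpointParsing {
    // Protocols this (old) build understands; a newer broker may advertise more.
    enum KnownProtocol { PLAINTEXT, SSL }

    // Keep only "PROTOCOL://host:port" endpoints whose protocol we recognize,
    // rather than aborting the parse of the entire broker registration.
    static List<String> usableEndpoints(List<String> endpoints) {
        List<String> usable = new ArrayList<>();
        for (String ep : endpoints) {
            String proto = ep.substring(0, ep.indexOf("://"));
            boolean known = false;
            for (KnownProtocol p : KnownProtocol.values()) {
                if (p.name().equals(proto)) { known = true; break; }
            }
            if (known) usable.add(ep);
            else System.err.println("skipping endpoint with unknown protocol: " + proto);
        }
        return usable;
    }

    public static void main(String[] args) {
        // An old client that only knows PLAINTEXT/SSL keeps working even when
        // the registration also advertises a newer SASL_SSL endpoint.
        List<String> eps = List.of("PLAINTEXT://broker1:9092",
                                   "SASL_SSL://broker1:9093");
        System.out.println(usableEndpoints(eps));
    }
}
```

With this approach, validation stays strict for configs but tolerant for data read back from ZooKeeper.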





[jira] [Resolved] (KAFKA-3057) "Checking consumer position" docs are referencing (only) deprecated ConsumerOffsetChecker

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3057.
--
Resolution: Fixed

Closing this as the docs were updated for kafka-consumer-groups.sh. Documentation 
for kafka-consumer-groups.sh's reset-offsets feature is tracked via KAFKA-6312.

> "Checking consumer position" docs are referencing (only) deprecated 
> ConsumerOffsetChecker
> -
>
> Key: KAFKA-3057
> URL: https://issues.apache.org/jira/browse/KAFKA-3057
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, website
>Affects Versions: 0.9.0.0
>Reporter: Stevo Slavic
>Priority: Trivial
>
> ["Checking consumer position" operations 
> instructions|http://kafka.apache.org/090/documentation.html#basic_ops_consumer_lag]
>  are referencing only ConsumerOffsetChecker which is mentioned as deprecated 
> in [Potential breaking changes in 
> 0.9.0.0|http://kafka.apache.org/documentation.html#upgrade_9_breaking]
> Please consider updating docs with new ways for checking consumer position, 
> covering differences between old and new way, and recommendation which one is 
> preferred and why.
> Would be nice to document (and support if not already available), not only 
> how to read/fetch/check consumer (group) offset, but also how to set offset 
> for consumer group using Kafka's operations tools.





[jira] [Resolved] (KAFKA-3150) kafka.tools.UpdateOffsetsInZK not work (sasl enabled)

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3150.
--
Resolution: Auto Closed

The Scala consumers and related tools are removed in KAFKA-2983

> kafka.tools.UpdateOffsetsInZK not work (sasl enabled)
> -
>
> Key: KAFKA-3150
> URL: https://issues.apache.org/jira/browse/KAFKA-3150
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>Priority: Major
>
> ./bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK earliest 
> config/consumer.properties   alalei_2  
> [2016-01-26 17:20:49,920] WARN Property sasl.kerberos.service.name is not 
> valid (kafka.utils.VerifiableProperties)
> [2016-01-26 17:20:49,920] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> Exception in thread "main" kafka.common.BrokerEndPointNotAvailableException: 
> End point PLAINTEXT not found for broker 1
> at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply$mcVI$sp(UpdateOffsetsInZK.scala:70)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at 
> kafka.tools.UpdateOffsetsInZK$.getAndSetOffsets(UpdateOffsetsInZK.scala:59)
> at kafka.tools.UpdateOffsetsInZK$.main(UpdateOffsetsInZK.scala:43)
> at kafka.tools.UpdateOffsetsInZK.main(UpdateOffsetsInZK.scala)
> same error for:
> ./bin/kafka-consumer-offset-checker.sh  --broker-info --group 
> test-consumer-group --topic alalei_2 --zookeeper slave1:2181
> [2016-01-26 17:23:45,218] WARN WARNING: ConsumerOffsetChecker is deprecated 
> and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand 
> instead. (kafka.tools.ConsumerOffsetChecker$)
> Exiting due to: End point PLAINTEXT not found for broker 0.
> ./bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper 
> slave1:2181 --group  test-consumer-group
> [2016-01-26 17:26:15,075] WARN WARNING: ConsumerOffsetChecker is deprecated 
> and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand 
> instead. (kafka.tools.ConsumerOffsetChecker$)
> Exiting due to: End point PLAINTEXT not found for broker 0





[jira] [Resolved] (KAFKA-4196) Transient test failure: DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4196.
--
Resolution: Auto Closed

Scala consumers and related tools/tests are removed in KAFKA-2983

> Transient test failure: 
> DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK
> ---
>
> Key: KAFKA-4196
> URL: https://issues.apache.org/jira/browse/KAFKA-4196
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Priority: Major
>  Labels: transient-unit-test-failure
>
> The error:
> {code}
> java.lang.AssertionError: Admin path /admin/delete_topic/topic path not 
> deleted even after a replica is restarted
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
>   at kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1017)
>   at 
> kafka.admin.DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK(DeleteConsumerGroupTest.scala:156)
> {code}
> Caused by a broken invariant in the Controller: a partition exists in 
> `ControllerContext.partitionLeadershipInfo`, but not 
> `controllerContext.partitionReplicaAssignment`.
> {code}
> [2016-09-20 06:45:13,967] ERROR [BrokerChangeListener on Controller 1]: Error 
> while handling broker changes 
> (kafka.controller.ReplicaStateMachine$BrokerChangeListener:103)
> java.util.NoSuchElementException: key not found: [topic,0]
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:58)
>   at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.kafka$controller$ControllerBrokerRequestBatch$$updateMetadataRequestMapFor$1(ControllerChannelManager.scala:310)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:1030)
>   at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:492)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:376)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:355)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:843)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}





[jira] [Resolved] (KAFKA-4295) kafka-console-consumer.sh does not delete the temporary group in zookeeper

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4295.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> kafka-console-consumer.sh does not delete the temporary group in zookeeper
> --
>
> Key: KAFKA-4295
> URL: https://issues.apache.org/jira/browse/KAFKA-4295
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Sswater Shi
>Assignee: huxihx
>Priority: Minor
>
> I'm not sure whether this is a bug or by design.
> Since 0.9.x.x, kafka-console-consumer.sh will not delete the group 
> information in zookeeper/consumers on exit when run without "--new-consumer". 
> There will be a lot of abandoned zookeeper/consumers/console-consumer-xxx if 
> kafka-console-consumer.sh is run many times.
> In 0.8.x.x, kafka-console-consumer.sh could be given a "group" argument. If 
> not specified, kafka-console-consumer.sh would create a temporary group name 
> like 'console-consumer-'. If the group name was specified via "group", the 
> information in zookeeper/consumers was kept on exit. If the group name was a 
> temporary one, the information in zookeeper was deleted when 
> kafka-console-consumer.sh was quit with Ctrl+C. Why was this changed in 
> 0.9.x.x?





[jira] [Resolved] (KAFKA-4723) offsets.storage=kafka - groups stuck in rebalancing with committed offsets

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4723.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. Please upgrade 
to the Java consumer whenever possible.

> offsets.storage=kafka - groups stuck in rebalancing with committed offsets
> --
>
> Key: KAFKA-4723
> URL: https://issues.apache.org/jira/browse/KAFKA-4723
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: biker73
>Priority: Minor
>
> Hi, I have moved offset store to kafka only, when I now run;
>  bin/kafka-consumer-groups.sh  --bootstrap-server localhost:9094 --describe  
> --new-consumer --group my_consumer_group
> I get the message;
> Consumer group `my_consumer_group` does not exist or is rebalancing.
> I have found the issue KAFKA-3144; however, it refers to consumer groups 
> that have no committed offsets, while the groups I am looking at do, and are 
> constantly in use.
> using --list I get all my consumer groups returned. Although some are 
> inactive I have around 6 very active ones (millions of messages a day 
> constantly). looking at the mbean data and kafka tool etc I can see the lags 
> and offsets changing every second. Therefore I would expect the 
> kafka-consumer-groups.sh script to return the lags and offsets for all 6 
> active consumer groups.
> I think what has happened is when I moved offset storage to kafka from 
> zookeeper (and then disabled sending to both), something has got confused.  
> Querying zookeeper I get the offsets for the alleged missing consumer groups 
> - but they should be stored and committed to kafka.
> Many thanks.





[jira] [Resolved] (KAFKA-5408) Using the ConsumerRecord instead of BaseConsumerRecord in the ConsoleConsumer

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5408.
--
Resolution: Fixed

This is taken care in KAFKA-2983

> Using the ConsumerRecord instead of BaseConsumerRecord in the ConsoleConsumer
> -
>
> Key: KAFKA-5408
> URL: https://issues.apache.org/jira/browse/KAFKA-5408
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> because BaseConsumerRecord is marked as deprecated and will be removed in 
> future versions, it could be worth starting to remove its usage in the 
> ConsoleConsumer. 
> If it makes sense to you, I'd like to work on that starting to contribute to 
> the project.
> Thanks,
> Paolo.





[jira] [Resolved] (KAFKA-5691) ZKUtils.CreateRecursive should handle NOAUTH

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5691.
--
Resolution: Auto Closed

Scala consumers and related tools/tests were removed in KAFKA-2983. This change 
may not be required now. Please reopen if you think otherwise.

 

> ZKUtils.CreateRecursive should handle NOAUTH
> 
>
> Key: KAFKA-5691
> URL: https://issues.apache.org/jira/browse/KAFKA-5691
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ryan P
>Assignee: Ryan P
>Priority: Major
>
> Old consumers are unable to register themselves with secured ZK installations 
> because a NOAUTH code is returned when attempting to create `/consumers'. 
> Rather than failing, Kafka should log the error and continue down the path.





[jira] [Resolved] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5590.
--
Resolution: Information Provided

Closing inactive issue. Please reopen if you think the issue 
still exists.
 

> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>Priority: Major
>
> Hi:
> Recently I have been developing some applications with Kafka under Ranger. 
> But when I enable the Ranger Kafka plugin I cannot delete a Kafka topic 
> completely, even though 'delete.topic.enable=true' is set. I find that when 
> the Ranger Kafka plugin is enabled, the operation must be authorized. How can 
> I delete a Kafka topic completely under Ranger? Thank you.





[jira] [Resolved] (KAFKA-4061) Apache Kafka failover is not working

2018-06-18 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4061.
--
Resolution: Cannot Reproduce

This is mostly due to the health of the consumer offsets topic. The replication 
factor of the "__consumer_offsets" topic should be greater than 1 for higher 
availability. Please reopen if you think the issue still exists.
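The advice above corresponds to broker-side settings. A minimal sketch using the standard broker config keys follows; the values are only examples, not a recommendation for any particular cluster.

```java
import java.util.Properties;

public class OffsetsTopicSettings {
    public static void main(String[] args) {
        // Broker config controlling how the internal __consumer_offsets
        // topic is created.
        Properties broker = new Properties();
        // Greater than 1, so the group coordinator's partition survives
        // the loss of a single broker.
        broker.put("offsets.topic.replication.factor", "3");
        broker.put("offsets.topic.num.partitions", "50"); // the default
        System.out.println("offsets.topic.replication.factor="
                + broker.getProperty("offsets.topic.replication.factor"));
    }
}
```

Note that these settings only take effect when the offsets topic is first created; raising the replication factor of an existing topic requires a partition reassignment.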

> Apache Kafka failover is not working
> 
>
> Key: KAFKA-4061
> URL: https://issues.apache.org/jira/browse/KAFKA-4061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
> Environment: Linux
>Reporter: Sebastian Bruckner
>Priority: Major
>
> We have a 3 node cluster (kafka1 to kafka3) on 0.10.0.0
> When I shut down the node kafka1 I can see in the debug logs of my consumers 
> the following:
> {code}
> Sending coordinator request for group f49dc74f-3ccb-4fef-bafc-a7547fe26bc8 to 
> broker kafka3:9092 (id: 3 rack: null)
> Received group coordinator response 
> ClientResponse(receivedTimeMs=1471511333843, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@3892b449,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=118,client_id=f49dc74f-3ccb-4fef-bafc-a7547fe26bc8},
>  body={group_id=f49dc74f-3ccb-4fef-bafc-a7547fe26bc8}), 
> createdTimeMs=1471511333794, sendTimeMs=1471511333794), 
> responseBody={error_code=0,coordinator={node_id=1,host=kafka1,port=9092}})
> {code}
> So the problem is that kafka3 answers with a response telling the consumer 
> that the coordinator is kafka1 (which is shut down).
> This then happens over and over again.
> When I restart the consumer I can see the following:
> {code}
> Updated cluster metadata version 1 to Cluster(nodes = [kafka2:9092 (id: -2 
> rack: null), kafka1:9092 (id: -1 rack: null), kafka3:9092 (id: -3 rack: 
> null)], partitions = [])
> ... responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> {code}
> The difference is now that it answers with error code 15 
> (GROUP_COORDINATOR_NOT_AVAILABLE). 
> Somehow Kafka doesn't elect a new group coordinator. 
> In a local setup with 2 brokers and 1 ZooKeeper it works fine.
> Can you help me debug this?





[jira] [Resolved] (KAFKA-3791) Broken tools -- need better way to get offsets and other info

2018-06-18 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3791.
--
Resolution: Fixed

The kafka-consumer-offset-checker.sh tool has been removed. Use 
kafka-consumer-groups.sh to get consumer group details. Tools related to the 
old clients will be removed in 2.0.0. Please reopen if the issue still exists 
in other tools.

> Broken tools -- need better way to get offsets and other info
> -
>
> Key: KAFKA-3791
> URL: https://issues.apache.org/jira/browse/KAFKA-3791
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.0.0
>Reporter: Greg Zoller
>Priority: Major
>
> Whenever I run included tools like kafka-consumer-offset-checker.sh I get 
> deprecation warnings and it doesn't work for me (offsets not returned).  
> These need to be fixed.  The suggested class in the deprecation warning is 
> not documented clearly in the docs.
> In general it would be nice to streamline and simplify the tool scripts.





[jira] [Resolved] (KAFKA-4870) A question about broker down , the server is doing partition master election,the client producer may send msg fail . How the producer deal with the situation ??

2018-06-18 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4870.
--
Resolution: Information Provided

If a produce request fails, the producer automatically retries, based on the 
retries config, for any retriable exceptions. The producer also updates its 
metadata on any error, or when a partition has no leader.

Post these kinds of queries to the 
[us...@kafka.apache.org|mailto:us...@kafka.apache.org] mailing list 
([http://kafka.apache.org/contact]) for quicker responses.

> A question about broker down , the server is doing partition master 
> election,the client producer may send msg fail . How the producer deal with 
> the situation ??
> 
>
> Key: KAFKA-4870
> URL: https://issues.apache.org/jira/browse/KAFKA-4870
> Project: Kafka
>  Issue Type: Test
>  Components: clients
> Environment: java client 
>Reporter: zhaoziyan
>Priority: Minor
>
> A broker goes down and the Kafka cluster is doing a partition leader 
> election. While the producer is sending ordered or normal messages, the send 
> may fail. How does the client update its metadata and deal with the send 
> failure?
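The retry behavior described in the resolution is driven by producer configuration. Below is a minimal sketch of the relevant settings; the bootstrap servers are placeholders, and the keys are the standard producer config names.

```java
import java.util.Properties;

public class ProducerRetrySettings {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
        props.put("acks", "all"); // wait for the full in-sync replica set
        // A send that fails with a retriable error (e.g. NOT_LEADER_FOR_PARTITION
        // during a leader election) is retried up to this many times; each
        // failure also triggers a metadata refresh so the new leader is found.
        props.put("retries", "5");
        props.put("retry.backoff.ms", "200"); // wait between retry attempts
        System.out.println("retries=" + props.getProperty("retries"));
    }
}
```

One design caveat: with max.in.flight.requests.per.connection greater than 1, retries can reorder messages, so producers that need strict ordering usually set it to 1.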





[jira] [Resolved] (KAFKA-5237) SimpleConsumerShell logs terminating message to stdout instead of stderr

2018-06-18 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5237.
--
Resolution: Auto Closed

Old-consumer-related tools are deprecated and will be removed in KAFKA-2983.

> SimpleConsumerShell logs terminating message to stdout instead of stderr
> 
>
> Key: KAFKA-5237
> URL: https://issues.apache.org/jira/browse/KAFKA-5237
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.2.1
>Reporter: Daniel Einspanjer
>Priority: Trivial
>
> The SimpleConsumerShell has one big advantage over the standard 
> kafka-console-consumer client, it supports the --no-wait-at-logend parameter 
> which lets you script its use without having to rely on a timeout or dealing 
> with the exception and stacktrace thrown by said timeout.
> Unfortunately, when you use this option, it will write a termination message 
> to stdout when it is finished.  This means if you are using it to dump the 
> contents of a topic to a file, you get an extra line.
> This pull request just changes that one line to call System.err.println 
> instead of println.




