[jira] [Updated] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated KAFKA-3567:
--------------------------------------
         Reviewer: Ismael Juma
    Fix Version/s: 0.9.0.0
Affects Version/s: 0.9.0.0
           Status: Patch Available  (was: In Progress)

> Add --security-protocol option to console consumer and producer
> ---------------------------------------------------------------
>
>                 Key: KAFKA-3567
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3567
>             Project: Kafka
>          Issue Type: Improvement
>    Affects Versions: 0.9.0.0
>            Reporter: Sriharsha Chintalapani
>            Assignee: Bharat Viswanadham
>             Fix For: 0.9.0.0
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293490#comment-15293490 ]

Bharat Viswanadham commented on KAFKA-3567:
-------------------------------------------

I have opened a pull request for this Jira. This is my first contribution. According to the code contribution guidelines, an automated comment is posted once the test cases pass, but I am not sure which step I have missed, so I am copying the mail I received:

GitHub user bharatviswa504 opened a pull request:

    https://github.com/apache/kafka/pull/1409

    KAFKA-3567: Add --security-protocol option to console consumer and producer

    Creating a new pull request because the previous branch is out of date.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/bharatviswa504/kafka bharatv/Kafka-3567-1

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/1409.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1409

commit 4e21dc6567a36c30ee075005783cdf47145f4832

> Add --security-protocol option to console consumer and producer
[jira] [Created] (KAFKA-3748) Add consumer-property to console tools consumer (similar to --producer-property)
Bharat Viswanadham created KAFKA-3748:
--------------------------------------

             Summary: Add consumer-property to console tools consumer (similar to --producer-property)
                 Key: KAFKA-3748
                 URL: https://issues.apache.org/jira/browse/KAFKA-3748
             Project: Kafka
          Issue Type: Improvement
          Components: core
    Affects Versions: 0.9.0.0
            Reporter: Bharat Viswanadham
            Assignee: Bharat Viswanadham
             Fix For: 0.9.0.1, 0.9.0.0

Add --consumer-property to the console consumer. Creating this task from the comment given in KAFKA-3567.
[jira] [Updated] (KAFKA-3748) Add consumer-property to console tools consumer (similar to --producer-property)
[ https://issues.apache.org/jira/browse/KAFKA-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated KAFKA-3748:
--------------------------------------
    Fix Version/s:     (was: 0.9.0.1)
                       (was: 0.9.0.0)

> Add consumer-property to console tools consumer (similar to --producer-property)
[jira] [Work started] (KAFKA-3748) Add consumer-property to console tools consumer (similar to --producer-property)
[ https://issues.apache.org/jira/browse/KAFKA-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on KAFKA-3748 started by Bharat Viswanadham.
-------------------------------------------------

> Add consumer-property to console tools consumer (similar to --producer-property)
[jira] [Updated] (KAFKA-3748) Add consumer-property to console tools consumer (similar to --producer-property)
[ https://issues.apache.org/jira/browse/KAFKA-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated KAFKA-3748:
--------------------------------------
    Attachment: KAFKA-3748.PATCH

Attaching the patch file.

> Add consumer-property to console tools consumer (similar to --producer-property)
[jira] [Updated] (KAFKA-3748) Add consumer-property to console tools consumer (similar to --producer-property)
[ https://issues.apache.org/jira/browse/KAFKA-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated KAFKA-3748:
--------------------------------------
    Status: Patch Available  (was: In Progress)

> Add consumer-property to console tools consumer (similar to --producer-property)
[jira] [Commented] (KAFKA-3788) Potential message lost when switching to new segment
[ https://issues.apache.org/jira/browse/KAFKA-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15321014#comment-15321014 ]

Bharat Viswanadham commented on KAFKA-3788:
-------------------------------------------

May I take this task and work on it?

> Potential message lost when switching to new segment
> ----------------------------------------------------
>
>                 Key: KAFKA-3788
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3788
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>            Reporter: Arkadiusz Firus
>            Assignee: Jay Kreps
>            Priority: Minor
>              Labels: easyfix
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> If a new segment is needed, the method roll() from class kafka.log.Log is invoked. It prepares the new segment and schedules an _asynchronous_ flush of the previous segment.
> The asynchronous call can lead to a problematic situation. As far as I know, neither Linux nor Windows guarantees that the order in which files are persisted to disk matches the order of writes to those files. This means that records from the new segment can be flushed before the old ones, which in the case of a power outage can lead to gaps between records.
> Changing the asynchronous invocation to a synchronous one solves the problem, because it guarantees that all records from the previous segment are persisted to the hard drive before any record is written to the new segment.
> I am guessing that the asynchronous invocation was chosen to increase performance, but switching between segments happens infrequently, so the gain is small.
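To make the ordering hazard concrete, here is a minimal Java sketch of the proposed synchronous approach. This is illustrative only, not the actual kafka.log.Log code: the class and file names are invented, and real segment rolling involves much more state. The point is that forcing the old segment's channel to disk before the first write to the new segment guarantees old records are durable first.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SegmentRoll {
    // Illustrative only: mimics rolling from an old log segment to a new one.
    // Forcing the old segment synchronously *before* writing the new one
    // ensures old records reach disk first, closing the power-loss gap that
    // an asynchronous flush leaves open.
    static void rollSync(FileChannel oldSegment, FileChannel newSegment, byte[] record)
            throws IOException {
        oldSegment.force(true);                    // flush old segment before any new writes
        newSegment.write(ByteBuffer.wrap(record)); // safe: old records are already durable
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("segments");
        try (FileChannel oldSeg = FileChannel.open(dir.resolve("00000.log"),
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileChannel newSeg = FileChannel.open(dir.resolve("00001.log"),
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            oldSeg.write(ByteBuffer.wrap("old-record".getBytes()));
            rollSync(oldSeg, newSeg, "new-record".getBytes());
        }
        System.out.println(Files.size(dir.resolve("00001.log")));
    }
}
```

The trade-off the reporter notes holds here too: `force(true)` blocks the rolling thread for one fsync, but since rolls are rare the cost is paid only once per segment.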
[jira] [Assigned] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham reassigned KAFKA-3837:
-----------------------------------------

    Assignee: Bharat Viswanadham

> Report the name of the blocking thread when throwing ConcurrentModificationException
> ------------------------------------------------------------------------------------
>
>                 Key: KAFKA-3837
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3837
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>    Affects Versions: 0.9.0.1
>            Reporter: Simon Cooper
>            Assignee: Bharat Viswanadham
>            Priority: Minor
>
> {{KafkaConsumer.acquire}} throws {{ConcurrentModificationException}} if the current thread does not match the {{currentThread}} field. It would be useful if the name of the other thread were included in the exception message, to help debug the problem when this exception occurs.
> As it stands, it can be really difficult to work out what is going wrong when several threads are all accessing the consumer at the same time and your existing exclusive-access logic doesn't seem to be working as it should.
[jira] [Commented] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330744#comment-15330744 ]

Bharat Viswanadham commented on KAFKA-3837:
-------------------------------------------

Hi Simon,
The Kafka consumer already stores thread ids, so I have printed the thread id instead of the thread name. Now, when a ConcurrentModificationException occurs, the thread id tells us which thread caused the problem.

> Report the name of the blocking thread when throwing ConcurrentModificationException
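The idea under discussion can be sketched as follows. This is a hypothetical guard class, not the real KafkaConsumer internals: it records the owning thread's id and includes it in the exception message when another thread tries to acquire the consumer.

```java
import java.util.ConcurrentModificationException;
import java.util.concurrent.atomic.AtomicLong;

public class SingleThreadGuard {
    private static final long NO_CURRENT_THREAD = -1L;
    // Id of the thread currently holding the guard, or NO_CURRENT_THREAD.
    private final AtomicLong currentThread = new AtomicLong(NO_CURRENT_THREAD);
    private int refcount = 0;

    // Acquire exclusive access; on conflict, report *which* thread holds it.
    void acquire() {
        long owner = Thread.currentThread().getId();
        if (!currentThread.compareAndSet(NO_CURRENT_THREAD, owner)
                && currentThread.get() != owner) {
            throw new ConcurrentModificationException(
                    "Consumer is not safe for multi-threaded access; "
                    + "currently held by thread id " + currentThread.get());
        }
        refcount++; // re-entrant acquire from the owning thread is allowed
    }

    void release() {
        if (--refcount == 0)
            currentThread.set(NO_CURRENT_THREAD);
    }
}
```

As the later comments note, storing (or looking up) the thread name instead of just the id would make the message more readable, at the cost of keeping a little extra state.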
[jira] [Comment Edited] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330744#comment-15330744 ]

Bharat Viswanadham edited comment on KAFKA-3837 at 6/14/16 10:18 PM:
---------------------------------------------------------------------

Hi Simon,
The Kafka consumer already stores thread ids, so I have printed the thread id instead of the thread name. Now, when a ConcurrentModificationException occurs, the thread id tells us which thread caused the problem. Could you please let me know your thoughts on this?

> Report the name of the blocking thread when throwing ConcurrentModificationException
[jira] [Updated] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated KAFKA-3837:
--------------------------------------
    Status: Patch Available  (was: Open)

> Report the name of the blocking thread when throwing ConcurrentModificationException
[jira] [Work started] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on KAFKA-3837 started by Bharat Viswanadham.
-------------------------------------------------

> Report the name of the blocking thread when throwing ConcurrentModificationException
[jira] [Updated] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated KAFKA-3837:
--------------------------------------
    Status: Open  (was: Patch Available)

> Report the name of the blocking thread when throwing ConcurrentModificationException
[jira] [Commented] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332632#comment-15332632 ]

Bharat Viswanadham commented on KAFKA-3837:
-------------------------------------------

Hi Simon,
I will update the code to print the thread name. You can have a look at the discussion in the code review of the pull request for more information.

> Report the name of the blocking thread when throwing ConcurrentModificationException
[jira] [Updated] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated KAFKA-3837:
--------------------------------------
    Status: Patch Available  (was: In Progress)

> Report the name of the blocking thread when throwing ConcurrentModificationException
[jira] [Commented] (KAFKA-3854) Consecutive regex subscription calls fail
[ https://issues.apache.org/jira/browse/KAFKA-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334973#comment-15334973 ]

Bharat Viswanadham commented on KAFKA-3854:
-------------------------------------------

Hi Vahid,
May I take this issue, if you have not started working on it?

> Consecutive regex subscription calls fail
> -----------------------------------------
>
>                 Key: KAFKA-3854
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3854
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>            Reporter: Vahid Hashemian
>            Assignee: Vahid Hashemian
>
> When consecutive calls are made to the new consumer's [regex subscription|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L850], like below:
> {code}
> consumer.subscribe(Pattern.compile("..."), listener);
> consumer.poll(0);
> consumer.subscribe(Pattern.compile("f.."), listener);
> consumer.poll(0);
> {code}
> the second call fails with the following error:
> {code}
> Exception in thread "main" java.lang.IllegalStateException: Subscription to topics, partitions and pattern are mutually exclusive
>     at org.apache.kafka.clients.consumer.internals.SubscriptionState.subscribe(SubscriptionState.java:175)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(KafkaConsumer.java:854)
>     at ConsumerSubscriptionSemantics.tryRegexSubscriptionSemantics(ConsumerSubscriptionSemantics.java:76)
>     at ConsumerSubscriptionSemantics.main(ConsumerSubscriptionSemantics.java:88)
> {code}
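A toy model of why the second call fails (this is a hypothetical class, not the real SubscriptionState): once a pattern subscription is active, a new one is rejected until the state is cleared.

```java
import java.util.regex.Pattern;

public class SubscriptionModel {
    private Pattern subscribedPattern;

    // Reject a new pattern while one is active, mirroring the
    // "mutually exclusive" check the stack trace above comes from.
    void subscribe(Pattern pattern) {
        if (subscribedPattern != null)
            throw new IllegalStateException(
                    "Subscription to topics, partitions and pattern are mutually exclusive");
        subscribedPattern = pattern;
    }

    // Clearing the state first allows a fresh pattern subscription.
    void unsubscribe() {
        subscribedPattern = null;
    }
}
```

Correspondingly, with the real consumer, calling `consumer.unsubscribe()` between the two `subscribe(Pattern, listener)` calls should avoid the IllegalStateException until the bug itself is fixed.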
[jira] [Assigned] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder
[ https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham reassigned KAFKA-3729:
-----------------------------------------

    Assignee: Bharat Viswanadham

> Auto-configure non-default SerDes passed alongside the topology builder
> -----------------------------------------------------------------------
>
>                 Key: KAFKA-3729
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3729
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Fred Patton
>            Assignee: Bharat Viswanadham
>              Labels: api, newbie
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But we could auto-configure other serdes passed alongside the topology builder as well."
[jira] [Commented] (KAFKA-3854) Consecutive regex subscription calls fail
[ https://issues.apache.org/jira/browse/KAFKA-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335008#comment-15335008 ]

Bharat Viswanadham commented on KAFKA-3854:
-------------------------------------------

Hi Vahid,
That's fine. I will have a look at other Jiras to take up.

> Consecutive regex subscription calls fail
[jira] [Commented] (KAFKA-3837) Report the name of the blocking thread when throwing ConcurrentModificationException
[ https://issues.apache.org/jira/browse/KAFKA-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336761#comment-15336761 ]

Bharat Viswanadham commented on KAFKA-3837:
-------------------------------------------

Hi Simon,
I have updated the code. You can have a look at it and provide your comments.

> Report the name of the blocking thread when throwing ConcurrentModificationException
[jira] [Assigned] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham reassigned KAFKA-3948:
-----------------------------------------

    Assignee: Bharat Viswanadham

> Invalid broker port in Zookeeper when SSL is enabled
> ----------------------------------------------------
>
>                 Key: KAFKA-3948
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3948
>             Project: Kafka
>          Issue Type: Bug
>          Components: network
>    Affects Versions: 0.9.0.1
>            Reporter: Gérald Quintana
>            Assignee: Bharat Viswanadham
>
> With broker config
> {code}
> listeners=SSL://:9093,PLAINTEXT://:9092
> port=9093
> {code}
> Zookeeper /brokers/ids/1 contains
> {code}
> {"jmx_port":,"timestamp":"1468249905473","endpoints":["SSL://kafka1:9093","PLAINTEXT://kafka1:9092"],"host":"kafka1","version":2,"port":9092}
> {code}
> Notice that the port is 9092, not 9093.
> Then, in a different scenario, with config:
> {code}
> listeners=SSL://:9093
> port=9093
> {code}
> Zookeeper /brokers/ids/1 contains
> {code}
> {"jmx_port":,"timestamp":"1468250372974","endpoints":["SSL://kafka1:9093"],"host":null,"version":2,"port":-1}
> {code}
> Now host is null and port is -1. Setting advertised.port doesn't help.
[jira] [Commented] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375965#comment-15375965 ]

Bharat Viswanadham commented on KAFKA-3948:
-------------------------------------------

I will look into this issue and provide a PR.

> Invalid broker port in Zookeeper when SSL is enabled
[jira] [Commented] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377904#comment-15377904 ]

Bharat Viswanadham commented on KAFKA-3948:
-------------------------------------------

Hi,
The behavior you are observing is correct. The legacy "host" and "port" fields exist for compatibility with old clients, and they are only populated when there is a PLAINTEXT listener, since PLAINTEXT is the default security protocol.

In your first scenario you have both PLAINTEXT and SSL (listeners=SSL://:9093,PLAINTEXT://:9092, port=9093), so the host got updated to kafka1 (your Kafka cluster host) and the port to 9092 (the PLAINTEXT port).

In the second scenario you have only SSL (listeners=SSL://:9093), so the host and port are set to their default values: null for the host and -1 for the port.

> Invalid broker port in Zookeeper when SSL is enabled
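If legacy clients need the "host" and "port" fields populated, one way to achieve that, per the explanation above, is to keep a PLAINTEXT listener alongside SSL. A sketch of such a broker config (the host name and ports here are illustrative, taken from the example in this issue):

```
# Keep a PLAINTEXT listener so the legacy host/port fields in
# /brokers/ids/<id> are filled in for old clients.
listeners=PLAINTEXT://:9092,SSL://:9093
advertised.listeners=PLAINTEXT://kafka1:9092,SSL://kafka1:9093
```

Note that exposing a PLAINTEXT listener trades away the goal of an SSL-only broker, so this is only appropriate while old clients are still being migrated.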
[jira] [Comment Edited] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377904#comment-15377904 ]

Bharat Viswanadham edited comment on KAFKA-3948 at 7/14/16 6:16 PM:
--------------------------------------------------------------------

Hi,
The behavior you are observing is correct. The legacy "host" and "port" fields exist for compatibility with old clients, and they are only populated when there is a PLAINTEXT listener, since PLAINTEXT is the default security protocol.

In your first scenario you have both PLAINTEXT and SSL (listeners=SSL://:9093,PLAINTEXT://:9092, port=9093), so the host got updated to kafka1 (your Kafka cluster host) and the port to 9092 (the PLAINTEXT port).

In the second scenario you have only SSL (listeners=SSL://:9093), so the host and port are set to their default values: null for the host and -1 for the port.

> Invalid broker port in Zookeeper when SSL is enabled
[jira] [Commented] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377908#comment-15377908 ]

Bharat Viswanadham commented on KAFKA-3948:
-------------------------------------------

Please let me know your comments on my inputs.

> Invalid broker port in Zookeeper when SSL is enabled
[jira] [Updated] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-3948: -- Component/s: core
[jira] [Commented] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378099#comment-15378099 ] Bharat Viswanadham commented on KAFKA-3948: --- You can also refer to the code for confirmation: https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaHealthcheck.scala (see lines 66-72).
[jira] [Resolved] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved KAFKA-3948. --- Resolution: Not A Problem
[jira] [Commented] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled
[ https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378604#comment-15378604 ] Bharat Viswanadham commented on KAFKA-3948: --- Yes, I got that point. Thank you for the info. [~sriharsha]
[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder
[ https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378617#comment-15378617 ] Bharat Viswanadham commented on KAFKA-3729: --- [~thoughtp...@gmail.com][~guozhang] Could you provide some additional information? As I am new to Streams, it would help me start work on this JIRA. Thanks, Bharat > Auto-configure non-default SerDes passed alongside the topology builder > > > Key: KAFKA-3729 > URL: https://issues.apache.org/jira/browse/KAFKA-3729 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: Fred Patton >Assignee: Bharat Viswanadham > Labels: api, newbie > > From Guozhang Wang: > "Only default serdes provided through configs are auto-configured today. But > we could auto-configure other serdes passed alongside the topology builder as > well."
[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password
[ https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412631#comment-15412631 ] Bharat Viswanadham commented on KAFKA-2629: --- Hi [~singhashish] Could you provide any updates on this task? We require this feature for some of our customers using Kafka. > Enable getting SSL password from an executable rather than passing plaintext > password > - > > Key: KAFKA-2629 > URL: https://issues.apache.org/jira/browse/KAFKA-2629 > Project: Kafka > Issue Type: Improvement > Components: security >Affects Versions: 0.9.0.0 >Reporter: Ashish K Singh >Assignee: Ashish K Singh > > Currently there are a couple of options to pass SSL passwords to Kafka, i.e., > via properties file or via command line argument. Both of these are not > recommended security practices. > * A password on a command line is a no-no: it's trivial to see that password > just by using the 'ps' utility. > * Putting a password into a file, and then passing the location to that file, > is the next best option. The access to the file will be governed by unix > access permissions which we all know and love. The downside is that the > password is still just sitting there in a file, and those who have access can > still see it trivially. > * The most general, secure solution is to provide a layer of abstraction: > provide functionality to get the password from "somewhere else". The most > flexible and generic way to do this is to simply call an executable which > returns the desired password. > ** The executable is again protected with normal file system privileges > ** The simplest form, a script that looks like "echo 'my-password'", devolves > back to putting the password in a file > ** A more interesting implementation could open up a local encrypted password > store and extract the password from it > ** A maximally secure implementation could contact an external secret manager > with centralized control and audit functionality. > ** In short: getting the password as the output of a script/executable is > maximally generic and enables both simple and complex use cases. > This JIRA intends to add a config param to enable passing an executable to > Kafka for SSL passwords.
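The "password from an executable" idea above can be sketched roughly like this. This is an illustrative assumption, not the implementation proposed in the JIRA; the class and method names are made up. The configured command is run and its first line of stdout is treated as the secret:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class PasswordFromExec {
    // Run the configured command and read the password from its stdout.
    // A real provider would invoke a vault CLI or a decryption tool here.
    static String passwordFrom(String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String password = r.readLine();              // first line of stdout is the secret
            if (p.waitFor() != 0) throw new IOException("password command failed");
            return password;
        }
    }

    public static void main(String[] args) throws Exception {
        // Trivial demo command; the "echo" form is the degenerate case the
        // issue mentions (it devolves back to a password in a file).
        System.out.println(passwordFrom("echo", "my-password"));
    }
}
```

The security benefit comes from what the executable does, not from this wrapper: the same hook supports anything from a plain `echo` script to a centralized secret manager with auditing.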
[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password
[ https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413818#comment-15413818 ] Bharat Viswanadham commented on KAFKA-2629: --- [~singhashish] Thank you for the update. I hope it gets approved and lands in Kafka soon.
[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password
[ https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413820#comment-15413820 ] Bharat Viswanadham commented on KAFKA-2629: --- [~singhashish] What is the final approach for this solution? Will it also use the Hadoop credential provider to address this problem?
[jira] [Comment Edited] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password
[ https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413820#comment-15413820 ] Bharat Viswanadham edited comment on KAFKA-2629 at 8/9/16 4:51 PM: --- [~singhashish] What is the final approach for this solution? was (Author: bharatviswa): [~singhashish] What is the final approach for this solution? Will also be using hadoop credential provider to address this problem?
[jira] [Assigned] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-4850: - Assignee: Bharat Viswanadham > RocksDb cannot use Bloom Filters > > > Key: KAFKA-4850 > URL: https://issues.apache.org/jira/browse/KAFKA-4850 > Project: Kafka > Issue Type: Improvement > Components: streams >Affects Versions: 0.10.2.0 >Reporter: Eno Thereska >Assignee: Bharat Viswanadham >Priority: Minor > Fix For: 0.11.0.0 > > > Bloom Filters would speed up RocksDb lookups. However they currently do not > work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait > until that is released and tested. > Then we can add the line in RocksDbStore.java in openDb: > tableConfig.setFilter(new BloomFilter(10)); -- This message was sent by Atlassian JIRA (v6.3.15#6346)
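For reference, the one-line change described in the issue would sit in the RocksDB options setup roughly like this. This is a sketch assuming the rocksdbjni API; `tableConfig.setFilter(new BloomFilter(10))` is the exact call quoted in the issue, while the surrounding wiring is an approximation of what openDb in RocksDbStore.java would need:

```java
// Config fragment only; requires the rocksdbjni dependency.
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.Options;

Options options = new Options();
BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
tableConfig.setFilter(new BloomFilter(10));   // ~10 bits per key for the bloom filter
options.setTableFormatConfig(tableConfig);
```

With a bloom filter on the table format, point lookups for absent keys can usually skip reading SST data blocks entirely, which is the speedup the issue is after.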
[jira] [Commented] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971864#comment-15971864 ] Bharat Viswanadham commented on KAFKA-4850: --- Hi [~enothereska] RocksDB v5.2.1 has been released and this functionality is included. Can I update the RocksDB version to v5.2.1 and update the code?
[jira] [Comment Edited] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971864#comment-15971864 ] Bharat Viswanadham edited comment on KAFKA-4850 at 4/18/17 12:10 AM: - Hi [~enothereska] RocksDB v5.2.1 has been released and this functionality is included. Can I update the RocksDB version to v5.2.1, update the code, and open a PR? Could you please let me know if anything more is required? was (Author: bharatviswa): Hi [~enothereska] RocksDB v 5.2.1 is released and this functionality is added. Can I update the rocksdb version to v5.2.1 and update the code.
[jira] [Work started] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-4850 started by Bharat Viswanadham. -
[jira] [Comment Edited] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971864#comment-15971864 ] Bharat Viswanadham edited comment on KAFKA-4850 at 4/18/17 12:33 AM: - Hi [~enothereska] RocksDB v5.2.1 has been released and this functionality is fixed. Can I update the RocksDB version to v5.2.1, update the code, and open a PR? Could you please let me know if anything more is required? was (Author: bharatviswa): Hi [~enothereska] RocksDB v 5.2.1 is released and this functionality is added. Can I update the rocksdb version to v5.2.1 and update the code and provide a PR? Could you please let me know any more is required?
[jira] [Commented] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971995#comment-15971995 ] Bharat Viswanadham commented on KAFKA-4850: --- This issue is not fixed in v5.2.1; the upcoming 5.4 release should include the fix.
[jira] [Commented] (KAFKA-5082) Converter configurations are not listed in validation REST API
[ https://issues.apache.org/jira/browse/KAFKA-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973159#comment-15973159 ] Bharat Viswanadham commented on KAFKA-5082: --- Hi [~ewencp] I am interested in working on this. Could you please provide some pointers for getting started on the KIP, as I am new to this? If you have already started working on it, I can pick up other JIRAs. Thanks, Bharat > Converter configurations are not listed in validation REST API > -- > > Key: KAFKA-5082 > URL: https://issues.apache.org/jira/browse/KAFKA-5082 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Affects Versions: 0.10.2.1 >Reporter: Ewen Cheslack-Postava > Labels: needs-kip > > The config validation REST API lists available configurations for the > connector, and, with the addition of SMTs, the transformations as well. You > are also allowed to override the converters per KIP-75, but the configs for > the converters are not included. Ideally these could be integrated in the > same way the transformation configs are. Note that while adding them to the > REST API could reasonably be considered a bug fix that would not require a > KIP, Converters do not currently expose their configs so we probably need a > (very simple) KIP to add a method to expose them.
[jira] [Commented] (KAFKA-5057) "Big Message Log"
[ https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973197#comment-15973197 ] Bharat Viswanadham commented on KAFKA-5057: --- Hi Gwen, Can I work on this JIRA? I am learning the Kafka producer; could you also help me with the KIP? Thanks, Bharat > "Big Message Log" > - > > Key: KAFKA-5057 > URL: https://issues.apache.org/jira/browse/KAFKA-5057 > Project: Kafka > Issue Type: Bug >Reporter: Gwen Shapira > > Really large requests can cause significant GC pauses which can cause quite a > few other symptoms on a broker. Will be nice to be able to catch them. > Lets add the option to log details (client id, topic, partition) for every > produce request that is larger than a configurable threshold. > /cc [~apurva]
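The proposed check could look roughly like this. This is a hedged sketch; none of these names come from Kafka's code, and a real implementation would hook into the broker's produce request handling and use its logger rather than returning a string:

```java
public class BigMessageLogSketch {
    // Return a log line for a produce request that exceeds the configurable
    // threshold, or null when the request is within bounds.
    static String checkSize(String clientId, String topic, int partition,
                            int requestBytes, int thresholdBytes) {
        if (requestBytes <= thresholdBytes) return null;   // nothing to report
        return String.format(
                "large produce request: client=%s topic=%s partition=%d bytes=%d",
                clientId, topic, partition, requestBytes);
    }

    public static void main(String[] args) {
        // A 2 MB request against a 1 MB threshold triggers a log line.
        System.out.println(checkSize("app-1", "events", 3, 2_000_000, 1_000_000));
    }
}
```

Logging only above a threshold keeps the log quiet in the common case while still capturing the client id, topic, and partition needed to track down GC-pause culprits.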
[jira] [Comment Edited] (KAFKA-5057) "Big Message Log"
[ https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973197#comment-15973197 ] Bharat Viswanadham edited comment on KAFKA-5057 at 4/18/17 6:09 PM: Hi [~gshapira_impala_35cc] Can I work on this JIRA? I am learning the Kafka producer; could you also help me with the KIP? Thanks, Bharat was (Author: bharatviswa): Hi Gwen, Can I work on this Jira, and I am learning Kafka Producer and could you also help me in KIP? Thanks, Bharat
[jira] [Updated] (KAFKA-5057) "Big Message Log"
[ https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-5057: -- Labels: Needs-kip (was: )
[jira] [Commented] (KAFKA-5082) Converter configurations are not listed in validation REST API
[ https://issues.apache.org/jira/browse/KAFKA-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973757#comment-15973757 ] Bharat Viswanadham commented on KAFKA-5082: --- [~ewencp] Thank you for the clear info; I will look into the code and get started.
[jira] [Resolved] (KAFKA-4087) DefaultParitioner Implementation Issue
[ https://issues.apache.org/jira/browse/KAFKA-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved KAFKA-4087. --- Resolution: Not A Bug > DefaultParitioner Implementation Issue > -- > > Key: KAFKA-4087 > URL: https://issues.apache.org/jira/browse/KAFKA-4087 > Project: Kafka > Issue Type: Bug > Components: producer >Affects Versions: 0.9.0.1, 0.10.0.1 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Labels: partitioners, producer > > In DefaultPartitioner implementation, when key is null > if (availablePartitions.size() > 0) { > int part = Utils.toPositive(nextValue) % > availablePartitions.size(); > return availablePartitions.get(part).partition(); > } > Whereas when key is not null > return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions; > We are returning a partition by using the total number of partitions. > Shouldn't we do the same by considering only the available partitions? > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67 >
[jira] [Commented] (KAFKA-4087) DefaultParitioner Implementation Issue
[ https://issues.apache.org/jira/browse/KAFKA-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980200#comment-15980200 ] Bharat Viswanadham commented on KAFKA-4087: --- This is by design: when the key is not null, we hash over the total number of partitions so that records with the same key are always assigned to the same partition. If we considered only the available partitions, the same key might be assigned to different partitions as availability changes.
[jira] [Commented] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15839123#comment-15839123 ] Bharat Viswanadham commented on KAFKA-3567: --- [~ewencp] But it is really helpful when using Kerberos: when all brokers use SASL, specifying the security protocol is mandatory, so it helps users to know about this option. It is especially useful for beginners; I see many new Kafka users miss this. That is the reason for having this option. > Add --security-protocol option to console consumer and producer > --- > > Key: KAFKA-3567 > URL: https://issues.apache.org/jira/browse/KAFKA-3567 > Project: Kafka > Issue Type: Improvement >Affects Versions: 0.9.0.0 >Reporter: Sriharsha Chintalapani >Assignee: Bharat Viswanadham > Labels: needs-kip >
[jira] [Commented] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15839246#comment-15839246 ] Bharat Viswanadham commented on KAFKA-3567: --- [~ewencp] I agree. If you think this should be closed, that is fine with me. > Add --security-protocol option to console consumer and producer > --- > > Key: KAFKA-3567 > URL: https://issues.apache.org/jira/browse/KAFKA-3567 > Project: Kafka > Issue Type: Improvement >Affects Versions: 0.9.0.0 >Reporter: Sriharsha Chintalapani >Assignee: Bharat Viswanadham > Labels: needs-kip > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-3567: -- Resolution: Won't Fix Status: Resolved (was: Patch Available) > Add --security-protocol option to console consumer and producer > --- > > Key: KAFKA-3567 > URL: https://issues.apache.org/jira/browse/KAFKA-3567 > Project: Kafka > Issue Type: Improvement >Affects Versions: 0.9.0.0 >Reporter: Sriharsha Chintalapani >Assignee: Bharat Viswanadham > Labels: needs-kip > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15839246#comment-15839246 ] Bharat Viswanadham edited comment on KAFKA-3567 at 1/26/17 5:14 AM: [~ewencp] I agree, closing this task was (Author: bharatviswa): [~ewencp] I agree, if you think of closing this it would be fine with me > Add --security-protocol option to console consumer and producer > --- > > Key: KAFKA-3567 > URL: https://issues.apache.org/jira/browse/KAFKA-3567 > Project: Kafka > Issue Type: Improvement >Affects Versions: 0.9.0.0 >Reporter: Sriharsha Chintalapani >Assignee: Bharat Viswanadham > Labels: needs-kip > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder
[ https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848880#comment-15848880 ] Bharat Viswanadham commented on KAFKA-3729: --- [~Roy19] I have not started working on this yet. I do plan to work on it, but if it is urgent for this release you can take it up. > Auto-configure non-default SerDes passed alongside the topology builder > > > Key: KAFKA-3729 > URL: https://issues.apache.org/jira/browse/KAFKA-3729 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: Fred Patton >Assignee: Bharat Viswanadham > Labels: api, newbie > > From Guozhang Wang: > "Only default serdes provided through configs are auto-configured today. But > we could auto-configure other serdes passed alongside the topology builder as > well." -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder
[ https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848880#comment-15848880 ] Bharat Viswanadham edited comment on KAFKA-3729 at 2/1/17 8:08 PM: --- [~Roy19] [~guozhang] Because of other commitments at work, I have not started working on this yet. I do plan to work on it, but if it is urgent for this release you can take it up. was (Author: bharatviswa): [~Roy19] I have not started working on this. I have plans to work on this, if it is urgent for this release you can takeup. > Auto-configure non-default SerDes passed alongside the topology builder > > > Key: KAFKA-3729 > URL: https://issues.apache.org/jira/browse/KAFKA-3729 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: Fred Patton >Assignee: Bharat Viswanadham > Labels: api, newbie > > From Guozhang Wang: > "Only default serdes provided through configs are auto-configured today. But > we could auto-configure other serdes passed alongside the topology builder as > well." -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-3567) Add --security-protocol option console tools
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-3567: - Assignee: Bharat Viswanadham (was: Sriharsha Chintalapani) > Add --security-protocol option console tools > > > Key: KAFKA-3567 > URL: https://issues.apache.org/jira/browse/KAFKA-3567 > Project: Kafka > Issue Type: Improvement >Reporter: Sriharsha Chintalapani >Assignee: Bharat Viswanadham > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-3567: -- Summary: Add --security-protocol option to console consumer and producer (was: Add --security-protocol option console tools) > Add --security-protocol option to console consumer and producer > --- > > Key: KAFKA-3567 > URL: https://issues.apache.org/jira/browse/KAFKA-3567 > Project: Kafka > Issue Type: Improvement >Reporter: Sriharsha Chintalapani >Assignee: Bharat Viswanadham > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (KAFKA-3567) Add --security-protocol option to console consumer and producer
[ https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-3567 started by Bharat Viswanadham. - > Add --security-protocol option to console consumer and producer > --- > > Key: KAFKA-3567 > URL: https://issues.apache.org/jira/browse/KAFKA-3567 > Project: Kafka > Issue Type: Improvement >Reporter: Sriharsha Chintalapani >Assignee: Bharat Viswanadham > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password
[ https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428880#comment-15428880 ] Bharat Viswanadham commented on KAFKA-2629: --- [~singhashish] Are you still working on this? If you are busy with other tasks, and more information can be provided on how the implementation should be handled, I can take up this task. > Enable getting SSL password from an executable rather than passing plaintext > password > - > > Key: KAFKA-2629 > URL: https://issues.apache.org/jira/browse/KAFKA-2629 > Project: Kafka > Issue Type: Improvement > Components: security >Affects Versions: 0.9.0.0 >Reporter: Ashish K Singh >Assignee: Ashish K Singh > > Currently there are a couple of options to pass SSL passwords to Kafka, i.e., > via properties file or via command line argument. Both of these are not > recommended security practices. > * A password on a command line is a no-no: it's trivial to see that password > just by using the 'ps' utility. > * Putting a password into a file, and then passing the location to that file, > is the next best option. The access to the file will be governed by unix > access permissions which we all know and love. The downside is that the > password is still just sitting there in a file, and those who have access can > still see it trivially. > * The most general, secure solution is to provide a layer of abstraction: > provide functionality to get the password from "somewhere else". The most > flexible and generic way to do this is to simply call an executable which > returns the desired password. 
> ** The executable is again protected with normal file system privileges > ** The simplest form, a script that looks like "echo 'my-password'", devolves > back to putting the password in a file > ** A more interesting implementation could open up a local encrypted password > store and extract the password from it > ** A maximally secure implementation could contact an external secret manager > with centralized control and audit functionality. > ** In short: getting the password as the output of a script/executable is > maximally generic and enables both simple and complex use cases. > This JIRA intend to add a config param to enable passing an executable to > Kafka for SSL passwords. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
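The executable-based approach the ticket describes can be sketched as reading a secret from a command's stdout. This is only an illustration under assumed names (`PasswordFromExec`, `passwordFrom`); it is not Kafka's actual implementation:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PasswordFromExec {
    // Runs the given command and returns the first line of its stdout,
    // treating that line as the password.
    static String passwordFrom(String... command) {
        try {
            Process proc = new ProcessBuilder(command).start();
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(proc.getInputStream()))) {
                String line = out.readLine();
                if (proc.waitFor() != 0 || line == null) {
                    throw new IllegalStateException("password command failed");
                }
                return line.trim();
            }
        } catch (Exception e) {
            throw new IllegalStateException("could not obtain password", e);
        }
    }

    public static void main(String[] args) {
        // Simplest form from the ticket: a script that just echoes the password.
        System.out.println(passwordFrom("echo", "my-password"));
    }
}
```

The same call works whether the command is a trivial echo script, a local encrypted-store reader, or a client for an external secret manager; only the executable changes.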
[jira] [Created] (KAFKA-4087) DefaultParitioner Implementation Issue
Bharat Viswanadham created KAFKA-4087: - Summary: DefaultParitioner Implementation Issue Key: KAFKA-4087 URL: https://issues.apache.org/jira/browse/KAFKA-4087 Project: Kafka Issue Type: Bug Components: producer Affects Versions: 0.9.0.1 Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham In the DefaultPartitioner implementation, when the key is null: if (availablePartitions.size() > 0) { int part = Utils.toPositive(nextValue) % availablePartitions.size(); return availablePartitions.get(part).partition(); } Whereas when the key is not null: return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions; We return a partition using the total number of partitions. Shouldn't we do the same by considering only the available partitions? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-4087) DefaultParitioner Implementation Issue
[ https://issues.apache.org/jira/browse/KAFKA-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-4087: -- Description: In the DefaultPartitioner implementation, when the key is null: if (availablePartitions.size() > 0) { int part = Utils.toPositive(nextValue) % availablePartitions.size(); return availablePartitions.get(part).partition(); } Whereas when the key is not null: return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions; We return a partition using the total number of partitions. Shouldn't we do the same by considering only the available partitions? https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67 was: In DefaultPartitioner implementation, when key is null if (availablePartitions.size() > 0) { int part = Utils.toPositive(nextValue) % availablePartitions.size(); return availablePartitions.get(part).partition(); } Where as when key is not null return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions; We are returning partition by using total number of partitions. Should n't we do the same as by considering only available partitions? > DefaultParitioner Implementation Issue > -- > > Key: KAFKA-4087 > URL: https://issues.apache.org/jira/browse/KAFKA-4087 > Project: Kafka > Issue Type: Bug > Components: producer >Affects Versions: 0.9.0.1, 0.10.0.1 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Labels: partitioners, producer > > In DefaultPartitioner implementation, when key is null > if (availablePartitions.size() > 0) { > int part = Utils.toPositive(nextValue) % > availablePartitions.size(); > return availablePartitions.get(part).partition(); > } > Where as when key is not null > return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions; > We are returning partition by using total number of partitions. > Should n't we do the same as by considering only available partitions? 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67 > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-4087) DefaultParitioner Implementation Issue
[ https://issues.apache.org/jira/browse/KAFKA-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-4087: -- Affects Version/s: 0.10.0.1 > DefaultParitioner Implementation Issue > -- > > Key: KAFKA-4087 > URL: https://issues.apache.org/jira/browse/KAFKA-4087 > Project: Kafka > Issue Type: Bug > Components: producer >Affects Versions: 0.9.0.1, 0.10.0.1 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Labels: partitioners, producer > > In DefaultPartitioner implementation, when key is null > if (availablePartitions.size() > 0) { > int part = Utils.toPositive(nextValue) % > availablePartitions.size(); > return availablePartitions.get(part).partition(); > } > Where as when key is not null > return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions; > We are returning partition by using total number of partitions. > Should n't we do the same as by considering only available partitions? > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67 > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (KAFKA-5149) New producer hardcodes key and value serializers to ByteArray
[ https://issues.apache.org/jira/browse/KAFKA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-5149: - Assignee: Bharat Viswanadham > New producer hardcodes key and value serializers to ByteArray > - > > Key: KAFKA-5149 > URL: https://issues.apache.org/jira/browse/KAFKA-5149 > Project: Kafka > Issue Type: Bug > Components: producer , tools >Affects Versions: 0.10.2.0 >Reporter: Yeva Byzek >Assignee: Bharat Viswanadham > Labels: newbie > > New producer hardcodes the serializers: > {noformat} > props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > {noformat} > And thus cannot be overridden from the commandline argument > {{--key-serializer}} or {{--value-serializer}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
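The fix this ticket implies (later resolved as a duplicate of KAFKA-2526) amounts to falling back to ByteArraySerializer only when no override was supplied on the command line. A minimal sketch under assumed names; this is not the console producer's actual code:

```java
import java.util.Properties;

public class SerializerConfigSketch {
    static final String BYTE_ARRAY =
            "org.apache.kafka.common.serialization.ByteArraySerializer";

    // Uses the ByteArraySerializer default only when the corresponding
    // command-line option was not supplied (represented here as null).
    static Properties producerProps(String keySerializerOpt, String valueSerializerOpt) {
        Properties props = new Properties();
        props.put("key.serializer", keySerializerOpt != null ? keySerializerOpt : BYTE_ARRAY);
        props.put("value.serializer", valueSerializerOpt != null ? valueSerializerOpt : BYTE_ARRAY);
        return props;
    }
}
```

With this shape, `--key-serializer` and `--value-serializer` values flow through instead of being overwritten by the hardcoded default.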
[jira] [Work started] (KAFKA-5149) New producer hardcodes key and value serializers to ByteArray
[ https://issues.apache.org/jira/browse/KAFKA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-5149 started by Bharat Viswanadham. - > New producer hardcodes key and value serializers to ByteArray > - > > Key: KAFKA-5149 > URL: https://issues.apache.org/jira/browse/KAFKA-5149 > Project: Kafka > Issue Type: Bug > Components: producer , tools >Affects Versions: 0.10.2.0 >Reporter: Yeva Byzek >Assignee: Bharat Viswanadham > Labels: newbie > > New producer hardcodes the serializers: > {noformat} > props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > {noformat} > And thus cannot be overridden from the commandline argument > {{--key-serializer}} or {{--value-serializer}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-5166: - Assignee: Bharat Viswanadham > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997426#comment-15997426 ] Bharat Viswanadham commented on KAFKA-5166: --- Hi Matthias, I will look into this, and come up with a proposal. > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997464#comment-15997464 ] Bharat Viswanadham commented on KAFKA-5166: --- Hi Matthias, I have not written a KIP before. I will look into it and provide one. But I have a question: why would this require a public API change? My idea is to add a new option, --dry-run, that prints to the user what actions would be performed. Could you please let me know if I am missing anything here? > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
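The --dry-run idea discussed in this thread can be sketched as follows. All names here are hypothetical; the real StreamsResetter is structured differently, and this only illustrates the report-instead-of-act pattern:

```java
public class DryRunSketch {
    // With dryRun set, only reports what the reset tool would do;
    // otherwise it performs (here: pretends to perform) the action.
    static String deleteInternalTopic(String topic, boolean dryRun) {
        if (dryRun) {
            return "would delete internal topic: " + topic;
        }
        // the real tool would issue the delete request here
        return "deleted internal topic: " + topic;
    }

    public static void main(String[] args) {
        boolean dryRun = args.length > 0 && args[0].equals("--dry-run");
        System.out.println(deleteInternalTopic("my-app-repartition", dryRun));
    }
}
```

Because the flag only changes whether destructive steps run, it touches no public API surface, which is the question raised in the comment above.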
[jira] [Work started] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-5166 started by Bharat Viswanadham. - > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997674#comment-15997674 ] Bharat Viswanadham commented on KAFKA-5166: --- Hi Matthias, Thanks for the clear info. Looking into the Streams reset tool, I have a question: for internal topics we first reset the offsets and then delete the topic. Is there a reason to do both? > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997675#comment-15997675 ] Bharat Viswanadham commented on KAFKA-5166: --- One more question: we reset offsets using the application id, so the reset tool assumes that Streams uses the application id as the consumer group id internally. Sorry if this is a naive question, but could you provide any info on that? > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998798#comment-15998798 ] Bharat Viswanadham commented on KAFKA-5166: --- Hi Matthias, Thank you for the info. Could you please help me get permissions to create a KIP? userid: bharatv email: bigdatadev...@gmail.com > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values
[ https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16001717#comment-16001717 ] Bharat Viswanadham commented on KAFKA-5117: --- [~tholmes] Are you still working on this? > Kafka Connect REST endpoints reveal Password typed values > - > > Key: KAFKA-5117 > URL: https://issues.apache.org/jira/browse/KAFKA-5117 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Affects Versions: 0.10.2.0 >Reporter: Thomas Holmes > > A Kafka Connect connector can specify ConfigDef keys as type of Password. > This type was added to prevent logging the values (instead "[hidden]" is > logged). > This change does not apply to the values returned by executing a GET on > {{connectors/\{connector-name\}}} and > {{connectors/\{connector-name\}/config}}. This creates an easily accessible > way for an attacker who has infiltrated your network to gain access to > potential secrets that should not be available. > I have started on a code change that addresses this issue by parsing the > config values through the ConfigDef for the connector and returning their > output instead (which leads to the masking of Password typed configs as > [hidden]). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
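The masking the reporter describes, rendering Password-typed values as "[hidden]" before a config leaves the REST layer, can be illustrated roughly as below. This is a simplified stand-in: the set of password keys is passed in explicitly instead of being derived from the connector's ConfigDef, and the class name is hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class ConfigMaskSketch {
    static final String HIDDEN = "[hidden]";

    // Returns a copy of the config with every Password-typed value replaced
    // by "[hidden]" before it is serialized into a REST response.
    static Map<String, String> masked(Map<String, String> config, Set<String> passwordKeys) {
        Map<String, String> out = new LinkedHashMap<>(config);
        for (String key : passwordKeys) {
            out.computeIfPresent(key, (k, v) -> HIDDEN);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> cfg = Map.of("name", "my-sink", "connection.password", "s3cret");
        System.out.println(masked(cfg, Set.of("connection.password")));
    }
}
```

Non-secret keys pass through unchanged, so the GET endpoints stay useful while secrets are redacted.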
[jira] [Commented] (KAFKA-5061) client.id should be set for Connect producers/consumers
[ https://issues.apache.org/jira/browse/KAFKA-5061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16001795#comment-16001795 ] Bharat Viswanadham commented on KAFKA-5061: --- [~ewencp] So you are suggesting adding a per-task client.id default when the user has not supplied one, and also providing an option for the user to define client.id on a per-connector basis? > client.id should be set for Connect producers/consumers > --- > > Key: KAFKA-5061 > URL: https://issues.apache.org/jira/browse/KAFKA-5061 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Affects Versions: 0.10.2.1 >Reporter: Ewen Cheslack-Postava > Labels: needs-kip, newbie++ > > In order to properly monitor individual tasks using the producer and consumer > metrics, we need to have the framework disambiguate them. Currently when we > create producers > (https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L362) > and create consumers > (https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L371-L394) > the client ID is not being set. You can override it for the entire worker > via worker-level producer/consumer overrides, but you can't get per-task > metrics. > There are a couple of things we might want to consider doing here: > 1. Provide default client IDs based on the worker group ID + task ID > (providing uniqueness for multiple connect clusters up to the scope of the > Kafka cluster they are operating on). This seems ideal since it's a good > default; however it is a public-facing change and may need a KIP. Normally I > would be less worried about this, but some folks may be relying on picking up > metrics without this being set, in which case such a change would break their > monitoring. > 2. Allow overriding client.id on a per-connector basis. 
I'm not sure if this > will really be useful or not -- it lets you differentiate between metrics for > different connectors' tasks, but within a connector, all metrics would go to > a single client.id. On the other hand, this makes the tasks act as a single > group from the perspective of broker handling of client IDs. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
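Option 1 above (default client IDs from the worker group ID plus task ID) amounts to something like the following sketch. The exact format string is an assumption for illustration, not what Connect actually emits:

```java
public class ClientIdSketch {
    // Hypothetical default client.id: worker group id plus connector name plus
    // task number, unique up to the Kafka cluster the Connect group runs on.
    static String defaultClientId(String workerGroupId, String connectorName, int taskId) {
        return workerGroupId + "-" + connectorName + "-" + taskId;
    }

    public static void main(String[] args) {
        // Each task gets a distinct id, so per-task producer/consumer metrics
        // can be told apart on the broker side.
        System.out.println(defaultClientId("connect-cluster", "my-sink", 0));
        System.out.println(defaultClientId("connect-cluster", "my-sink", 1));
    }
}
```

Including the task number is what makes per-task metrics distinguishable; a per-connector override (option 2) would collapse all of a connector's tasks back into one client.id.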
[jira] [Resolved] (KAFKA-5149) New producer hardcodes key and value serializers to ByteArray
[ https://issues.apache.org/jira/browse/KAFKA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved KAFKA-5149. --- Resolution: Duplicate Closing this Jira as a duplicate of KAFKA-2526 > New producer hardcodes key and value serializers to ByteArray > - > > Key: KAFKA-5149 > URL: https://issues.apache.org/jira/browse/KAFKA-5149 > Project: Kafka > Issue Type: Bug > Components: producer , tools >Affects Versions: 0.10.2.0 >Reporter: Yeva Byzek >Assignee: Bharat Viswanadham > Labels: newbie > > New producer hardcodes the serializers: > {noformat} > props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > {noformat} > And thus cannot be overridden from the commandline argument > {{--key-serializer}} or {{--value-serializer}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (KAFKA-5149) New producer hardcodes key and value serializers to ByteArray
[ https://issues.apache.org/jira/browse/KAFKA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003649#comment-16003649 ] Bharat Viswanadham edited comment on KAFKA-5149 at 5/9/17 10:12 PM: Closing this Jira as a duplicate of KAFKA-2526 from [~junrao] comments was (Author: bharatviswa): Closing this Jira as a duplicate of KAFKA-2526 > New producer hardcodes key and value serializers to ByteArray > - > > Key: KAFKA-5149 > URL: https://issues.apache.org/jira/browse/KAFKA-5149 > Project: Kafka > Issue Type: Bug > Components: producer , tools >Affects Versions: 0.10.2.0 >Reporter: Yeva Byzek >Assignee: Bharat Viswanadham > Labels: newbie > > New producer hardcodes the serializers: > {noformat} > props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, > "org.apache.kafka.common.serialization.ByteArraySerializer") > {noformat} > And thus cannot be overridden from the commandline argument > {{--key-serializer}} or {{--value-serializer}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5210) Application Reset Tool does not need to seek for internal topics
[ https://issues.apache.org/jira/browse/KAFKA-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004085#comment-16004085 ] Bharat Viswanadham commented on KAFKA-5210: --- [~mjsax] I will contribute the changes for this jira > Application Reset Tool does not need to seek for internal topics > > > Key: KAFKA-5210 > URL: https://issues.apache.org/jira/browse/KAFKA-5210 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Matthias J. Sax >Priority: Trivial > Labels: beginner, newbie > > As KAFKA-4456 got resolved, there is no modify offsets of internal topics > with the application reset tool, as those offsets will be deleted anyway. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-5210) Application Reset Tool does not need to seek for internal topics
[ https://issues.apache.org/jira/browse/KAFKA-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-5210: - Assignee: Bharat Viswanadham > Application Reset Tool does not need to seek for internal topics > > > Key: KAFKA-5210 > URL: https://issues.apache.org/jira/browse/KAFKA-5210 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Trivial > Labels: beginner, newbie > > As KAFKA-4456 got resolved, there is no modify offsets of internal topics > with the application reset tool, as those offsets will be deleted anyway. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Work started] (KAFKA-5210) Application Reset Tool does not need to seek for internal topics
[ https://issues.apache.org/jira/browse/KAFKA-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-5210 started by Bharat Viswanadham. - > Application Reset Tool does not need to seek for internal topics > > > Key: KAFKA-5210 > URL: https://issues.apache.org/jira/browse/KAFKA-5210 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Trivial > Labels: beginner, newbie > > As KAFKA-4456 got resolved, there is no modify offsets of internal topics > with the application reset tool, as those offsets will be deleted anyway. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-5220) Application Reset Tool does not work with SASL
[ https://issues.apache.org/jira/browse/KAFKA-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-5220: - Assignee: Bharat Viswanadham > Application Reset Tool does not work with SASL > -- > > Key: KAFKA-5220 > URL: https://issues.apache.org/jira/browse/KAFKA-5220 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham > > Resetting an application with SASL enabled fails with > {noformat} > ERROR: Request GROUP_COORDINATOR failed on brokers List(localhost:9092 (id: > -1 rack: null)) > {noformat} > We would need to allow additional configuration options that get picked up by > the internally used ZK client and KafkaConsumer. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties
[ https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-5225: - Assignee: Bharat Viswanadham > StreamsResetter doesn't allow custom Consumer properties > > > Key: KAFKA-5225 > URL: https://issues.apache.org/jira/browse/KAFKA-5225 > Project: Kafka > Issue Type: Bug > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Dustin Cote >Assignee: Bharat Viswanadham > > The StreamsResetter doesn't let the user pass in any configurations to the > embedded consumer. This is a problem in secured environments because you > can't configure the embedded consumer to talk to the cluster. The tool should > take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a > config file to be passed in the command line for such operations. > cc [~mjsax] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties
[ https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008338#comment-16008338 ] Bharat Viswanadham commented on KAFKA-5225: --- Matthias will contribute to this Jira. > StreamsResetter doesn't allow custom Consumer properties > > > Key: KAFKA-5225 > URL: https://issues.apache.org/jira/browse/KAFKA-5225 > Project: Kafka > Issue Type: Bug > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Dustin Cote >Assignee: Bharat Viswanadham > > The StreamsResetter doesn't let the user pass in any configurations to the > embedded consumer. This is a problem in secured environments because you > can't configure the embedded consumer to talk to the cluster. The tool should > take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a > config file to be passed in the command line for such operations. > cc [~mjsax] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-5220) Application Reset Tool does not work with SASL
[ https://issues.apache.org/jira/browse/KAFKA-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-5220: - Assignee: (was: Bharat Viswanadham) > Application Reset Tool does not work with SASL > -- > > Key: KAFKA-5220 > URL: https://issues.apache.org/jira/browse/KAFKA-5220 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Reporter: Matthias J. Sax > > Resetting an application with SASL enabled fails with > {noformat} > ERROR: Request GROUP_COORDINATOR failed on brokers List(localhost:9092 (id: > -1 rack: null)) > {noformat} > We would need to allow additional configuration options that get picked up by > the internally used ZK client and KafkaConsumer. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties
[ https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16008338#comment-16008338 ] Bharat Viswanadham edited comment on KAFKA-5225 at 5/12/17 4:39 PM: Matthias, I will contribute to this Jira. was (Author: bharatviswa): Matthias will contribute to this Jira. > StreamsResetter doesn't allow custom Consumer properties > > > Key: KAFKA-5225 > URL: https://issues.apache.org/jira/browse/KAFKA-5225 > Project: Kafka > Issue Type: Bug > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Dustin Cote >Assignee: Bharat Viswanadham > > The StreamsResetter doesn't let the user pass in any configurations to the > embedded consumer. This is a problem in secured environments because you > can't configure the embedded consumer to talk to the cluster. The tool should > take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a > config file to be passed in the command line for such operations. > cc [~mjsax] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (KAFKA-5166) Add option "dry run" to Streams application reset tool
[ https://issues.apache.org/jira/browse/KAFKA-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-5166: -- Status: Patch Available (was: In Progress) > Add option "dry run" to Streams application reset tool > -- > > Key: KAFKA-5166 > URL: https://issues.apache.org/jira/browse/KAFKA-5166 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.0 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Minor > Labels: needs-kip > > We want to add an option to Streams application reset tool, that allow for a > "dry run". Ie, only prints what topics would get modified/deleted without > actually applying any actions. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-4850: -- Status: Patch Available (was: In Progress) > RocksDb cannot use Bloom Filters > > > Key: KAFKA-4850 > URL: https://issues.apache.org/jira/browse/KAFKA-4850 > Project: Kafka > Issue Type: Improvement > Components: streams >Affects Versions: 0.10.2.0 >Reporter: Eno Thereska >Assignee: Bharat Viswanadham > Fix For: 0.11.0.0 > > > Bloom Filters would speed up RocksDb lookups. However they currently do not > work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait > until that is released and tested. > Then we can add the line in RocksDbStore.java in openDb: > tableConfig.setFilter(new BloomFilter(10)); -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-4850: -- Status: In Progress (was: Patch Available) > RocksDb cannot use Bloom Filters > > > Key: KAFKA-4850 > URL: https://issues.apache.org/jira/browse/KAFKA-4850 > Project: Kafka > Issue Type: Improvement > Components: streams >Affects Versions: 0.10.2.0 >Reporter: Eno Thereska >Assignee: Bharat Viswanadham > Fix For: 0.11.0.0 > > > Bloom Filters would speed up RocksDb lookups. However they currently do not > work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait > until that is released and tested. > Then we can add the line in RocksDbStore.java in openDb: > tableConfig.setFilter(new BloomFilter(10)); -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (KAFKA-4850) RocksDb cannot use Bloom Filters
[ https://issues.apache.org/jira/browse/KAFKA-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-4850: -- Status: Patch Available (was: In Progress) > RocksDb cannot use Bloom Filters > > > Key: KAFKA-4850 > URL: https://issues.apache.org/jira/browse/KAFKA-4850 > Project: Kafka > Issue Type: Improvement > Components: streams >Affects Versions: 0.10.2.0 >Reporter: Eno Thereska >Assignee: Bharat Viswanadham > Fix For: 0.11.0.0 > > > Bloom Filters would speed up RocksDb lookups. However they currently do not > work in RocksDb 5.0.2. This has been fixed in trunk, but we'll have to wait > until that is released and tested. > Then we can add the line in RocksDbStore.java in openDb: > tableConfig.setFilter(new BloomFilter(10)); -- This message was sent by Atlassian JIRA (v6.3.15#6346)
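For reference, the one-line change named in the issue description would sit in context roughly like this. This is a sketch only, assuming the rocksdbjni `BlockBasedTableConfig`/`BloomFilter` API of that era, and it requires a RocksDB release newer than 5.0.2 where Bloom filters actually work:

```java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.Options;

public class RocksDbBloomFilterSketch {
    // Sketch: enable a Bloom filter with 10 bits per key, as suggested
    // for RocksDbStore.openDb() in the issue description.
    public static Options withBloomFilter() {
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setFilter(new BloomFilter(10));
        Options options = new Options();
        options.setTableFormatConfig(tableConfig);
        return options;
    }
}
```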
[jira] [Work started] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties
[ https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-5225 started by Bharat Viswanadham. - > StreamsResetter doesn't allow custom Consumer properties > > > Key: KAFKA-5225 > URL: https://issues.apache.org/jira/browse/KAFKA-5225 > Project: Kafka > Issue Type: Bug > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Dustin Cote >Assignee: Bharat Viswanadham > > The StreamsResetter doesn't let the user pass in any configurations to the > embedded consumer. This is a problem in secured environments because you > can't configure the embedded consumer to talk to the cluster. The tool should > take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a > config file to be passed in the command line for such operations. > cc [~mjsax] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (KAFKA-5225) StreamsResetter doesn't allow custom Consumer properties
[ https://issues.apache.org/jira/browse/KAFKA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-5225: -- Status: Patch Available (was: In Progress) > StreamsResetter doesn't allow custom Consumer properties > > > Key: KAFKA-5225 > URL: https://issues.apache.org/jira/browse/KAFKA-5225 > Project: Kafka > Issue Type: Bug > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Dustin Cote >Assignee: Bharat Viswanadham > > The StreamsResetter doesn't let the user pass in any configurations to the > embedded consumer. This is a problem in secured environments because you > can't configure the embedded consumer to talk to the cluster. The tool should > take an approach similar to `kafka.admin.ConsumerGroupCommand` which allows a > config file to be passed in the command line for such operations. > cc [~mjsax] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
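The fix direction named in the description can be sketched with plain `java.util.Properties`: load a user-supplied config file and layer it over the tool's defaults, in the spirit of `kafka.admin.ConsumerGroupCommand`'s config-file option. The class and method names here are illustrative, not the actual patch:

```java
import java.io.IOException;
import java.io.Reader;
import java.util.Properties;

public class ConsumerConfigMerge {
    // Layer user-supplied consumer properties (e.g. loaded from a file passed
    // on the command line) over the tool's defaults; user-supplied values win.
    public static Properties merge(Properties defaults, Reader userConfig) throws IOException {
        Properties merged = new Properties();
        merged.putAll(defaults);
        Properties user = new Properties();
        user.load(userConfig);
        merged.putAll(user);
        return merged;
    }
}
```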
[jira] [Work started] (KAFKA-5229) Reflections logs excessive warnings when scanning classpaths
[ https://issues.apache.org/jira/browse/KAFKA-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-5229 started by Bharat Viswanadham. - > Reflections logs excessive warnings when scanning classpaths > > > Key: KAFKA-5229 > URL: https://issues.apache.org/jira/browse/KAFKA-5229 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0, > 0.10.2.1 >Reporter: Ewen Cheslack-Postava >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > > We use Reflections to scan the classpath for available plugins (connectors, > converters, transformations), but when doing so Reflections tends to generate > a lot of log noise like this: > {code} > [2017-05-12 14:59:48,224] WARN could not get type for name > org.jboss.netty.channel.SimpleChannelHandler from any class loader > (org.reflections.Reflections:396) > org.reflections.ReflectionsException: could not get type for name > org.jboss.netty.channel.SimpleChannelHandler > at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390) > at org.reflections.Reflections.expandSuperTypes(Reflections.java:381) > at org.reflections.Reflections.(Reflections.java:126) > at > org.apache.kafka.connect.runtime.PluginDiscovery.scanClasspathForPlugins(PluginDiscovery.java:68) > at > org.apache.kafka.connect.runtime.AbstractHerder$1.run(AbstractHerder.java:391) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.ClassNotFoundException: > org.jboss.netty.channel.SimpleChannelHandler > at java.net.URLClassLoader$1.run(URLClassLoader.java:366) > at java.net.URLClassLoader$1.run(URLClassLoader.java:355) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:354) > at java.lang.ClassLoader.loadClass(ClassLoader.java:425) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) > at java.lang.ClassLoader.loadClass(ClassLoader.java:358) > at 
org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388) > ... 5 more > {code} > Despite being benign, these warnings worry users, especially first time users. > We should either a) see if we can get Reflections to turn off these specific > warnings via some config or b) make Reflections only log at > WARN by default > in our log4j config. (b) is probably safe since we should only be seeing > these at startup and I don't think I've seen any actual issue logged at WARN. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-5229) Reflections logs excessive warnings when scanning classpaths
[ https://issues.apache.org/jira/browse/KAFKA-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-5229: - Assignee: Bharat Viswanadham > Reflections logs excessive warnings when scanning classpaths > > > Key: KAFKA-5229 > URL: https://issues.apache.org/jira/browse/KAFKA-5229 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0, > 0.10.2.1 >Reporter: Ewen Cheslack-Postava >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > > We use Reflections to scan the classpath for available plugins (connectors, > converters, transformations), but when doing so Reflections tends to generate > a lot of log noise like this: > {code} > [2017-05-12 14:59:48,224] WARN could not get type for name > org.jboss.netty.channel.SimpleChannelHandler from any class loader > (org.reflections.Reflections:396) > org.reflections.ReflectionsException: could not get type for name > org.jboss.netty.channel.SimpleChannelHandler > at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390) > at org.reflections.Reflections.expandSuperTypes(Reflections.java:381) > at org.reflections.Reflections.(Reflections.java:126) > at > org.apache.kafka.connect.runtime.PluginDiscovery.scanClasspathForPlugins(PluginDiscovery.java:68) > at > org.apache.kafka.connect.runtime.AbstractHerder$1.run(AbstractHerder.java:391) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.ClassNotFoundException: > org.jboss.netty.channel.SimpleChannelHandler > at java.net.URLClassLoader$1.run(URLClassLoader.java:366) > at java.net.URLClassLoader$1.run(URLClassLoader.java:355) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:354) > at java.lang.ClassLoader.loadClass(ClassLoader.java:425) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) > at 
java.lang.ClassLoader.loadClass(ClassLoader.java:358) > at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388) > ... 5 more > {code} > Despite being benign, these warnings worry users, especially first time users. > We should either a) see if we can get Reflections to turn off these specific > warnings via some config or b) make Reflections only log at > WARN by default > in our log4j config. (b) is probably safe since we should only be seeing > these at startup and I don't think I've seen any actual issue logged at WARN. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (KAFKA-5229) Reflections logs excessive warnings when scanning classpaths
[ https://issues.apache.org/jira/browse/KAFKA-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-5229: -- Status: Patch Available (was: In Progress) > Reflections logs excessive warnings when scanning classpaths > > > Key: KAFKA-5229 > URL: https://issues.apache.org/jira/browse/KAFKA-5229 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Affects Versions: 0.10.2.1, 0.10.2.0, 0.10.1.1, 0.10.1.0, 0.10.0.1, > 0.10.0.0 >Reporter: Ewen Cheslack-Postava >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > > We use Reflections to scan the classpath for available plugins (connectors, > converters, transformations), but when doing so Reflections tends to generate > a lot of log noise like this: > {code} > [2017-05-12 14:59:48,224] WARN could not get type for name > org.jboss.netty.channel.SimpleChannelHandler from any class loader > (org.reflections.Reflections:396) > org.reflections.ReflectionsException: could not get type for name > org.jboss.netty.channel.SimpleChannelHandler > at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390) > at org.reflections.Reflections.expandSuperTypes(Reflections.java:381) > at org.reflections.Reflections.(Reflections.java:126) > at > org.apache.kafka.connect.runtime.PluginDiscovery.scanClasspathForPlugins(PluginDiscovery.java:68) > at > org.apache.kafka.connect.runtime.AbstractHerder$1.run(AbstractHerder.java:391) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.ClassNotFoundException: > org.jboss.netty.channel.SimpleChannelHandler > at java.net.URLClassLoader$1.run(URLClassLoader.java:366) > at java.net.URLClassLoader$1.run(URLClassLoader.java:355) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:354) > at java.lang.ClassLoader.loadClass(ClassLoader.java:425) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) > at 
java.lang.ClassLoader.loadClass(ClassLoader.java:358) > at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388) > ... 5 more > {code} > Despite being benign, these warnings worry users, especially first time users. > We should either a) see if we can get Reflections to turn off these specific > warnings via some config or b) make Reflections only log at > WARN by default > in our log4j config. (b) is probably safe since we should only be seeing > these at startup and I don't think I've seen any actual issue logged at WARN. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
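Option (b) from the issue can be expressed as a one-line addition to the Connect log4j properties file; the logger name is assumed to be the `org.reflections` package root:

```properties
# Let only Reflections messages above WARN (i.e. ERROR) through, silencing
# the benign classpath-scanning warnings emitted at startup.
log4j.logger.org.reflections=ERROR
```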
[jira] [Updated] (KAFKA-5210) Application Reset Tool does not need to seek for internal topics
[ https://issues.apache.org/jira/browse/KAFKA-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated KAFKA-5210: -- Status: Patch Available (was: In Progress) > Application Reset Tool does not need to seek for internal topics > > > Key: KAFKA-5210 > URL: https://issues.apache.org/jira/browse/KAFKA-5210 > Project: Kafka > Issue Type: Improvement > Components: streams, tools >Affects Versions: 0.10.2.1 >Reporter: Matthias J. Sax >Assignee: Bharat Viswanadham >Priority: Trivial > Labels: beginner, newbie > > As KAFKA-4456 got resolved, there is no need to modify offsets of internal topics > with the application reset tool, as those offsets will be deleted anyway. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2
[ https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-4171: - Assignee: Bharat Viswanadham > Kafka-connect prints outs keystone and truststore password in log2 > -- > > Key: KAFKA-4171 > URL: https://issues.apache.org/jira/browse/KAFKA-4171 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Affects Versions: 0.10.0.0 >Reporter: Akshath Patkar >Assignee: Bharat Viswanadham > > Kafka-connect prints outs keystone and truststore password in log > [2016-09-14 16:30:33,971] WARN The configuration > consumer.ssl.truststore.password = X was supplied but isn't a known > config. (org.apache.kafka.clients.consumer.ConsumerConfig:186) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2
[ https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014526#comment-16014526 ] Bharat Viswanadham commented on KAFKA-4171: --- Hi, I think this issue has been resolved. The logUnused() method now logs only the key and does not print the value, so this issue will not be seen. > Kafka-connect prints outs keystone and truststore password in log2 > -- > > Key: KAFKA-4171 > URL: https://issues.apache.org/jira/browse/KAFKA-4171 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Affects Versions: 0.10.0.0 >Reporter: Akshath Patkar >Assignee: Bharat Viswanadham > > Kafka-connect prints outs keystone and truststore password in log > [2016-09-14 16:30:33,971] WARN The configuration > consumer.ssl.truststore.password = X was supplied but isn't a known > config. (org.apache.kafka.clients.consumer.ConsumerConfig:186) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2
[ https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014528#comment-16014528 ] Bharat Viswanadham commented on KAFKA-4171: --- Hi Ewen, Please let me know if any more work is pending for this work item. > Kafka-connect prints outs keystone and truststore password in log2 > -- > > Key: KAFKA-4171 > URL: https://issues.apache.org/jira/browse/KAFKA-4171 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Affects Versions: 0.10.0.0 >Reporter: Akshath Patkar >Assignee: Bharat Viswanadham > > Kafka-connect prints outs keystone and truststore password in log > [2016-09-14 16:30:33,971] WARN The configuration > consumer.ssl.truststore.password = X was supplied but isn't a known > config. (org.apache.kafka.clients.consumer.ConsumerConfig:186) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Work started] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2
[ https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-4171 started by Bharat Viswanadham. - > Kafka-connect prints outs keystone and truststore password in log2 > -- > > Key: KAFKA-4171 > URL: https://issues.apache.org/jira/browse/KAFKA-4171 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Affects Versions: 0.10.0.0 >Reporter: Akshath Patkar >Assignee: Bharat Viswanadham > > Kafka-connect prints outs keystone and truststore password in log > [2016-09-14 16:30:33,971] WARN The configuration > consumer.ssl.truststore.password = X was supplied but isn't a known > config. (org.apache.kafka.clients.consumer.ConsumerConfig:186) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-4278) Undocumented REST resources
[ https://issues.apache.org/jira/browse/KAFKA-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-4278: - Assignee: Bharat Viswanadham > Undocumented REST resources > --- > > Key: KAFKA-4278 > URL: https://issues.apache.org/jira/browse/KAFKA-4278 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Reporter: Gwen Shapira >Assignee: Bharat Viswanadham > Labels: newbie > > We've added some REST resources and I think we didn't document them. > / - get version > /connector-plugins - show installed connectors > Those are the ones I've found (or rather, failed to find) - there could be > more. > Perhaps the best solution is to auto-generate the REST documentation the way > we generate configuration docs? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Work started] (KAFKA-4278) Undocumented REST resources
[ https://issues.apache.org/jira/browse/KAFKA-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on KAFKA-4278 started by Bharat Viswanadham. - > Undocumented REST resources > --- > > Key: KAFKA-4278 > URL: https://issues.apache.org/jira/browse/KAFKA-4278 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Reporter: Gwen Shapira >Assignee: Bharat Viswanadham > Labels: newbie > > We've added some REST resources and I think we didn't document them. > / - get version > /connector-plugins - show installed connectors > Those are the ones I've found (or rather, failed to find) - there could be > more. > Perhaps the best solution is to auto-generate the REST documentation the way > we generate configuration docs? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (KAFKA-4278) Undocumented REST resources
[ https://issues.apache.org/jira/browse/KAFKA-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned KAFKA-4278: - Assignee: (was: Bharat Viswanadham) > Undocumented REST resources > --- > > Key: KAFKA-4278 > URL: https://issues.apache.org/jira/browse/KAFKA-4278 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Reporter: Gwen Shapira > Labels: newbie > > We've added some REST resources and I think we didn't document them. > / - get version > /connector-plugins - show installed connectors > Those are the ones I've found (or rather, failed to find) - there could be > more. > Perhaps the best solution is to auto-generate the REST documentation the way > we generate configuration docs? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (KAFKA-4278) Undocumented REST resources
[ https://issues.apache.org/jira/browse/KAFKA-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16016248#comment-16016248 ] Bharat Viswanadham commented on KAFKA-4278: --- [~ewencp] I thought of taking this work item, but I am not sure how to get started; any pointers or examples for developing docs for the REST services? For configs, I have seen there is a main function that converts config key/value pairs to HTML and prints them, but I am not sure how to do the same for the REST services. > Undocumented REST resources > --- > > Key: KAFKA-4278 > URL: https://issues.apache.org/jira/browse/KAFKA-4278 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Reporter: Gwen Shapira > Labels: newbie > > We've added some REST resources and I think we didn't document them. > / - get version > /connector-plugins - show installed connectors > Those are the ones I've found (or rather, failed to find) - there could be > more. > Perhaps the best solution is to auto-generate the REST documentation the way > we generate configuration docs? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
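The auto-generation idea in the issue description follows the config-docs pattern: a `main` method that renders documentation as HTML. A minimal sketch of that pattern applied to a hand-maintained endpoint table; the class name and endpoint list here are illustrative, not an existing Kafka class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RestDocGenerator {
    // Illustrative only: render a small endpoint table as HTML, mirroring
    // how config docs are generated by a main() that prints markup.
    public static String toHtml(Map<String, String> endpoints) {
        StringBuilder sb = new StringBuilder("<table>\n<tr><th>Resource</th><th>Description</th></tr>\n");
        for (Map.Entry<String, String> e : endpoints.entrySet()) {
            sb.append("<tr><td>").append(e.getKey())
              .append("</td><td>").append(e.getValue()).append("</td></tr>\n");
        }
        return sb.append("</table>").toString();
    }

    public static void main(String[] args) {
        Map<String, String> endpoints = new LinkedHashMap<>();
        endpoints.put("/", "get version");
        endpoints.put("/connector-plugins", "show installed connectors");
        System.out.println(toHtml(endpoints));
    }
}
```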
[jira] [Commented] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2
[ https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16016258#comment-16016258 ] Bharat Viswanadham commented on KAFKA-4171: --- [~ewencp] The behavior was changed in the AbstractConfig logUnused() method to print only keys, not values, as part of KAFKA-4056 (Kafka logs values of sensitive configs like passwords): in the case of unknown configs, only the name is listed, without the value. So, I think this Jira can be closed as fixed. > Kafka-connect prints outs keystone and truststore password in log2 > -- > > Key: KAFKA-4171 > URL: https://issues.apache.org/jira/browse/KAFKA-4171 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Affects Versions: 0.10.0.0 >Reporter: Akshath Patkar >Assignee: Bharat Viswanadham > > Kafka-connect prints outs keystone and truststore password in log > [2016-09-14 16:30:33,971] WARN The configuration > consumer.ssl.truststore.password = X was supplied but isn't a known > config. (org.apache.kafka.clients.consumer.ConsumerConfig:186) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
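The key-only behavior described in the comment (introduced by KAFKA-4056) can be illustrated with a small stdlib sketch; `formatUnused` is an illustrative helper, not the actual AbstractConfig method:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class UnusedConfigReport {
    // Illustrative: report unknown config entries by key only, so sensitive
    // values such as ssl.truststore.password never reach the log.
    public static List<String> formatUnused(Map<String, String> supplied, Set<String> known) {
        List<String> lines = new ArrayList<>();
        for (String key : supplied.keySet()) {
            if (!known.contains(key)) {
                lines.add("The configuration '" + key + "' was supplied but isn't a known config.");
            }
        }
        return lines;
    }
}
```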