Re: KIP-122: Add a tool to Reset Consumer Group Offsets

2017-02-07 Thread Gwen Shapira
As long as the CLI is a bit consistent? Like, not just adding 3 arguments and a JSON parser to the existing tool, right? On Tue, Feb 7, 2017 at 10:29 PM, Onur Karaman wrote: > I think it makes sense to just add the feature to kafka-consumer-groups.sh > > On Tue, Feb 7, 2017 at 10:24 PM, Gwen Shapira ...

Re: KIP-122: Add a tool to Reset Consumer Group Offsets

2017-02-07 Thread Onur Karaman
I think it makes sense to just add the feature to kafka-consumer-groups.sh On Tue, Feb 7, 2017 at 10:24 PM, Gwen Shapira wrote: > Thanks for the KIP. I'm super happy about adding the capability. > > I hate the interface, though. It looks exactly like the replica > assignment tool. A tool everyone ...

Re: KIP-122: Add a tool to Reset Consumer Group Offsets

2017-02-07 Thread Gwen Shapira
Thanks for the KIP. I'm super happy about adding the capability. I hate the interface, though. It looks exactly like the replica assignment tool. A tool everyone loves so much that there are multiple projects, open and closed, that try to fix it. Can we swap it with something that looks a bit more ...

Re: Kafka Streams: Is automatic repartitioning before joins public/stable API?

2017-02-07 Thread Matthias J. Sax
Yes, you can rely on this. The feature was introduced in Kafka 0.10.1 and will stay like this. We have already updated the JavaDocs (for the upcoming 0.10.2 release, due in the next few weeks) to explain this, too. See https://issues.apache.org/jira/browse/KAFKA-3561 -Matthias On 2/7/17 ...
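
For illustration, a minimal Kafka Streams sketch (0.10.1-era API) of the pattern Matthias is confirming: a key-changing operation followed by a join, which makes Streams insert an internal repartition topic automatically. The topic names, store name, and string-typed values are placeholders, not taken from the thread.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    public class RepartitionBeforeJoinSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "repartition-join-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
            props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

            KStreamBuilder builder = new KStreamBuilder();
            // Orders keyed by orderId; the value is just the customerId to keep the sketch self-contained.
            KStream<String, String> orders = builder.stream("orders");
            // Customer names keyed by customerId.
            KTable<String, String> customers = builder.table("customers", "customers-store");

            // selectKey() changes the key, so the stream is flagged as "repartition required".
            KStream<String, String> byCustomer = orders.selectKey((orderId, customerId) -> customerId);

            // Before this join runs, Streams pipes byCustomer through an internal repartition
            // topic so it is co-partitioned with the customers table.
            KStream<String, String> enriched =
                    byCustomer.leftJoin(customers, (customerId, name) -> customerId + " -> " + name);
            enriched.to("enriched-orders");

            new KafkaStreams(builder, props).start();
        }
    }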

Re: KTable and cleanup.policy=compact

2017-02-07 Thread Matthias J. Sax
Yes, that is correct. -Matthias On 2/7/17 6:39 PM, Mathieu Fenniak wrote: > Hey kafka users, > > Is it correct that a Kafka topic that is used for a KTable should be set to > cleanup.policy=compact? > > I've never noticed until today that the KStreamBuilder#table() > documentation says: "However ...
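
For reference, switching an existing input topic to compaction can be done with kafka-configs.sh; the ZooKeeper address and topic name below are placeholders.

    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-ktable-input \
      --add-config cleanup.policy=compact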

Kafka Client 0.8.2.2 talks to Kafka Server 0.10.1.1

2017-02-07 Thread Jeffrey Zhang
Hi, I am having difficulty getting Kafka Client 0.8.2.2 to consume messages from a Kafka Server 0.10.1.1, though I can produce messages from this 0.8.2.2 client to the same 0.10.1.1 server. My questions: 1) Can Kafka Client 0.8.2.2 consume messages from Server 0.10.1.1? 1.1) If yes, ...

Kafka Streams: Is automatic repartitioning before joins public/stable API?

2017-02-07 Thread Dmitry Minkovsky
I accidentally stumbled upon `repartitionRequired` and `repartitionForJoin` in `KStreamImpl`, which are examined/called before KStream join operations to determine whether a repartition is needed. The javadoc for `repartitionForJoin` explains the functionality: > Repartition a stream. This is ...

Re: KIP-122: Add a tool to Reset Consumer Group Offsets

2017-02-07 Thread Dong Lin
Hey Jorge, Thanks for the KIP. I have some quick comments: - Should we allow the user to use a wildcard to reset offsets of all groups for a given topic as well? - Should we allow the user to specify a timestamp per topic partition in the json file as well? - Should the script take some credential file to make ...

KIP-122: Add a tool to Reset Consumer Group Offsets

2017-02-07 Thread Jorge Esteban Quilcate Otoya
Hi all, I would like to propose a KIP to Add a tool to Reset Consumer Group Offsets. https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+a+tool+to+Reset+Consumer+Group+Offsets Please, take a look at the proposal and share your feedback. Thanks, Jorge.
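
A hedged sketch of the kind of invocation the proposal is aiming at, assuming the feature is folded into kafka-consumer-groups.sh as suggested elsewhere in the thread; the flag names are illustrative and may differ from the KIP page.

    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --group my-group --topic my-topic \
      --reset-offsets --to-earliest --execute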

KTable and cleanup.policy=compact

2017-02-07 Thread Mathieu Fenniak
Hey kafka users, Is it correct that a Kafka topic that is used for a KTable should be set to cleanup.policy=compact? I've never noticed until today that the KStreamBuilder#table() documentation says: "However, no internal changelog topic is created since the original input topic can be used for recovery ...

How to measure the load capacity of kafka cluster

2017-02-07 Thread Jiecxy
How can I measure the load capacity of one broker, or of the whole cluster? Or the maximum throughput? Does capacity scale with the number of brokers, e.g. is the capacity of two brokers twice that of one broker?
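
One common way to put numbers on this is the bundled perf tools; the broker address, topic, and record counts below are placeholders. Running the same test against one broker and then against the full cluster (with enough partitions to spread the load) shows how close to linear the scaling is for a given replication factor.

    bin/kafka-producer-perf-test.sh --topic perf-test \
      --num-records 1000000 --record-size 100 --throughput -1 \
      --producer-props bootstrap.servers=localhost:9092 acks=1

The consumption side can be measured with kafka-consumer-perf-test.sh.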

Re: Kafka Connect - Unknown magic byte

2017-02-07 Thread Gwen Shapira
Since the data that goes through your Streams app is the one with the bad magic byte, I suspect your Streams Serde is not serializing Avro correctly (i.e. in the format that the Connect converter requires). Can you share your Serde code? Gwen On Tue, Feb 7, 2017 at 10:49 AM, Nick DeCoursin wrote: ...
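
For context on the error: Connect's AvroConverter (and the Schema Registry deserializer behind it) expects every record to start with magic byte 0x0 followed by a 4-byte schema id, so "Unknown magic byte" usually means the upstream app wrote plain Avro instead of that wire format. A sketch of the worker-side converter settings involved; the registry URL is a placeholder.

    key.converter=io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url=http://localhost:8081
    value.converter=io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url=http://localhost:8081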

Reg: Kafka Kerberos

2017-02-07 Thread BigData dev
Hi, I am using Kafka 0.10.1.0 on a kerberized cluster. Kafka_jaas.conf file:

    Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        keyTab="/etc/security/keytabs/kafka.service.keytab"
        storeKey=true
        useTicketCache=false
        serviceName="zookeeper"
        principal= ...
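
For completeness, the client properties that usually accompany such a JAAS file look roughly like the following; the values are placeholders and may not match this particular setup.

    security.protocol=SASL_PLAINTEXT
    sasl.kerberos.service.name=kafka

The process is then typically started with -Djava.security.auth.login.config pointing at the JAAS file (for example via KAFKA_OPTS).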

Re: KIP-121 [Discuss]: Add KStream peek method

2017-02-07 Thread Gwen Shapira
Far better! Thank you! On Tue, Feb 7, 2017 at 10:19 AM, Steven Schlansker wrote: > Thanks for the feedback. I improved the javadoc a bit, do you like it better? > > /** > * Perform an action on each record of {@code KStream}. > * This is a stateless record-by-record operation (cf. ...

Kafka Connect - Unknown magic byte

2017-02-07 Thread Nick DeCoursin
Hello, I'm experiencing a problem using Kafka Connect's JdbcSinkConnector. I'm creating two connectors using the following script: `./create-connector.sh test` and `./create-connector.sh test2`. The first one, `test`, works; the second one, `test2`, doesn't. Meaning, the first one successfully ...
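
The create-connector.sh script itself is not shown in the post; presumably it wraps the Connect REST API. A hedged sketch of what such a call typically looks like for the JDBC sink; the connector name, topic, and connection string are placeholders.

    curl -X POST http://localhost:8083/connectors \
      -H "Content-Type: application/json" \
      -d '{
            "name": "test",
            "config": {
              "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
              "tasks.max": "1",
              "topics": "test",
              "connection.url": "jdbc:postgresql://localhost:5432/mydb?user=me&password=secret",
              "auto.create": "true"
            }
          }'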

Re: KIP-121 [Discuss]: Add KStream peek method

2017-02-07 Thread Steven Schlansker
Thanks for the feedback. I improved the javadoc a bit, do you like it better?

    /**
     * Perform an action on each record of {@code KStream}.
     * This is a stateless record-by-record operation (cf. {@link #process(ProcessorSupplier, String...)}).
     *
     * Peek is a non-terminal operation ...
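
For illustration, a usage sketch of the proposed method; the signature mirrors the existing foreach(ForeachAction) style, and the counter is just an example side effect. Since peek() only exists once KIP-121 is merged, this does not compile against released versions.

    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class PeekSketch {
        public static void main(String[] args) {
            final AtomicLong processed = new AtomicLong();
            KStreamBuilder builder = new KStreamBuilder();
            KStream<String, String> stream = builder.stream("input");
            // peek() observes each record (here: counting it) and returns the same stream,
            // so the topology keeps flowing downstream.
            stream.peek((key, value) -> processed.incrementAndGet())
                  .mapValues(value -> value.toUpperCase())
                  .to("output");
        }
    }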

Re: Kafka Producer Protocol

2017-02-07 Thread Magnus Edenhill
Hi Steve, this is probably what you are looking for: http://kafka.apache.org/protocol.html /Magnus 2017-02-07 15:57 GMT+01:00 Hopson, Stephen : > Can anyone point me to documentation for the protocol exchanges between a > producer client and Kafka? I do not mean the message formats, I have those. ...

Kafka Producer Protocol

2017-02-07 Thread Hopson, Stephen
Can anyone point me to documentation for the protocol exchanges between a producer client and Kafka? I do not mean the message formats, I have those. Thanks. Steve

Re: [DISCUSS] KIP-120: Cleanup Kafka Streams builder API

2017-02-07 Thread Mathieu Fenniak
On Mon, Feb 6, 2017 at 2:35 PM, Matthias J. Sax wrote: > - adding KStreamBuilder#topologyBuilder() seems like a good idea to > address any concern with limited access to TopologyBuilder and the DSL/PAPI > mix-and-match approach. However, we should try to cover as much as > possible with #process() ...
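
As a reference point for the #process() part of the discussion, a minimal sketch of attaching a Processor API processor to a DSL stream; the processor body is illustrative.

    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorContext;

    public class ProcessSketch {
        public static void main(String[] args) {
            KStreamBuilder builder = new KStreamBuilder();
            KStream<String, String> stream = builder.stream("input");
            // process() hands each record to a Processor; it returns void, i.e. it is a
            // terminal operation for the DSL, which is part of what the discussion weighs
            // against exposing the underlying TopologyBuilder directly.
            stream.process(() -> new Processor<String, String>() {
                private ProcessorContext context;
                @Override public void init(ProcessorContext context) { this.context = context; }
                @Override public void process(String key, String value) { /* custom logic here */ }
                @Override public void punctuate(long timestamp) { }
                @Override public void close() { }
            });
        }
    }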

Re: SASL Security Roadmap in Kafka Connect

2017-02-07 Thread Stephane Maarek
Hi Ismael, That’s great news! Is there anywhere in the docs showing an example? Thanks, Stephane On 7 February 2017 at 9:44:46 pm, Ismael Juma (ism...@juma.me.uk) wrote: Hi Stephane, Kafka 0.10.2.0 has removed this restriction. RC0 has been announced, maybe you can try it and see if it works for you? ...
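
Assuming the restriction in question is the single JVM-wide static JAAS file, the relevant 0.10.2 change is the per-client sasl.jaas.config property from KIP-85. A hedged sketch of how that might look in a worker properties file; the mechanism and credentials are placeholders, and a Kerberos setup would use Krb5LoginModule instead.

    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="connect" password="connect-secret";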

Re: SASL Security Roadmap in Kafka Connect

2017-02-07 Thread Ismael Juma
Hi Stephane, Kafka 0.10.2.0 has removed this restriction. RC0 has been announced, maybe you can try it and see if it works for you? Ismael On Mon, Feb 6, 2017 at 10:34 PM, Stephane Maarek < steph...@simplemachines.com.au> wrote: > Hi, > > As written here: > http://docs.confluent.io/3.1.2/connec

Re: KIP-121 [Discuss]: Add KStream peek method

2017-02-07 Thread Michael Noll
Many thanks for the KIP and the PR, Steven! My opinion, too, is that we should consider including this. One thing that I would like to see clarified is the difference between the proposed peek() and existing functions map() and foreach(), for instance. My understanding (see also the Java 8 links ...
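
A compact way to see the distinction being asked about, using the proposed signature; all three lambdas are illustrative, and peek() itself is still only a proposal at this point.

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class PeekVsMapVsForeach {
        public static void main(String[] args) {
            KStream<String, String> input = new KStreamBuilder().stream("input");

            // map() transforms each record and returns a new KStream.
            KStream<String, Integer> lengths = input.map((k, v) -> KeyValue.pair(k, v.length()));

            // foreach() performs a side effect and is terminal: it returns void.
            input.foreach((k, v) -> System.out.println(k + "=" + v));

            // The proposed peek() performs a side effect like foreach(), but returns the
            // same KStream, so the topology can continue downstream.
            input.peek((k, v) -> System.out.println(k + "=" + v)).to("output");
        }
    }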

Re: Need help in understanding bunch of rocksdb errors on kafka_2.10-0.10.1.1

2017-02-07 Thread Damian Guy
Hi Sachin, Sorry I misunderstood what you had said. You are running 3 instances, one per machine? I thought you said you were running 3 instances on each machine. Regarding partitions: you are better off having more partitions, as this affects the maximum degree of parallelism you can achieve ...

Re: KIP-121 [Discuss]: Add KStream peek method

2017-02-07 Thread Damian Guy
Hi Steven, Thanks for the KIP. I think this is a worthy addition to the API. Thanks, Damian On Tue, 7 Feb 2017 at 09:30 Eno Thereska wrote: > Hi, > > I like the proposal, thank you. I have found it frustrating myself not to > be able to understand simple things, like how many records have been ...

Re: KIP-121 [Discuss]: Add KStream peek method

2017-02-07 Thread Eno Thereska
Hi, I like the proposal, thank you. I have found it frustrating myself not to be able to understand simple things, like how many records have been processed so far. The peek method would allow those kinds of diagnostics and debugging. Gwen, it is possible to do this with the existing ...