Re: Console producer can connect locally but not remotely

2017-01-25 Thread Peter Kopias
Hi. This looks like some network/config issue. 1. Check that zookeeper (zkA) is available and reachable from both hosts (and that it can connect); check /etc/hosts on both machines, as there might be some 127.0.0.1 issues there. 2. Verify that the process is bound to the public network interface:9092
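A minimal sketch of the broker settings point 2 refers to, assuming a Kafka 0.10.x server.properties; the public hostname is a placeholder, not from the thread. Without advertised.listeners set, the broker may advertise a loopback or internal hostname that remote clients cannot reach:

    # server.properties (sketch; host name is a placeholder)
    # Bind on all interfaces so the broker accepts remote connections
    listeners=PLAINTEXT://0.0.0.0:9092
    # Address the broker hands out to clients; must be reachable from their side
    advertised.listeners=PLAINTEXT://kafka.example.com:9092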

Should the number of App instances and Zookeeper servers be the same?

2017-01-25 Thread kant kodali
Should the number of App instances and Zookeeper servers be the same? I understand the requirement of 2F+1 servers to tolerate F failures, but that is to tolerate failures of the Zookeeper instances themselves. What about the number of App instances? For example, say I have 3 Zookeeper servers and I have 2
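As a worked example of the quorum arithmetic referenced above: with 2F+1 = 3 Zookeeper servers, F = 1 failure is tolerated; with 5 servers, F = 2. The application instance count is independent of this: it is typically sized to throughput and, for consumers, the effective parallelism is capped by the partition count of the topics being read, not by the Zookeeper ensemble size.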

Re: Reg: Kafka ACLS

2017-01-25 Thread Manikumar
Yes, we can use Kafka ACLs with the SASL/PLAIN mechanism. On Thu, Jan 26, 2017 at 2:38 AM, BigData dev wrote: > Hi, > I have a question: can we use Kafka ACLs with only the SASL/PLAIN mechanism? > Because even after I enabled them, I am still able to produce/consume from topics. > > And one more observation is
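A hedged sketch of the broker-side settings usually involved here, since ACLs are only enforced once an authorizer is configured (which would explain still being able to produce/consume); property names are from Kafka 0.10.x, and the principal/topic names are placeholders:

    # server.properties: enable the ACL authorizer
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
    # With no matching ACL, deny by default
    allow.everyone.if.no.acl.found=false

    # Grant a SASL/PLAIN user read access to a topic (placeholder names)
    bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
      --add --allow-principal User:alice --operation Read --topic test-topic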

Console producer can connect locally but not remotely

2017-01-25 Thread Zac Harvey
I have a single Kafka node at, say, IP address 1.2.3.4. If I SSH into that node from 2 different terminal windows, run the console consumer from one terminal and the console producer from the other, everything works great: # Run the consumer from terminal 1 kafka-console-consumer.s
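For reference, a sketch of the two local commands the post describes, using the 0.10-era console tools and a placeholder topic name:

    # Terminal 1: consume from the local broker
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test

    # Terminal 2: produce to the same broker
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Run from a remote machine, the same commands pointed at 1.2.3.4:9092 will only succeed if the broker advertises an address reachable from outside (see the advertised.listeners note in the reply above).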

Reg: Kafka ACLS

2017-01-25 Thread BigData dev
Hi, I have a question: can we use Kafka ACLs with only the SASL/PLAIN mechanism? Because even after I enabled them, I am still able to produce/consume from topics. And one more observation: in kafka-_jaas.conf there is no client section, so we will get a WARN as below, as we don't have this kind of mechanism wit

Re: Confluent platform for Kafka 0.10.1.1

2017-01-25 Thread Meghana Narasimhan
Awesome! Thank you. On Wed, Jan 25, 2017 at 2:25 PM, Hans Jespersen wrote: > Today! Confluent 3.1.2 supports Kafka 0.10.1.1 > https://www.confluent.io/blog/confluent-delivers-upgrades-clients-kafka-streams-brokers-apache-kafka-0-10-1-1/

Re: Track Latest Count - Unique Key

2017-01-25 Thread Nick DeCoursin
@Damian Thank you!! On 25 January 2017 at 13:37, Damian Guy wrote: > You can have a look at > https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KGroupedStreamImpl.java#L150 > for an example. Obviously exactly-once semantics for Kafk

Re: Confluent platform for Kafka 0.10.1.1

2017-01-25 Thread Hans Jespersen
Today! Confluent 3.1.2 supports Kafka 0.10.1.1 https://www.confluent.io/blog/confluent-delivers-upgrades-clients-kafka-streams-brokers-apache-kafka-0-10-1-1/ -hans > On Jan 25, 201

Re: Kafka Connect Consumer throwing ILLEGAL_GENERATION when committing offsets and going into re-balance state

2017-01-25 Thread Srikrishna Alla
I am seeing this in the server logs. It looks like the GroupCoordinator has the Connect consumer group in a constant state of rebalance. Can someone please check this and let me know what is going wrong here? [2017-01-20 16:45:33,187] INFO [GroupCoordinator 1001]: Preparing to restabilize group alert
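A hedged sketch of the consumer-side settings that commonly drive this rebalance loop when processing a batch takes longer than the session/poll deadlines; the values are illustrative, not a recommendation:

    # consumer / Connect worker config (sketch, Kafka 0.10.1+ names)
    # Failure detection via heartbeats, sent from a background thread
    session.timeout.ms=30000
    # Max time allowed between poll() calls before the member is evicted
    max.poll.interval.ms=300000
    # Smaller batches shorten the gap between polls
    max.poll.records=100

If a member misses these deadlines, the group moves to a new generation; the member's next offset commit then carries a stale generation id and is rejected with ILLEGAL_GENERATION, which matches the symptom in the subject line.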

Confluent platform for Kafka 0.10.1.1

2017-01-25 Thread Meghana Narasimhan
Hi, As far as I understand, the current CP v3.1.1 supports Kafka 0.10.1.0. When is the next Confluent platform release supporting Kafka 0.10.1.1 planned? Thanks, Meghana

Re: Track Latest Count - Unique Key

2017-01-25 Thread Damian Guy
You can have a look at https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KGroupedStreamImpl.java#L150 for an example. Obviously exactly-once semantics for Kafka haven't been completed, so this would be at-least-once. On Wed, 25 Jan 2017 at

Track Latest Count - Unique Key

2017-01-25 Thread Nick DeCoursin
From the documentation: > The counting operation for record streams is trivial to implement: you can maintain a local state store that tracks the latest count for each key, and, upon receiving a new record, update the correspondi
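A minimal sketch of the counting pattern the documentation excerpt describes, written against the 0.10.1-era Kafka Streams API; the application id, topic names, and store name ("events", "counts-output", "counts-store") are placeholders, not anything from the thread:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    public class LatestCountPerKey {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "latest-count-per-key");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
            props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

            KStreamBuilder builder = new KStreamBuilder();
            KStream<String, String> events = builder.stream("events");

            // count() materializes a local state store that tracks the
            // latest count per key, updated as each new record arrives.
            KTable<String, Long> counts = events.groupByKey().count("counts-store");

            // Emit the running counts to an output topic.
            counts.to(Serdes.String(), Serdes.Long(), "counts-output");

            new KafkaStreams(builder, props).start();
        }
    }

As Damian notes earlier in the thread, this is at-least-once: a crash between the state-store update and the offset commit can replay records and over-count.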