Re: ConsumerGroupCommand and ConsumerOffsetChecker

2016-02-18 Thread Amoxicillin
I tried as you suggested, but still no output of any group info. On Fri, Feb 19, 2016 at 2:45 PM, tao xiao wrote: > That is what I mean alive. If you use new consumer connecting to broker > you should use --new-consumer option to list all consumer groups > > kafka-run-class.sh kafka.admin.Consu

Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled

2016-02-18 Thread Harsha
Did you try what Adam suggested in the earlier email? Also, as a quick check, you can try removing the keystore and key.password configs from the client side. -Harsha On Thu, Feb 18, 2016, at 02:49 PM, Srikrishna Alla wrote: > Hi, > > We are getting the below error when trying to use a Java new producer

Re: ConsumerGroupCommand and ConsumerOffsetChecker

2016-02-18 Thread tao xiao
That is what I mean alive. If you use new consumer connecting to broker you should use --new-consumer option to list all consumer groups kafka-run-class.sh kafka.admin.ConsumerGroupCommand --list --new-consumer --bootstrap-server localhost:9092 On Fri, 19 Feb 2016 at 14:18 Amoxicillin wrote: >
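
Spelled out, the invocations under discussion look like the following sketch (the group name is a placeholder, and a reachable 0.9 broker is assumed):

```shell
# List groups that use the new (Java) consumer; these commit offsets to
# Kafka rather than ZooKeeper, so --new-consumer is required.
bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand \
  --new-consumer --bootstrap-server localhost:9092 --list

# Describe one group's partition assignments and offsets
# ("my-group" is a placeholder).
bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand \
  --new-consumer --bootstrap-server localhost:9092 --describe --group my-group
```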

Re: ConsumerGroupCommand and ConsumerOffsetChecker

2016-02-18 Thread Amoxicillin
How do I confirm the consumer groups are alive? I have one consumer in the group running at the same time, and it receives messages correctly. On Fri, Feb 19, 2016 at 1:40 PM, tao xiao wrote: > when using ConsumerGroupCommand you need to make sure your consumer groups > are alive. It only queries of

Re: ConsumerGroupCommand and ConsumerOffsetChecker

2016-02-18 Thread tao xiao
when using ConsumerGroupCommand you need to make sure your consumer groups are alive. It only queries offsets for consumer groups that are currently connected to brokers On Fri, 19 Feb 2016 at 13:35 Amoxicillin wrote: > Hi, > > I use kafka.tools.ConsumerOffsetChecker to view the consumer offse

ConsumerGroupCommand and ConsumerOffsetChecker

2016-02-18 Thread Amoxicillin
Hi, I use kafka.tools.ConsumerOffsetChecker to view the consumer offset status, and can get the correct output, along with a warning: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. But when I altered to bin/kafka-run-class.s

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Jay Kreps
Yeah I didn't mean to imply that we committed after each poll, but rather that when it was time to commit, this would happen on the next poll call and hence only commit processed messages. -Jay On Thu, Feb 18, 2016 at 2:21 PM, Avi Flax wrote: > On Thu, Feb 18, 2016 at 4:26 PM, Jay Kreps wrote:

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Jason Gustafson
The consumer is single-threaded, so we only trigger commits in the call to poll(). As long as you consume all the records returned from each poll call, the committed offset will never get ahead of the consumed offset, and you'll have at-least-once delivery. Note that the implication is that "auto.c
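
The commit-on-the-next-poll behavior described above can be illustrated with a toy model. This is not the real client API, just a sketch of why the committed offset always trails the consumed one, giving at-least-once delivery:

```java
// Toy model (NOT the real client API) of auto-commit in the new consumer:
// poll() first commits everything consumed so far, then fetches more, so the
// committed offset always trails the consumed offset.
import java.util.ArrayList;
import java.util.List;

public class AtLeastOnceSketch {
    int committed = 0;   // offset the group coordinator would remember
    int position = 0;    // the consumer's fetch position

    // Sketch of poll(): commit what was consumed so far, then fetch n records.
    List<Integer> poll(int n) {
        committed = position;               // the commit happens inside poll()
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            out.add(position++);
        }
        return out;
    }

    public static void main(String[] args) {
        AtLeastOnceSketch c = new AtLeastOnceSketch();
        List<Integer> first = c.poll(3);     // consume offsets 0, 1, 2
        // ...process the batch here...
        // A crash at this point re-delivers 0..2, since nothing is committed:
        System.out.println("committed=" + c.committed);   // committed=0
        c.poll(3);                           // the next poll() commits 0..2
        System.out.println("committed=" + c.committed);   // committed=3
    }
}
```

A crash between processing and the next poll() therefore re-delivers the batch, which is exactly the at-least-once guarantee the thread describes.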

Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled

2016-02-18 Thread Srikrishna Alla
Hi, We are getting the below error when trying to use a Java new producer client. Please let us know the reason for this error - Error message: [2016-02-18 15:41:06,182] DEBUG Accepted connection from /10.**.***.** on /10.**.***.**:9093. sendBufferSize [actual|requested]: [102400|102400] recvB

Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled

2016-02-18 Thread Srikrishna Alla
That was a typo. I did remove that and still same error. Thanks, Sri > On Feb 18, 2016, at 4:21 PM, Adam Kunicki wrote: > > Ha! nice catch Gwen! > >> On Thu, Feb 18, 2016 at 3:20 PM, Gwen Shapira wrote: >> >> props.put("ssl.protocal", "SSL"); <- looks like a typo. >> >> On Thu, Feb 18,

kafka connect - HDFS sink connector issue

2016-02-18 Thread Venkatesh Rudraraju
Hi, I tried using the HDFS sink connector with kafka-connect and it works as described -> http://docs.confluent.io/2.0.0/connect/connect-hdfs/docs/index.html My scenario: I have plain JSON data in a kafka topic. Can I still use the HDFS sink connector to read data from the kafka-topic and write to HDFS in

Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled

2016-02-18 Thread Adam Kunicki
Ha! nice catch Gwen! On Thu, Feb 18, 2016 at 3:20 PM, Gwen Shapira wrote: > props.put("ssl.protocal", "SSL"); <- looks like a typo. > > On Thu, Feb 18, 2016 at 2:49 PM, Srikrishna Alla < > srikrishna.a...@aexp.com.invalid> wrote: > > > Hi, > > > > We are getting the below error when trying

Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled

2016-02-18 Thread Adam Kunicki
Just to be thorough, it seems you have client authentication enabled as well. This means that each broker must have your client's public certificate in its truststore. I felt like it might be easier to draw a diagram than write it out, but this is what your setup should look like: [image: Inline
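
Pulling the thread's advice together, a minimal sketch of what the client-side SSL config might look like. Every value here is a placeholder; only the property names matter, and the key that selects SSL is `security.protocol` (the misspelled `ssl.protocal` from the quoted snippet would be silently ignored):

```java
// Sketch of client-side SSL settings for a 0.9 Java producer/consumer.
// Host, paths and passwords are placeholders.
import java.util.Properties;

public class SslClientConfigSketch {
    static Properties sslProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");   // the SSL listener
        props.put("security.protocol", "SSL");
        // Truststore: must contain the broker's certificate (or its CA).
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Keystore + key password are only needed when the brokers require
        // client authentication (ssl.client.auth=required on the broker side),
        // in which case each broker's truststore needs the client's cert.
        props.put("ssl.keystore.location", "/path/to/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(sslProps().getProperty("security.protocol"));
    }
}
```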

Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled

2016-02-18 Thread Gwen Shapira
props.put("ssl.protocal", "SSL"); <- looks like a typo. On Thu, Feb 18, 2016 at 2:49 PM, Srikrishna Alla < srikrishna.a...@aexp.com.invalid> wrote: > Hi, > > We are getting the below error when trying to use a Java new producer > client. Please let us know the reason for this error - > > Err

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Avi Flax
On Thu, Feb 18, 2016 at 4:26 PM, Jay Kreps wrote: > The default semantics of the new consumer with auto commit are > at-least-once-delivery. Basically during the poll() call the commit will be > triggered and will commit the offset for the messages consumed during the > previous poll call. This is

Re: Cassandra connector

2016-02-18 Thread Liquan Pei
Hi Andrew, That is really awesome. I would be interested in taking a look! Best, Liquan On Thu, Feb 18, 2016 at 10:56 AM, Andrew Stevenson < and...@datamountaineer.com> wrote: > Hi Guys, > > I posted on the Confluent mailing list about my Cassandra Connect sink. > > Comments please. Be gentle!

Re: Wiki Karma

2016-02-18 Thread Joel Koshy
You should have access now. On Thu, Feb 18, 2016 at 12:09 PM, Christian Posta wrote: > Can someone add Karma to my user id for contributing to the wiki/docs? > userid is 'ceposta' > > thanks! > > -- > *Christian Posta* > twitter: @christianposta > http://www.christianposta.com/blog > http://fabr

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Jay Kreps
The default semantics of the new consumer with auto commit are at-least-once-delivery. Basically during the poll() call the commit will be triggered and will commit the offset for the messages consumed during the previous poll call. This is an advantage over the older scala consumer where the consu

Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Avi Flax
Hello all, I have a question about Kafka Streams, which I’m evaluating for a new project. (I know it’s still a work in progress but it might work anyway for this project.) I’m new to Kafka in particular and distributed systems in general, so please forgive me if I’m confused about any of these concept

Re: Does kafka.common.QueueFullException indicate back pressure in Kafka?

2016-02-18 Thread Alex Loddengaard
Hi John, I should preface this by saying I've never used Storm and KafkaBolt and am not a streaming expert. However, if you're running out of buffer in the producer (as is what's happening in the other thread you referenced), you can possibly alleviate this by adding more producers, or by tuning

Wiki Karma

2016-02-18 Thread Christian Posta
Can someone add Karma to my user id for contributing to the wiki/docs? userid is 'ceposta' thanks! -- *Christian Posta* twitter: @christianposta http://www.christianposta.com/blog http://fabric8.io

Re: Kafka response ordering guarantees

2016-02-18 Thread Ivan Dyachkov
Thanks Joel, it is clear now. /Ivan - Original message - From: Joel Koshy To: "users@kafka.apache.org" Subject: Re: Kafka response ordering guarantees Date: Thu, 18 Feb 2016 11:24:48 -0800 > > Does this mean that when a client is sending more than one in-flight >> request per connectio

Re: Kafka response ordering guarantees

2016-02-18 Thread Joel Koshy
> > Does this mean that when a client is sending more than one in-flight >> request per connection, the server does not guarantee that responses will >> be sent in the same order as requests? > > > No - the server does provide this guarantee - i.e., responses will always > be sent in the same order

Re: Kafka response ordering guarantees

2016-02-18 Thread Joel Koshy
> Does this mean that when a client is sending more than one in-flight > request per connection, the server does not guarantee that responses will > be sent in the same order as requests? No - the server does provide this guarantee - i.e., responses will always be sent in the same order as reques

Re: Resetting Kafka Offsets -- and What are offsets.... exactly?

2016-02-18 Thread Leo Lin
Hi John, Kafka offsets are sequential id numbers that identify messages in each partition. They might not be sequential within a topic (which can have multiple partitions). Offsets don't necessarily start at 0 since messages are deleted. bin/kafka-run-class.sh kafka.tools.GetOffsetShell is pretty
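
A sketch of the GetOffsetShell invocation mentioned above (the topic name is a placeholder, and a reachable broker is assumed):

```shell
# --time -1 prints the latest offset (log end) per partition;
# --time -2 prints the earliest offset still available.
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -1
```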

Cassandra connector

2016-02-18 Thread Andrew Stevenson
Hi Guys, I posted on the Confluent mailing list about my Cassandra Connect sink. Comments please. Be gentle! https://github.com/andrewstevenson/stream-reactor/tree/master/kafka-connect Regards Andrew

Re: Enable Kafka Consumer 0.8.2.1 Reconnect to Zookeeper

2016-02-18 Thread Alexis Midon
A connection and a ZK session are two different things. The ZK master keeps track of a client session's validity. When a client connection gets interrupted, its associated session goes into the Disconnected state; after a while it will be Expired, but if a new connection is established before the time

Re: kafka.common.QueueFullException

2016-02-18 Thread John Yost
Hi Alex, Great info, thanks! I asked a related question this AM--is a full queue possibly a symptom of back pressure within Kafka? --John On Thu, Feb 18, 2016 at 12:38 PM, Alex Loddengaard wrote: > Hi Saurabh, > > This is occurring because the produce message queue is full when a produce > req

Re: Periodic Disconnects

2016-02-18 Thread John Bickerstaff
This may not be helpful, but the first thing I've learned to check in similar situations is whether there is significant time-drift between VMs and actual hardware. Some combination of time-drift and a time-sensitive security check could be causing this. IIRC, CentOS has a funky issue with gettin

Re: 0.9 client AbstractCoordinator - Attempt to join group failed due to obsolete coordinator information

2016-02-18 Thread Jason Gustafson
Hi Gary, The coordinator is a special broker which is chosen for each consumer group to manage its state. It facilitates group membership, partition assignment and offset commits. If the coordinator is shutdown, then Kafka will choose another broker to assume the role. The log message might be a l

Re: Consumer seek on 0.9.0 API

2016-02-18 Thread Jason Gustafson
Woops. Looks like Alex got there first. Glad you were able to figure it out. -Jason On Thu, Feb 18, 2016 at 9:55 AM, Jason Gustafson wrote: > Hi Robin, > > It would be helpful if you posted the full code you were trying to use. > How to seek largely depends on whether you are using new consumer

Re: Consumer seek on 0.9.0 API

2016-02-18 Thread Jason Gustafson
Hi Robin, It would be helpful if you posted the full code you were trying to use. How to seek largely depends on whether you are using new consumer in "simple" or "group" mode. In simple mode, when you know the partitions you want to consume, you should just be able to do something like the follow

Re: kafka.common.QueueFullException

2016-02-18 Thread Alex Loddengaard
Hi Saurabh, This is occurring because the produce message queue is full when a produce request is made. The size of the queue is configured via queue.buffering.max.messages. You can experiment with increasing this (which will require more JVM heap space), or fiddling with queue.enqueue.timeout.ms
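
The two old-producer settings Alex mentions live in the producer config; a sketch with illustrative values (not recommendations):

```shell
# Capacity of the old async producer's in-memory queue.
queue.buffering.max.messages=20000
# How long send() blocks when the queue is full: -1 blocks indefinitely,
# 0 fails fast with QueueFullException.
queue.enqueue.timeout.ms=-1
```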

Re: Consumer seek on 0.9.0 API

2016-02-18 Thread Alex Loddengaard
Hi Robin, Glad it's working. I'll explain: When a consumer subscribes to one or many topics using subscribe(), the consumer group coordinator is responsible for assigning partitions to each consumer in the consumer group, to ensure all messages in the topic are being consumed. The coordinator han

How to modify partition to a topic ??

2016-02-18 Thread EricLiu
My kafka version: kafka_2.9.1-0.8.2.2. I want to add partitions and increase the replication-factor for a topic, but I don't have kafka-add-partitions.sh. ——— TD Technology Department 刘浩 (Eric Liu) Senior Technical Development Manager TEL: 010-51292727 MP:
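
For context: in 0.8.2 kafka-add-partitions.sh was folded into kafka-topics.sh. A sketch (topic name, partition count, and the JSON file name are placeholders; a running ZooKeeper is assumed):

```shell
# The partition count can only be increased, never decreased.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-topic --partitions 6

# The replication factor is changed separately, via a reassignment plan.
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file increase-replication.json --execute
```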

Re: Kafka response ordering guarantees

2016-02-18 Thread Ivan Dyachkov
Thanks Ben. As I mentioned, I'm developing a kafka library and not using the standard Java producer. My question is really about protocol guarantees. /Ivan - Original message - From: Ben Stopford To: users@kafka.apache.org Subject: Re: Kafka response ordering guarantees Date: Wed, 17 Feb

Periodic Disconnects

2016-02-18 Thread Paul Jensen
Running on Centos with Kafka 0.8.2.1 I’m experiencing periodic disconnects. I’ve enabled keepalives, though I don’t think that’s related to the issue. The fact that this happens at a granularity of 5 minutes and always at 45 seconds past is interesting. This may be some internal network issue,

Does kafka.common.QueueFullException indicate back pressure in Kafka?

2016-02-18 Thread John Yost
Hi Everyone, I am encountering this exception similar to Saurabh's report earlier today as I try to scale up a Storm -> Kafka output via the KafkaBolt (i.e., add more KafkaBolt executors). Question...does this necessarily indicate back pressure from Kafka where the Kafka writes cannot keep up wit

kafka.common.QueueFullException

2016-02-18 Thread Saurabh Kumar
Hi, We have a Kafka server deployment shared between multiple teams, and I have created a topic with multiple partitions on it for pushing some JSON data. We have multiple such Kafka producers running from different machines which produce/push data to a Kafka topic. A lot of times I see the follow

Re: Consumer seek on 0.9.0 API

2016-02-18 Thread Péricé Robin
Hi, OK, I did a poll() before my seek() and then poll() again, and now my consumer starts at the offset. Thank you a lot! But I don't really understand why I have to do that; can anyone explain? Regards, Robin 2016-02-17 20:39 GMT+01:00 Alex Loddengaard : > Hi Robin, > > I believe seek() needs t
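
For context on why the extra poll() is needed: in group mode, partitions are only assigned to a consumer during poll(), and seek() only applies to partitions the consumer currently owns. A fragment of the pattern (the topic name is a placeholder, and an already-configured 0.9 KafkaConsumer and the kafka-clients jar are assumed, so this is a sketch rather than a runnable program):

```java
consumer.subscribe(Collections.singletonList("my-topic"));
consumer.poll(0);                  // join the group; partitions are assigned here
for (TopicPartition tp : consumer.assignment()) {
    consumer.seek(tp, 0L);         // valid now that tp is assigned to us
}
ConsumerRecords<String, String> records = consumer.poll(1000);  // reads from the sought offset
```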