what does the --max-messages argument mean?

2017-09-12 Thread mingleizhang
Hello all, I wrote a bash script to get some messages from Kafka, and I set the argument --max-messages to 1000. What I got is just ONE message from the log, and the file size seems to be 1000 bytes instead. I want to know what is happening. I expect that I will get 1000…
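
For context, the console consumer's --max-messages flag caps the number of records consumed before the tool exits; it is not a byte limit. A minimal sketch, assuming the stock console consumer (broker address, topic name, and output file are illustrative):

    # consume at most 1000 records from the beginning of the topic, then exit
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic my-topic --from-beginning --max-messages 1000 > out.txt

Without --from-beginning the tool starts at the latest offset, so a mostly idle topic can yield far fewer records than the cap.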

Metrics for Kafka Connect

2017-09-12 Thread Jeff Klukas
The Kafka docs on Monitoring don't mention anything specific for Kafka Connect. Are metrics for connectors limited to just the standard consumer/producer metrics? Do I understand correctly that the Connect API doesn't provide any hooks for custom connector-specific metrics? If not, is that somethin…
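
A later reply in this digest points to a KIP for Connect-framework metrics; in the meantime, the standard producer/consumer metrics of a Connect worker can be scraped over JMX. A sketch, assuming the stock startup script (the port number is illustrative):

    # kafka-run-class.sh honors JMX_PORT, exposing the worker's built-in client metrics
    export JMX_PORT=9999
    bin/connect-distributed.sh config/connect-distributed.properties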

Re: Identifying Number of Kafka Consumers

2017-09-12 Thread David Garcia
Consumers can be split up based on partitions, so you can tell a consumer group to listen to several topics and it will divvy up the work. Your use case sounds very canonical. I would take a look at Kafka Connect (if you're using the Confluent stack). -David http://docs.confluent.io/curren
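
To see how a group divides partitions among its members, the consumer-groups tool describes the live assignment. A sketch (group name and broker address are assumptions; older versions of the tool may also need the --new-consumer flag):

    # show which member owns which topic-partition, plus per-partition lag
    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --describe --group my-group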

Reg: Read External API Response Kafka Producer

2017-09-12 Thread Mohan, Prithiv
Hi, I am investigating Kafka as a message-bus service for my MANO Monitoring module. I have a Python producer that takes messages from a file and puts them on the message bus in JSON format, and a consumer that reads the messages and performs some operation against the external OpenStack API.
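
For a quick end-to-end smoke test of such a pipeline, the console producer can stand in for the Python client. A sketch (topic name and file path are illustrative):

    # publish one JSON document per line from a file onto the bus
    bin/kafka-console-producer.sh --broker-list localhost:9092 \
      --topic mano.monitoring < events.json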

Re: Metrics for Kafka Connect

2017-09-12 Thread Viktor Somogyi
Hi Jeff, There is a KIP in progress; would this answer your question? https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework Viktor On Tue, Sep 12, 2017 at 2:38 PM, Jeff Klukas wrote: > The Kafka docs on Monitoring don't mention anything specific for K…

Re: Identifying Number of Kafka Consumers

2017-09-12 Thread Nishanth S
Thanks, David! We are not using Confluent at the moment. Since the work that needs to be done by each consumer is the same (read > write to HDFS), I am guessing my consumer code will look the same and will need just one consumer group. Thanks, Nishanth On Tue, Sep 12, 2017 at 8:53 AM, D…
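
The one-group intuition is easy to verify: start two consumers with the same group.id and watch the partitions get split between them. A sketch (all names are illustrative):

    # run this in two terminals; each instance receives a disjoint set of partitions
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic my-topic --consumer-property group.id=hdfs-writers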

Reliably producing records to remote cluster: what are my options?

2017-09-12 Thread Philip Schmitt
Hi! We want to reliably produce events into a remote Kafka cluster in (mostly) near real time. We have to provide an at-least-once guarantee. Examples are a "Customer logged in" event that will be consumed by a data warehouse for reporting (numbers should be correct), or a "Customer unsubscri…
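
On the producer side, at-least-once mostly comes down to configuration: acknowledge only when all in-sync replicas have the record, and retry hard on transient failures, accepting possible duplicates. A sketch using the console producer (values are illustrative; the same properties apply to any client):

    # acks=all plus aggressive retries gives at-least-once; duplicates are possible
    bin/kafka-console-producer.sh --broker-list remote-kafka:9092 --topic events \
      --producer-property acks=all \
      --producer-property retries=2147483647 \
      --producer-property max.in.flight.requests.per.connection=1

Capping in-flight requests at 1 keeps retried batches from landing out of order.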

1 topic that keeps growing

2017-09-12 Thread Wim Van Leuven (BB.io)
Hello, we have a 3-node Kafka cluster with quite a bunch of topics that have a nice life: regular cleaning, compaction, etc. One topic, however, keeps growing: new segments are created at regular intervals, but old segments are never deleted. Our setup is based on Confluent 3.2.0 OSS, hence Apac…

1 topic that keeps growing indefinitely

2017-09-12 Thread Wim Van Leuven
Hello, we have a 3-node Kafka cluster with quite a bunch of topics that have a nice life: regular cleaning, compaction, etc. One topic, however, keeps growing indefinitely. New segments are created at regular intervals, but old segments are never deleted. Our setup is based on Confluent 3.2.0 OSS,…
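
A topic whose old segments never age out usually has cleanup.policy=compact (compaction rewrites segments but keeps the latest value per key forever) or a surprising retention override; both are quick to check. A sketch against a 0.10.2-era cluster (ZooKeeper address and topic name are assumptions):

    # show per-topic overrides such as cleanup.policy or retention.ms
    bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
      --entity-type topics --entity-name the-growing-topic

If time-based expiry is wanted alongside compaction, cleanup.policy=compact,delete (available since 0.10.1) combines both behaviours.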

Kafka Operational Weirdness

2017-09-12 Thread Eric Kolotyluk
The last few days I have been seeing a problem I do not know how to explain. For months I have been successfully running Kafka/ZooKeeper under Docker, and my application seems to work fine. Lately, when I run Kafka under either docker-compose on my developer system or 'docker stack deploy' on…

Re: Kafka 11 | Stream Application crashed the brokers

2017-09-12 Thread Guozhang Wang
Hi Sameer, If no client has transactions turned on, the `__transaction_state` internal topic would not be created at all. So I still suspect that some of your clients (maybe not your Streams client, but your producer client that is sending data to the source topic?) has transactions turned on. BT…
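
Guozhang's suspicion is easy to test: the internal topic only appears once some producer has initialized transactions. A sketch (the ZooKeeper address is an assumption):

    # if this prints the topic, some client somewhere has used transactions
    bin/kafka-topics.sh --zookeeper localhost:2181 --list | grep __transaction_state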

Re: Kafka streams application failed after a long time due to rocks db errors

2017-09-12 Thread Guozhang Wang
Hi Sachin, Debugging-wise, unfortunately today's RocksDB JNI does not provide good stack traces. However, in newer versions (0.11.0+) we have been using a new version of RocksDB, so it will print the error message instead of an empty "org.rocksdb.RocksDBException:" or garbage like "org.rocksdb.RocksDB…

Re: Upgrading Kafka to 11.0

2017-09-12 Thread kiran kumar
Thanks, Mani. On Tue, Sep 12, 2017 at 12:27 PM, Manikumar wrote: > Hi, > > Yes, you can replace the bin and libs folders, or you can untar to a new folder > and > update the config/server.properties config file. > > On Tue, Sep 12, 2017 at 12:21 PM, kiran kumar > wrote: > > > [re-posting] > > > > Hi All…
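
A side-by-side layout like the one Manikumar describes keeps the rollback path trivial. A sketch (paths and versions are illustrative):

    # unpack the new release next to the old one and carry the config over
    tar -xzf kafka_2.11-0.11.0.0.tgz -C /opt
    cp /opt/kafka_2.11-0.10.2.1/config/server.properties /opt/kafka_2.11-0.11.0.0/config/
    # restart the broker from the new directory; keep the old one until verified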

Re: Kafka 11 | Stream Application crashed the brokers

2017-09-12 Thread Sameer Kumar
Hi Guozhang, The producer sending data to this topic is not running concurrently with the stream processing. I first ingested the data from another cluster and then ran the stream processing on it. The producer code was written by me and it doesn't have transactions on by default. I will d…