Hello all,
I wrote a bash script to fetch some messages from Kafka, and I set the
argument --max-messages to 1000. What I got is just ONE message from the
log, and the file size seems to be 1000 bytes instead. I want to know what
is happening? I expected to get 1000 messages.
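For comparison, here is a minimal kafka-python sketch of reading a fixed number of messages in a loop; the topic name, broker address, and the assumption that the topic actually contains 1000+ messages and is read from the earliest offset are hypothetical, not taken from the original post.

# Minimal sketch with kafka-python; topic and broker names are assumptions.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",                        # hypothetical topic name
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",      # start from the beginning of the log
    consumer_timeout_ms=10000,         # stop iterating if nothing arrives for 10s
)

messages = []
for record in consumer:
    messages.append(record.value)
    if len(messages) >= 1000:          # the intent of --max-messages 1000
        break

consumer.close()
print("received %d messages" % len(messages))

Only as many messages as actually exist past the starting offset can be returned, so if only one message is available from the chosen starting position, the loop ends after a single record.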
The Kafka docs on Monitoring don't mention anything specific for Kafka
Connect. Are metrics for connectors limited to just the standard
consumer/producer metrics? Do I understand correctly that the Connect API
doesn't provide any hooks for custom connector-specific metrics? If not, is
that somethin
Consumers can be split up based on partitions. So, you can tell a consumer
group to listen to several topics and it will divvy up the work. Your use case
sounds very canonical. I would take a look at Kafka Connect (if you're using
the Confluent stack).
-David
http://docs.confluent.io/curren
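To illustrate David's point about consumer groups dividing the work, here is a rough kafka-python sketch; the topic names, group id, broker address, and the handle() helper are all hypothetical.

# Every process started with the same group_id shares the partitions of the
# subscribed topics; each record goes to exactly one consumer in the group.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "topic-a", "topic-b",              # one group can listen to several topics
    group_id="my-consumer-group",      # hypothetical group id
    bootstrap_servers="localhost:9092",
)

for record in consumer:
    handle(record.topic, record.partition, record.value)  # handle() is hypothetical

Starting the same script several times (up to the total partition count) spreads the partitions across the running instances automatically.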
Hi,
I am investigating Kafka as a message bus service for my MANO Monitoring
module. I have a Python producer that will take messages from a file and
put them on the message bus in JSON format. I have a consumer that reads the
messages and performs some operations against the external OpenStack API.
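A rough sketch of that pipeline with kafka-python follows; the broker address, topic name, input file, and the call_openstack_api() helper are assumptions made only for illustration.

# Producer side: read lines from a file and publish them as JSON.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)
with open("metrics.txt") as f:             # hypothetical input file
    for line in f:
        producer.send("monitoring-events", {"raw": line.strip()})  # hypothetical topic
producer.flush()

# Consumer side: read the JSON messages and act on the OpenStack API.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "monitoring-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for record in consumer:
    call_openstack_api(record.value)       # hypothetical function for the OpenStack side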
Hi Jeff,
There is a KIP in progress; would this answer your question?
https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework
Viktor
On Tue, Sep 12, 2017 at 2:38 PM, Jeff Klukas wrote:
> The Kafka docs on Monitoring don't mention anything specific for K
Thanks, David! We are not using Confluent at the moment. Since the work
that needs to be done by each consumer is the same (read > write to HDFS),
I am guessing my consumer code will just look the same and will need just
one consumer group.
Thanks,
Nishanth
On Tue, Sep 12, 2017 at 8:53 AM, D
Hi!
We want to reliably produce events into a remote Kafka cluster in (mostly) near
real-time. We have to provide an at-least-once guarantee.
Examples are a "Customer logged in" event that will be consumed by a data
warehouse for reporting (numbers should be correct), or a "Customer unsubscri
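As a hedged sketch of what an at-least-once producer can look like with kafka-python (broker address, topic, and payload are assumptions): the key points are acks="all", retries, and only treating a send as done once the broker acknowledges it.

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="remote-cluster:9092",   # hypothetical remote cluster
    acks="all",      # wait for all in-sync replicas to acknowledge
    retries=5,       # retry transient failures; may duplicate, but should not lose
)

future = producer.send("customer-events", b'{"event": "customer_logged_in"}')
future.get(timeout=30)   # block until acknowledged, or raise so the caller can retry
producer.flush()

Retries can produce duplicates, which is exactly the at-least-once trade-off; any deduplication then happens downstream (e.g. in the data warehouse).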
Hello,
we have a 3-node Kafka cluster setup with quite a bunch of topics that
have a nice life: regular cleans, compaction, etc. One topic, however, keeps
growing; new segments are created at regular intervals, but old segments
are never deleted.
Our setup is based on Confluent 3.2.0 OSS, hence Apache Kafka
Hello,
we have a 3-node Kafka cluster setup with quite a bunch of topics that have
a nice life: regular cleans, compaction, etc. One topic, however, keeps growing
indefinitely. New segments are created at regular intervals, but old
segments are never deleted.
Our setup is based on Confluent 3.2.0 OSS,
For the last few days I have been seeing a problem I do not know how to explain.
For months I have been successfully running Kafka/ZooKeeper under
Docker, and my application seems to work fine. Lately, when I run Kafka
under either docker-compose on my developer system, or 'docker stack
deploy' on
Hi Sameer,
If no clients have transactions turned on, the `__transaction_state` internal
topic would not be created at all. So I still suspect that some of your
clients (maybe not your Streams client, but the producer client that is
sending data to the source topic?) have transactions turned on.
BT
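For reference, "transactions turned on" simply means a producer configured with a transactional id; a minimal sketch with the confluent-kafka Python client follows (broker address, transactional id, and topic are assumptions). A producer like this is what causes the brokers to create the `__transaction_state` internal topic.

from confluent_kafka import Producer

p = Producer({
    "bootstrap.servers": "localhost:9092",
    "transactional.id": "my-transactional-producer",  # presence of this enables transactions
})

p.init_transactions()
p.begin_transaction()
p.produce("source-topic", b"some payload")   # hypothetical topic
p.commit_transaction()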
Hi Sachin,
Debugging-wise, unfortunately RocksDB JNI today does not provide better
stack traces. However, in newer versions (0.11.0+) we have been using a newer
version of RocksDB, so it will print the error message instead of an empty
"org.rocksdb.RocksDBException:"
or garbage like "org.rocksdb.RocksDB
Thanks Mani.
On Tue, Sep 12, 2017 at 12:27 PM, Manikumar
wrote:
> Hi,
>
> Yes, you can replace the bin and libs folders, or you can untar to a new folder
> and
> update the config/server.properties file.
>
> On Tue, Sep 12, 2017 at 12:21 PM, kiran kumar
> wrote:
>
> > [re-posting]
> >
> > Hi All
Hi Guozhang,
The producer sending data to this topic is not running concurrently with
the stream processing. I first ingested the data from another cluster
and then ran the stream processing on it. The producer code is written
by me and it doesn't have transactions on by default.
I will d