Hi,
Yes, this needs to be handled more elegantly. Can you please file a JIRA
here
https://issues.apache.org/jira/projects/KAFKA/issues
Thanks,
Harsha
On Mon, Apr 1, 2019, at 1:52 AM, jorg.heym...@gmail.com wrote:
> Hi,
>
> We have our brokers secured with these standard properties
>
> l
Team,
Can anyone help by sharing the configs that need to be set to achieve the
security setup below in our Kafka systems?
- Broker-Broker should be PLAINTEXT (no authentication and no authorization
between brokers)
- ZooKeeper-Broker should be PLAINTEXT (no authentication and no
authorization between brokers and ZooKeeper)
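For reference, I assume it would involve broker properties along these
lines (the hostnames, ports and listener addresses below are placeholders,
and I am not sure this list is complete):

  # server.properties -- everything over PLAINTEXT, no SASL/SSL anywhere
  listeners=PLAINTEXT://0.0.0.0:9092
  advertised.listeners=PLAINTEXT://broker1.example.com:9092
  security.inter.broker.protocol=PLAINTEXT
  # no authorizer.class.name set, so no ACL/authorization checks
  # plain, unauthenticated ZooKeeper connection
  zookeeper.connect=zk1.example.com:2181
  zookeeper.set.acl=false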
Yes, the stream transformation of `topic-1` to `topic-2` is a
heavyweight operation that produces completely different information on
`topic-2` from what is contained on `topic-1` (and the cardinality is 1-n,
not 1-1). The schema evolution I am attempting to perform should have
captured the data at time of
Note you shouldn't cross-post to both the users and dev lists -- this
kind of question belongs on the users list.
The fundamental things you need to go investigate:
* Kubernetes Stateful Sets
* Kafka packaged for use on Kubernetes -- I have been happy with
https://github.com/Yolean/kubernetes-kafka.
thanks :)
On Thu, Apr 4, 2019 at 11:11 AM Dimitry Lvovsky wrote:
> You can detect state changes in your streaming app by implementing
> KafkaStreams.StateListener,
> and then registering that with your KafkaStreams object, e.g.
> new KafkaStreams(...).setStateListener(listener);
>
> Hope this helps.
>
>
Hello,
The question might trigger people to reply with "Confluent" - but it's not
related to Confluent, as their Kubernetes offering is not available for the
public/community edition. So discussing Helm charts and an intro to
Confluent isn't our objective here.
What I am trying to understand is how does the log fil
You can detect state changes in your streaming app by implementing
KafkaStreams.StateListener,
and then registering that with your KafkaStreams object, e.g.
new KafkaStreams(...).setStateListener(listener);
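For example, something along these lines (just a sketch; the class and
method names, and the cache parameter, are made up here and stand in for
whatever partition-scoped cache you manage yourself):

  import java.util.Map;
  import org.apache.kafka.streams.KafkaStreams;

  public final class RebalanceCacheHook {
      // Call this before streams.start(): it clears the given cache whenever
      // the application enters REBALANCING, i.e. partitions are being
      // revoked/reassigned.
      public static void clearCacheOnRebalance(KafkaStreams streams,
                                               Map<Integer, ?> cache) {
          streams.setStateListener((newState, oldState) -> {
              if (newState == KafkaStreams.State.REBALANCING) {
                  cache.clear();
              }
          });
      }
  }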
Hope this helps.
On Thu, Apr 4, 2019 at 10:52 AM Pierre Coquentin
wrote:
> Hi,
>
> We have a cach
Hi,
We have a cache in a processor, based on assigned partitions, and when
Kafka revokes those partitions we would like to flush the cache. The
punctuate method would be neat for doing that, except that, as a client
of Kafka Streams, I am not notified during a revoke.
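For illustration, the setup is roughly like this (a heavily simplified
sketch -- the class name, the cache layout and flushCache() are made up
and only stand in for what we actually have):

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.kafka.streams.processor.AbstractProcessor;

  public class CachingProcessor extends AbstractProcessor<String, String> {

      // cache keyed by the partition the record came from
      private final Map<Integer, Map<String, String>> cachePerPartition =
              new HashMap<>();

      @Override
      public void process(String key, String value) {
          int partition = context().partition();
          cachePerPartition
              .computeIfAbsent(partition, p -> new HashMap<>())
              .put(key, value);
      }

      // this is what we would like to trigger when a partition is revoked
      private void flushCache(int partition) {
          Map<String, String> cached = cachePerPartition.remove(partition);
          // ... write the cached entries downstream or to an external store ...
      }
  }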
I found the same question on StackOverflow
h