Hi,
I am new to the Kafka Streams framework. I have the following streams use case:
State store A
State store B
Processor A
Processor B
State store A is only written to by Processor A but also needs to be read by
Processor B. State store B needs to be written to by both Processor A and
Processor B.
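For illustration, a minimal sketch of how such a topology could be wired with the Processor API. The store and processor names here are made up, and ProcessorA/ProcessorB stand in for your own Processor implementations, which would look the stores up via context.getStateStore(...):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    StoreBuilder<KeyValueStore<String, String>> storeA =
        Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("store-a"),
            Serdes.String(), Serdes.String());
    StoreBuilder<KeyValueStore<String, String>> storeB =
        Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("store-b"),
            Serdes.String(), Serdes.String());

    Topology topology = new Topology();
    topology.addSource("source", "input-topic");
    topology.addProcessor("processor-a", ProcessorA::new, "source");
    topology.addProcessor("processor-b", ProcessorB::new, "source");
    // store-a: written by Processor A, but also readable by Processor B
    topology.addStateStore(storeA, "processor-a", "processor-b");
    // store-b: written by both processors
    topology.addStateStore(storeB, "processor-a", "processor-b");

Connecting a store to a processor by name is what makes context.getStateStore("store-a") legal inside that processor; both processors need to end up in the same sub-topology for the stores to be shared.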
There are no listener-to-topic mappings right now, but you can run two
listeners, one PLAINTEXT and another SASL. Configure your authorizer to allow
anonymous read/write on the topics that are public, and give explicit ACLs to
principal names on the topics you want to protect. This will protect any
reads/writes on those topics.
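As a sketch of what that could look like (ports, topic names, and principals are made up; this assumes SASL_PLAINTEXT for the SASL listener and the built-in SimpleAclAuthorizer):

    # server.properties
    listeners=PLAINTEXT://:9092,SASL_PLAINTEXT://:9093
    sasl.enabled.mechanisms=GSSAPI
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
    # deny access to topics that have no ACL
    allow.everyone.if.no.acl.found=false

    # public topic: anonymous (PLAINTEXT) clients may read/write
    bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
      --add --allow-principal User:ANONYMOUS \
      --operation Read --operation Write --topic public-topic

    # protected topic: only an explicit (SASL-authenticated) principal
    bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
      --add --allow-principal User:alice \
      --operation Read --operation Write --topic private-topic

Unauthenticated clients on the PLAINTEXT listener appear as User:ANONYMOUS, so they are denied on any topic whose ACLs name only real principals.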
Thanks Sam! Please feel free to assign the ticket to yourself and I will
review your PR once you create one:
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest
On Tue, Jul 17, 2018 at 6:29 PM, Sam Lendle wrote:
> https://issues.apache.org/jira/browse/KAFKA-7176
https://issues.apache.org/jira/browse/KAFKA-7176
If I have a chance, I will give trunk a try.
On 7/16/18, 2:14 PM, "Guozhang Wang" wrote:
Hmm, this seems new to me. I checked the source code and it seems right to me.
Could you try out the latest trunk (built from source) and see if the issue persists?
I see -- sorry for misunderstanding initially.
I agree that it would be possible to detect. Feel free to file a Jira
for this improvement and maybe pick it up by yourself :)
-Matthias
On 7/17/18 3:01 PM, Vasily Sulatskov wrote:
> Hi,
>
> I do understand that in a general case it's not possible to guarantee
> that the newValue and oldValue parts of a Change message arrive at the
> same partition
Hi,
I have an existing Kafka Cluster that is configured as PLAINTEXT. We want
to enable SASL (GSSAPI) as an additional listener.
Is there a way to force specific topics to only accept traffic
(publish/consume) from a certain listener?
e.g. if i create a topic and set ACLS, how do i stop a client
Hi,
I do understand that in a general case it's not possible to guarantee
that the newValue and oldValue parts of a Change message arrive at the
same partition, and I guess that's not really in the plans, but if I
correctly understand how it works, it should be possible to detect
whether both newValue and oldValue end up in the same partition.
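For context, the typical case where the two halves of a Change record can end up on different partitions is a KTable grouped by a key derived from the value; a minimal sketch against the 1.x API (topic and field semantics are made up):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Serialized;

    StreamsBuilder builder = new StreamsBuilder();
    // value = product category; an update that changes the category is
    // split into a subtraction keyed by the old category and an addition
    // keyed by the new one, and those two keys generally hash to different
    // partitions of the repartition topic
    KTable<String, String> products = builder.table("products");
    KTable<String, Long> countsPerCategory = products
        .groupBy((productId, category) -> KeyValue.pair(category, 1L),
                 Serialized.with(Serdes.String(), Serdes.Long()))
        .count();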
I would say SessionWindows is exactly the behavior you are describing,
so the code you posted should produce the expected result.
Also check the timestamp extractor, to be sure that you use timestamps
that make sense for your use case:
https://kafka.apache.org/11/documentation/streams/developer-guide/
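For reference, a minimal sketch of such a session-windowed aggregation against the 1.x API (the topic name and inactivity gap are made up; record timestamps come from the configured default.timestamp.extractor):

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Serialized;
    import org.apache.kafka.streams.kstream.SessionWindows;
    import org.apache.kafka.streams.kstream.Windowed;

    StreamsBuilder builder = new StreamsBuilder();
    KTable<Windowed<String>, String> sessions = builder
        .<String, String>stream("session-events")
        .groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
        // a session closes after 30 minutes without events for the key
        .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(30)))
        // reduce keeps the input value type; use aggregate to change it
        .reduce((agg, next) -> agg + "," + next);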
Hi Vincent,
Thanks a lot for the explanation.
> you can choose when you want to produce your output event based on
> the state
How do I achieve this? I want to hook into the session expiration to
call my report function with the accumulated state once the inactivity
gap for the given key is exceeded.
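One pattern that can do this with the Processor API (a sketch; the store name, the Session type, and the report(...) function are all made up) is a wall-clock punctuator that periodically scans the session store and flushes sessions whose inactivity gap has passed:

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.processor.AbstractProcessor;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.processor.PunctuationType;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class SessionExpiryProcessor
            extends AbstractProcessor<String, SessionExpiryProcessor.Session> {

        // hypothetical accumulated-session type
        public static class Session {
            long lastSeenMs;
            long eventCount;
            Session merge(Session other) {
                lastSeenMs = Math.max(lastSeenMs, other.lastSeenMs);
                eventCount += other.eventCount;
                return this;
            }
        }

        private static final long GAP_MS = 30 * 60_000L;
        private KeyValueStore<String, Session> sessions;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            super.init(context);
            sessions = (KeyValueStore<String, Session>) context.getStateStore("sessions");
            // every minute, report and evict keys inactive for GAP_MS or more
            context.schedule(60_000L, PunctuationType.WALL_CLOCK_TIME, now -> {
                try (KeyValueIterator<String, Session> it = sessions.all()) {
                    while (it.hasNext()) {
                        KeyValue<String, Session> entry = it.next();
                        if (now - entry.value.lastSeenMs >= GAP_MS) {
                            report(entry.key, entry.value); // your report function
                            sessions.delete(entry.key);
                        }
                    }
                }
            });
        }

        @Override
        public void process(String key, Session event) {
            // merge the incoming event into the stored session state
            Session current = sessions.get(key);
            sessions.put(key, current == null ? event : current.merge(event));
        }

        private void report(String key, Session session) {
            System.out.println("session closed for " + key + ": "
                + session.eventCount + " events");
        }
    }

The trade-off versus session windows is that you control exactly when output is produced, at the cost of managing the store and the expiry logic yourself.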
Hello,
The following is an issue we are trying to debug at the moment:
We are running the org.apache.kafka.tools.ProducerPerformance tool to
performance-test the Kafka cluster. As a trial, the cluster has only one
broker and ZooKeeper, with 12 GB of heap space.
We are running 6 producers on 3 machines with the same transactional id.
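For reference, this is the kind of invocation involved (values are made up; the flags are as listed in the tool's --help):

    bin/kafka-producer-perf-test.sh \
      --topic perf-test \
      --num-records 10000000 \
      --record-size 1024 \
      --throughput -1 \
      --transactional-id perf-producer-1 \
      --transaction-duration-ms 100 \
      --producer-props bootstrap.servers=broker1:9092

Note that each producer instance needs a distinct transactional.id; if all six producers share one, the broker fences every producer except the most recently started one.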
Hi
Kafka Streams sounds like a good solution here.
The first step is to properly partition your event topics, based on the
session key, so that all events for the same session go to the same
partition.
Then you can build your Kafka Streams application, which will maintain
state (manually managed), as sketched below.
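A minimal sketch of that first step (the topic name and the Event type are made up; serde configuration is omitted):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;

    // hypothetical event type carrying its session key
    class Event { String sessionId; }

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, Event> events = builder.<String, Event>stream("raw-events");
    // re-key by session id; the downstream groupByKey()/windowedBy() will
    // insert a repartition topic, so all events of one session end up in
    // the same partition and are processed by the same task
    KStream<String, Event> bySession =
        events.selectKey((key, event) -> event.sessionId);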
Hi,
My use case involves consuming events for sessions and, once an
inactivity gap has passed, creating a detailed report on them. From the
docs (using 1.0.1 currently) it is not clear what the best way to
achieve this is; it seems actions like reduce and aggregate create
results with the same type as the input.
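For what it's worth, aggregate (unlike reduce) does allow the result type to differ from the input type. A sketch against the 1.0.x API (the SessionReport type and its serde are made up):

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.Serialized;
    import org.apache.kafka.streams.kstream.SessionWindows;
    import org.apache.kafka.streams.kstream.Windowed;

    StreamsBuilder builder = new StreamsBuilder();
    KTable<Windowed<String>, SessionReport> reports = builder
        .<String, String>stream("session-events")
        .groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
        .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(30)))
        .aggregate(
            SessionReport::new,                          // initializer: new, empty report
            (key, event, report) -> report.add(event),   // fold one String event into the report
            (key, left, right) -> left.mergeWith(right), // merge two sessions that grew together
            // sessionReportSerde: your Serde<SessionReport>
            Materialized.with(Serdes.String(), sessionReportSerde));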