Hello Kafka users, developers and client-developers,
Sorry for the slight delay, but I've now prepared the first candidate for
release of Apache Kafka 1.0.1.
This is a bugfix release for the 1.0 branch that was first released with
1.0.0 about 3 months ago. We've fixed 46 significant issues since th
I was somehow not aware of this:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams
... :/
On Thu, Feb 1, 2018 at 11:57 PM, Dmitry Minkovsky
wrote:
> Thank you Guozhang.
>
> > related to your consistency requirement of the store? Do you mean
Hello Tony,
Could you share your Streams config values so that people can help
investigate your issue further?
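For reference, the values usually worth sharing are the standard StreamsConfig keys, along these lines (the values below are only placeholders):

```properties
application.id=my-streams-app
bootstrap.servers=broker1:9092,broker2:9092
num.stream.threads=2
commit.interval.ms=30000
cache.max.bytes.buffering=10485760
processing.guarantee=at_least_once
```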
Guozhang
On Mon, Feb 5, 2018 at 12:00 AM, Tony John wrote:
> Hi All,
>
> I have been running a streams application for some time. The application
> runs fine for a while but afte
Hi everyone,
I have designed a Kafka API solution which defines some topics; these
topics are produced by other APIs connected with a Cassandra data model.
So I need to fix the semantic order and sequential order of the events,
meaning I need to guarantee that create, update and delete events will
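A minimal sketch of why keying events by entity id gives per-entity ordering: Kafka's default partitioner sends equal keys to the same partition (the real partitioner uses murmur2; the hash-modulo below is only an illustration), and each partition is consumed in append order.

```java
import java.util.ArrayList;
import java.util.List;

public class KeyOrdering {
    static final int NUM_PARTITIONS = 5;

    // Illustrative stand-in for the default partitioner: equal keys
    // always map to the same partition (Kafka really uses murmur2).
    static int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), NUM_PARTITIONS);
    }

    public static void main(String[] args) {
        // create/update/delete for one entity, all keyed by its id
        String entityId = "user-42";
        List<Integer> partitions = new ArrayList<>();
        for (String event : List.of("create", "update", "delete")) {
            partitions.add(partitionFor(entityId));
        }
        // Same key -> same partition -> the broker preserves append
        // order, so a consumer sees create, update, delete in sequence.
        System.out.println(partitions);
    }
}
```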
Your observation is correct and by design.
The operation after the flatMap() that reads from the repartition topic
always needs to read from this topic, and the record was already
successfully written into this topic.
Thus, processing the input record is "finished" from the point of view
of the f
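As an illustration (the topic name is made up, and this sketch needs the kafka-streams dependency), a topology like the following gets an internal repartition topic between flatMap() and the aggregation, and the aggregation always consumes from that topic rather than from the original input:

```java
import java.util.Arrays;
import java.util.stream.Collectors;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;

StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("input-topic")
    // flatMap() may change the key, so Streams inserts an internal
    // repartition topic before the aggregation below
    .flatMap((key, value) -> Arrays.stream(value.split(" "))
            .map(word -> KeyValue.pair(word, word))
            .collect(Collectors.toList()))
    .groupByKey()
    // count() consumes from the repartition topic; once a record has
    // been written there, the upstream part of the topology is done
    // with it, and a restart resumes from the repartition topic
    .count();
```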
It will be two transactions.
Note that Kafka transactions are not the same thing as transactions in an
RDBMS. There is no notion of ACID guarantees. Kafka's transactions only
guarantee that you get exactly-once processing semantics.
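For context, a Kafka transaction covers only what one transactional producer writes atomically. A hedged sketch of the producer-level API (topic names and the transactional.id are placeholders, and this needs the kafka-clients dependency and a running broker):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("transactional.id", "my-tx-producer"); // placeholder id
props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();
producer.beginTransaction();
try {
    // Both sends commit or abort together -- atomic within Kafka,
    // but not an ACID transaction in the RDBMS sense.
    producer.send(new ProducerRecord<>("topic-a", "k", "v1"));
    producer.send(new ProducerRecord<>("topic-b", "k", "v2"));
    producer.commitTransaction();
} catch (Exception e) {
    producer.abortTransaction();
}
```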
-Matthias
On 2/5/18 3:27 AM, Pegerto Fernandez Torres wrote:
> H
Hello all,
Can someone help me to understand the through() operation?
According to the documentation, the operation is equivalent to using #to# and
defining a #stream#. And according to the code, that is exactly what it does:
invoke to() and return a stream.
Now I want to store an intermediate result of the topol
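If the goal is just to materialize an intermediate result, a sketch of the equivalence (the topic name is a placeholder, and the topic must already exist with a matching partition count):

```java
// shorthand: writes to the topic and continues reading from it
KStream<String, Long> next = intermediate.through("intermediate-topic");

// equivalent long form, as the docs describe
intermediate.to("intermediate-topic");
KStream<String, Long> next2 = builder.stream("intermediate-topic");
```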
Hi everyone;
I have a question about the record that is currently being processed by the
Kafka Streams app when the app stops (suddenly or not).
When restarting the app, the last record that was processed before the
shutdown is replayed, but I noticed that the topology doesn't replay the
entire DAG for th
It's a good tool for your requirement.
You probably need to look at the Kafka Connect / Kafka Streams APIs.
Thank you,
Naresh
On Fri, Feb 2, 2018 at 8:50 PM, Matan Givo wrote:
> Hi,
>
> My name is Matan Givoni and I am a team leader in a small startup company.
>
> We are starting a development on a c
Hey there,
I'm quite new to Kafka itself and the Streams API, so pardon any shortcomings.
What I need is a sliding window giving me access to the data of e.g. the last
half hour in relation to the current data element. As far as I understand, the
Streams API does not provide such a window.
T
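The Streams DSL at this point only offers tumbling and hopping time windows, which can approximate this; the exact semantics described (all records within the last half hour relative to the current record) can be kept manually, e.g. inside a Transformer backed by a state store. A plain-Java sketch of just that eviction logic, with a made-up (timestamp, value) record shape:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SlidingWindow {
    static final long WINDOW_MS = 30 * 60 * 1000L; // last half hour

    // (timestampMs, value) pairs in arrival order
    private final Deque<long[]> buffer = new ArrayDeque<>();

    /** Add the current element and return all buffered values whose
     *  timestamp lies within WINDOW_MS of it, including itself. */
    public List<Long> addAndGetWindow(long timestampMs, long value) {
        buffer.addLast(new long[] { timestampMs, value });
        // evict everything older than the window relative to "now"
        while (!buffer.isEmpty()
                && buffer.peekFirst()[0] < timestampMs - WINDOW_MS) {
            buffer.removeFirst();
        }
        List<Long> window = new ArrayList<>();
        for (long[] entry : buffer) {
            window.add(entry[1]);
        }
        return window;
    }
}
```

This assumes timestamps arrive in order; out-of-order data would need a smarter store, which is part of why the DSL does not ship this window type.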
The consumers in consumer group 'X' do not have a regex subscription
matching the newly created topic 'C'. They simply subscribe with
the subscribe(java.util.Collection topics) method on
topics 'A' and 'B'.
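For comparison, a pattern subscription is what would pick up a newly created topic automatically; a sketch of both forms (consumer construction omitted, pattern is illustrative):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// fixed collection subscription: a new topic 'C' is never picked up
consumer.subscribe(Arrays.asList("A", "B"));

// pattern subscription: matching topics are re-evaluated periodically,
// so a newly created 'C' would trigger a rebalance and be assigned
consumer.subscribe(Pattern.compile("[ABC]"), new ConsumerRebalanceListener() {
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {}
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {}
});
```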
Shouldn't the consumer group have a different state from "Stable" during a
rebalancing rega
Dear All,
In our project (as part of a Kafka failover evaluation), we have a single cluster
with five Kafka nodes (five partitions), three consumers (attached to a single
group) and a single Zookeeper node. As soon as the cluster starts up, we see leader
election per partition and each consumer discovers
Hi All,
I have been running a streams application for some time. The application
runs fine for a while but after a day or two I see the below log getting
printed continuously on to the console.
WARN 2018-02-05 02:50:04.060 [kafka-producer-network-thread | producer-1]
org.apache.kafka.clients.Net