Hello Kafka users,
working on a Kafka Streams stateless application; we want to implement some
health checks so that whenever the connection to Kafka is lost for more than a
threshold, the instance is marked as unhealthy, so that our cluster manager
(could be K8s or AWS ECS) kills that instance and starts a new one.
It's a known issue: https://issues.apache.org/jira/browse/KAFKA-6520
On 2/20/19 3:25 AM, Javier Arias Losada wrote:
> Hello Kafka users,
>
> working on a Kafka Streams stateless application; we want to implement some
> health checks so that whenever the connection to Kafka is lost for more than a
> threshold [...]
Hello Javier,
Matthias is right, it is a known issue, not only in Streams but also in the
underlying producer/consumer clients.
For your own health-check monitoring, I'd suggest considering the following
alternatives:
1) Monitor the consumer offsets, and alert when they have not changed for a
long time. [...]
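One way to realize alternative 1) is a small progress monitor: record the last time the committed offset advanced, and report unhealthy once it has been flat for longer than the threshold. Below is a minimal, non-authoritative sketch in plain Java; the class and method names are made up for illustration, and wiring it to the actual consumer offsets (e.g. via metrics or an admin client) is left out:

```java
// Hypothetical sketch: mark an instance unhealthy when its committed offset
// has not advanced for longer than a threshold. Not part of any Kafka API.
public class OffsetProgressHealthCheck {
    private final long thresholdMs;
    private long lastOffset = -1L;
    private long lastProgressMs;

    public OffsetProgressHealthCheck(long thresholdMs, long nowMs) {
        this.thresholdMs = thresholdMs;
        this.lastProgressMs = nowMs;
    }

    /** Feed the latest committed offset, however it is obtained. */
    public void observe(long offset, long nowMs) {
        if (offset > lastOffset) {   // offset advanced => the app made progress
            lastOffset = offset;
            lastProgressMs = nowMs;
        }
    }

    /** Healthy while the offset advanced within the threshold window. */
    public boolean isHealthy(long nowMs) {
        return nowMs - lastProgressMs <= thresholdMs;
    }

    public static void main(String[] args) {
        OffsetProgressHealthCheck hc = new OffsetProgressHealthCheck(30_000, 0);
        hc.observe(100, 10_000);
        System.out.println(hc.isHealthy(20_000)); // true: progressed at t=10s
        System.out.println(hc.isHealthy(50_000)); // false: flat for 40s > 30s
    }
}
```

A liveness probe in K8s or an ECS health check could then just call isHealthy() behind an HTTP endpoint.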
Since we've seen quite a lot of questions recently about EOS on the mailing
list, I think it is worth adding an FAQ entry here:
https://cwiki.apache.org/confluence/display/KAFKA/FAQ
so that we can refer future questions to that page rather than answering them
repeatedly. @Matthias J Sax: would you like to [...]
Hello Trey,
Just to clarify your solution: the "calls-store" is the same one
materialized from the calls table as "val callsTable = builder.table("calls",
... Materialized.as(.. "call-store"))". So basically you are using the
transformer to update the original materialized store for the callsTable?
Glad to hear that. I think it is worth adding an FAQ entry, as it seems to be
a common scenario where users forget to configure the final consumption stage.
Guozhang
On Tue, Feb 19, 2019 at 4:28 AM Xander Uiterlinden wrote:
> Thanks for your reply. I figured out what was wrong, and it turned out that [...]
Hello Taylor,
Sorry for the late reply! And thanks for the updated information.
I'd recommend overriding some consumer configs via `StreamsConfig` (you can
use StreamsConfig#restoreConsumerPrefix for that) for the following props:
1) increase RECEIVE_BUFFER_CONFIG (the 64K default may cause poll to return
less data per round trip) [...]
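The prefixing itself can be sketched without the Kafka classes. Assuming StreamsConfig#restoreConsumerPrefix simply prepends "restore.consumer." to a consumer property name (and that "receive.buffer.bytes" is the key behind ConsumerConfig.RECEIVE_BUFFER_CONFIG), a self-contained sketch looks like this; the helper here mirrors the real method but is reimplemented locally so the example runs without the kafka-streams jar:

```java
import java.util.Properties;

// Plain-string sketch of overriding only the restore consumer's configs.
// "restore.consumer." is assumed to be the prefix that
// StreamsConfig#restoreConsumerPrefix adds; "receive.buffer.bytes" is the
// key behind ConsumerConfig.RECEIVE_BUFFER_CONFIG.
public class RestoreConsumerConfigSketch {
    static final String RESTORE_PREFIX = "restore.consumer.";

    /** Local stand-in for StreamsConfig.restoreConsumerPrefix(String). */
    static String restoreConsumerPrefix(String consumerProp) {
        return RESTORE_PREFIX + consumerProp;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Bump the restore consumer's socket receive buffer from 64 KB to 1 MB,
        // without touching the main consumer's setting.
        props.put(restoreConsumerPrefix("receive.buffer.bytes"), "1048576");
        System.out.println(props.getProperty("restore.consumer.receive.buffer.bytes"));
    }
}
```

With the real library, the same Properties object is just passed to the KafkaStreams constructor; the prefix ensures the override only applies during state restoration.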
Yes, that is exactly correct. I *assume* I won't run into any concurrency
issues here - that another thread will not be writing to the same store for
the same key while this is being read.
If there are no concurrency issues here (again, it's the same key, so I doubt
it), then a similar approach would [...]
There should be no concurrency issues since all of the processor nodes
above should be within the same sub-topology and hence the same set of
tasks, and a single task should be accessed by a single stream thread at
any given time.
Guozhang
On Wed, Feb 20, 2019 at 10:30 AM Trey Hutcheson wrote:
Ok, this is the Kotlin implementation of the writeThrough functionality. It's
not very sophisticated and performs no guards or exception handling, but it
gets the point across. I apologize if this does not format correctly:
fun <K, V> KStream<K, V>.writeThrough(storeName: String): KStream<K, V> {
    return this.tr[...]
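Since the archive truncates the method body, here is a rough, non-authoritative sketch of the write-through semantics being described (write each record into the state store first, then forward it downstream unchanged), in plain Java with a HashMap standing in for the Streams KeyValueStore. The writeThrough method and the list-based "stream" are illustrative stand-ins, not the Streams API; the real version would do this inside a Transformer attached via KStream#transform with the store's name:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of write-through: every record is written into the store
// (a HashMap stands in for the Streams KeyValueStore), then forwarded
// downstream unchanged.
public class WriteThroughSketch {
    static <K, V> List<Map.Entry<K, V>> writeThrough(List<Map.Entry<K, V>> stream,
                                                     Map<K, V> store) {
        List<Map.Entry<K, V>> downstream = new ArrayList<>();
        for (Map.Entry<K, V> record : stream) {
            store.put(record.getKey(), record.getValue()); // update the store first
            downstream.add(record);                        // then forward unchanged
        }
        return downstream;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        List<Map.Entry<String, String>> in = List.of(Map.entry("call-1", "ringing"));
        List<Map.Entry<String, String>> out = writeThrough(in, store);
        System.out.println(store.get("call-1")); // ringing
        System.out.println(out.size());          // 1
    }
}
```

This also makes the earlier concurrency point concrete: as long as only one thread runs this loop for a given key (as with one stream thread per task), the read-then-write on the store is safe.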
Just a general question about RocksDB in Kafka Streams: I see there is a
folder at /tmp/kafka-stream/ which is used by RocksDB in Kafka Streams. So
when a stream app gets restarted, can the store data be loaded directly from
this folder? Because I see there is very heavy traffic on the network to read
from the broker, assuming it's trying to rebuild the store.
> when a stream app gets restarted, can the store data be loaded directly
> from this folder?
Yes. That's the purpose of local state.
> because I see there is very heavy traffic on the network to read from the
> broker, assuming it's trying to rebuild the store.
This should only happen if the local state is lost or incomplete [...]
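A related practical point: the state directory defaults to a path under /tmp (state.dir defaults to /tmp/kafka-streams), and /tmp is often wiped on container or host restart, which forces exactly this kind of full restoration from the changelog topics over the network. Pointing state.dir at a persistent volume lets a restarted instance reuse its local RocksDB state. A self-contained sketch using plain property strings ("state.dir" is the key behind StreamsConfig.STATE_DIR_CONFIG; the path, application id, and bootstrap servers are just examples):

```java
import java.util.Properties;

// Keep the RocksDB state directory on a persistent volume instead of /tmp,
// so a restarted instance can reuse its local state rather than re-reading
// the full changelog from the brokers.
public class StateDirConfigSketch {
    static Properties streamsProps(String stateDir) {
        Properties props = new Properties();
        props.put("application.id", "my-app");          // example id
        props.put("bootstrap.servers", "broker:9092");  // example bootstrap
        props.put("state.dir", stateDir);               // default is under /tmp
        return props;
    }

    public static void main(String[] args) {
        Properties p = streamsProps("/var/lib/kafka-streams");
        System.out.println(p.getProperty("state.dir"));
    }
}
```

In Kubernetes this typically means mounting a PersistentVolumeClaim at that path (e.g. via a StatefulSet), so the directory survives pod restarts.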
Done. Feel free to extend/correct/complete etc.
-Matthias
On 2/20/19 9:56 AM, Guozhang Wang wrote:
> Since we've seen quite a lot of questions recently about EOS on the
> mailing list, I think it is worth adding an FAQ entry here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ
>
> So that [...]
I have an exactly-once stream that reads a topic, transforms it, and writes
new messages into the same topic as well as other topics. I am using Kafka
2.1.0. The stream applications run in Kubernetes.
I did a k8s deployment of the application with minor changes to the code --
absolutely no changes [...]
Thanks for reporting the issue!
Are you able to reproduce it? If yes, can you maybe provide broker and
client logs in DEBUG level?
-Matthias
On 2/20/19 7:07 PM, Raman Gupta wrote:
> I have an exactly-once stream that reads a topic, transforms it, and writes
> new messages into the same topic as well as other topics. [...]