I'm not sure. One thing I do know is that on the cloud control panel,
in the consumer lag page, the offsets on the input topic didn't reset, so it
was probably something after that.
Anyway, thanks a lot for helping. If we experience that again I'll try to
add more verbose logging to better understand the issue.
Hi Mohan,
This error means that an exception was thrown while a Streams producer
was sending a record to a broker.
Best,
Bruno
On Thu, Jun 6, 2019 at 7:56 PM Parthasarathy, Mohan wrote:
>
> After changing the log level for Kafka Streams from warn to error, I don't
> see this message. These messages were huge blobs of numbers preceded by the
> subject line of this message.
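As a side note on diagnosing that error: Streams can hand you the failing record and the exception through a production exception handler. A minimal sketch, assuming the app can tolerate dropping the failed record (the class name and the response choice here are illustrative, not from this thread):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

// Logs the record that could not be sent and keeps the application running.
// Register it with:
//   props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
//             LoggingProductionExceptionHandler.class);
public class LoggingProductionExceptionHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        System.err.printf("Failed to send record to topic %s: %s%n", record.topic(), exception);
        // CONTINUE drops the record; return FAIL to shut the instance down instead.
        // Some fatal errors (e.g. authorization failures) stop the app regardless of the handler.
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) { }
}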
Honestly, I cannot think of an issue that was fixed in 2.2.1 but not in 2.2.0
that could be correlated with your observations:
https://issues.apache.org/jira/issues/?filter=-1&jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.2.1%20AND%20component%20%3D%20streams%20
Yes, that's right.
Could that be the problem? Anyway, so far, after upgrading from 2.2.0 to
2.2.1, we haven't experienced that problem anymore.
Regards
--
Alessandro Tagliapietra
On Thu, Jun 6, 2019 at 10:50 AM Guozhang Wang wrote:
> That's right, but local state is used as a "materialized view" of your
> changelog topics
I am a little confused by what you say. I can see how it has to rebuild the
state when it is not available on restart, but I don't think it will process
old messages from the input topics. It should start from the last committed
offset, whatever it was before the crash. Could you confirm? I thought th
After changing the log level for Kafka Streams from warn to error, I don't see
this message. These messages were huge blobs of numbers preceded by the subject
line of this message. This can be a big pain in production, where you will run
out of disk space.
Can someone tell me what this error means?
That's right, but local state is used as a "materialized view" of your
changelog topics: if you have nothing locally, then it has to bootstrap
from the beginning of your changelog topic.
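To make that bootstrap step visible, you can attach a state restore listener and watch Streams replay the changelog into the local store on startup. A minimal sketch, with assumed application id, broker address, topic name, and store name:

import java.util.Properties;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.processor.StateRestoreListener;

public class RestoreLoggingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sensors-pipeline");   // assumed
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed

        // A store named "sensor-counts", backed by the <app-id>-sensor-counts-changelog topic.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("sensors-input", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .count(Materialized.as("sensor-counts"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // Must be set before start(); logs the changelog replay that happens
        // whenever the local state is missing.
        streams.setGlobalStateRestoreListener(new StateRestoreListener() {
            public void onRestoreStart(TopicPartition tp, String store, long start, long end) {
                System.out.printf("Restoring %s from %s, offsets %d..%d%n", store, tp, start, end);
            }
            public void onBatchRestored(TopicPartition tp, String store, long batchEnd, long restored) { }
            public void onRestoreEnd(TopicPartition tp, String store, long total) {
                System.out.printf("Finished restoring %s (%d records)%n", store, total);
            }
        });
        streams.start();
    }
}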
But I think your question was about the source "sensors-input" topic, not
the changelog topic. I looked at the
Isn't the windowing state stored in the additional state-store topics that
I had to create?
Like these I have here:
sensors-pipeline-KSTREAM-AGGREGATE-STATE-STORE-01-changelog
sensors-pipeline-KTABLE-SUPPRESS-STATE-STORE-04-changelog
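For context, internal changelog topics with those names are what Streams creates for a windowed aggregation followed by suppress(). A sketch of such a topology (topic names, serdes, and window size are assumed, not taken from your app):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowedSuppressTopology {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("sensors-input", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
               // Backed by ...-KSTREAM-AGGREGATE-STATE-STORE-<n>-changelog
               .count()
               // Backed by ...-KTABLE-SUPPRESS-STATE-STORE-<n>-changelog
               .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
               .toStream((windowedKey, count) -> windowedKey.key())   // drop the window from the key
               .to("sensors-output", Produced.with(Serdes.String(), Serdes.Long()));
        return builder.build();
    }
}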
Thank you
--
Alessandro Tagliapietra
If you deploy your Streams app in a Docker container, you'd need to make
sure the local state directories are preserved, since otherwise whenever you
restart, all the state would be lost and Streams would have to bootstrap from
scratch. E.g. if you are using K8s for cluster management, you'd better use
stateful sets with persistent volumes.
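One way to do that is to point Streams' state.dir at a path backed by a persistent volume, so the local RocksDB state survives container restarts. A minimal sketch, where "/mnt/streams-state" is an assumed mount point:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sensors-pipeline");   // assumed app id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");     // assumed brokers
// The default state directory lives under /tmp, which is typically lost with the container.
props.put(StreamsConfig.STATE_DIR_CONFIG, "/mnt/streams-state");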
Pieter,
KIP-360 should be able to fix it, yes, but it won't be completed soon, and
the earliest release it may land in is 2.4. In the meantime, I thought
that by increasing the segment size, hence reducing the frequency at which
records get truncated and the producer ID gets removed, to
Hello,
We also found this earlier email in the archives which looks very much like
what we are experiencing:
http://mail-archives.apache.org/mod_mbox/kafka-users/201811.mbox/%3CCAM0VdefApmc5wBZQaJmQtbcnZ_OOgGv84qCuPoJS-KU4B=e...@mail.gmail.com%3E
So it seems like:
* It only happens with EXACTLY_ONCE enabled
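For readers of the archive: the exactly-once guarantee mentioned here is presumably the Streams processing.guarantee setting, which is enabled like this; a minimal sketch:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// Makes Streams use transactional producers and read_committed consumers internally.
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);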
Hi Guozhang, Matthias,
@Guozhang The brokers are on version 2.2.0-cp2 and the clients are on version
2.2.1. We upgraded yesterday from 2.1.1 to this version to see if it would
make a difference, but unfortunately it did not. After restarting the brokers
and streams applications, the broker errors and
The aim of having transactional.id configured for the producer is, in
my understanding, to fence off a zombie producer and to proactively
abort its transactions to avoid the need to wait for a timeout.
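For reference, a minimal sketch of the standard transactional produce loop (broker, topic, and transactional.id values are assumed), showing where fencing surfaces and why a fenced producer cannot simply carry on:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");         // assumed
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id");  // assumed
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // initTransactions() bumps the epoch for this transactional.id, fencing any
        // older instance and aborting its open transaction (the behaviour described above).
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("output-topic", "key", "value"));   // assumed topic
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
            // Fatal: this instance was fenced or hit an unrecoverable error; it cannot
            // resume the transaction, only close.
            producer.close();
        } catch (KafkaException e) {
            // For other errors, abort and retry the whole transaction with the same producer.
            producer.abortTransaction();
        }
    }
}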
What I'm interested in doing is to be able to continue the
transaction. For example:
producer.b
Hi,
I have been running Kafka with ZooKeeper 3.5.4 for quite a long time without
issues. I am testing with 3.5.5, but I don't expect any problems.
Hope that helps,
Enrico
On Thu, Jun 6, 2019 at 01:52 Sebastian Schmitz <
sebastian.schm...@propellerhead.co.nz> wrote:
> Thanks Mark for the clarification. That's wha