Hi Evgeny,
In your configuration file the *advertised.listeners* property is set to
the new port number.
For Kafka you need to set the *listeners* property so that the broker
actually listens on the new port.
The *advertised.listeners* property is used to specify a different
name/port that clients should connect to, e.g. in case you use routing
services.
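As a rough sketch (the host name and port below are placeholders, not
taken from your setup), the relevant part of server.properties could
look like:

    # What the broker binds to and listens on
    listeners=PLAINTEXT://0.0.0.0:9094
    # What is handed out to clients in metadata responses; only needed
    # when clients must reach the broker through a different name/port,
    # e.g. via a proxy or other routing service
    advertised.listeners=PLAINTEXT://broker1.example.com:9094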
Hi Richard,
I now use only *listeners* and it's working, thank you!
I had been wondering why *advertised.listeners* only worked with the default port 9092...
Best regards,
Evgeny
Business Application Support
VTB Capital
Telephone: +7 (495) 960 (ext. 264423)
Mobile: +7 (916) 091-8939
Thanks Luca
This is exactly what I was looking for.
On a related note, let's say I stop and restart my application. What would I
have to do so that I do not reprocess events?
I am still working through the Kafka Streams 101 tutorial. I have not gotten
to the DSL tutorials yet.
Andy
But there is no guarantee that the onPartitionsLost callback will be called
before a zombie producer coming back to life tries to continue with the
transaction, e.g. sending offsets or committing, so I should handle the
exception first, and I could directly create a new producer there instead of
doing it in the callback.
The CommitFailedException should be expected, since the fencing happens at
the consumer coordinator. I.e. we can only fence the consumer-producer pair
by the consumer's generation; we cannot fence via the producer epoch, since
there is no other producer that has just grabbed the same txn.id and bumped
the producer epoch.
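A rough sketch of how that handling could look in the
consume-process-produce loop; `createTransactionalProducer()` and
`process()` are hypothetical helpers, and `records`/`offsets` come from
the surrounding poll loop:

    try {
        producer.beginTransaction();
        for (ConsumerRecord<String, String> record : records) {
            producer.send(process(record));   // hypothetical transform to a ProducerRecord
        }
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    } catch (ProducerFencedException e) {
        // A fenced (zombie) producer cannot be reused: close it and create
        // a fresh one right here rather than waiting for onPartitionsLost.
        producer.close();
        producer = createTransactionalProducer();
    } catch (KafkaException e) {
        // Any other error: abort the transaction and retry/rewind as needed.
        producer.abortTransaction();
    }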
Hi,
I can't understand why I see so many "fetch" (version=xx) requests on the
Grafana dashboard. Do you have any idea? I am seeing over 20M req/s of
fetch version=13 requests.
Best Regards
Hi Andy,
The defaults are sensible enough that, under normal operational conditions,
your app should pick up from where it left off. To dig a little more into this,
I suggest you look into the `auto.offset.reset` and `enable.auto.commit` options.
In case you do need to reprocess everything, Kafka Streams has a dedicated
application reset tool for that.
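For illustration, a minimal consumer-side sketch of where those two
options live (the values shown are just examples, not a recommendation):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");  // committed offsets are tracked per group.id
    // Only used when the group has no committed offset yet (or it expired):
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    // Whether the consumer commits offsets automatically in the background:
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");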
Hi All,
I have a KStreams application running inside a Docker container which uses a
persistent key-value store.
I have configured state.dir with a value of /tmp/kafka-streams (which is the
default).
When I start this container using "docker run", I mount /tmp/kafka-streams to a
directory on the host.
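Roughly like this (the host path and image name are placeholders):

    docker run -v /data/kafka-streams:/tmp/kafka-streams my-streams-app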
Hi Neeraj,
Thanks for all that detail! Your expectation is correct. You should see the
checkpoint files after a _clean_ shutdown, and then you should not see it
bootstrap from the beginning of the changelog on the next startup.
How are you shutting down the application? You'll want to call
KafkaStreams#close() so that the shutdown is clean.
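For example, a minimal sketch of a clean shutdown, assuming `streams` is
your KafkaStreams instance:

    // Close the topology on JVM shutdown so state stores are flushed and
    // the checkpoint files are written before the process exits.
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));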