Hello,
This isn't resource intensive at all; it merely changes a value in the
ZooKeeper instance and shares it across the cluster.
On Sun, Apr 12, 2020 at 05:56, KhajaAsmath Mohammed wrote:
Thanks Senthil. This is helpful, but I am worried about doing it with a
standalone process as our data is huge.
Is there a way to do the same thing using Kafka Streams and utilize cluster
resources instead of a standalone client process?
Sent from my iPhone
> On Apr 11, 2020, at 7:27 PM, Se
If you cannot (or don't want to) modify your code, you can also stop the
whole application and use `bin/kafka-consumer-groups.sh` to set a new
start offset per partition. Afterward, you can just restart the
application and it will pick up the corresponding start offsets.
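For example, something along these lines should do it (the group, topic,
partition, and offset below are just placeholders for your own setup):

  bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --group my-group --topic my-topic:0 \
    --reset-offsets --to-offset 12345 --execute

Without `--execute` the tool only does a dry run and prints the offsets it
would set, which is a good way to double-check before applying. Note the
consumer group has to be inactive (application stopped) for the reset to work.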
-Matthias
On 4/11/20 6:56 AM,
Hi, We can re-consume the data from a particular point using the
consumer.seek() and consumer.assign() APIs [1]. Please check out the documentation.
If you have used timestamps at the time of producing the records, you can also
consume records starting from a particular timestamp [2].
https://kafka.apache.org/24/javadoc/index.htm
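For illustration, a rough sketch of the seek()/assign() approach (topic name,
partition, offset, and the String deserializers are placeholders; adjust them
to your own data):

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.TopicPartition;

  public class ReprocessFromOffset {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092");
          props.put("group.id", "reprocess-test");
          props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
          props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              // assign() (instead of subscribe()) gives direct control over the partition
              TopicPartition tp = new TopicPartition("my-topic", 0);
              consumer.assign(Collections.singletonList(tp));

              // seek() jumps back to the offset we want to reprocess from
              consumer.seek(tp, 12345L);

              ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
              for (ConsumerRecord<String, String> record : records) {
                  System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
              }
          }
      }
  }

For the timestamp variant [2], consumer.offsetsForTimes() maps a timestamp per
partition to the earliest offset with an equal-or-greater timestamp, which you
can then pass to seek().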
Hi,
We have lost some data while processing and would like to reprocess it. May I
know the procedure to do it? I have the offset numbers that I need to process.
Any suggestions would be really helpful.
Thanks,
Asmath
Sent from my iPhone