Hello Spark Dev Community,
Following up on the problem statement below.
Thanks,
Megh
On Mon, May 19, 2025, 13:16 megh vidani wrote:
> [...]
I'm aware that Spark does not rely on the Kafka committed offsets. The
commit is purely for monitoring purposes.
Thanks,
Megh
On Mon, May 19, 2025, 18:46 megh vidani wrote:
> [...]
Hi Prashant,
I would like to commit the offsets so that I can monitor this consumer
group along with my other consumer groups.
Thanks,
Megh
On Mon, May 19, 2025, 18:21 Prashant Sharma wrote:
> [...]
Spark does not rely on Kafka's commits; in fact, it tracks the stream
progress itself and reads via offsets (e.g. from the last read points). Why
do you want to commit?
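For reference, that self-tracked progress lives in the query's checkpoint directory: before running each batch, Structured Streaming writes an `offsets/<batchId>` file recording the end offsets it intends to read. A rough sketch of such a file (version header, batch metadata, then one JSON line of topic -> partition -> offset per source; all values here are illustrative, not taken from this thread):

```
v1
{"batchWatermarkMs":0,"batchTimestampMs":1747650000000}
{"events":{"0":42,"1":17}}
{"events":{"0":7}}
```

With a union of two Kafka sources, the file carries one offset line per source, in the order the sources appear in the query plan.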
On Mon, May 19, 2025 at 5:58 PM megh vidani wrote:
> [...]
Hello Spark Community,
I have a structured streaming job in which I'm consuming a topic with the
same name in two different Kafka clusters and then creating a union of
these two streams. I've developed a custom query listener to commit the
offsets back to the Kafka clusters once every batch is completed.
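For what it's worth, the offsets such a listener needs are exposed on each StreamingQueryProgress: every Kafka source reports its `endOffset` as a JSON string mapping topic -> partition -> offset. A minimal sketch of the parsing step in plain Python (the listener wiring and the actual commit call via a Kafka client are assumed and not shown; the input shape follows the progress JSON's `sources` array):

```python
import json

def end_offsets_per_source(progress_sources):
    # Build one offset map per source, keyed (topic, partition) -> next offset.
    # Kept per-source on purpose: with a union of two clusters the topic names
    # collide, and each map must be committed back to its own cluster.
    result = []
    for src in progress_sources:
        parsed = json.loads(src["endOffset"])
        result.append({
            (topic, int(partition)): offset
            for topic, partitions in parsed.items()
            for partition, offset in partitions.items()
        })
    return result
```

Each returned map would then be handed to a Kafka client pointed at the matching cluster; the source-to-cluster mapping has to come from the job's own configuration, since (as far as I can tell) the progress object does not carry the bootstrap servers.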