Hi guys,
are there any plans to support Kafka 0.11 integration for Spark Streaming
applications? I see it isn't supported yet. If there is any way I can
help/contribute, I'd be happy if you could point me in the right direction so
that I can give a hand.
Sincerely,
Matus Cimerman
--
Best regards,
Pavel Gladkov
Hi,
I have the same use case. Could you please share your implementation?
--
Sent from: http://apache-spark-developers-list.1001551.n3.nabble.com/
Hi,
I think I know where the issue surfaces: it's with groupBy aggregation in
Append output mode. What should happen when state for an event-time key
(in groupBy) expires, but new rows for that expired key arrive in a
streaming batch exactly as the watermark is moved up and thus expires the
state?
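The corner case can be sketched with a toy model of Append-mode watermark
semantics. This is only an illustration of the question above, not Spark's
actual implementation; all names here are hypothetical:

```python
# Toy model of Append-mode watermark semantics (NOT Spark internals):
# an aggregate for a key is emitted exactly once, when the watermark
# passes that key's event time; rows arriving after the key expired
# are treated as late and dropped.

def process_batch(state, emitted, batch, watermark):
    """state: dict event_time -> count (open aggregates).
    emitted: dict event_time -> finalized count (Append output).
    batch: list of event times seen in this micro-batch.
    Returns the rows dropped because their key was already past
    the watermark."""
    dropped = []
    for event_time in batch:
        if event_time <= watermark:
            # The watermark already passed this event time, so the
            # key's state is expired (or never created): the row is
            # late and is dropped rather than re-opening the state.
            dropped.append(event_time)
        else:
            state[event_time] = state.get(event_time, 0) + 1
    # Finalize and emit every key the new watermark has expired.
    for key in sorted(k for k in state if k <= watermark):
        emitted[key] = state.pop(key)
    return dropped
```

In this model, a row for an expired key that arrives in the very batch
where the watermark advances past it is dropped, and the aggregate is
emitted without it; whether Spark should behave this way in that exact
batch is the ambiguity being asked about.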