Please try adding the startingOffsets option as well. I have done the same
thing many times with different versions of Spark that support Structured
Streaming.
The other possibility I am seeing is that the problem is at write time.
Can you please confirm this by calling printSchema() after load, and then
converting the data to JSON and writing it to the desired location?

option("startingOffsets", "latest")


On Fri, Jul 27, 2018 at 3:39 PM, dddaaa <danv...@gmail.com> wrote:

> This is a mistake in the code snippet I posted.
>
> The right code that is actually running and producing the error is:
>
> / df = spark \
>        .readStream \
>        .format("kafka") \
>        .option("kafka.bootstrap.servers", "kafka_broker") \
>        .option("subscribe", "test_hdfs3") \
>        .load() \
>        .select(from_json(col("value").cast("string"), schema)/
>
>
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


-- 
Regards,
Arbab Khalil
Software Design Engineer
