Thanks Amiya/TD for responding.

@TD,
Thanks for letting us know about the new foreachBatch API; having a direct
handle on each micro-batch's DataFrame should be useful in many cases.
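For anyone following along, a minimal sketch of what that handle looks like
(reusing the inputdf and placeholder paths from the quoted code below;
foreachBatch is available from Spark 2.4):

    import org.apache.spark.sql.DataFrame

    inputdf.writeStream
      .option("checkpointLocation", "checkpointloc")
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        // batchDF is an ordinary batch DataFrame for this micro-batch,
        // so any batch writer or transformation can be applied to it
        batchDF.write.mode("append").csv("first_output")
      }
      .start()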

@Amiya,
With two independent queries, the input source will be read twice and the
entire DAG will be computed twice. It is not a functional limitation, but it
does cost in resource utilisation and performance.
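This is exactly where foreachBatch should help: the source is read once per
trigger, and the cached micro-batch is written to both outputs. A sketch,
again reusing the inputdf and paths from the quoted code below:

    inputdf.writeStream
      .option("checkpointLocation", "checkpointloc")
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        batchDF.persist()                                  // cache so both writes reuse this batch
        batchDF.write.mode("append").csv("first_output")   // first sink
        batchDF.write.mode("append").csv("second_output")  // second sink
        batchDF.unpersist()
      }
      .start()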

Regards,
Chandan



On Fri, Jul 6, 2018 at 2:42 PM Amiya Mishra <amiya.mis...@bitwiseglobal.com>
wrote:

> Hi Tathagata,
>
> Is there any limitation in the code below when writing to multiple files?
>
> val inputdf: DataFrame = sparkSession.readStream
>   .schema(schema)
>   .format("csv")
>   .option("delimiter", ",")
>   .csv("src/main/streamingInput")
>
> val query1 = inputdf.writeStream
>   .option("path", "first_output")
>   .option("checkpointLocation", "checkpointloc")
>   .format("csv")
>   .start()
>
> val query2 = inputdf.writeStream
>   .option("path", "second_output")
>   .option("checkpointLocation", "checkpoint2")
>   .format("csv")
>   .start()
>
> sparkSession.streams.awaitAnyTermination()
>
>
> And what will be the release date of Spark 2.4.0?
>
> Thanks
> Amiya
>

-- 
Chandan Prakash
