Hi,

This message is in two parts.

I did some tests on these. The idea is to run Spark Structured Streaming
(SSS) on the collection of records that has arrived since the last run of
SSS and then shut the SSS job down. Some parts of this approach have been
described in the following Databricks blog:

Running Streaming Jobs Once a Day For 10x Cost Savings
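To make the run-once idea concrete, here is a minimal PySpark sketch of that pattern, assuming a Kafka source; the servers, topic, checkpoint path and console sink are placeholders, not the actual job. trigger(once=True) makes the query drain whatever arrived since the last run (tracked through the checkpoint) and then stop on its own.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("run-once-sss").getOrCreate()

# Placeholder source; requires the spark-sql-kafka package on the classpath
df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host:9092")   # placeholder
      .option("subscribe", "my_topic")                  # placeholder
      .load())

query = (df.writeStream
         .trigger(once=True)          # process the backlog once, then stop
         .option("checkpointLocation", "/tmp/checkpoints/my_topic")  # placeholder
         .format("console")           # placeholder sink for the sketch
         .start())

query.awaitTermination()   # returns once the single batch has completed
spark.stop()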
Part 2

In this case, we are simply counting the number of rows to be ingested once
before SSS terminates. This is shown in the above method:

batchId is 0
Total records processed in this run = 3107
wrote to DB

So it shows the batchId (0) and the total record count(), and writes to a
BigQuery table.
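For reference, a foreachBatch sink along the following lines would produce output like the three lines above. This is only a sketch, reusing the streaming DataFrame df from the earlier one; the spark-bigquery connector options and the table and bucket names are assumptions, not the actual code.

# Hypothetical foreachBatch function: counts the micro-batch and writes it
# to BigQuery; table and bucket names are placeholders
def write_to_bigquery(batch_df, batch_id):
    print(f"batchId is {batch_id}")
    total = batch_df.count()
    print(f"Total records processed in this run = {total}")
    (batch_df.write
        .format("bigquery")                                 # spark-bigquery connector
        .option("table", "mydataset.mytable")               # placeholder table
        .option("temporaryGcsBucket", "my-staging-bucket")  # placeholder bucket
        .mode("append")
        .save())
    print("wrote to DB")

query = (df.writeStream
         .trigger(once=True)
         .option("checkpointLocation", "/tmp/checkpoints/my_topic")  # placeholder
         .foreachBatch(write_to_bigquery)
         .start())

query.awaitTermination()

Because trigger(once=True) yields exactly one micro-batch per run, the batchId printed on each run is 0 and the count covers everything ingested since the previous run.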