Yes, set trigger(once=True) on all streaming queries and they will behave
like batch jobs. Then you can use any scheduler (e.g. Airflow) to run them
at whatever time window you need. With checkpointing, the next run will
resume processing files from the last checkpoint.
On Fri, Apr 23, 2021 at 8:13 AM Mich Ta
Interesting.
If we go back to the classic Lambda architecture on premise, you could use
the Flume API with Kafka to add files to HDFS on a time-series basis.
Most high-end CDC vendors do exactly that. Oracle GoldenGate (OGG) classic
gets data from Oracle redo logs and sends it to subscribers. One can
deploy OGG
Hi
In one of the Spark Summit demos, it was alluded that we should think of
batch jobs in a streaming pattern, using "run once" on a schedule.
I find this idea very interesting, and I understand how this can be achieved
for sources like Kafka, Kinesis, or similar. In fact, we have implemented
this mode