A question for the runner implementers:

The Beam model is agnostic to streaming vs. batch. But I have use cases
where we replay history (from BigTable or BigQuery) and then transition
into streaming.

Now with Splittable DoFns it's easier to create inputs that start in batch
and then go streaming. But my impression is that runners work in either
streaming or batch mode. I don't think the runner model supports going
from massive batch processing into streaming mode, right?

So if you have an unbounded input anywhere, the runner will run in
streaming mode, even while processing the batch part of the workload?

Is it something the community is thinking about?
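For concreteness, the shape of input I mean can be sketched outside Beam as a generator that first drains a bounded backlog and then tails a live feed. This is illustrative Python, not Beam SDF code; the function and parameter names (`replay_then_stream`, `max_live`, etc.) are hypothetical:

```python
import queue


def replay_then_stream(history, live_queue, poll_timeout=0.1, max_live=None):
    """Yield every record from the bounded history first (the 'batch'
    phase), then switch to consuming the unbounded live queue (the
    'streaming' phase). `max_live` caps the live phase so the example
    terminates; a real source would run until cancelled."""
    # Phase 1: bounded replay of historical records.
    for record in history:
        yield ("replay", record)
    # Phase 2: unbounded consumption of live records.
    emitted = 0
    while max_live is None or emitted < max_live:
        try:
            record = live_queue.get(timeout=poll_timeout)
        except queue.Empty:
            continue  # keep polling; an unbounded source never "finishes"
        yield ("live", record)
        emitted += 1
```

The question is whether a runner could execute phase 1 with batch-style throughput and only then fall back to streaming execution, rather than treating the whole pipeline as streaming because the source is unbounded.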

Alex Van Boxel
