Hey Zach! Sounds like a great use case.
On Wed, Feb 17, 2016 at 3:16 PM, Zach Cox <zcox...@gmail.com> wrote:
> However, the savepoint docs state that the job parallelism cannot be changed
> over time [1]. Does this mean we need to use the same, fixed parallelism=n
> during reprocessing and going forward? Are there any tricks or workarounds
> we could use to still make changes to parallelism and take advantage of
> savepoints?

Yes, currently you have to keep the parallelism fixed. Dynamic scale-in and
scale-out of programs will have very high priority after the 1.0 release [1].
Unfortunately, I'm not aware of any workarounds to overcome this at the moment.

– Ufuk

[1] https://flink.apache.org/news/2015/12/18/a-year-in-review.html
(at the end of the post there is a road map for 2016)
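
PS: The practical upshot is to pin the parallelism explicitly in the program
rather than relying on cluster defaults, so the value in effect when the
savepoint is taken matches the value used when resuming from it. A minimal
sketch (the parallelism value 4 and the job name are just placeholders for
whatever you use):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FixedParallelismJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Pin the job parallelism so savepoints are always taken and
            // restored with the same value (placeholder: 4).
            env.setParallelism(4);

            // Build the actual topology here; a trivial pipeline for illustration.
            env.fromElements("a", "b", "c").print();

            env.execute("fixed-parallelism-job");
        }
    }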