Hi Dan,

It's not entirely clear to me how you want to write your tests, but it is possible with your setup (we have a couple of thousand tests in Flink that do that).

What you usually try to use is a test source that is finite (e.g. a file source that is not scanning for new files), such that the stream ends on its own and `execute()` returns.
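To make the advice concrete, here is a minimal sketch of a MiniCluster-style test with a finite source. The class and sink names are illustrative (this is not a test from the Flink codebase); the point is that with a bounded input like `fromElements`, `execute()` returns once all records are processed, so assertions can run right after it:

```java
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.test.util.MiniClusterWithClientResource;
import org.junit.ClassRule;
import org.junit.Test;

public class FiniteSourceJobTest {

    // Shared MiniCluster for all tests in this class (flink-test-utils).
    @ClassRule
    public static final MiniClusterWithClientResource FLINK =
        new MiniClusterWithClientResource(
            new MiniClusterResourceConfiguration.Builder()
                .setNumberTaskManagers(1)
                .setNumberSlotsPerTaskManager(2)
                .build());

    @Test
    public void testPipelineWithBoundedInput() throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(2);

        // fromElements is a finite source: the job finishes after the last
        // element, unlike a socket or Kafka source that runs forever.
        env.fromElements(1, 2, 3, 4)
            .map(x -> x * 2)
            .addSink(new CollectSink()); // hypothetical test sink collecting results

        // Blocks until the bounded job finishes; assert on the collected
        // output after this returns.
        env.execute("finite-source-test");
    }
}
```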
I changed the test to use `ExecutionMode.BATCH` in v1.11 and it still doesn't work. How did devs write MiniCluster tests for similar code before? Did they not?
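One likely source of confusion here: `ExecutionMode.BATCH` on the `ExecutionConfig` is the older DataSet-era setting and does not switch a DataStream program to batch execution. Batch execution for the DataStream API arrived in Flink 1.12 (FLIP-134) and is enabled differently. A minimal sketch of the 1.12 API:

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BatchModeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Flink 1.12+ (FLIP-134): run this DataStream program with batch
        // semantics when all sources are bounded. Not available in 1.11.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        env.fromElements("a", "b", "c")
            .map(String::toUpperCase)
            .print();

        env.execute("batch-mode-example");
    }
}
```

The same setting can also be passed at submission time via `execution.runtime-mode=BATCH`, which keeps the program itself mode-agnostic.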
On Sat, Feb 6, 2021 at 5:38 PM Dan Hill wrote:
Ah looks like I need to use 1.12 for this. I'm still on 1.11.
On Fri, Feb 5, 2021, 08:37 Dan Hill wrote:
Thanks Aljoscha!
On Fri, Feb 5, 2021 at 1:48 AM Aljoscha Krettek wrote:
Hi Dan,

I'm afraid this is not easily possible using the DataStream API in STREAMING execution mode today. However, there is one possible solution, and we're introducing changes that will also make this work in STREAMING mode.

The possible solution is to use the `FileSink` instead of the `StreamingFileSink`.
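For reference, the suggested `FileSink` lives in `flink-connector-files` (available from 1.12). A minimal sketch of wiring it up, assuming a simple `String` stream and a placeholder output path; the key difference from the legacy sink is noted in the comment:

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // FileSink (flink-connector-files, 1.12+) finalizes its in-progress
        // files when a bounded job ends, which the legacy StreamingFileSink
        // does not do without a final checkpoint.
        FileSink<String> sink = FileSink
            .forRowFormat(new Path("/tmp/test-output"), // placeholder path
                          new SimpleStringEncoder<String>("UTF-8"))
            .build();

        env.fromElements("a", "b", "c").sinkTo(sink);
        env.execute("file-sink-example");
    }
}
```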
Hi Flink user group,

*Background*
I'm changing a Flink SQL job to use DataStream. I'm updating an existing MiniCluster test in my code. It has a similar structure to other tests in flink-tests. I call `StreamExecutionEnvironment.execute`. My tests sink using `StreamingFileSink` bulk formats to a tmp directory.
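For readers of the archive, a rough sketch of the kind of sink described above (the Parquet/Avro writer, path, schema, and stream are all illustrative, not the poster's actual code). This setup is also the likely root of the problem in this thread: with bulk formats, `StreamingFileSink` rolls part files only on checkpoints, so a short bounded test job can finish before anything is committed to the tmp directory:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class BulkSinkSketch {
    // Illustrative only: attach a Parquet bulk-format StreamingFileSink
    // writing to a temporary directory. Bulk formats roll part files only
    // on checkpoints; in-progress files are not committed if the job ends.
    static void attachSink(DataStream<GenericRecord> stream,
                           String tmpDir,
                           Schema schema) {
        StreamingFileSink<GenericRecord> sink = StreamingFileSink
            .forBulkFormat(new Path(tmpDir),
                           ParquetAvroWriters.forGenericRecord(schema))
            .build();
        stream.addSink(sink);
    }
}
```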