Hi Jark,

thanks for the quick answer - I strongly suspected there was a hack like that somewhere, but I couldn't easily find it in the maze of old and new Scala and Java APIs :D

For my current experiments it's OK - I'm sure everything will be cleaned up in the next releases :)


best,

maciek



On 05/04/2020 06:04, Jark Wu wrote:
Hi Maciek,

This will be supported in the future.
Currently, you can create a `StreamTableEnvironmentImpl` yourself using its constructor (the constructor doesn't restrict batch mode).
The SQL CLI does it the same way [1] (even though it's a hack).

Best,
Jark

[1]: https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java#L419
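To make the hack a bit more concrete, here is a rough Java sketch modeled on the SQL CLI code linked above, written against Flink 1.10 with the Blink planner. Everything below is internal API: the class names, the constructor argument list of `StreamTableEnvironmentImpl`, and the `ComponentFactoryService` lookups may all change between releases, so please double-check against the `ExecutionContext` of your exact Flink version rather than treating this as a stable recipe.

```java
// Hedged sketch (Flink 1.10, Blink planner): build a StreamTableEnvironmentImpl
// with isStreamingMode = false, which the public create(...) factory rejects.
// All of this is internal API and may differ in other releases.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableConfig;
import org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl;
import org.apache.flink.table.catalog.CatalogManager;
import org.apache.flink.table.catalog.FunctionCatalog;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;
import org.apache.flink.table.delegation.Executor;
import org.apache.flink.table.delegation.ExecutorFactory;
import org.apache.flink.table.delegation.Planner;
import org.apache.flink.table.delegation.PlannerFactory;
import org.apache.flink.table.factories.ComponentFactoryService;
import org.apache.flink.table.module.ModuleManager;

import java.util.Map;

public class BatchOverStreamTableEnv {

    public static StreamTableEnvironmentImpl create(StreamExecutionEnvironment env) {
        // Blink planner in batch mode - the combination that the public
        // StreamTableEnvironment.create(env, settings) currently refuses.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .useBlinkPlanner()
            .inBatchMode()
            .build();

        TableConfig config = new TableConfig();
        ModuleManager moduleManager = new ModuleManager();
        CatalogManager catalogManager = new CatalogManager(
            settings.getBuiltInCatalogName(),
            new GenericInMemoryCatalog(
                settings.getBuiltInCatalogName(),
                settings.getBuiltInDatabaseName()));
        FunctionCatalog functionCatalog =
            new FunctionCatalog(config, catalogManager, moduleManager);

        // The SQL CLI additionally wires its own StreamExecutionEnvironment
        // into the executor via reflection; see ExecutionContext#lookupExecutor.
        Map<String, String> executorProperties = settings.toExecutorProperties();
        Executor executor = ComponentFactoryService
            .find(ExecutorFactory.class, executorProperties)
            .create(executorProperties);

        Map<String, String> plannerProperties = settings.toPlannerProperties();
        Planner planner = ComponentFactoryService
            .find(PlannerFactory.class, plannerProperties)
            .create(plannerProperties, executor, config, functionCatalog, catalogManager);

        // Unlike the create(...) factory method, the constructor itself does
        // not reject isStreamingMode = false.
        return new StreamTableEnvironmentImpl(
            catalogManager, moduleManager, functionCatalog, config,
            env, planner, executor, settings.isStreamingMode());
    }
}
```

Once you have the environment, `toAppendStream`/`toRetractStream` should hand you a `DataStream` even though the planner runs in batch mode - with the usual caveat that this relies on unsupported internals.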

On Sat, 4 Apr 2020 at 15:42, Maciek Próchniak <m...@touk.pl <mailto:m...@touk.pl>> wrote:

    Hello,

    I'm playing around with the Table/SQL API (Flink 1.9/1.10) and I was
    wondering how I can do the following:

    1. read batch data (e.g. from files)

    2. sort them using the Table/SQL SortOperator

    3. perform further operations using the "normal" DataStream API
    (treating my batch as a finite stream) - to reuse the code I have
    developed for stream processing cases.


    Now, to perform step 2 I understand I should use the Blink planner in
    batch mode, but then - although there is a StreamExecutionEnvironment
    underneath - there seems to be no easy (or at least documented ;))
    way of going from a Table to a DataStream.

    The toAppendStream/toRetractStream methods are restricted to stream
    mode, and if I use them I cannot easily use the SortOperator.

    Of course, I could write the results to some external output like
    files, but I'd like to avoid that...

    Is there any nice way to do this? And if not - are there plans to
    make it possible in the future?


    thanks,

    maciek


    ps. the new Table/SQL stuff is really, really cool!
