Hi!
I don't quite understand the question, but I assume you first run the Table
API program, then run the DataStream program, and you expect the results of
the two programs to be identical?
If that is the case, the job will run twice: Flink does not cache the
result of a job, so in each run the UUID() expression is re-evaluated and
the generated random ids will differ.
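A minimal plain-Python sketch of the effect (no Flink involved; `run_job` is a hypothetical stand-in for one execution of your pipeline): each run re-evaluates the non-deterministic call, just as Flink re-evaluates UUID() per job run.

```python
import uuid

def run_job():
    # Stand-in for one execution of the pipeline: the non-deterministic
    # uuid call is evaluated fresh on every run, analogous to Flink
    # re-executing SELECT id, UUID() ... each time the job is submitted.
    return [("row-%d" % i, str(uuid.uuid4())) for i in range(3)]

first_run = run_job()
second_run = run_job()

# Same shape, different random ids: nothing is cached between runs.
print([a[1] == b[1] for a, b in zip(first_run, second_run)])
```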
I have a question about how the conversion from the Table API to the
DataStream API actually works under the covers.
If I have a Table API operation that creates a random id, like:
SELECT id, CAST(UUID() AS VARCHAR) as random_id FROM table
...then I convert this table to a DataStream with
t_env.to_retr