In the current Flink version, OVERWRITE has to be added to every
INSERT INTO statement; it is not part of the connector definition
anymore. Maybe we can introduce an option in the future to define the
default connector behavior (feel free to open an issue for this if you
think this is required).
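For example (a rough sketch, assuming a batch TableEnvironment called
tableEnv and a table out_users whose connector supports overwriting,
such as the filesystem connector in batch mode; all names are
placeholders):

// OVERWRITE is now spelled out per statement instead of being
// configured on the connector.
tableEnv.executeSql(
    "INSERT OVERWRITE out_users SELECT id, name FROM users");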
Great! Thanks for the detailed answer Timo! I think I'll wait for the
migration to finish before updating my code.
However, does using a catalog solve the problem of overwriting CSV
output as well? I can't find a way to use INSERT OVERWRITE with a CSV
sink using executeSql.
Best,
Flavio
On Mon, J
Hi Flavio,
FLIP-129 will update the connect() API with a programmatic way of
defining tables. At the moment, the API only supports defining tables
via DDL through executeSql.
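For example, a CSV-backed table can be declared like this (a rough
sketch using the filesystem connector of Flink 1.11+; path and schema
are placeholders):

// Replaces registerTableSink(): the table is declared via DDL and can
// then be written with INSERT INTO / INSERT OVERWRITE.
tableEnv.executeSql(
    "CREATE TABLE outUsers (" +
    "  id BIGINT," +
    "  name STRING" +
    ") WITH (" +
    "  'connector' = 'filesystem'," +
    "  'path' = 'file:///tmp/outUsers'," +
    "  'format' = 'csv'" +
    ")");

In batch mode such a filesystem table also accepts INSERT OVERWRITE,
which should cover the CSV question above.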
I would recommend implementing the Catalog interface. This interface
has a lot of methods, but you only need to implement a couple of
methods.
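As a rough sketch of the wiring (GenericInMemoryCatalog is a concrete
implementation that ships with Flink; a hand-written Catalog is
registered the same way, and for a read-only catalog mainly the lookup
methods such as getTable and tableExists need real logic):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;

// Register a catalog on the environment; tables created via DDL then
// live in that catalog.
TableEnvironment tableEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance().inBatchMode().build());
tableEnv.registerCatalog("my_catalog", new GenericInMemoryCatalog("my_catalog"));
tableEnv.useCatalog("my_catalog");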
Any advice on how to fix those problems?
Best,
Flavio
On Thu, Jan 21, 2021 at 4:03 PM Flavio Pompermaier
wrote:
> Hello everybody,
> I was trying to get rid of the deprecation warnings about
> using BatchTableEnvironment.registerTableSink() but I don't know how to
> proceed.
>
> My current code
Hello everybody,
I was trying to get rid of the deprecation warnings about
using BatchTableEnvironment.registerTableSink() but I don't know how to
proceed.
My current code does the following:
BatchTableEnvironment benv = BatchTableEnvironment.create(env);
benv.registerTableSink("outUsers", getFie
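The call above is cut off in the archive; the usual shape of this
deprecated overload, with placeholder stand-ins for the truncated
arguments, looks like:

// Placeholder reconstruction of the deprecated pattern; the real field
// names, types and sink are unknown here.
benv.registerTableSink(
    "outUsers",
    new String[] {"id", "name"},                      // field names
    new TypeInformation[] {Types.LONG, Types.STRING}, // field types
    new CsvTableSink("/tmp/outUsers", ","));          // deprecated CSV sink

This is exactly the pattern that the DDL shown earlier in the thread
replaces.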