Hi. I think you can write a UDF [1] to process the fields and then insert
the result into the sink.
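A minimal sketch of what such a scalar UDF could look like. The class name
`Normalize` and the trim/uppercase transformation are placeholders for
whatever per-field processing you need; in a real Flink job the class would
extend `org.apache.flink.table.functions.ScalarFunction` (shown here as a
plain class so the logic stands alone):

```java
// Sketch of a scalar UDF body. In Flink, this public class would extend
// org.apache.flink.table.functions.ScalarFunction; Flink then invokes the
// public eval method once per input value.
public class Normalize {
    public String eval(String value) {
        if (value == null) {
            return null;
        }
        // Placeholder transformation -- replace with your real field logic.
        return value.trim().toUpperCase();
    }
}
```

With the function registered, the per-row loop can usually be replaced by a
single set-based statement (table and column names taken from your snippet),
e.g. `tEnv.createTemporarySystemFunction("Normalize", Normalize.class);`
followed by `tEnv.executeSql("INSERT INTO AnotherTable SELECT
Normalize(some_column) FROM SomeTable")`, which lets Flink push all rows
through the sink in one job instead of issuing one INSERT per row.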

Best.
Shengkai

[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/udfs/

<pod...@gmx.com> wrote on Thu, Sep 15, 2022 at 22:10:

> What's the most effective way (performance-wise) to update a large number
> of rows? Presumably it will be something like "INSERT INTO table (column1)
> SELECT column1 FROM ...". In any case, I don't see any "UPDATE" in Flink?
> But sometimes SQL alone is not enough.
> Suppose I have code:
>
> TableResult tableResult1 = tEnv.executeSql("SELECT * FROM SomeTable");
> try (org.apache.flink.util.CloseableIterator<Row> it = tableResult1.collect()) {
>     while (it.hasNext()) {
>         Row row = it.next();
>         // Treat row:
>         String x_field = row.getField("some_column").toString();
>         // Do something with x_field
>         ...
>         tEnv.executeSql("INSERT INTO AnotherTable (column) VALUES ('new_value')");
>     }
> }
>
> But this INSERT will probably be a performance killer...
>
> Any suggestions on how to do this in a smart way?
>
> Mike
>
