Hi all,

I am trying to use Flink SQL to run a Hive task. I use tEnv.sqlUpdate to
execute my SQL, which looks like "insert overwrite ... select ...". But I
find the parallelism of the sink is always 1, which is intolerable for large
data. Why does this happen? Also, is there any guide for deciding the
TaskManager memory when I have two huge tables to hash join, for example,
when each table has several TB of data?
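
For reference, this is roughly what my job looks like (the catalog name,
Hive conf dir, and table names below are simplified placeholders, not the
real ones):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveInsertJob {
    public static void main(String[] args) throws Exception {
        // Batch mode with the Blink planner, which is needed for Hive sinks
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Register and use the Hive catalog (names/paths are placeholders)
        HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive-conf");
        tEnv.registerCatalog("myhive", hive);
        tEnv.useCatalog("myhive");

        // The insert-overwrite query; table names are placeholders
        tEnv.sqlUpdate("INSERT OVERWRITE target_table SELECT * FROM source_table");
        tEnv.execute("hive insert job");
    }
}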

Thanks,
Faaron
