Hi Jingsong,


    Thanks for your reply.


    It seems that wrapping fields is a feasible approach for me now. There is 
also an existing JIRA, FLINK-8921, that tries to improve this.


Thanks,
Simon
On 06/26/2019 19:21, JingsongLee <lzljs3620...@aliyun.com> wrote:
Hi Simon:
Does your code include the PR [1]?
If it does: try setting TableConfig.setMaxGeneratedCodeLength to a smaller 
value (the default is 64000).
If it does not: can you wrap some fields into a nested Row field to reduce 
the field count?


[1] https://github.com/apache/flink/pull/5613
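For illustration, the two suggested workarounds might look roughly like this. This is only a sketch: `tEnv` is an assumed, pre-configured table environment variable, the threshold value 4000 is an arbitrary example, and the `ROW(...)` grouping of columns `a`, `b`, `c` is hypothetical.

```java
// Sketch only, assuming an existing Flink table environment `tEnv`
// (hypothetical variable name).

// Workaround 1: lower the length threshold at which generated code
// is split into smaller methods (default 64000; 4000 is an example).
tEnv.getConfig().setMaxGeneratedCodeLength(4000);

// Workaround 2: reduce the top-level field count by wrapping groups
// of columns into a nested ROW, instead of selecting 1000+ flat
// columns directly.
tEnv.sqlUpdate(
    "INSERT INTO tableA " +
    "SELECT ROW(a, b, c) AS grouped1, d " +
    "FROM sourceTable");
```

The idea behind workaround 2 is that fewer top-level fields means less code in the single generated `map` method, keeping it under the JVM's 64 KB per-method bytecode limit.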


------------------------------------------------------------------
From: Simon Su <barley...@163.com>
Send Time: Wednesday, June 26, 2019, 17:49
To: user <user@flink.apache.org>
Subject: Best Flink SQL length proposal


Hi all, 
    Currently I am facing a problem caused by a long Flink SQL statement. 
    My SQL is like "insert into tableA select a, b, c ... from sourceTable", and I 
have more than 1000 columns in the select list. The problem is that the Flink 
code generator generates a RichMapFunction class containing a map function 
that exceeds the JVM's per-method limit (64 KB). It throws an exception like:
    Caused by: java.lang.RuntimeException: Compiling 
"DataStreamSinkConversion$3055": Code of method 
"map(Ljava/lang/Object;)Ljava/lang/Object;" of class 
"DataStreamSinkConversion$3055" grows beyond 64 KB
    So, is there any best practice for this?


Thanks,
Simon




