Re: Re: Flink SQL client not able to read parquet format table

2020-04-12 Thread Jingsong Li
> correct it if I write something wrong. > Thanks, > Lei > -- > wangl...@geekplus.com.cn (followed by the quoted header of the 2020-04-10 11:03 reply from Jingsong Li to wangl...@geekplus.com.cn, CC Jark Wu, lirui, user)

Re: Re: Flink SQL client not able to read parquet format table

2020-04-10 Thread wangl...@geekplus.com.cn
Hi Lei, I think the reason is that our `HiveMapredSplitReader` does not support name-mapping reads for the parquet format. Can you create a JIRA for tracking this? Best, Jingsong Lee. On Fri, Apr 10, 2020 at 9:42 AM wangl

Re: Re: Flink SQL client not able to read parquet format table

2020-04-09 Thread Jingsong Li
> Hi Lei, which Hive version did you use? Can you share

Re: Re: Flink SQL client not able to read parquet format table

2020-04-09 Thread wangl...@geekplus.com.cn
Hi Lei, which Hive version did you use? Can you share the complete Hive DDL? Best, Jingsong Lee. On Thu, Apr 9, 2020 at 7:15 PM wangl...@geekplus.com.cn wrote: I am using the newest 1.10 Blink planner. Perhaps it is because of the method I used

Re: Re: Flink SQL client not able to read parquet format table

2020-04-09 Thread Jingsong Li
Hi Lei, which Hive version did you use? Can you share the complete Hive DDL? Best, Jingsong Lee. On Thu, Apr 9, 2020 at 7:15 PM wangl...@geekplus.com.cn <wangl...@geekplus.com.cn> wrote: > I am using the newest 1.10 Blink planner. > Perhaps it is because of the method I used to write the pa

Re: Re: Flink SQL client not able to read parquet format table

2020-04-09 Thread wangl...@geekplus.com.cn
I am using the newest 1.10 Blink planner. Perhaps it is because of the method I used to write the parquet file: receive Kafka messages, transform each message into a Java class object, write the object to HDFS using StreamingFileSink, and add the HDFS path as a partition of the Hive table. No matter
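For reference, the write path described in that last message roughly corresponds to the minimal sketch below, using Flink 1.10-era DataStream APIs. The `OrderEvent` POJO, the Kafka topic, the HDFS path, and the checkpoint interval are hypothetical placeholders, not details taken from the thread; how the parquet files get written this way (reflection-derived Avro schema and field order) is the part that may interact with the name-mapping limitation of `HiveMapredSplitReader` mentioned above.

```java
// Minimal sketch of the Kafka -> POJO -> parquet-on-HDFS write path (Flink 1.10-era APIs).
// OrderEvent, the topic name, and the HDFS path are hypothetical placeholders.
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.BasePathBucketAssigner;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaToParquetJob {

    // Hypothetical POJO; ParquetAvroWriters derives an Avro schema from it by reflection.
    public static class OrderEvent {
        public String orderId;
        public long ts;
        public double amount;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Parquet is a bulk format: part files are rolled and finalized on checkpoints.
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "parquet-writer");

        // 1. Receive Kafka messages.
        DataStream<String> raw = env.addSource(
                new FlinkKafkaConsumer<>("orders", new SimpleStringSchema(), props));

        // 2. Transform each message into a Java class object (real parsing elided).
        DataStream<OrderEvent> events = raw
                .map(KafkaToParquetJob::parse)
                .returns(OrderEvent.class);

        // 3. Write the objects to HDFS as parquet files with StreamingFileSink.
        //    BasePathBucketAssigner keeps all part files directly under the given path,
        //    so that one directory can later be registered as one Hive partition.
        StreamingFileSink<OrderEvent> sink = StreamingFileSink
                .forBulkFormat(
                        new Path("hdfs:///warehouse/my_table/dt=2020-04-09"),
                        ParquetAvroWriters.forReflectRecord(OrderEvent.class))
                .withBucketAssigner(new BasePathBucketAssigner<>())
                .build();
        events.addSink(sink);

        // 4. The written directory is then attached to the Hive table, e.g.
        //    ALTER TABLE my_table ADD PARTITION (dt='2020-04-09') LOCATION '...';
        env.execute("kafka-to-parquet");
    }

    private static OrderEvent parse(String json) {
        // Placeholder: a real job would parse the JSON payload into the POJO fields here.
        OrderEvent e = new OrderEvent();
        e.orderId = json;
        return e;
    }
}
```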