For the JSON format, you only need to define the partial set of columns to be 
selected in the Flink DDL. 
But for the CSV format, that's not supported. If the CSV file has no header, how 
could the incomplete set of columns defined in the Flink DDL be mapped to the 
original fields in the file? That's why you need to declare all the columns, so 
the mapping can be done. If there is a header, the mapping would be possible and 
it should meet your requirement; however, the current implementation doesn't 
consider such a case. 
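
To illustrate (a minimal sketch only - the connector options, path, and column 
names below are assumptions, not your actual schema): with CSV, the DDL has to 
declare every field in file order, and the projection then happens in the query. 
``` 
-- Hypothetical CSV source: all four fields in the file must be declared,
-- in the order they appear in the file.
CREATE TABLE csv_source (
  id BIGINT,
  name STRING,
  address STRING,
  score DOUBLE
) WITH (
  'connector' = 'filesystem',
  'path' = '/path/to/file.csv',
  'format' = 'csv'
);

-- The columns you don't need are simply left out of the query.
SELECT id, score FROM csv_source;
``` 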



Best regards, 
Yuxia 


发件人: "podunk" <pod...@gmx.com> 
收件人: "User" <user@flink.apache.org> 
发送时间: 星期二, 2022年 7 月 12日 下午 5:13:05 
主题: Re: Re: Does Table API connector, csv, has some option to ignore some 
columns 

This is really surprising. 
When you import data from a file, you rarely need to import everything from 
that file. Most often it's just a few columns. 
So a program that reads the file should be able to do this - it's the ABC of 
working with data. 
Often the suggestion is "you can write your own script". Sure, I can. I could 
write the entire program here - from scratch. 
But I use a ready-made program precisely to avoid writing my own scripts. 
Sent: Tuesday, July 12, 2022 at 12:24 AM 
From: "Alexander Fedulov" <alexan...@ververica.com> 
To: pod...@gmx.com 
Cc: "user" <user@flink.apache.org> 
Subject: Re: Re: Does Table API connector, csv, has some option to ignore some 
columns 
Hi podunk, 
no, this is currently not possible: 
> Currently, the CSV schema is derived from table schema. [1] 
So the Table schema is used to define how Jackson CSV parses the lines and 
hence needs to be complete. 
[1] https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/csv/ 
Best, 
Alexander Fedulov 
On Mon, Jul 11, 2022 at 5:43 PM <pod...@gmx.com> wrote: 



No, that is not what I meant. 
I asked: 'Does the Table API connector, CSV, have some option to ignore some 
columns in the source file?' 
Sent: Monday, July 11, 2022 at 5:28 PM 
From: "Xuyang" < [ mailto:xyzhong...@163.com | xyzhong...@163.com ] > 
To: [ mailto:pod...@gmx.com | pod...@gmx.com ] 
Cc: [ mailto:user@flink.apache.org | user@flink.apache.org ] 
Subject: Re:Re: Does Table API connector, csv, has some option to ignore some 
columns 


Hi, did you mean `insert into table1 select col1, col2, col3 ... from table2`? 



If this doesn't meet your requirement, what about using a UDF to customize what 
you want at runtime? 
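
For illustration (a rough sketch only - the table and column names are made 
up), the insert-and-project idea above would look roughly like this: 
``` 
-- Hypothetical sketch of that pattern: table2 is the CSV source table with
-- every CSV column declared; table1 keeps only the columns you want.
INSERT INTO table1
SELECT col1, col2, col3
FROM table2;
``` 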




-- 
Best! 
Xuyang 




On 2022-07-11 16:10:00, pod...@gmx.com wrote: 

I want to control what I insert into the table, not what I get from the table. 
Sent: Monday, July 11, 2022 at 3:37 AM 
From: "Shengkai Fang" < [ mailto:fskm...@gmail.com | fskm...@gmail.com ] > 
To: [ mailto:pod...@gmx.com | pod...@gmx.com ] 
Cc: "user" < [ mailto:user@flink.apache.org | user@flink.apache.org ] > 
Subject: Re: Does Table API connector, csv, has some option to ignore some 
columns 
Hi. 
In Flink SQL, you can select the columns that you want in the query. For 
example, you can use 
``` 
SELECT col_a, col_b FROM some_table; 
``` 
Best, 
Shengkai 
<pod...@gmx.com> wrote on Sat, Jul 9, 2022 at 01:48: 


Does the Table API connector, CSV, have some option to ignore some columns in 
the source file? 
For instance, read only the first, second, ninth... but not the others? 
Or any other trick? 
CREATE TABLE some_table ( 
  some_id BIGINT, 
  ... 
) WITH ( 
  'format' = 'csv', 
  ... 
) 







