[
https://issues.apache.org/jira/browse/FLINK-38132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18042960#comment-18042960
]
huyuliang edited comment on FLINK-38132 at 12/5/25 3:48 AM:
------------------------------------------------------------
This is a clone of FLINK-10684; I shouldn't have opened it. Please take a look
at FLINK-10684 instead.
was (Author: JIRAUSER309438):
This is a clone of FLINK-10684; I shouldn't have opened it. You can go to FLINK-10684 and take a look.
> CLONE - Improve the CSV reading process
> ---------------------------------------
>
> Key: FLINK-38132
> URL: https://issues.apache.org/jira/browse/FLINK-38132
> Project: Flink
> Issue Type: Improvement
> Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
> Affects Versions: 2.0.0, 1.19.3, 1.20.2
> Reporter: huyuliang
> Priority: Minor
> Labels: CSV, auto-deprioritized-major, auto-deprioritized-minor
> Original Estimate: 360h
> Remaining Estimate: 360h
>
> CSV is one of the most commonly used file formats in data wrangling. To load
> records from CSV files, Flink provides the basic {{CsvInputFormat}} as well
> as some variants (e.g., {{RowCsvInputFormat}} and {{PojoCsvInputFormat}}).
> However, the reading process could still be improved. For example, we could
> add a built-in utility that automatically infers schemas from CSV headers and
> a sample of the data; a rough sketch of this idea follows below. Also, the
> current bad-record handling could be improved by keeping the invalid lines
> themselves (and, ideally, the reasons parsing failed) instead of only logging
> their total count.
>
> This is an umbrella issue for all improvements and bug fixes to the CSV
> reading process.
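>
> As a rough illustration only (the class and method names below are
> hypothetical, not part of any existing Flink API), such a utility could scan
> the header row plus a handful of sample rows and widen each column's type
> when conflicting samples are seen:
> {code:java}
> import java.util.Arrays;
> import java.util.LinkedHashMap;
> import java.util.List;
> import java.util.Map;
>
> /** Illustrative sketch: infer a column-name -> type mapping from a CSV header and sample rows. */
> public class CsvSchemaInferenceSketch {
>
>     enum ColumnType { LONG, DOUBLE, BOOLEAN, STRING }
>
>     /** Infers one type per column; widens to a common type when samples disagree. */
>     static Map<String, ColumnType> inferSchema(String header, List<String> samples, String delimiter) {
>         String[] names = header.split(delimiter, -1);
>         ColumnType[] types = new ColumnType[names.length];
>
>         for (String line : samples) {
>             String[] fields = line.split(delimiter, -1);
>             for (int i = 0; i < names.length && i < fields.length; i++) {
>                 ColumnType observed = typeOf(fields[i].trim());
>                 types[i] = (types[i] == null) ? observed : widen(types[i], observed);
>             }
>         }
>
>         Map<String, ColumnType> schema = new LinkedHashMap<>();
>         for (int i = 0; i < names.length; i++) {
>             schema.put(names[i].trim(), types[i] == null ? ColumnType.STRING : types[i]);
>         }
>         return schema;
>     }
>
>     /** Cheapest type that can represent a single field value. */
>     static ColumnType typeOf(String value) {
>         if (value.equalsIgnoreCase("true") || value.equalsIgnoreCase("false")) return ColumnType.BOOLEAN;
>         try { Long.parseLong(value); return ColumnType.LONG; } catch (NumberFormatException ignored) { }
>         try { Double.parseDouble(value); return ColumnType.DOUBLE; } catch (NumberFormatException ignored) { }
>         return ColumnType.STRING;
>     }
>
>     /** LONG widens to DOUBLE; any other conflict falls back to STRING. */
>     static ColumnType widen(ColumnType a, ColumnType b) {
>         if (a == b) return a;
>         if ((a == ColumnType.LONG && b == ColumnType.DOUBLE) || (a == ColumnType.DOUBLE && b == ColumnType.LONG)) {
>             return ColumnType.DOUBLE;
>         }
>         return ColumnType.STRING;
>     }
>
>     public static void main(String[] args) {
>         List<String> samples = Arrays.asList("1,3.5,true,alice", "2,4,false,bob");
>         System.out.println(inferSchema("id,score,active,name", samples, ","));
>         // prints {id=LONG, score=DOUBLE, active=BOOLEAN, name=STRING}
>     }
> }
> {code}
> A real implementation would also need to handle quoting, null markers, and
> date/time formats, but the type-widening pass above is the general idea.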
--
This message was sent by Atlassian Jira
(v8.20.10#820010)