Hello,
In Scala, case classes can hold a large number of fields, which is really helpful for reading wide CSV files, but this is currently only usable in the Table API. Regarding this issue (https://issues.apache.org/jira/browse/FLINK-2186): should we use the Table API in the machine learning library? To solve the issue, #readCsvFile could generate a RowInputFormat. For convenience I added another constructor to RowTypeInfo (https://github.com/apache/flink/compare/master...tonycox:FLINK-2186-x). What do you think about adding some Scala support and moving Row to Flink core?
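
To illustrate the idea, here is a rough sketch of what reading a wide CSV file into a DataSet[Row] could look like. Note that RowInputFormat does not exist yet (it is the format I am proposing #readCsvFile could generate), the RowTypeInfo constructor taking only the field types is the one added in the branch above, and the package paths assume Row still lives in flink-table, so please treat the exact names and signatures as assumptions rather than working code:

    import org.apache.flink.api.scala._
    import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
    import org.apache.flink.api.table.Row
    import org.apache.flink.api.table.typeutils.RowTypeInfo
    import org.apache.flink.core.fs.Path

    object WideCsvSketch {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        // A wide file: far more columns than a tuple can carry.
        val fieldTypes: Array[TypeInformation[_]] =
          Array.fill(200)(BasicTypeInfo.DOUBLE_TYPE_INFO)

        // Row type info built only from the field types
        // (the extra constructor added in the linked branch).
        implicit val rowType: RowTypeInfo = new RowTypeInfo(fieldTypes)

        // Proposed (not yet existing) RowInputFormat that
        // #readCsvFile would generate under the hood.
        val format = new RowInputFormat(new Path("hdfs:///data/wide.csv"), rowType)

        val rows: DataSet[Row] = env.createInput(format)
        rows.first(5).print()
      }
    }

The point is that the user would only have to supply the field types once; the Row-based format would then carry an arbitrary number of columns without hitting tuple or POJO arity limits, which is exactly what FlinkML needs for wide feature files.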