I agree that we should improve RowTypeInfo. But why not keep it in Scala?
In the case of FLINK-2186, the fact that "Row" is a "Product" is indeed what
enables support for wide columns.
As an experiment, I tried moving "Row" to the flink-scala module:
(https://github.com/apache/flink/compare/master...tonycox:FLINK-2186-x)
(https://travis-ci.org/tonycox/flink/builds/178846355)
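To illustrate the direction being discussed, a Scala-free Row in flink-core could be little more than a fixed-arity holder of indexed, nullable fields. This is only a rough sketch under my own assumptions; the method names (getField/setField/getArity) are illustrative, not a settled API:

```java
import java.io.Serializable;
import java.util.Arrays;

// Sketch of a Scala-free Row: a fixed-arity, index-based record whose
// fields are plain Objects, so any field may be null (unlike tuples or
// case classes, which do not support null fields well).
public final class Row implements Serializable {

    private final Object[] fields;

    public Row(int arity) {
        this.fields = new Object[arity];
    }

    public int getArity() {
        return fields.length;
    }

    // Fields default to null, which is exactly the null-value support
    // a variable-length record type needs.
    public Object getField(int pos) {
        return fields[pos];
    }

    public void setField(int pos, Object value) {
        fields[pos] = value;
    }

    @Override
    public String toString() {
        return Arrays.toString(fields);
    }
}
```

Because it carries no field names or Product/CaseClass machinery, a class like this would not drag any Scala dependency into flink-core.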

-----Original Message-----
From: Flavio Pompermaier [mailto:pomperma...@okkam.it] 
Sent: Friday, November 25, 2016 5:59 PM
To: dev@flink.apache.org
Subject: Re: Move Row, RowInputFormat to core package

Fully agree with Timo :)

On Fri, Nov 25, 2016 at 2:30 PM, Timo Walther <twal...@apache.org> wrote:

> Hi Anton,
>
> I would also support the idea of moving Row and RowTypeInfo to Flink core.
> I think there are many real-world use cases where a variable-length 
> record that supports null values is required. However, I think that 
> those classes need to be reworked first. They should not depend on
> Scala-related things.
>
> RowTypeInfo should not inherit from CaseClassTypeInfo; the current
> solution with the dummy field names is a hack anyway. Row should not
> inherit from Scala classes.
>
> Regards,
> Timo
>
> Am 24/11/16 um 16:46 schrieb Anton Solovev:
>
>> Hello,
>>
>>
>>
>> In Scala, case classes can hold a large number of fields, which is
>> really helpful for reading wide CSV files, but this is currently used
>> only in the Table API.
>>
>> Regarding this issue
>> (https://issues.apache.org/jira/browse/FLINK-2186),
>> should we use the Table API in the machine learning library?
>>
>> To solve the issue, #readCsvFile could generate a RowInputFormat.
>>
>> For convenience, I added one more constructor to RowTypeInfo (
>> https://github.com/apache/flink/compare/master...tonycox:FLINK-2186-x
>> )
>>
>> What do you think about adding some Scala code and moving Row to Flink core?
>>
>>
