autophagy opened a new pull request, #26403:
URL: https://github.com/apache/flink/pull/26403

   ## What is the purpose of the change
   
   If you call `TableEnvironment.from_elements` where one of the fields in the 
row contains `Row` values, for example where one of the values you pass in is:
   
   ```
   [
       Row("pyflink1A", "pyflink2A", "pyflink3A"),
       Row("pyflink1B", "pyflink2B", "pyflink3B"),
       Row("pyflink1C", "pyflink2C", "pyflink3C"),
   ],
   ```
   
   where the schema for the field is:
   
   ```
   DataTypes.ARRAY(
       DataTypes.ROW(
           [
               DataTypes.FIELD("a", DataTypes.STRING()),
               DataTypes.FIELD("b", DataTypes.STRING()),
               DataTypes.FIELD("c", DataTypes.STRING()),
           ]
       )
   ),
   ```
   
   When you call `execute().collect()` on the table, the array is returned as:
   
   ```
   [
    <Row(['pyflink1A', 'pyflink2A', 'pyflink3A'])>,
    <Row(['pyflink1B', 'pyflink2B', 'pyflink3B'])>,
    <Row(['pyflink1C', 'pyflink2C', 'pyflink3C'])>
   ]
   ```
   
   Instead of each `Row` having 3 values, each collected row has only 1 value, 
which is a list of the actual values in the row. The input and output rows 
are no longer equal, as their internal `_values` collections differ: one is a 
list of strings, the other a list containing a single list of strings. The 
`len()` of the source `Row` is correctly returned as 3, but the collected row 
incorrectly reports a `len()` of 1.
   
   The constructor for `Row` takes an arbitrary number of positional arguments 
as values, so that:
   
   ```
    Row("hello", "world", 1, 2, 3, ["hello", "world"])
   ```
   
   produces a `Row` whose internal `_values` would be a list of those values: 
`["hello", "world", 1, 2, 3, ["hello", "world"]]`
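   This behavior can be illustrated with a minimal stand-in class (a sketch 
only; the real `pyflink.common.Row` also supports named fields and richer 
comparison semantics):

   ```python
   # Minimal stand-in for pyflink.common.Row (illustration only).
   class Row:
       def __init__(self, *values):
           # Each positional argument becomes one field of the row.
           self._values = list(values)

       def __len__(self):
           return len(self._values)

       def __eq__(self, other):
           return isinstance(other, Row) and self._values == other._values

   r = Row("hello", "world", 1, 2, 3, ["hello", "world"])
   print(r._values)  # ['hello', 'world', 1, 2, 3, ['hello', 'world']]
   print(len(r))     # 6
   ```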
   
   The `pickled_bytes_to_python_converter` builds a list of fields from the 
pickled bytes, but then passes this list to the `Row` constructor as a single 
argument, essentially constructing the `Row` with just one field (when it 
should have all the fields that were collected). This list of fields should be 
unpacked with `*` when passing it into the `Row` constructor, such that each 
field acts as a separate argument to the constructor.
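   A minimal sketch of the difference, using a hypothetical stand-in for `Row` 
(the real converter also unpickles each field's bytes first):

   ```python
   # Hypothetical stand-in for pyflink.common.Row, to show the effect
   # of unpacking vs. not unpacking the decoded field list.
   class Row:
       def __init__(self, *values):
           self._values = list(values)

       def __len__(self):
           return len(self._values)

   # Suppose these fields were decoded from the pickled bytes:
   fields = ["pyflink1A", "pyflink2A", "pyflink3A"]

   buggy = Row(fields)   # the whole list becomes a single field
   fixed = Row(*fields)  # each element becomes its own field

   print(len(buggy))  # 1
   print(len(fixed))  # 3
   ```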
   
   
   ## Brief change log
   
    - *Fixed issue when converting a pickled row field to a `Row` type during 
collection.*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
     - *Modified existing 
`StreamTableEnvironmentTests.test_collect_for_all_data_types` test to properly 
construct source `Row`s, and to check that the `Row`s returned from 
`execute().collect()` have the same internal structure as the source `Row`s.*
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (*no*)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (*maybe?*)
     - The serializers: (*no*)
     - The runtime per-record code paths (performance sensitive): (*no*)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (*no*)
     - The S3 file system connector: (*no*)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (*no*)
     - If yes, how is the feature documented? (*not applicable*)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
