KevinyhZou opened a new pull request #15068:
URL: https://github.com/apache/flink/pull/15068


   ## What is the purpose of the change
   
   Fixes an array-out-of-bounds exception that occurs when running a Hive streaming job 
with a partitioned table source: the partition fields are not found among the fields 
provided by the context (`HiveContinuousPartitionFetcherContext`), so this change adds the 
partition field names and types to it.
   
   ## Brief change log
   
   Get the partition field names and types from the catalog base table and put them 
into the context (`HiveContinuousPartitionFetcherContext`) when fetching Hive 
partitions.
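
   For illustration only, here is a minimal, hedged sketch (not the actual patch) of how 
the partition field names and types could be derived from the catalog table before being 
handed to the fetcher context. The helper class and method names are hypothetical; 
`CatalogTable#getPartitionKeys`, `CatalogBaseTable#getSchema`, and 
`TableSchema#getFieldDataType` are existing Flink catalog APIs.

   ```java
   // Hedged sketch only (not the actual patch). It illustrates deriving the
   // partition field names and types from the CatalogTable so that they can be
   // supplied to the fetcher context; the helper class name is hypothetical.
   import java.util.List;
   import java.util.stream.Collectors;

   import org.apache.flink.table.api.TableSchema;
   import org.apache.flink.table.catalog.CatalogTable;
   import org.apache.flink.table.types.DataType;

   public final class PartitionFieldExtractor {

       /** Partition field names, in the order declared on the table. */
       static List<String> partitionFieldNames(CatalogTable table) {
           return table.getPartitionKeys();
       }

       /** Data types of the partition fields, resolved from the table schema. */
       static List<DataType> partitionFieldTypes(CatalogTable table) {
           TableSchema schema = table.getSchema();
           return table.getPartitionKeys().stream()
                   .map(name -> schema.getFieldDataType(name)
                           .orElseThrow(() -> new IllegalArgumentException(
                                   "Partition key '" + name + "' is not part of the table schema")))
                   .collect(Collectors.toList());
       }
   }
   ```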
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
     - Tested manually by running a Flink streaming job against a Hive partitioned 
table source
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency):   no
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
     - The serializers:  no
     - The runtime per-record code paths (performance sensitive): no
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper:  no
     - The S3 file system connector:  no
   
   ## Documentation
   
     - Does this pull request introduce a new feature?  no
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   

