[ https://issues.apache.org/jira/browse/HIVE-16719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16017655#comment-16017655 ]

Zsolt Fekete commented on HIVE-16719:
-------------------------------------

Note that while testing my refactoring I found that updating TABLE_PARAMS, 
SD_PARAMS, and SERDE_PARAMS likely did not work previously: calls in 
ObjectStore such as {{mSerde.getParameters().put(serdeProp, newSchemaLoc);}} 
have no effect. One should instead build a new map and pass it to 
{{mSerde.setParameters(..);}}.

So I also fixed this issue.
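
For illustration, a minimal sketch of the fix (the helper method is 
hypothetical; {{mSerde}}, {{serdeProp}}, and {{newSchemaLoc}} come from the 
snippet above):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hive.metastore.model.MSerDeInfo;

// Hypothetical helper illustrating the fix described above.
private void setSerdeParam(MSerDeInfo mSerde, String serdeProp, String newSchemaLoc) {
  // Broken: the map returned by getParameters() may be a detached copy,
  // so an in-place put() is never written back to SERDE_PARAMS:
  //   mSerde.getParameters().put(serdeProp, newSchemaLoc);

  // Working: build a new map and assign it through the setter, which
  // marks the field dirty so DataNucleus persists it on commit.
  Map<String, String> params = new HashMap<>(mSerde.getParameters());
  params.put(serdeProp, newSchemaLoc);
  mSerde.setParameters(params);
}
{code}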

> HiveMetaTool fails when the data does not fit in memory
> -------------------------------------------------------
>
>                 Key: HIVE-16719
>                 URL: https://issues.apache.org/jira/browse/HIVE-16719
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>            Reporter: Zsolt Fekete
>            Assignee: Zsolt Fekete
>         Attachments: HIVE-16719.1.patch
>
>
> Currently HiveMetaTool reads entire tables into memory (as DataNucleus 
> entities) by calling PersistenceManager's retrieveAll(). 
> See these methods of ObjectStore: updateMDatabaseURI, updateTblPropURI, 
> updateMStorageDescriptorTblPropURI, updateMStorageDescriptorTblURI, 
> updateSerdeURI.
> This can cause HiveMetaTool to fail when the affected tables (SDS, DBS, 
> TABLE_PARAMS, SD_PARAMS, SERDES, SERDE_PARAMS) are too large to fit in 
> memory.
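
For context on the quoted description, a minimal sketch of the kind of 
range-based batching that avoids materializing a whole table at once (the 
method and batch logic are illustrative, not the actual patch; a real 
implementation would also need a stable ordering so the ranges do not 
overlap or skip rows):

{code:java}
import java.util.List;

import javax.jdo.PersistenceManager;
import javax.jdo.Query;

import org.apache.hadoop.hive.metastore.model.MSerDeInfo;

// Illustrative only: visit SERDES rows in fixed-size ranges instead of
// loading the whole table at once with retrieveAll().
private void forEachSerdeBatch(PersistenceManager pm, int batchSize) {
  long start = 0;
  while (true) {
    Query query = pm.newQuery(MSerDeInfo.class);
    query.setRange(start, start + batchSize); // translates to LIMIT/OFFSET
    @SuppressWarnings("unchecked")
    List<MSerDeInfo> batch = (List<MSerDeInfo>) query.execute();
    boolean done = batch.isEmpty();
    // ... update the URIs of the entities in 'batch' here ...
    query.closeAll(); // release the result set before the next range
    if (done) {
      break;
    }
    start += batchSize;
  }
}
{code}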



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
