satishkotha opened a new pull request #1355: [HUDI-633] Limit archive file block size by number of bytes
URL: https://github.com/apache/incubator-hudi/pull/1355
 
 
   ## What is the purpose of the pull request
   
   With large clean files, the archival process can run out of memory (OOM); see HUDI-633. This change limits the archive file block size by number of bytes.
   
   ## Brief change log
   
   - Add an option to limit the archival batch size by the number of bytes per block, in addition to the existing limit on the maximum number of records per batch.
   - This does not prevent OOM if a single record is larger than the JVM heap size.
   - Note that in the worst case, a single instant's details can take up an entire block, which incurs higher metadata overhead. The marginal increase in storage seems acceptable for metadata.
   
   ## Verify this pull request
   
   This change adds tests and can be verified as follows:
   - Run TestHoodieCommitArchiveLog#testArchiveTableWithLargeCleanFiles
   - Verified that a large clean file which previously caused OOM can be archived with the new config.
   
   ## Committer checklist
   
    - [ ] Has a corresponding JIRA in PR title & commit
    
    - [ ] Commit message is descriptive of the change
    
    - [ ] CI is green
   
    - [ ] Necessary doc changes done or have another open PR
          
    - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services