[ https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417964#comment-16417964 ]

Steve Loughran commented on FLINK-9061:
---------------------------------------

[~greghogan]

I cut the link as it was just a duplicate of the one in the header.

Your link is new; it's something I think I'd seen somewhere else too. It's 
unfortunate that most of our knowledge here is superstition and stack traces, 
but I think that's a deliberate attempt to avoid making any commitment about 
future behaviour.

 

Here are my beliefs:

# Once you write enough data down a path, a partition is somehow triggered, and 
the data is split across S3 shards.
# That partitioning event is counted as part of your load for the bucket, and/or 
the IO which a partitioning path can sustain is reduced.
# So the overall IO rate at that point drops. Maybe that's what's raising the 
500 error.

I don't think it partitions based purely on load. That would be fun, given that 
you could just issue many delete requests to a path and get it throttled.
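For what it's worth, here is roughly how I read the rewrite proposed in the issue 
below. Nothing like this exists in Flink today; it's just a sketch of the 
"prefix the key with a hash of the original path" idea, with the class and 
method names invented for illustration:

{code:java}
// Rough sketch only -- class and method names are made up; this is not Flink code.
// It illustrates rewriting s3://bucket/flink/checkpoints into
// s3://bucket/<hash>/flink/checkpoints so checkpoint keys spread across
// S3 partitions instead of sharing one hot prefix.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashedCheckpointPath {

    static String rewrite(String originalPath) throws Exception {
        // split "s3://bucket/rest/of/key" into bucket and key
        String withoutScheme = originalPath.replaceFirst("^s3://", "");
        int slash = withoutScheme.indexOf('/');
        String bucket = withoutScheme.substring(0, slash);
        String key = withoutScheme.substring(slash + 1);

        // short hex digest of the original path, used purely as an entropy prefix
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(originalPath.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            hex.append(String.format("%02x", digest[i] & 0xff));
        }

        return "s3://" + bucket + "/" + hex + "/" + key;
    }

    public static void main(String[] args) throws Exception {
        // prints something like s3://bucket/0a1b2c3d/flink/checkpoints
        System.out.println(rewrite("s3://bucket/flink/checkpoints"));
    }
}
{code}

The catch is that anything reading the checkpoints back has to apply the same 
rewrite, and listing under the original prefix no longer finds them, which is 
presumably why the issue asks for a hook inside the filesystem code rather than 
doing this by hand.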


> S3 checkpoint data not partitioned well -- causes errors and poor performance
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-9061
>                 URL: https://issues.apache.org/jira/browse/FLINK-9061
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, State Backends, Checkpointing
>    Affects Versions: 1.4.2
>            Reporter: Jamie Grier
>            Priority: Critical
>
> I think we need to modify the way we write checkpoints to S3 for high-scale 
> jobs (those with many total tasks).  The issue is that we are writing all the 
> checkpoint data under a common key prefix.  This is the worst case scenario 
> for S3 performance since the key is used as a partition key.
>  
> In the worst case checkpoints fail with a 500 status code coming back from S3 
> and an internal error type of TooBusyException.
>  
> One possible solution would be to add a hook in the Flink filesystem code 
> that allows me to "rewrite" paths.  For example say I have the checkpoint 
> directory set to:
>  
> s3://bucket/flink/checkpoints
>  
> I would hook that and rewrite that path to:
>  
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original 
> path
>  
> This would distribute the checkpoint write load around the S3 cluster evenly.
>  
> For reference: 
> https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>  
> Any other people hit this issue?  Any other ideas for solutions?  This is a 
> pretty serious problem for people trying to checkpoint to S3.
>  
> -Jamie
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
