I think we need to modify the way we write checkpoints to S3 for high-scale
jobs (those with many total tasks).  The issue is that we write all of the
checkpoint data under a common key prefix.  This is the worst-case scenario
for S3 performance, since S3 uses the key prefix as a partition key.

When this happens, checkpoints fail with a 500 status code coming back from
S3 and an internal error type of TooBusyException.

One possible solution would be to add a hook in the Flink filesystem code
that allows me to "rewrite" paths.  For example, say I have the checkpoint
directory set to:

s3://bucket/flink/checkpoints

I would hook that and rewrite that path to:

s3://bucket/[HASH]/flink/checkpoints

where HASH is the hash of the original path.

This would distribute the checkpoint write load evenly across the S3
cluster.
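To make the idea concrete, here's a minimal sketch of the kind of rewrite
I have in mind (not actual Flink code; the function name and the choice of
an 8-character MD5 prefix are just for illustration):

```python
import hashlib
from urllib.parse import urlparse

def rewrite_checkpoint_path(path: str, hash_len: int = 8) -> str:
    """Insert a short hash of the original path right after the bucket,
    so different checkpoint directories land on different S3 partitions."""
    parsed = urlparse(path)
    # Hash the full original path so the prefix is stable for a given job.
    digest = hashlib.md5(path.encode("utf-8")).hexdigest()[:hash_len]
    return f"{parsed.scheme}://{parsed.netloc}/{digest}{parsed.path}"

# s3://bucket/flink/checkpoints -> s3://bucket/<8 hex chars>/flink/checkpoints
print(rewrite_checkpoint_path("s3://bucket/flink/checkpoints"))
```

The key point is that the hash goes at the front of the key (just after the
bucket), since that's the part S3 partitions on; hashing at the end of the
path wouldn't help.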

For reference:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/

Has anyone else hit this issue?  Any other ideas for solutions?  This is a
pretty serious problem for people trying to checkpoint to S3.

-Jamie
