[
https://issues.apache.org/jira/browse/BEAM-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17134898#comment-17134898
]
Beam JIRA Bot commented on BEAM-9752:
-------------------------------------
This issue is P2 but has been unassigned without any comment for 60 days so it
has been labeled "stale-P2". If this issue is still affecting you, we care!
Please comment and remove the label. Otherwise, in 14 days the issue will be
moved to P3.
Please see https://beam.apache.org/contribute/jira-priorities/ for a detailed
explanation of what these priorities mean.
> Too many shards in GCS
> ----------------------
>
> Key: BEAM-9752
> URL: https://issues.apache.org/jira/browse/BEAM-9752
> Project: Beam
> Issue Type: Improvement
> Components: sdk-py-core, sdk-py-harness
> Reporter: Ankur Goenka
> Priority: P2
> Labels: stale-P2
>
> We have observed a case where the data was spread very thinly over an
> automatically computed number of shards.
> Each shard waited for its buffer to fill before sending data to GCS, which
> led to upload timeouts because no data was uploaded while waiting.
> However, setting an explicit number of shards (1000 in my case) solved the
> problem, likely because every shard then had enough data to fill its buffer
> and trigger a write, avoiding the timeout.
>
> We can improve the sharding logic so that we don't create too many shards.
> Alternatively, we can improve connection handling so that the connection
> does not time out.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)