[ https://issues.apache.org/jira/browse/FLINK-35232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17840862#comment-17840862 ]
Galen Warren commented on FLINK-35232:
--------------------------------------

In case it's relevant, reading and writing to GCS happens in two different ways in Flink. First, the basic connector wraps a Google-provided Hadoop connector and leverages Flink's support for reading and writing to Hadoop file systems. Second, the RecoverableWriter support uses the Google Java library directly, since the extra features associated with RecoverableWriter require it. I mention this because the linked issue says that Hadoop libraries aren't used, which isn't always the case. From your description, I'm not sure which mode you're using the connector in.

> Support for retry settings on GCS connector
> -------------------------------------------
>
>                 Key: FLINK-35232
>                 URL: https://issues.apache.org/jira/browse/FLINK-35232
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / FileSystem
>    Affects Versions: 1.15.3, 1.16.2, 1.17.1, 1.18.1, 1.19.0
>            Reporter: Vikas M
>            Assignee: Ravi Singh
>            Priority: Major
>
> https://issues.apache.org/jira/browse/FLINK-32877 is tracking the ability to specify transport options in the GCS connector. While setting the parameters enabled there reduced read timeouts, we still see 503 errors leading to Flink job restarts.
> Thus, in this ticket, we want to specify additional retry settings as noted in [https://cloud.google.com/storage/docs/retry-strategy#customize-retries]. We want [these|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#methods] methods available to Flink users so that they can customize their deployment.
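For reference, a minimal sketch of what wiring the gax RetrySettings into the Google Cloud Storage Java client could look like. This is illustrative only, not the connector's actual configuration path; the specific values are assumptions, and the Duration type (org.threeten.bp vs. java.time) depends on the gax version in use.

{code:java}
import com.google.api.gax.retrying.RetrySettings;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.threeten.bp.Duration;

public class GcsRetrySettingsSketch {
    public static void main(String[] args) {
        // RetrySettings from the gax library; the values below are illustrative only.
        RetrySettings retrySettings = RetrySettings.newBuilder()
                .setMaxAttempts(10)                            // total attempts, including the initial call
                .setInitialRetryDelay(Duration.ofMillis(500))  // delay before the first retry
                .setRetryDelayMultiplier(2.0)                  // exponential backoff factor
                .setMaxRetryDelay(Duration.ofSeconds(30))      // cap on the per-attempt delay
                .setTotalTimeout(Duration.ofMinutes(5))        // overall retry budget
                .build();

        // Plug the retry settings into the Storage client (the library the
        // RecoverableWriter path uses directly).
        Storage storage = StorageOptions.newBuilder()
                .setRetrySettings(retrySettings)
                .build()
                .getService();

        // Requests made through this client retry retryable errors (e.g. 503s)
        // with exponential backoff according to the settings above.
    }
}
{code}

Exposing these builder options through the connector's configuration would let users tune retries per deployment instead of relying on the client defaults.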