[ https://issues.apache.org/jira/browse/SOLR-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425657#comment-17425657 ]
Houston Putman commented on SOLR-15681:
---------------------------------------

We should also make sure we are using the best practices for our file uploads, so that the retry logic works with them: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/best-practices.html

> Customization of S3 client retry/throttling logic
> -------------------------------------------------
>
>                 Key: SOLR-15681
>                 URL: https://issues.apache.org/jira/browse/SOLR-15681
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public)
>          Components: contrib - S3 Repository
>            Reporter: Houston Putman
>            Priority: Major
>
> Currently there are very few configuration options for users to customize how the s3-repository module interacts with S3.
> One option that would be very beneficial, especially given how many files Solr backups can use, is retry and throttling logic. The AWS client provides [a few options|https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/using.html#using-retries] to customize the number of retries, and the backoff logic, when requests do not succeed.
> We don't want to give users a thousand options to configure the S3 client in solr.xml, but we can definitely offer a few popular options that would help optimize for their use cases. Retries and throttling/backoff logic seem like two good options to start with.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org
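For readers unfamiliar with the backoff logic the issue refers to: AWS documents an exponential backoff with "full jitter" between retries (delay grows exponentially with the attempt number, capped at a ceiling, then a random value up to that bound is used). The sketch below illustrates that calculation in plain Java. It is a minimal illustration only; the class name, base delay, and cap are assumed values for the example, not Solr configuration defaults or AWS SDK code.

```java
import java.util.concurrent.ThreadLocalRandom;

/**
 * Illustrative sketch of exponential backoff with "full jitter", as used
 * between retries by the AWS SDK. All names and constants here are
 * assumptions for the example, not s3-repository or SDK identifiers.
 */
public class BackoffSketch {
    static final long BASE_DELAY_MS = 100;     // assumed base delay
    static final long MAX_BACKOFF_MS = 20_000; // assumed backoff ceiling

    /** Upper bound on the sleep before the given 0-based retry attempt. */
    static long backoffCapMillis(int attempt) {
        // Exponential growth (base * 2^attempt), capped at the ceiling.
        return Math.min(MAX_BACKOFF_MS, BASE_DELAY_MS * (1L << attempt));
    }

    /** Actual delay: a uniformly random value in [0, cap] ("full jitter"). */
    static long delayMillis(int attempt) {
        return ThreadLocalRandom.current().nextLong(backoffCapMillis(attempt) + 1);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.printf("attempt %d: sleep up to %d ms%n",
                    attempt, backoffCapMillis(attempt));
        }
    }
}
```

Exposing just the retry count, base delay, and maximum backoff as solr.xml options would cover the common tuning cases without surfacing the SDK's full configuration surface.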