Steve Loughran created HADOOP-19181:
---------------------------------------
             Summary: IAMCredentialsProvider throttle failures
                 Key: HADOOP-19181
                 URL: https://issues.apache.org/jira/browse/HADOOP-19181
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.4.0
            Reporter: Steve Loughran

Tests report throttling errors in IAM being remapped to no-auth and failure.

Again, Impala tests, but with multiple processes on the same host. This means that HADOOP-18945 isn't sufficient: even if it ensures a singleton instance for a process,
* it doesn't if there are many test buckets (fixable)
* it doesn't work across processes (not fixable)

We may be able to:
* use a singleton across all filesystem instances
* once we know how throttling is reported, handle it through retries + error/stats collection (see the sketch after the stack trace below)

{code}
2024-02-17T18:02:10,175 WARN  [TThreadPoolServer WorkerProcess-22] fs.FileSystem: Failed to initialize fileystem s3a://impala-test-uswest2-1/test-warehouse/test_num_values_def_levels_mismatch_15b31ddb.db/too_many_def_levels: java.nio.file.AccessDeniedException: impala-test-uswest2-1: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : software.amazon.awssdk.core.exception.SdkClientException: Unable to load credentials from system settings. Access key must be specified either via environment variable (AWS_ACCESS_KEY_ID) or system property (aws.accessKeyId).
2024-02-17T18:02:10,175 ERROR [TThreadPoolServer WorkerProcess-22] utils.MetaStoreUtils: Got exception: java.nio.file.AccessDeniedException impala-test-uswest2-1: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : software.amazon.awssdk.core.exception.SdkClientException: Unable to load credentials from system settings. Access key must be specified either via environment variable (AWS_ACCESS_KEY_ID) or system property (aws.accessKeyId).
java.nio.file.AccessDeniedException: impala-test-uswest2-1: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : software.amazon.awssdk.core.exception.SdkClientException: Unable to load credentials from system settings. Access key must be specified either via environment variable (AWS_ACCESS_KEY_ID) or system property (aws.accessKeyId).
    at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.maybeTranslateCredentialException(AWSCredentialProviderList.java:351) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:201) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$2(S3AFileSystem.java:972) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:543) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:524) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:445) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2748) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:970) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.doBucketProbing(S3AFileSystem.java:859) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:715) ~[hadoop-aws-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) ~[hadoop-common-3.1.1.7.2.18.0-620.jar:?]
    at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:124) ~[hive-standalone-metastore-3.1.3000.7.2.18.0-620.jar:3.1.3000.7.2.18.0-620]
    at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:132) ~[hive-standalone-metastore-3.1.3000.7.2.18.0-620.jar:3.1.3000.7.2.18.0-620]
    at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:860) ~[hive-standalone-metastore-3.1.3000.7.2.18.0-620.jar:3.1.3000.7.2.18.0-620]
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:2573) ~[hive-standalone-metastore-3.1.3000.7.2.18.0-620.jar:3.1.3000.7.2.18.0-620]
{code}
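
A minimal sketch of the retry idea, not a proposed patch: a wrapper around any AwsCredentialsProvider (the class and method names here are hypothetical) which retries resolveCredentials() with backoff when the failure looks like IMDS throttling, instead of letting it surface through AWSCredentialProviderList as NoAuthWithAWSException. Since we don't yet know how throttling is reported, the looksLikeThrottle() probe below is a placeholder assumption to be replaced once that is established.

{code:java}
import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.core.exception.SdkException;

/**
 * Hypothetical wrapper: retry credential resolution on (assumed) throttle
 * failures rather than mapping them straight to "no credentials".
 */
public final class ThrottleRetryingCredentialsProvider implements AwsCredentialsProvider {

  private final AwsCredentialsProvider delegate;
  private final int maxAttempts;

  public ThrottleRetryingCredentialsProvider(AwsCredentialsProvider delegate,
      int maxAttempts) {
    this.delegate = delegate;
    this.maxAttempts = maxAttempts;
  }

  @Override
  public AwsCredentials resolveCredentials() {
    SdkException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return delegate.resolveCredentials();
      } catch (SdkException e) {
        if (!looksLikeThrottle(e)) {
          throw e;            // genuine auth failure: fail fast
        }
        last = e;             // assumed throttle: back off and retry
        try {
          // exponential backoff with a cap; jitter omitted for brevity
          Thread.sleep(Math.min(1000L << (attempt - 1), 10_000L));
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw e;
        }
      }
    }
    throw last;
  }

  /**
   * Placeholder assumption: how IMDS throttling actually surfaces still
   * needs to be confirmed; a message probe is only a stand-in for that.
   */
  private static boolean looksLikeThrottle(SdkException e) {
    String m = e.getMessage();
    return m != null && (m.contains("throttl") || m.contains("429"));
  }
}
{code}

Making one such instance a process-wide singleton shared across all filesystem instances would cover the first bullet above; it still wouldn't help across processes on the same host, which is why the retry handling matters.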