[ https://issues.apache.org/jira/browse/FLINK-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16215451#comment-16215451 ]

Stephan Ewen commented on FLINK-7905:
-------------------------------------

I accidentally expired the AWS access keys that guard the S3 bucket used for 
testing.
The keys are still present (presence is the condition that gates the tests), 
so the tests run, but the keys are no longer valid, which makes them fail.

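For illustration, a minimal sketch of that guard pattern, assuming JUnit 4 and hypothetical environment variable names (this is not the actual {{HadoopS3FileSystemITCase}} code). The guard only checks that keys exist, so an expired key passes it and only fails later with the 403 seen below:

{code}
import org.junit.Assume;
import org.junit.BeforeClass;
import org.junit.Test;

public class GuardedS3ITCase {

    @BeforeClass
    public static void checkCredentialsPresent() {
        // Presence-only guard (hypothetical variable names): missing keys
        // skip the suite, but an expired key still passes this check and
        // only fails later with a 403 / AccessDeniedException from S3.
        Assume.assumeTrue("No S3 test credentials in the environment",
                System.getenv("S3_TEST_ACCESS_KEY") != null
                        && System.getenv("S3_TEST_SECRET_KEY") != null);
    }

    @Test
    public void testAgainstS3() {
        // ... runs only when both variables above are set ...
    }
}
{code}
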
I have a patch coming up that introduces new encrypted access credentials...
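
For reference, the standard Travis CI mechanism for shipping such encrypted credentials is secure environment variables in {{.travis.yml}}; a sketch of that mechanism with placeholder names follows (not necessarily what the actual patch does):

{code}
# Each value is encrypted against the repository's public key with the
# stock travis CLI (placeholder variable names):
#
#   travis encrypt S3_TEST_ACCESS_KEY=... --add env.global
#   travis encrypt S3_TEST_SECRET_KEY=... --add env.global
#
# which appends entries like these to .travis.yml:
env:
  global:
    - secure: "base64-encrypted-blob..."
    - secure: "base64-encrypted-blob..."
{code}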

> HadoopS3FileSystemITCase failed on travis
> -----------------------------------------
>
>                 Key: FLINK-7905
>                 URL: https://issues.apache.org/jira/browse/FLINK-7905
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, Tests
>    Affects Versions: 1.4.0
>         Environment: https://travis-ci.org/zentol/flink/jobs/291550295
> https://travis-ci.org/tillrohrmann/flink/jobs/291491026
>            Reporter: Chesnay Schepler
>            Assignee: Stephan Ewen
>              Labels: test-stability
>
> The {{HadoopS3FileSystemITCase}} is flaky on Travis because its S3 requests 
> get denied with 403 Forbidden errors.
> {code}
> -------------------------------------------------------
>  T E S T S
> -------------------------------------------------------
> Running org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> Tests run: 3, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 3.354 sec <<< FAILURE! - in org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> testDirectoryListing(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)  Time elapsed: 0.208 sec  <<< ERROR!
> java.nio.file.AccessDeniedException: s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: getFileStatus on s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 9094999D7456C589), S3 Extended Request ID: fVIcROQh4E1/GjWYYV6dFp851rjiKtFgNSCO8KkoTmxWbuxz67aDGqRiA/a09q7KS6Mz1Tnyab4=
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>       at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
>       at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
>       at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1256)
>       at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1232)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:904)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1553)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:117)
>       at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:77)
>       at org.apache.flink.core.fs.FileSystem.exists(FileSystem.java:509)
>       at org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase.testDirectoryListing(HadoopS3FileSystemITCase.java:163)
> testSimpleFileWriteAndRead(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)  Time elapsed: 0.275 sec  <<< ERROR!
> java.nio.file.AccessDeniedException: s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: getFileStatus on s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: B3D8126BE6CF169F), S3 Extended Request ID: T34sn+a/CcCFv+kFR/UbfozAkXXtiLDu2N31Ok5EydgKeJF5I2qXRCC/MkxSi4ymiiVWeSyb8FY=
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>       at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>       at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
>       at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
>       at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1256)
>       at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1232)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:904)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1553)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:1234)
>       at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.delete(HadoopFileSystem.java:134)
>       at org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase.testSimpleFileWriteAndRead(HadoopS3FileSystemITCase.java:147)
> {code}



