[ https://issues.apache.org/jira/browse/FLINK-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681562#comment-16681562 ]
ASF GitHub Bot commented on FLINK-10736:
----------------------------------------

azagrebin opened a new pull request #7077: [FLINK-10736][E2E tests] Use already uploaded to s3 file in shaded s3 e2e tests
URL: https://github.com/apache/flink/pull/7077

## What is the purpose of the change

Remove the s3 put/delete steps from the shaded s3 e2e tests and use an already uploaded file that is never deleted.

## Verifying this change

Run the s3 shaded e2e tests.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
- The S3 file system connector: (no, just tests)

## Documentation

- Does this pull request introduce a new feature? (no)
- If yes, how is the feature documented? (not applicable)


> Shaded Hadoop S3A end-to-end test failed on Travis
> --------------------------------------------------
>
>                 Key: FLINK-10736
>                 URL: https://issues.apache.org/jira/browse/FLINK-10736
>             Project: Flink
>          Issue Type: Bug
>          Components: E2E Tests
>    Affects Versions: 1.7.0
>            Reporter: Till Rohrmann
>            Priority: Critical
>              Labels: pull-request-available, test-stability
>             Fix For: 1.7.0
>
>
> The {{Shaded Hadoop S3A end-to-end test}} failed on Travis because it could not find a file stored on S3:
> {code}
> org.apache.flink.client.program.ProgramInvocationException: Job failed.
>     (JobID: f28270bedd943ed6b41548b60f5cea73)
>     at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:268)
>     at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:487)
>     at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:475)
>     at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
>     at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:85)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
>     at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
>     at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
>     at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
>     at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
>     at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>     at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
>     at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>     at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>     at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>     at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>     at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:265)
>     ... 21 more
> Caused by: java.io.IOException: Error opening the Input Split s3://[secure]/flink-end-to-end-test-shaded-s3a [0,44]: No such file or directory: s3://[secure]/flink-end-to-end-test-shaded-s3a
>     at org.apache.flink.api.common.io.FileInputFormat.open(FileInputFormat.java:824)
>     at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:470)
>     at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:47)
>     at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:170)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: s3://[secure]/flink-end-to-end-test-shaded-s3a
>     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2255)
>     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2149)
>     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088)
>     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:699)
>     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.open(FileSystem.java:950)
>     at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.open(HadoopFileSystem.java:120)
>     at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.open(HadoopFileSystem.java:37)
>     at org.apache.flink.api.common.io.FileInputFormat$InputSplitOpenThread.run(FileInputFormat.java:996)
> {code}
> https://api.travis-ci.org/v3/job/448770093/log.txt
> The solution could be to harden this test case.
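
For context on the failure: the job under test is essentially the batch WordCount example reading its input through the shaded s3a file system, and the FileInputFormat.open call visible in the trace is where a missing object surfaces. Below is a minimal sketch of that read path; the bucket placeholder and class name are assumptions for illustration, not the actual test code (the real bucket is masked as [secure] on Travis).

{code}
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class ShadedS3ReadSketch {

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Resolved by the shaded s3a file system when flink-s3-fs-hadoop is on
        // the classpath. If the object does not exist, opening the input split
        // fails with the FileNotFoundException shown in the trace above.
        DataSet<String> text = env.readTextFile("s3://<test-bucket>/flink-end-to-end-test-shaded-s3a");

        text.flatMap(new Tokenizer())
            .groupBy(0)   // group by the word
            .sum(1)       // sum the per-word counts
            .print();
    }

    /** Splits lines into (word, 1) pairs, as in the WordCount example. */
    public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
            for (String token : value.toLowerCase().split("\\W+")) {
                if (!token.isEmpty()) {
                    out.collect(new Tuple2<>(token, 1));
                }
            }
        }
    }
}
{code}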
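The hardening from PR #7077 above then amounts to deleting the upload/cleanup half of the test's interaction with S3: the input object is uploaded to the bucket once, out of band, and never deleted, so the test only ever reads it. Below is a hypothetical sketch of the fragile pattern being removed, written against Flink's FileSystem API purely for illustration (the actual test drives S3 from a shell script; the path and content here are placeholders):

{code}
import java.nio.charset.StandardCharsets;

import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.FileSystem.WriteMode;
import org.apache.flink.core.fs.Path;

public class ShadedS3SetupSketch {

    public static void main(String[] args) throws Exception {
        // Placeholder path; the real bucket is masked as [secure] on Travis.
        Path input = new Path("s3://<test-bucket>/flink-end-to-end-test-shaded-s3a");
        FileSystem fs = input.getFileSystem();

        // Fragile pattern removed by the PR: put the object immediately before
        // the job reads it and delete it immediately afterwards. Any failed,
        // delayed, or prematurely cleaned-up upload leaves the job with the
        // "No such file or directory" error above.
        try (FSDataOutputStream out = fs.create(input, WriteMode.OVERWRITE)) {
            out.write("example input line\n".getBytes(StandardCharsets.UTF_8));
        }
        // ... submit the WordCount job against `input` here ...
        fs.delete(input, false);

        // Hardened pattern: drop the put and delete entirely and point the job
        // at an object that was uploaded once and is never removed.
    }
}
{code}

The trade-off is that the test no longer exercises S3 writes, which is presumably acceptable here since its point is verifying that the shaded s3a file system loads and reads correctly, not S3 write semantics.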