Hi Craig,

Thanks for providing that information (and apologies for the delay in 
responding).

1) I was able to create a Step that downloaded my jar, thanks.

2) I was able to create another Step that passed my jar to the flink script, 
thanks.
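
In case it's useful to anyone else searching the archives, here's roughly what 
those two steps look like via the EMR Java SDK (a sketch; the bucket, jar path, 
and cluster id below are placeholders, not my real values):

import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;
import com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;
import com.amazonaws.services.elasticmapreduce.model.StepConfig;

// Step 1: copy the job jar from S3 onto the master node.
HadoopJarStepConfig copyJar = new HadoopJarStepConfig()
    .withJar("command-runner.jar")
    .withArgs("bash", "-c", "aws s3 cp s3://mybucket/myjar.jar /home/hadoop");

// Step 2: run the jar against the YARN cluster via the flink CLI.
HadoopJarStepConfig runJob = new HadoopJarStepConfig()
    .withJar("command-runner.jar")
    .withArgs("bash", "-c",
        "flink run -m yarn-cluster -yn 2 -p 4 /home/hadoop/myjar.jar");

AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();
emr.addJobFlowSteps(new AddJobFlowStepsRequest()
    .withJobFlowId("j-XXXXXXXXXXXXX")  // placeholder cluster id
    .withSteps(
        new StepConfig("CopyJobJar", copyJar).withActionOnFailure("CONTINUE"),
        new StepConfig("RunFlinkJob", runJob).withActionOnFailure("CONTINUE")));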

3) However, I wasn’t successful with my attempt at simply using the following:

AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard();
builder.setCredentials(new DefaultAWSCredentialsProviderChain());
return builder.build();

I also tried the following:

return AmazonS3ClientBuilder.defaultClient();

However, I didn’t bother setting up AWS_ACCESS_KEY and AWS_SECRET_KEY (or 
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI), since your post suggested to me that 
doing so was unnecessary. The instance profile does seem to be working; for 
example, I can download a file from S3 to the EC2 master node.
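
For reference, here's a self-contained version of what my makeS3Client() boils 
down to (it's the method named in the stack trace below), with the imports 
spelled out; this assumes the 1.x AWS Java SDK:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public static AmazonS3 makeS3Client() {
    // Let the default chain resolve credentials: environment variables,
    // Java system properties, the shared credentials file, and finally
    // the EC2 instance profile.
    AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard();
    builder.setCredentials(new DefaultAWSCredentialsProviderChain());
    return builder.build();
}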

I’ve appended the output from this attempt below.

Do you have any further suggestions?

Thanks,

- Chris
Error running CrawlTool: tried to access class com.amazonaws.services.s3.AmazonS3ClientConfigurationFactory from class com.amazonaws.services.s3.AmazonS3Builder
java.lang.IllegalAccessError: tried to access class com.amazonaws.services.s3.AmazonS3ClientConfigurationFactory from class com.amazonaws.services.s3.AmazonS3Builder
        at com.amazonaws.services.s3.AmazonS3Builder.<clinit>(AmazonS3Builder.java:26)
        at com.scaleunlimited.flinkcrawler.utils.S3Utils.makeS3Client(S3Utils.java:67)
        at com.scaleunlimited.flinkcrawler.utils.S3Utils.fileExists(S3Utils.java:26)
        at com.scaleunlimited.flinkcrawler.sources.SeedUrlSource.<init>(SeedUrlSource.java:58)
        at com.scaleunlimited.flinkcrawler.tools.CrawlTool.main(CrawlTool.java:186)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
        at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:339)
        at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
        at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
        at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
        at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
        at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
        at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
        at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1117)
Command exiting with ret '255'

> On Jun 7, 2017, at 7:28 PM, user-digest-h...@flink.apache.org wrote:
> From: "Foster, Craig" <foscr...@amazon.com>
> Subject: Re: How to run a Flink job in EMR?
> Date: June 7, 2017 at 7:27:53 PM PDT
> To: Chris Schneider <cschnei...@scaleunlimited.com>, "user@flink.apache.org" 
> <user@flink.apache.org>
> 
> 
> Ah, maybe (1) wasn’t entirely clear, so here’s the copy/pasted example with 
> what I suggested:
>  
>     HadoopJarStepConfig copyJar = new HadoopJarStepConfig()
>       .withJar("command-runner.jar")
>       .withArgs("bash", "-c", "aws s3 cp s3://mybucket/myjar.jar /home/hadoop"
>     );
>  
>  
> From: "Foster, Craig" <foscr...@amazon.com <mailto:foscr...@amazon.com>>
> Date: Wednesday, June 7, 2017 at 7:21 PM
> To: Chris Schneider <cschnei...@scaleunlimited.com 
> <mailto:cschnei...@scaleunlimited.com>>, "user@flink.apache.org 
> <mailto:user@flink.apache.org>" <user@flink.apache.org 
> <mailto:user@flink.apache.org>>
> Subject: Re: How to run a Flink job in EMR?
>  
> 1)      Since the jar is only required on the master node, you should be able 
> to just run a step with a very simple script like ‘bash -c “aws s3 cp 
> s3://mybucket/myjar.jar .”’
> So you could do that using a step similar to the one outlined in the EMR 
> documentation, but replacing withArgs with the above command as args (I think 
> there’s an example of this on that same EMR docs page you refer to).
> Then add another step after that which actually runs the Flink job. The jar 
> will be located in /home/hadoop. In the future, I’m hoping this can just be 
> simplified to flink run -yn 2 -p 4 s3://mybucket/myjar.jar … but that doesn’t 
> seem to work right now.
> 2)      If you ran this as a step, you should be able to see the error the 
> Flink driver gives in the step’s logs.
> 3)      Provided your S3 bucket and EMR cluster EC2 IAM role/”instance 
> profile” belong to the same account (or at least the permissions are set up 
> such that you can download a file from S3 to your EC2 instances), you should 
> be able to use the DefaultAWSCredentialsProviderChain 
> <http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html>,
> which won’t require you to enter any credentials, as it uses the EC2 instance 
> profile credentials provider.
>  
>  
> Hope that helps.
>  
> Thanks,
> Craig
>  
>  
> From: Chris Schneider <cschnei...@scaleunlimited.com>
> Date: Wednesday, June 7, 2017 at 6:16 PM
> To: "user@flink.apache.org" <user@flink.apache.org>
> Subject: How to run a Flink job in EMR?
>  
> Hi Gang, 
>  
> I’ve been trying to get some Flink code running in Amazon Web Services’s 
> Elastic MapReduce, but so far the only success I’ve had required me to log 
> into the master node, download my jar from S3 to there, and then run it on 
> the master node from the command line using something like the following:
>  
> % bin/flink run -m yarn-cluster -yn 2 -p 4 <my jar name> <my main program 
> arguments>
>  
> The two other approaches I’ve tried (based on the AWS EMR Flink documentation 
> <http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html>) that 
> didn’t work were:
>  
> 1) Add an EMR Step to launch my program as part of a Flink session - I 
> couldn’t figure out how to get my job jar deployed as part of the step, and I 
> couldn’t successfully configure a Bootstrap Action 
> <http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html#bootstrapCustom>
>  to deploy it before running that step.
>  
> 2) Start a Long-Running Flink Session via an EMR Step (which worked) and then 
> use the Flink Web UI to upload my job jar from my workstation - It killed the 
> ApplicationMaster that was running the Flink Web UI without providing much 
> interesting logging. I’ve appended both the container log output and the 
> jobmanager.log contents to the end of this email.
> In addition, it would be nice to gain access to S3 resources using 
> credentials. I’ve tried using an AmazonS3ClientBuilder, and passing an 
> EnvironmentVariableCredentialsProvider 
> <http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/EnvironmentVariableCredentialsProvider.html>
>  to its setCredentials method. I’d hoped that this might pick up the 
> credentials I set up on my master node in the $AWS_ACCESS_KEY_ID and 
> $AWS_SECRET_KEY environment variables I've exported, but I’m guessing that 
> the shell this code is running in (on the slaves?) doesn’t have access to 
> those variables.
> Here’s a list of interesting version numbers:
>  
> flink-java-1.2.0.jar
> flink-core-1.2.0.jar
> flink-annotations-1.2.0.jar
> emr-5.4.0 with Flink 1.2.0 installed
>  
> Any help would be greatly appreciated. I’m lusting after an example showing 
> how to deploy a simple Flink jar from S3 to a running EMR cluster and then 
> get Flink to launch it with an arbitrary set of Flink and user arguments. 
> Bonus points for setting up an AmazonS3 Java client object without including 
> those credentials within my Java source code.
>  
> Best Regards,
>  
> - Chris
>  
> Here’s the container logging from my attempt to submit my job via the Flink 
> web UI:
> Application application_1496707031947_0002 failed 1 times due to AM Container 
> for appattempt_1496707031947_0002_000001 exited with exitCode: 255
> For more detailed output, check application tracking page: 
> http://ip-10-85-61-122.ec2.internal:8088/cluster/app/application_1496707031947_0002 
> Then, click on links to logs of each attempt.
> Diagnostics: Exception from container-launch.
> Container id: container_1496707031947_0002_01_000001
> Exit code: 255
> Stack trace: ExitCodeException exitCode=255:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
> at org.apache.hadoop.util.Shell.run(Shell.java:479)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
> at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> Container exited with a non-zero exit code 255
> Failing this attempt. Failing the application.
> There's a bunch of startup messages in the jobmanager.log, but only the 
> following output was generated by my attempt to submit my Flink job:
> 2017-06-06 00:41:55,332 INFO org.apache.flink.runtime.blob.BlobServer - Stopped BLOB server at 0.0.0.0:44948
> 2017-06-06 00:41:55,332 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Removing web dashboard root cache directory /tmp/flink-web-f3dde9b2-2384-49ce-b7a2-4c93bb1f5b6a
> 2017-06-06 00:41:55,336 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Removing web dashboard jar upload directory /tmp/flink-web-71b2e41d-b08d-43e8-bd0d-b6fb5cc329a2

-----------------------------------------
Chris Schneider
http://www.scaleunlimited.com
custom big data solutions
-----------------------------------------
