[ https://issues.apache.org/jira/browse/FLINK-3964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15350768#comment-15350768 ]

ASF GitHub Bot commented on FLINK-3964:
---------------------------------------

GitHub user mxm opened a pull request:

    https://github.com/apache/flink/pull/2168

    [FLINK-3964] add hint to job submission timeout exception message

    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mxm/flink FLINK-3964

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2168
    
----
commit 5964b821f3ecb3a8a4f038a995850a2c3f646552
Author: Maximilian Michels <m...@apache.org>
Date:   2016-06-27T10:30:12Z

    [FLINK-3964] add hint to job submission timeout exception message

----


> Job submission times out with recursive.file.enumeration
> --------------------------------------------------------
>
>                 Key: FLINK-3964
>                 URL: https://issues.apache.org/jira/browse/FLINK-3964
>             Project: Flink
>          Issue Type: Bug
>          Components: Batch Connectors and Input/Output Formats, DataSet API
>    Affects Versions: 1.0.0
>            Reporter: Juho Autio
>
> When using {{recursive.file.enumeration}} with a big enough folder structure to 
> list, the Flink batch job fails right at the beginning because of a timeout.
> h2. Problem details
> We get this error: {{Communication with JobManager failed: Job submission to 
> the JobManager timed out}}.
> The code we have is basically this:
> {code}
> import org.apache.flink.api.scala._
> import org.apache.flink.api.java.utils.ParameterTool
> import org.apache.flink.configuration.Configuration
> import org.apache.hadoop.io.Text
> 
> val env = ExecutionEnvironment.getExecutionEnvironment
> val parameters = new Configuration
> // enable recursive enumeration so nested directories are scanned for input files
> parameters.setBoolean("recursive.file.enumeration", true)
> val parameter = ParameterTool.fromArgs(args)
> val input_data_path: String = parameter.get("input_data_path", null)
> val data: DataSet[(Text, Text)] =
>   env.readSequenceFile(classOf[Text], classOf[Text], input_data_path)
>     .withParameters(parameters)
> data.first(10).print()
> {code}
> If we set {{input_data_path}} parameter to {{s3n://bucket/path/date=*/}} it 
> times out. If we use a more restrictive pattern like 
> {{s3n://bucket/path/date=20160523/}}, it doesn't time out.
> To me it seems that the time taken to list files shouldn't cause any timeouts 
> at the job-submission level.
> For us this was "fixed" by adding {{akka.client.timeout: 600 s}} in 
> {{flink-conf.yaml}}, but I wonder whether the timeout would still occur if we 
> had even more files to list.
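> For reference, the workaround is a single entry in {{flink-conf.yaml}} (the 
> 600 s value is simply what happened to work for our folder size, not a 
> recommended default; tune it to your input):
> {code}
> # flink-conf.yaml -- raise the client-side job submission timeout so that
> # slow recursive file enumeration does not abort the submission
> akka.client.timeout: 600 s
> {code}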
> ----
> P.S. Is there any way to set {{akka.client.timeout}} when calling {{bin/flink 
> run}} instead of editing {{flink-conf.yaml}}? I tried adding it as a {{-yD}} 
> flag but couldn't get it to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
