Hello,

A few seconds after I sent my message, I received your email. It was just meant to flag the need for a fix.

Happy to see how dynamic the work here is. Great work.

On 14.04.2015 21:50, Stephan Ewen wrote:

Are you on the latest snapshot version? I think there is an inconsistency in there. I will try to fix that tonight.

Can you actually use the milestone1 version? That one should be good.
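
In the pom, that would look roughly like this (assuming you use the usual flink-java and flink-clients artifacts and the 0.9-milestone-1 version string; adjust to whichever modules you actually depend on):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>0.9-milestone-1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients</artifactId>
        <version>0.9-milestone-1</version>
    </dependency>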

Greetings,
Stephan

On 14.04.2015 20:31, "Fotis P" <fotis...@gmail.com> wrote:

    Hello everyone,

    I am getting this weird exception while running some simple
    counting jobs in Flink.

    Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
        at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
        at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
        at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
        at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
        at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
        at trackers.preprocessing.ExtractInfoFromLogs.main(ExtractInfoFromLogs.java:133)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
    Caused by: java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at scala.concurrent.Await.result(package.scala)
        at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
        ... 10 more


    The only call in the trace above that comes from my own code is
    ExtractInfoFromLogs.java:133, which is the environment.execute()
    call.
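
    To give an idea of the job's shape, it is essentially a simple count
    that ends in env.execute(), roughly like the sketch below (simplified,
    with made-up class names and paths, not my actual ExtractInfoFromLogs
    code):

        import org.apache.flink.api.common.functions.FlatMapFunction;
        import org.apache.flink.api.java.DataSet;
        import org.apache.flink.api.java.ExecutionEnvironment;
        import org.apache.flink.api.java.tuple.Tuple2;
        import org.apache.flink.util.Collector;

        public class CountingJobSketch {

            public static void main(String[] args) throws Exception {
                // Local environment, as when running from inside the IDE
                ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

                // Read the (large) log file and count occurrences per key
                DataSet<Tuple2<String, Integer>> counts = env
                        .readTextFile("/path/to/logs")      // placeholder path
                        .flatMap(new LineToKey())
                        .groupBy(0)
                        .sum(1);

                counts.writeAsCsv("/path/to/output");       // placeholder path

                // This is the environment.execute() call the timeout points to
                env.execute("Extract info from logs");
            }

            // Emits one (key, 1) pair per input line
            public static class LineToKey
                    implements FlatMapFunction<String, Tuple2<String, Integer>> {
                @Override
                public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                    String[] fields = line.split("\\s+");
                    out.collect(new Tuple2<String, Integer>(fields[0], 1));
                }
            }
        }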

    The exception only appears when dealing with largish files (>10 GB);
    no exception is thrown when I work with a smaller subset of my
    data.
    Also, I would swear it was working fine until a few days ago,
    and the code has not changed :S The only change was a re-import
    of the Maven dependencies.

    I am unsure what other information I could provide that would help
    you help me :)

    I am running everything locally through the IntelliJ IDE. The Maven
    dependency is set to 0.9-SNAPSHOT.
    I have an 8-core Ubuntu 14.04 machine.

    Thanks in advance :D


--
Regards, Grüße, Cordialement, Recuerdos, Saluti, προσρήσεις, 问候, تحياتي. Mohamed Nadjib Mami
PhD Student - EIS Department - Bonn University, Germany.
About me! <http://www.strikingly.com/mohamed-nadjib-mami>
LinkedIn
