I don't think anyone can debug your exception without seeing the source
code. Also, storing the adjacency list inside the vertex value is not
scalable. Can you share a basic description of the algorithm you are
trying to implement?
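For what it's worth, a common cause of an NPE that shows up only on the cluster is a POJO field that Flink leaves null when it re-creates the value type via the no-arg constructor during serialization; a purely local run may never exercise that path. A minimal sketch of the pattern and the fix (the class, field, and method names here are illustrative, not taken from your actual job):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of a Gelly vertex-value POJO. Flink's POJO
// serializer instantiates such types through the no-arg constructor, so a
// collection field that is only populated elsewhere arrives as null on the
// TaskManagers.
public class VertexValueSketch {
    private List<String> incidentEdges;

    public VertexValueSketch() {
        // Fix: initialize collections in the no-arg constructor so that
        // deserialized copies are safe to use.
        this.incidentEdges = new ArrayList<>();
    }

    // Analogous to a createInitSemiCluster()-style method: this would throw
    // a NullPointerException if incidentEdges were left null.
    public int initSemiCluster() {
        return incidentEdges.size();
    }

    public void addEdge(String edge) {
        incidentEdges.add(edge);
    }

    public static void main(String[] args) {
        VertexValueSketch v = new VertexValueSketch();
        v.addEdge("a->b");
        System.out.println(v.initSemiCluster()); // prints 1
    }
}
```

If your CustomVertexValue only initializes its edge list in a parameterized constructor, that would match the symptom of the job working in the IDE but failing under cluster serialization.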

On Mon, Jun 12, 2017 at 5:47 AM, Kaepke, Marc <marc.kae...@haw-hamburg.de>
wrote:

> It seems Flink builds a different execution graph outside of my IDE
> (IntelliJ).
>
> The job anatomy is:
> load data from CSV and build an initial graph => reduce that graph (remove
> loops and combine multi-edges) => extend the modified graph with a new
> vertex value => run a gather-scatter iteration
>
> I have to extend the vertex value because each vertex needs its incident
> edges inside the iteration. My CustomVertexValue is able to hold all
> incident edges: Vertex<Double, CustomVertexValue> vertex
>
> Flink tries to optimize the execution graph, but that’s where the issue
> appears. Does Flink give the programmer a way to influence this?
>
>
> Best and thanks
> Marc
>
>
> Am 10.06.2017 um 00:49 schrieb Greg Hogan <c...@greghogan.com>:
>
> Have you looked at org.apache.flink.gelly.GraphExtension.CustomVertexValue.createInitSemiCluster(CustomVertexValue.java:51)?
>
>
> On Jun 9, 2017, at 4:53 PM, Kaepke, Marc <marc.kae...@haw-hamburg.de>
> wrote:
>
> Hi everyone,
>
> I don’t get any exceptions when I execute my Gelly job locally in my IDE.
> The next step is execution on a real Kubernetes cluster (1 JobManager and
> 3 TaskManagers on dedicated machines). The WordCount example runs without
> exceptions, but my Gelly job throws the following exception and I don’t
> know why.
>
> org.apache.flink.client.program.ProgramInvocationException: The program execution failed: Job execution failed.
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:478)
> at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:429)
> at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
> at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:926)
> at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
> at org.apache.flink.gelly.Algorithm.SemiClusteringPregel.run(SemiClusteringPregel.java:84)
> at org.apache.flink.gelly.Algorithm.SemiClusteringPregel.run(SemiClusteringPregel.java:34)
> at org.apache.flink.graph.Graph.run(Graph.java:1850)
> at org.apache.flink.gelly.job.GellyMain.main(GellyMain.java:128)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
> at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:381)
> at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:838)
> at org.apache.flink.client.CliFrontend.run(CliFrontend.java:259)
> at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1086)
> at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1133)
> at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1130)
> at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
> at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1129)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:933)
> at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
> at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
> at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.NullPointerException
> at org.apache.flink.gelly.GraphExtension.CustomVertexValue.createInitSemiCluster(CustomVertexValue.java:51)
> at org.apache.flink.gelly.PreModification.IncidentEdgesCollector.iterateEdges(IncidentEdgesCollector.java:37)
> at org.apache.flink.graph.Graph$ApplyCoGroupFunctionOnAllEdges.coGroup(Graph.java:1252)
> at org.apache.flink.runtime.operators.CoGroupDriver.run(CoGroupDriver.java:159)
> at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:490)
> at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:355)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
> at java.lang.Thread.run(Thread.java:748)
>
> I guess the trigger is the coGroup function, but I’m not sure and need
> your help.
>
>
> Best,
>
> Marc
>
>
>
>
