Also pay attention to the Flink version you are using. The configuration
link you have provided points to an old version (0.8). Gelly wasn't part of
Flink then :)
You probably need to look in [1].
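
Also note that if you are running the job from inside your IDE, a
flink-conf.yaml placed in the project is not picked up by the local
environment; you can pass the relevant options programmatically instead.
A minimal sketch (the key name is documented in [1]; the value below is
only an example and needs to be sized to your job):

    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;

    Configuration conf = new Configuration();
    // Example value only -- increase it if the job still runs out of buffers.
    conf.setInteger("taskmanager.network.numberOfBuffers", 4096);
    ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);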

Cheers,
-Vasia.

[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/config.html

On 20 October 2016 at 17:53, Greg Hogan <c...@greghogan.com> wrote:

> By default Flink only allocates 2048 network buffers (64 MiB at 32
> KiB/buffer). Have you increased the value for
> taskmanager.network.numberOfBuffers in flink-conf.yaml?
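>
> For reference, that is a single entry in flink-conf.yaml (the value here is
> only an example; it has to be sized to the parallelism of the job):
>
>     taskmanager.network.numberOfBuffers: 4096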
>
> On Thu, Oct 20, 2016 at 11:24 AM, otherwise777 <wou...@onzichtbaar.net>
> wrote:
>
>> I got this error in Gelly, which is a result of Flink (I believe):
>>
>> Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>>         at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
>>         at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>>         at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>>         at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>>         at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>>         at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>> Caused by: java.lang.IllegalArgumentException: Too few memory segments provided. Hash Table needs at least 33 memory segments.
>>         at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:206)
>>         at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:191)
>>         at org.apache.flink.runtime.iterative.task.IterationHeadTask.initCompactingHashTable(IterationHeadTask.java:175)
>>         at org.apache.flink.runtime.iterative.task.IterationHeadTask.run(IterationHeadTask.java:272)
>>         at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
>>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>>         at java.lang.Thread.run(Thread.java:745)
>>
>> I found a related topic:
>> http://mail-archives.apache.org/mod_mbox/flink-dev/201503.mbox/%3CCAK5ODX4KJ9TB4yJ=BcNwsozbOoXwdB7HM9qvWoa1P9HK-Gb-Dg@mail.gmail.com%3E
>> But I don't think the problem is the same.
>>
>> The code is as follows:
>>
>>         ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>>         DataSource twitterEdges = env.readCsvFile("./datasets/out.munmun_twitter_social").fieldDelimiter(" ").ignoreComments("%").types(Long.class, Long.class);
>>         Graph graph = Graph.fromTuple2DataSet(twitterEdges, new testinggraph.InitVertices(), env);
>>         DataSet verticesWithCommunity = (DataSet) graph.run(new LabelPropagation(1));
>>         System.out.println(verticesWithCommunity.count());
>>
>> And it has only a couple of edges.
>>
>> I tried adding a config file in the project to add a couple of settings
>> found here:
>> https://ci.apache.org/projects/flink/flink-docs-release-0.8/config.html
>> but that didn't work either.
>>
>> I have no idea how to fix this at the moment. It's not just LabelPropagation
>> that goes wrong; all Gelly methods give this exact error if they use an
>> iteration.
>>
>>
>
>
