Perfect! It worked! Thanks a lot for the help!
On 18 February 2015 at 22:13, Fabian Hueske wrote:
2048 is the default. So you didn't actually increase the number of buffers
;-)
Try 4096 or so.
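For reference, this amounts to one line in conf/flink-conf.yaml, and the TaskManagers have to be restarted to pick it up; 4096 here is just the suggested starting point from above, not a computed requirement:

    taskmanager.network.numberOfBuffers: 4096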
2015-02-18 22:59 GMT+01:00 Yiannis Gkoufas:
Hi!
Thank you for your replies!
I increased the number of network buffers:
taskmanager.network.numberOfBuffers: 2048
but I am still getting the same error:
Insufficient number of network buffers: required 120, but only 2 of 2048 available.
Thanks a lot!
Thank you for the information you provided.
Yes, it runs an iterative algorithm on a graph and feeds the result of one
iteration to the next.
The getting-stuck issue disappears when we increase the maximum number of
iterations in the algorithm, e.g. to 1000 vertex-centric iterations.
Hi Hung,
I am under the impression that circular dependencies like the one you are
describing are not allowed in the Flink execution graph. I would actually
expect something like this to cause an error.
Maybe someone else can elaborate on that?
In any case, the proper way to write iterative programs in Flink is to use
its built-in iteration operators.
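To illustrate that last point, here is a minimal sketch of a native bulk iteration with the DataSet API (the data set and step function are made up for illustration); the whole loop becomes part of a single execution graph instead of a chain of separately submitted jobs:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.operators.IterativeDataSet;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Long> initial = env.fromElements(0L);

    // Open a bulk iteration with an upper bound of 1000 supersteps.
    IterativeDataSet<Long> loop = initial.iterate(1000);

    // One iteration step; a real program would run its algorithm here.
    DataSet<Long> step = loop.map(new MapFunction<Long, Long>() {
        @Override
        public Long map(Long value) { return value + 1; }
    });

    // Close the loop: 'step' is fed back as the input of the next iteration.
    DataSet<Long> result = loop.closeWith(step);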
Hi Yiannis,
If you scale Flink to larger setups, you need to adapt the number of network
buffers.
The background section of the configuration reference explains the details
on that [1].
Let us know, if that helped to solve the problem.
Best, Fabian
[1] http://flink.apache.org/docs/0.8/config.html
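To make that concrete for the setup in this thread (10 task managers with 12 slots each): the rule of thumb in those docs is roughly #slots-per-TM^2 x #TMs x 4, which here gives 12^2 x 10 x 4 = 5760 buffers, so both 2048 and 4096 are on the low side. (This is the documented heuristic, not an exact requirement.)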
Hi Yiannis!
You need to increase the number of buffers for your setup. Here is a FAQ
entry with a few pointers:
http://flink.apache.org/docs/0.8/faq.html#i-get-an-error-message-saying-that-not-enough-buffers-are-available-how-do-i-fix-this
Greetings,
Stephan
On 18.02.2015 21:21, Yiannis Gkoufas wrote:
Hi there,
I have a cluster of 10 nodes with 12 CPUs each.
This is my configuration:
jobmanager.rpc.port: 6123
jobmanager.heap.mb: 4024
taskmanager.heap.mb: 8096
taskmanager.numberOfTaskSlots: 12
parallelization.degree.default: 120
I have been getting the following error:
java.lang.Exception: Insufficient number of network buffers: required 120, but only 2 of 2048 available.
Thank you for your reply.
The datasets:
The 1 MB dataset has 38,831 nodes and 99,565 edges and doesn't get stuck.
The 30 MB dataset has 1,134,890 nodes and 2,987,624 edges and gets stuck.
Our code works like the following logic:
do {
    filteredGraph = graph.run(algorithm);
    // Get the sub-graph for the next iteration
    // (placeholder condition; the actual one isn't shown here)
    graph = filteredGraph.filterOnVertices(keepCondition);
} while (!converged);
Hi Hung,
can you share some details on your algorithm and dataset?
I could not reproduce this by just running a filterOnVertices on large
input.
Thank you,
Vasia.
On 18 February 2015 at 19:03, HungChang wrote:
Hi,
I have a question about generating the sub-graph using the Spargel API.
We use filterOnVertices to generate it.
With 30MB edges, the code gets stuck at Join(Join at filterOnVertices)
With 2MB edges, the code doesn't have this issue.
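For context, filterOnVertices is the Gelly Graph method in question; here is a minimal sketch of such a call, with made-up Long/Double types and a made-up keep-condition:

    import org.apache.flink.api.common.functions.FilterFunction;
    import org.apache.flink.graph.Graph;
    import org.apache.flink.graph.Vertex;

    // Keep only vertices that satisfy the condition; edges that lose an
    // endpoint are dropped together with it.
    Graph<Long, Double, Double> subGraph = graph.filterOnVertices(
        new FilterFunction<Vertex<Long, Double>>() {
            @Override
            public boolean filter(Vertex<Long, Double> v) {
                return v.getValue() > 0.5;  // hypothetical keep-condition
            }
        });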