Mihail Vieru wrote:
>
>> Hi,
>>
>> great! Thanks!
>>
>> I really need this bug fixed because I'm laying the groundwork for my
>> Diplom thesis and I need to be sure that the Gelly API is reliable and can
>> handle large datasets as intended.
> [...] get the Exception.
>
> -vmargs
> -Xmx2048m
> -Xms100m
> -XX:MaxPermSize=512m
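[The -vmargs above size the JVM that Eclipse launches. For a job running on a cluster instead, the corresponding knobs live in conf/flink-conf.yaml. A minimal sketch, assuming the 0.8-era key names; check the configuration reference for your version:

```yaml
# Heap sizes for a standalone Flink setup; the TaskManager value mirrors
# the -Xmx2048m above. Key names assumed from the 0.8-era configuration
# reference -- verify against your version's documentation.
jobmanager.heap.mb: 256
taskmanager.heap.mb: 2048
```
]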
>
> Best,
> Mihail
>
>
> On 17.03.2015 10:11, Robert Waury wrote:
Hi,
can you tell me how much memory your job has and how many workers you are
running?
From the trace it seems the internal hash table allocated only 7 MB for the
graph data and therefore runs out of memory pretty quickly.
Skewed data could also be an issue, but with a minimum of 5 pages and a
[...]
[...] after the job finished. This seemed to have
caused or at least worsened the problem.
Cheers,
Robert
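[For reference: the share of TaskManager heap handed to Flink's managed memory, which backs internal hash tables like the one in the trace, is configurable. A minimal sketch, with the key name assumed from the 0.8-era configuration reference:

```yaml
# Raise the fraction of TaskManager heap reserved for managed memory
# (internal sort/hash buffers). Key name assumed -- verify for your version.
taskmanager.memory.fraction: 0.7
```
]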
On Thu, Feb 5, 2015 at 1:14 PM, Ufuk Celebi wrote:
> On Thu, Feb 5, 2015 at 11:23 AM, Robert Waury wrote:
>
>> Hi,
>>
>> I can reproduce the error on my cluster.
>
> [...] blob cache and push to 0.8/master.
>
> – Ufuk
>
> On 05 Feb 2015, at 09:27, Robert Waury wrote:
I compiled from the release-0.8 branch.
On Thu, Feb 5, 2015 at 8:55 AM, Stephan Ewen wrote:
> Hey Robert!
>
> On which version are you? 0.8 or 0.9-SNAPSHOT?
> On 04.02.2015 at 14:49, "Robert Waury" wrote:
>
Hi,
I'm suddenly getting FileNotFoundExceptions because the blobStore cannot
find files in /tmp.
The job used to work in the exact same setup (same versions, same cluster,
same input files).
Flink version: 0.8 release
HDFS: 2.3.0-cdh5.1.2
Flink trace:
http://pastebin.com/SKdwp6Yt
Any idea what could cause this?
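[If the files under /tmp are being removed by an OS tmp cleaner, one commonly suggested workaround is to move Flink's working directories elsewhere. A sketch, with key names assumed from Flink's configuration reference and paths purely illustrative:

```yaml
# Keep spill files and the blob store out of /tmp so periodic tmp cleanup
# cannot delete them while jobs run. Paths are hypothetical examples.
taskmanager.tmp.dirs: /data/flink/tmp
blob.storage.directory: /data/flink/blob
```
]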