Hey!

I am actually a bit puzzled as to how these segfaults could occur, unless
through a native library or a JVM bug.

Can you check how it behaves when not using RocksDB, or when using a newer
JVM version?
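
For example, to take the RocksDB native library out of the picture, you
could try something like this (just a sketch, assuming your job currently
sets the RocksDB backend explicitly):

  import org.apache.flink.runtime.state.memory.MemoryStateBackend;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

  StreamExecutionEnvironment env =
          StreamExecutionEnvironment.getExecutionEnvironment();
  // Heap-based backend instead of RocksDB, so no native code is involved
  env.setStateBackend(new MemoryStateBackend());

If the segfaults disappear with the heap backend, that points strongly at
the native library.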

Stephan


On Sun, Jan 22, 2017 at 7:51 PM, Gyula Fóra <gyf...@apache.org> wrote:

> Hey All,
>
> I am trying to port some of my streaming jobs from Flink 1.1.4 to 1.2 and I
> have noticed some very strange segfaults. (I am running in a test
> environment, with the minicluster.)
> It is a fairly complex job, so I won't go into details, but the interesting
> part is that adding/removing a simple filter in the wrong place in the
> topology (such as (e -> true), or anything else actually) seems to cause
> frequent segfaults during execution.
>
> Basically the key part looks something like:
>
> ...
> DataStream stream = source.map(...).setParallelism(1)
>         .uid("AssignFieldIds").name("AssignFieldIds").startNewChain();
> DataStream filtered = stream.filter(t -> true).setParallelism(1);
> IterativeStream itStream = filtered.iterate(...);
> ...
>
> Some notes before the actual error: replacing the filter with a map or
> another chained transform also leads to this problem. If the filter is not
> chained (or if I remove it), there is no error.
>
> The error I get looks like this:
> https://gist.github.com/gyfora/da713b99b1e85a681ff1a4182f001242
>
> I wonder if anyone has seen something like this before, or has some ideas
> on how to debug it. The simple workaround is to not chain the filter
> (sketched below), but it's still very strange.
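>
> (The unchained variant looks something like this, breaking the chain with
> disableChaining() on the filter:)
>
> DataStream filtered = stream.filter(t -> true).setParallelism(1)
>         .disableChaining(); // run the filter as a separate, unchained task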
>
> Regards,
> Gyula
>
