I found that on the cluster I was using a release version of a dependency
that has since changed.. so now I get the error on the cluster as well :)
This is caused by the addition of the setParent() method to TreeNode:

public void setParent(TreeNode parent) {
    this.parent = parent;
}

withou…
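A hedged guess at why the new setter matters: Flink's POJO analysis only treats a private field as a POJO field when it has both a getter and a setter, so adding setParent() may be what first exposes the self-referential parent field to the TypeExtractor. A minimal sketch of the shape involved (class and field names are assumptions taken from the snippets in this thread, not the actual project code):

```java
// Sketch only: with just getParent(), the private field may be skipped by
// POJO field analysis; once setParent() exists, `parent` becomes a
// recognized POJO field whose declared type is TreeNode itself -- a
// self-reference for the type analyzer to follow.
public class TreeNode {
    private TreeNode parent;

    public TreeNode getParent() { return parent; }

    // the newly added setter from the snippet above
    public void setParent(TreeNode parent) { this.parent = parent; }

    public static void main(String[] args) {
        TreeNode root = new TreeNode();
        TreeNode child = new TreeNode();
        child.setParent(root);
        System.out.println(child.getParent() == root); // prints true
    }
}
```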
Yes I am
On Tue, May 17, 2016 at 3:45 PM, Robert Metzger wrote:
> Are you using 1.0.2 on the cluster as well?
>
> On Tue, May 17, 2016 at 3:40 PM, Flavio Pompermaier
> wrote:
>
>> I tried to debug my application from Eclipse and I got an infinite
>> recursive call in the TypeExtractor during the analysis of TreeNode (I'm
>> using Flink 1.0.2)…
Are you using 1.0.2 on the cluster as well?
On Tue, May 17, 2016 at 3:40 PM, Flavio Pompermaier
wrote:
> I tried to debug my application from Eclipse and I got an infinite
> recursive call in the TypeExtractor during the analysis of TreeNode (I'm
> using Flink 1.0.2):
>
> Exception in thread "main" java.lang.StackOverflowError …
I tried to debug my application from Eclipse and I got an infinite
recursive call in the TypeExtractor during the analysis of TreeNode (I'm
using Flink 1.0.2):
Exception in thread "main" java.lang.StackOverflowError
at org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeEx…
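The mechanism behind such a StackOverflowError can be illustrated without Flink: a type analyzer that recurses into reference-typed fields without remembering which types it has already visited never terminates on a self-referential type. This is only a toy sketch of the failure mode, not Flink's actual TypeExtractor code:

```java
import java.lang.reflect.Field;
import java.util.HashSet;
import java.util.Set;

public class RecursionDemo {
    static class TreeNode {
        TreeNode parent;   // self-referential field
        String value;
    }

    // Naive analysis: recurses into every reference field with no cycle
    // check. On a self-referential type like TreeNode this never terminates;
    // a real run ends in StackOverflowError (so main() does not call it).
    static void analyzeNaive(Class<?> type) {
        for (Field f : type.getDeclaredFields()) {
            if (!f.getType().isPrimitive() && f.getType() != String.class) {
                analyzeNaive(f.getType());
            }
        }
    }

    // Safe analysis: track already-visited types, as a type extractor must.
    static void analyzeSafe(Class<?> type, Set<Class<?>> seen) {
        if (!seen.add(type)) {
            return; // already analyzed: stop the recursion here
        }
        for (Field f : type.getDeclaredFields()) {
            if (!f.getType().isPrimitive() && f.getType() != String.class) {
                analyzeSafe(f.getType(), seen);
            }
        }
    }

    public static void main(String[] args) {
        Set<Class<?>> seen = new HashSet<>();
        analyzeSafe(TreeNode.class, seen);
        System.out.println("analyzed " + seen.size() + " type(s)");
    }
}
```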
Don't worry Robert,
I know how hard it is to debug such errors :)
I hope that maybe the combination of these 3 errors is somehow
related... However, here are the answers:
- The job (composed of 16 sub-jobs) fails randomly but, usually, the
first sub-job after a restart runs successfully…
The last one is C or A?
How often is it failing (every nth run)? Is it always failing at the same
execute() call, or at different ones?
Is it always the exact same exception or is it different ones?
Does the error behave differently depending on the input data?
Sorry for asking so many questions,
Ah sorry, I forgot to mention that I don't use any custom Kryo serializers..
On Tue, May 17, 2016 at 12:39 PM, Flavio Pompermaier
wrote:
> I got those exceptions running 3 different types of jobs..I could have
> tracked the job and the error...my bad!
> However, the most problematic job is the last one, where I run a series of
> jobs one after the other (calling env.execute() in a for loop)…
I got those exceptions running 3 different types of jobs.. I could have
tracked the job and the error... my bad!
However, the most problematic job is the last one, where I run a series of
jobs one after the other (calling env.execute() in a for loop)..
If you want I can share my code with you (in private)…
Hi Flavio,
thank you for providing additional details.
I don't think that missing hashCode / equals() implementations cause such
an error. They can cause wrong sorting or partitioning of the data, but the
serialization should still work properly.
I suspect the issue is somewhere in the serialization s…
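Robert's point about equals()/hashCode() can be illustrated with plain Java: a key type that does not override these methods falls back to identity semantics, so logically equal keys do not group together, while the objects themselves remain perfectly serializable. A toy sketch (class names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class EqualsDemo {
    // A "key" type that does NOT override equals()/hashCode(), so two
    // instances carrying the same value are still treated as distinct keys.
    static class Key {
        final String id;
        Key(String id) { this.id = id; }
    }

    // Count distinct keys after inserting two logically equal instances.
    static int distinctKeyCount() {
        Map<Key, Integer> counts = new HashMap<>();
        counts.merge(new Key("a"), 1, Integer::sum);
        counts.merge(new Key("a"), 1, Integer::sum);
        // With identity semantics the two "a" keys do not collapse into one.
        return counts.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctKeyCount()); // prints 2, not 1
    }
}
```

This is wrong *grouping*, not a serialization failure, which matches the observation that missing equals/hashCode should not by itself produce serialization errors.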
Hi Robert,
in this specific case the classes involved are:
- Tuple3<…> (IndexAttributeToExpand
is a POJO extending another class, and neither of them implements equals
and hashCode)
- Tuple3<…>
(TreeNode is a POJO containing other TreeNodes, and it doesn't
implement equals and hashCode)…
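For completeness, a recursive POJO of the shape described above might implement equals()/hashCode() along these lines. All names here are assumptions, not the actual classes from this thread, and the two methods deliberately use only the node's own value: including the parent back-reference would itself recurse forever.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Hedged sketch of a self-referential POJO: no-arg constructor plus
// getters/setters for every field, as Flink's POJO rules expect.
public class TreeNode {
    private String value;
    private TreeNode parent;
    private List<TreeNode> children = new ArrayList<>();

    public TreeNode() {}
    public TreeNode(String value) { this.value = value; }

    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
    public TreeNode getParent() { return parent; }
    public void setParent(TreeNode parent) { this.parent = parent; }
    public List<TreeNode> getChildren() { return children; }
    public void setChildren(List<TreeNode> children) { this.children = children; }

    public void addChild(TreeNode child) {
        child.setParent(this);
        children.add(child);
    }

    // equals/hashCode use only this node's own value; following the
    // parent (or children) links here would recurse without bound.
    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TreeNode)) return false;
        return Objects.equals(value, ((TreeNode) o).value);
    }

    @Override public int hashCode() { return Objects.hash(value); }
}
```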
Hi Flavio,
which datatype are you using?
On Tue, May 17, 2016 at 11:42 AM, Flavio Pompermaier
wrote:
> Hi to all,
> during these days we've run a lot of Flink jobs and from time to time
> (apparently randomly) a different Exception arise during their executions...
> I hope one of them could help in finding the source of the problem…
Hi to all,
during these days we've run a lot of Flink jobs and from time to time
(apparently randomly) a different exception arises during their execution...
I hope one of them could help in finding the source of the problem.. This
time the exception is:
An error occurred while reading the next record…