        at org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:123)
        at org.apache.flink.runtime.io.network.api.reader.AbstractRecordReader.getNextRecord(…)
        at org.apache.flink.runtime.io.network.api.reader.MutableRecordReader.next(MutableRecordReader.java:42)
        at org.apache.flink.runtime.operators.util.ReaderIterator.next(ReaderIterator.java:59)
        at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:796)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 32768
        at org.apache.flink.core.memory.HeapMemorySegment.get(HeapMemorySegment.java:104)
        at org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer$NonSpanningWrapper.readByte(SpillingAdaptiveSpanningRecordDeserializer.java:231)
        at org.apache.flink.types.StringValue.readString(StringValue.java:770)
        at org.apache.flink.api.common.typeutils.base.StringSerializer.deserialize(StringSerializer.java:69)
        at org.apache.flink.api.common.typeutils.base.StringSerializer.deserialize(StringSerializer.java:74)
        at org.apache.flink.api.common.typeutils.base.StringSerializer.deserialize(StringSerializer.java:28)
        at org.apache.flink.api.java.typeutils.runtime.RowSerializer.deserialize(RowSerializer.java:193)
        at org.apache.flink.api.java.typeutils.runtime.RowSerializer.deserialize(RowSerializer.java:36)
        at org.apache.flink.runtime.plugable.ReusingDeserializationDele…
        at org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:109)
        ... 5 more
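A note on reading the trace: Flink's default memory segment (page) size is 32768 bytes, so "ArrayIndexOutOfBoundsException: 32768" means a read exactly one byte past the end of a full segment, the typical symptom of a deserializer consuming more bytes than were written. A minimal standalone sketch of that bounds behavior (an invented class for illustration, not Flink's actual HeapMemorySegment):

```java
// Hypothetical simplification, not Flink's code: a heap segment wraps a
// byte[] of the configured page size (default 32 KiB). Reading at an index
// equal to the array length is one byte past the end, which is what happens
// when a deserializer expects more data than the serializer wrote.
public class SegmentSketch {
    static final int SEGMENT_SIZE = 32768; // Flink's default page size in bytes

    private final byte[] memory = new byte[SEGMENT_SIZE];

    public byte get(int index) {
        return memory[index]; // JVM bounds check throws for index >= 32768
    }

    public static void main(String[] args) {
        SegmentSketch seg = new SegmentSketch();
        seg.get(SEGMENT_SIZE - 1); // last valid offset: fine
        try {
            seg.get(SEGMENT_SIZE); // one past the end, like the trace above
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("out of bounds at index " + SEGMENT_SIZE);
        }
    }
}
```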
On Fri, Apr 21, 2017 at 11:43 AM, Stefano Bortoli <stefano.bort...@huawei.com> wrote:
In fact, the old problem was a missed initialization of the KryoSerializer on the exception path that triggers spilling to disk. This would leave a dirty serialization buffer that would eventually break the program. Till worked on it, debugging the source code that generated the error. Perhaps someone could try the same this time as well, if Flavio can make the problem reproducible in a shareable program+data.

Stefano
From: Stephan Ewen [mailto:se...@apache.org]
Sent: Friday, April 21, 2017 10:04 AM
To: user
Subject: Re: UnilateralSortMerger error (again)
In the past, these errors were most often caused by bugs in the serializers, not in the sorter.

What types are you using at that point? The stack trace reveals Row and StringValue; any other involved types?

The types I read are:
[String, String, String, String, String, String, String, String, String,
Boolean, Long, Long, Long, Integer, Integer, Long, String, String, Long,
Long, String, Long, String, String, String, String, String, String, String,
String, String, String, String, String, String, String
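The "serializer bug, not sorter bug" pattern can be made concrete with a minimal sketch. This uses plain java.io rather than Flink's TypeSerializer API, and the class and method names are invented for illustration: the writer and reader disagree by a single byte, so the first record decodes fine and every later record is read from a shifted offset, surfacing as a corrupt length or an out-of-bounds read far from the faulty code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Invented example: a write/read pair that is asymmetric by one byte.
public class AsymmetricSerde {
    // Writer emits (length, bytes) for each record.
    public static void write(DataOutputStream out, String s) throws IOException {
        byte[] b = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(b.length);
        out.write(b);
    }

    // Buggy reader consumes one extra byte that was never written,
    // leaving the stream misaligned for every subsequent record.
    public static String readBuggy(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] b = new byte[len];
        in.readFully(b);
        in.readByte(); // BUG: off-by-one in the read path
        return new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        write(out, "first");
        write(out, "second");

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readBuggy(in)); // decodes "first"; stream now shifted
        try {
            readBuggy(in); // reads a garbage length, then overruns the buffer
        } catch (IOException e) {
            System.out.println("stream corrupted: " + e);
        }
    }
}
```

The first faulty record succeeds, which is why such bugs tend to show up only later, e.g. once the sorter starts spilling and re-reading records.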
On Fri, Apr 21, 2017 at 9:36 AM, Flavio Pompermaier wrote:
As suggested by Fabian, I set taskmanager.memory.size = 1024 (to force spilling to disk) and the job failed almost immediately.
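For context, a sketch of what that suggestion looks like in flink-conf.yaml (Flink 1.2-era keys; the slot count reflects the parallelism change discussed later in the thread):

```yaml
# Fixing managed memory to a small absolute size (in MB) instead of a heap
# fraction makes the sorter spill to disk earlier, so spilling-related bugs
# reproduce faster.
taskmanager.memory.size: 1024
# Slots per TaskManager (raised from 1 to 4 at one point to add stress).
taskmanager.numberOfTaskSlots: 4
```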
On Fri, Apr 21, 2017 at 12:33 AM, Flavio Pompermaier wrote:
I debugged the process a bit by repeating the job on sub-slices of the entire data (using the id value to filter data with Parquet push-down filters), and all slices completed successfully :(
So I tried to increase the parallelism (from 1 slot per TM to 4) to see if this was somehow a factor of stress
I could, but only if there's a good probability that it fixes the problem... how confident are you about it?
On Wed, Apr 19, 2017 at 8:27 PM, Ted Yu wrote:
Looking at the git log of DataInputDeserializer.java, there has been some recent change.

If you have time, maybe try with the 1.2.1 RC and see if the error is reproducible?

Cheers
On Wed, Apr 19, 2017 at 11:22 AM, Flavio Pompermaier wrote:
Hi to all,
I think I'm again on the weird Exception with