Ah, no problem.
Glad you could resolve it :-)
Thanks for reporting back.
Cheers, Fabian
2015-11-12 17:42 GMT+01:00 Kashmar, Ali :
So the problem wasn’t in Flink after all. It turns out the data I was
receiving at the socket was not complete. So I went back and looked at the
way I’m sending data to the socket and realized that the socket is closed
before sending all data. I just needed to flush the stream before closing
the socket.
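The fix described above (flush before close) can be illustrated with a minimal sketch. Here a ByteArrayOutputStream stands in for the socket's output stream; all names are illustrative, not from the original program:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    // Returns the number of bytes visible downstream before and after flush().
    public static int[] visibleBytes() throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream(); // stand-in for the socket stream
        BufferedOutputStream out = new BufferedOutputStream(sink);
        out.write(new byte[100]);
        int beforeFlush = sink.size(); // still sitting in the buffer
        out.flush();                   // now the bytes actually reach the sink
        int afterFlush = sink.size();
        return new int[] { beforeFlush, afterFlush };
    }

    public static void main(String[] args) throws IOException {
        int[] v = visibleBytes();
        System.out.println(v[0] + " " + v[1]); // prints "0 100"
    }
}
```

If the socket is torn down before the flush, the buffered bytes are silently dropped and the receiver sees a truncated record, which matches the symptom reported in this thread.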
Hi Ali,
Flink uses different serializers for different data types. For example,
(boxed) primitives are serialized using dedicated serializers
(IntSerializer, StringSerializer, etc.) and the ProtocolEvent class is
recognized as a Pojo type and therefore serialized using Flink's
PojoSerializer.
Type
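As a rough sketch of what Flink's POJO analysis looks for (a public class with a public no-arg constructor and publicly accessible fields) — the real ProtocolEvent class is not shown in this thread, so the shape below is an assumption:

```java
// Hypothetical simplified POJO; field names are illustrative only.
public class ProtocolEventPojo {
    public String subscriberId; // public field -> serialized by PojoSerializer
    public long timestamp;

    // Public no-argument constructor is required for POJO recognition.
    public ProtocolEventPojo() {}
}
```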
Fabian,
I tried running it again and I noticed there were some more exceptions in
the log. I fixed those and I don’t see the original error but I do see
other ArrayIndexOutOfBoundsExceptions in the Kryo serializer code (I didn’t
even enable that yet like you suggested). Examples:
1)
10:49:36,331
Hi Ali,
I looked into this issue. The problem seems to be caused by the
deserializer reading more data than it should.
This might happen for two reasons:
1) the meta information about how much data is safe to read is incorrect.
2) the serializer and deserializer logic are not in sync.
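Reason 2) can be sketched with plain Java data streams: if the writer emits a record as a 4-byte int but the reader consumes 8 bytes, the reader swallows the next record's bytes and subsequent reads fail. This is only an illustration of the failure mode, not Flink's actual serialization code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MismatchDemo {
    // Writer serializes two records as 4-byte ints each.
    public static byte[] writeRecords() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(1);
        out.writeInt(2);
        return buf.toByteArray();
    }

    // Broken reader treats the first record as a long (8 bytes),
    // consuming the second record's bytes as well.
    public static long brokenRead(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        return in.readLong(); // reads past the first record's boundary
    }

    public static void main(String[] args) throws IOException {
        // 0x0000000100000002 = 4294967298: both records merged into one bogus value.
        System.out.println(brokenRead(writeRecords()));
        // Any further read now hits end-of-data, because the deserializer
        // consumed more bytes than the serializer produced for record one.
    }
}
```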
Hi Ali,
one more thing. Did that error occur once or is it reproducible?
Thanks for your help,
Fabian
2015-11-11 9:50 GMT+01:00 Ufuk Celebi :
Hey Ali,
thanks for sharing the code. I assume that the custom
ProtocolEvent, ProtocolDetailMap, and Subscriber types are all POJOs. They
should not be a problem. I think this is a bug in Flink 0.9.1.
Is it possible to re-run your program with the upcoming 0.10.0 (RC8)
version and report back?
1) Add
Thanks for reporting this. Are you using any custom data types?
If you can share your code, it would be very helpful in order to debug this.
– Ufuk
On Tuesday, 10 November 2015, Fabian Hueske wrote:
I agree with Robert. Looks like a bug in Flink.
Maybe an off-by-one issue (violating index is 32768 and the default memory
segment size is 32KB).
Which Flink version are you using?
In case you are using a custom build, can you share the commit ID (it is
reported in the first lines of the JobManager log)?
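The off-by-one arithmetic behind that guess: a 32 KB segment holds bytes at indices 0 through 32767, so index 32768 is exactly one past the end. A minimal sketch of the bounds check (not Flink's actual MemorySegment code):

```java
public class SegmentBoundsDemo {
    static final int SEGMENT_SIZE = 32 * 1024; // 32 KB = 32768 bytes

    // Valid indices are 0 .. SEGMENT_SIZE - 1; 32768 is one past the end,
    // which is exactly the violating index from the reported exception.
    public static boolean inBounds(int index) {
        return index >= 0 && index < SEGMENT_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(inBounds(32767) + " " + inBounds(32768)); // prints "true false"
    }
}
```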
Hi Ali,
this could be a bug in Flink.
Can you share the code of your program with us to debug the issue?
On Tue, Nov 10, 2015 at 6:25 PM, Kashmar, Ali wrote:
Hello,
I’m getting this error while running a streaming module on a cluster of 3 nodes:
java.lang.ArrayIndexOutOfBoundsException: 32768
at org.apache.flink.core.memory.MemorySegment.get(MemorySegment.java:178)
at org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRe