@Stephan: I don't think there is a way to deal with this. In my
understanding, the (main) purpose of the user@ list is not to report Flink
bugs. It is a forum for users to help each other.
Flink committers happen to know a lot about the system, so it's easy for
them to help users. Also, it's a good w
Static fields are not part of the serialized program (by Java's
definition). Whether the static field has the same value in the cluster
JVMs depends on how it is initialized: whether the shipped code
initializes it the same way, without the program's main
method being run.
BTW: We are
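A minimal sketch of the pattern described above (class name, field name and values are invented for illustration, not taken from the actual APSP job): the static field is assigned in main(), which only runs in the client JVM, so a TaskManager that loads the class from the JAR sees the field's default value.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;

// Hypothetical reconstruction of the failure mode. ARRAY_SIZE is assigned in
// main(), which only runs in the client JVM. A TaskManager loads this class
// freshly from the JAR and never runs main(), so the anonymous mapper shipped
// to it sees the default value 0.
public class StaticFieldPitfall {

    static int ARRAY_SIZE; // 0 until main() assigns it

    public static void main(String[] args) throws Exception {
        ARRAY_SIZE = 42; // only effective in the JVM that runs main()

        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1L, 2L, 3L)
            // The mapper instance is serialized and shipped;
            // static fields are not part of that serialized form.
            .map(new MapFunction<Long, Integer>() {
                @Override
                public Integer map(Long value) {
                    Integer[] distances = new Integer[ARRAY_SIZE]; // length 0 on a TaskManager
                    return distances.length;
                }
            })
            .print();
    }
}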
Thank you for the answer, Robert!
I realize it's a single JVM running, yet I would expect programs to behave
in the same way, i.e. for serialization to happen (even if not necessary), in
order to catch this kind of bug before cluster deployment.
Is this simply not possible or is it a design choice we
It works in the IDE because there we execute everything in the same
JVM, so the mapper can access the correct value of the static variable.
When submitting a job with the CLI frontend, there are at least two JVMs
involved, and code running in the JM/TM cannot access the value from the
static
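To make the mechanics concrete, here is a small stand-alone illustration (all names invented, not from the thread): shipping a user function means Java-serializing the function object in one JVM and deserializing it in another, and static fields are simply not part of that serialized form. Resetting the static before deserializing simulates the TaskManager JVM in which main() never ran.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

import org.apache.flink.api.common.functions.MapFunction;

public class StaticNotSerializedDemo {

    static int SIZE; // default 0

    public static void main(String[] args) throws Exception {
        SIZE = 42; // what the "client side" sees

        MapFunction<Long, Integer> mapper = new MapFunction<Long, Integer>() {
            @Override
            public Integer map(Long value) {
                return new Integer[SIZE].length; // depends on the static, not on shipped state
            }
        };

        // Serialize the function, as the client does when shipping the job.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(mapper);
        }

        // Simulate the TaskManager JVM: the class is loaded, but main() never set SIZE.
        SIZE = 0;

        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            MapFunction<Long, Integer> shipped = (MapFunction<Long, Integer>) in.readObject();
            System.out.println(shipped.map(1L)); // prints 0, not 42
        }
    }
}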
Hi everyone,
Mihail and I have now solved the issue.
The exception occurred because the array size in question was read from a
static field of the enclosing class inside an anonymous mapper. Making the
mapper a standalone class and passing the array size to its constructor
fixed the problem.
W
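A sketch of that fix, assuming the mapper initializes vertex values to Integer[] arrays (the concrete types and the method body are guesses, not the original code): the size arrives through the constructor, is stored in an instance field, and is therefore serialized and shipped together with the function.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.graph.Vertex;

// Standalone mapper: the array size travels as serialized instance state,
// independent of any static field in the enclosing job class.
public class InitVerticesMapper
        implements MapFunction<Vertex<Long, Integer[]>, Integer[]> {

    private final int arraySize;

    public InitVerticesMapper(int arraySize) {
        this.arraySize = arraySize;
    }

    @Override
    public Integer[] map(Vertex<Long, Integer[]> vertex) {
        // arraySize was serialized with this object, so the length is correct here.
        return new Integer[arraySize];
    }
}

In Gelly it could then be applied with something like graph.mapVertices(new InitVerticesMapper(arraySize)), where arraySize is computed in the driver program instead of being read from a static field.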
Hi Vasia,
InitVerticesMapper is called in the run method of APSP:

@Override
public Graph, NullValue> run(GraphTuple2, NullValue> input) {

    VertexCentricConfiguration parameters = new VertexCentricConfiguration();
    parameters.setSolutionSetUnmanagedMem
Hi Mihail,
could you share your code or at least the implementations of
getVerticesDataSet() and InitVerticesMapper so I can take a look?
Where is InitVerticesMapper called above?
Cheers,
Vasia.
On 26 June 2015 at 10:51, Mihail Vieru wrote:
> Hi Robert,
>
> I'm using the same input data, as
Hi Robert,
I'm using the same input data, as well as the same parameters I use in
the IDE's run configuration.
I don't run the job on the cluster (yet), but locally, by starting Flink
with the start-local.sh script.
I will try to explain my code a bit. The Integer[] array is
initialized i
Looks like an exception in one of the Gelly functions.
Let's wait for someone from Gelly to jump in...
On Thu, Jun 25, 2015 at 7:41 PM, Mihail Vieru wrote:
> Hi,
>
> I get an ArrayIndexOutOfBoundsException when I run my job from a JAR in
> the CLI.
> This doesn't occur in the IDE.
>
> I've bui
Hi Mihail,
the exception was thrown from
*graphdistance.APSP$InitVerticesMapper.map(APSP.java:74)*. I guess that is
code written by you or a library you are using.
Maybe the data you are using on the cluster is different from your local
test data?
Best,
Robert
On Thu, Jun 25, 2015 at 7:41 PM, Mi