This looks suspicious, but it should actually also be a consequence of a
failure or disconnect between the TaskManager and the JobManager.
Can you send us the whole log so we can have a closer look?
Thanks,
Stephan
On Thu, May 21, 2015 at 10:59 AM, Flavio Pompermaier
wrote:
> Could it be this the ma
Could this be the main failure reason?
09:45:58,650 WARN akka.remote.ReliableDeliverySupervisor
- Association with remote system [akka.tcp://flink@192.168.234.83:6123]
has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
09:45:58,831 WARN Remoting
- Tried to a
Hi!
Interruptions usually happen as part of cancelling. Has the job failed for
some other reason (and that exception is only a follow-up)?
Or is this the root cause of the failure?
Stephan
On Thu, May 21, 2015 at 9:55 AM, Flavio Pompermaier
wrote:
> Now I'm able to run my job but after a whi
Now I'm able to run my job but after a while I get this other exception:
09:43:49,383 INFO org.apache.flink.runtime.taskmanager.TaskManager
- Unregistering task and sending final execution state FINISHED to
JobManager for task CHAIN DataSource (at
createInput(ExecutionEnvironment.java:490)
(
Thank you Stephan! I'll let you know tomorrow!
On May 20, 2015 7:30 PM, "Stephan Ewen" wrote:
> Hi!
>
> I pushed a fix to the master that should solve this.
>
> It probably needs a bit until the snapshot repositories are synced.
>
> Let me know if it fixed your issue!
>
> Greetings,
> Stephan
>
>
Hi!
I pushed a fix to the master that should solve this.
It probably needs a bit until the snapshot repositories are synced.
Let me know if it fixed your issue!
Greetings,
Stephan
On Wed, May 20, 2015 at 1:48 PM, Flavio Pompermaier
wrote:
> Here it is:
>
> java.lang.RuntimeException: Reques
Here it is:
java.lang.RuntimeException: Requesting the next InputSplit failed.
at
org.apache.flink.runtime.taskmanager.TaskInputSplitProvider.getNextInputSplit(TaskInputSplitProvider.java:89)
at
org.apache.flink.runtime.operators.DataSourceTask$1.hasNext(DataSourceTask.java:340)
at
org.apache.flin
This is a bug in the HadoopInputSplit. It does not follow the general class
loading rules in Flink. I think it is pretty straightforward to fix; I'll
give it a quick shot...
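For context, the general rule is that classes shipped in the user jar have to
be resolved through the user-code class loader rather than the system class
loader. A minimal sketch of the difference (the names below are illustrative,
not the actual HadoopInputSplit code):

    public class UserClassResolver {

        // Fails for classes that only exist in the user jar: the system
        // class loader sees only the cluster's classpath.
        static Class<?> resolveWithSystemLoader(String className)
                throws ClassNotFoundException {
            return Class.forName(className);
        }

        // Works: the user-code class loader also knows the classes shipped
        // with the job jar (e.g. parquet.hadoop.ParquetInputSplit).
        static Class<?> resolveWithUserCodeLoader(String className,
                ClassLoader userCodeClassLoader) throws ClassNotFoundException {
            return Class.forName(className, true, userCodeClassLoader);
        }
    }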
Can you send me the entire stack trace (where the serialization call comes
from) to verify this?
On Wed, May 20, 2015 at 12
Now I'm able to run the job but I get another exception... this time it seems
that Flink is not able to split my Parquet file:
Caused by: java.lang.ClassNotFoundException:
parquet.hadoop.ParquetInputSplit
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(UR
Yes, it could be that the jar classes and those on the cluster have been out
of sync for some days... Now I'll recompile both sides, and if I still get
the error I will change line 42 as you suggested.
Thanks Max
On Wed, May 20, 2015 at 10:53 AM, Maximilian Michels wrote:
> Hi Flavio,
>
> It would be help
Hi Flavio,
It would be helpful if we knew which class could not be found. In the
ClosureCleaner, can you change line 42 to include the class name in the
error message? Like in this example:
private static ClassReader getClassReader(Class cls) {
String className = cls.getName().replaceFirst("^
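(The example is cut off in the archive. Presumably the suggested change was
along these lines; this is a reconstruction based on the error message above,
not the exact code, and assumes the method wraps ASM's
org.objectweb.asm.ClassReader:)

    private static ClassReader getClassReader(Class<?> cls) {
        String className = cls.getName().replaceFirst("^.*\\.", "") + ".class";
        try {
            return new ClassReader(cls.getResourceAsStream(className));
        } catch (IOException e) {
            // Include the class name so the missing class is visible in the error.
            throw new RuntimeException(
                "Could not create ClassReader for class " + cls.getName() + ": " + e, e);
        }
    }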
Any insight about this..?
On Tue, May 19, 2015 at 12:49 PM, Flavio Pompermaier
wrote:
> Hi to all,
>
> I tried to run my job on a brand new Flink cluster (0.9-SNAPSHOT) from the
> web client UI using the shading strategy of the quickstart example but I
> get this exception:
>
> Caused by: java.l
Hi to all,
I tried to run my job on a brand new Flink cluster (0.9-SNAPSHOT) from the
web client using the shading strategy of the quickstart example but I get
this exception:
Caused by: java.lang.RuntimeException: Could not create ClassReader:
java.io.IOException: Class not found
at
org.apache.f