I am not sure about the root cause, but it seems that you could force the
default NIO-based transport as a workaround [1].
Add -Denv.java.opts="-Dcom.datastax.driver.FORCE_NIO=true" to your
submission command.
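Attached to an application-mode submission, the flag would look something
like this (a sketch only; the command shape and file names are borrowed
from the scripts further down the thread, not from your setup):

$FLINK_HOME/bin/flink run-application -t yarn-application \
  -Denv.java.opts="-Dcom.datastax.driver.FORCE_NIO=true" \
  myjar.jar myconf.conf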
[1].
https://stackoverflow.com/questions/48762857/java-lang-classcastexception-netty-fail
Hi Yang,
Thanks for the detailed explanation!
> Could you add "-Dyarn.per-job-cluster.include-user-jar=DISABLED" to your
> command and have a try? After that, the user jars will no longer be
> included in the system classpath.
I tried the following as you suggested:
#!/bin/env bash
export FLINK_CONF_DIR=./conf
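# (sketch of the cut-off remainder, assuming it was the application-mode
# submission with the suggested option added; file names as in the
# original per-job script below)
export HADOOP_CLASSPATH=`hadoop classpath`
$FLINK_HOME/bin/flink run-application -t yarn-application \
  -Dyarn.per-job-cluster.include-user-jar=DISABLED \
  myjar.jar myconf.conf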
Hi Dongwon,
For application mode, the job submission happens on the JobManager side: we
use an embedded client to submit the job, so the user jar is added to the
distributed cache. When a task is deployed to a TaskManager, the jar is
downloaded again and run in the user classloader, even though we also ship
it on the system classpath. The same classes can therefore be loaded twice,
once by the system classloader and once by the user classloader, which is
the situation that produces a ClassCastException between seemingly
identical types.
Robert,
> But if Kafka is really only available in the user jar, then this error
> still should not occur.
I think so too; it should not occur.
I scanned all the jar files on the classpath using `jar tf`, but no jar
contains org.apache.kafka.common.serialization.Deserializer with a
different version.
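For reference, such a scan can be done roughly like this (a sketch, not
the exact commands I ran; the lib directory is a placeholder for wherever
your classpath jars actually live):

for j in $FLINK_HOME/lib/*.jar; do
  if jar tf "$j" | grep -q 'org/apache/kafka/common/serialization/Deserializer.class'; then
    echo "$j"
  fi
done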
I just added the following option to the script:
-Dclassloader.parent-first-patterns.additional=org.apache.kafka.common.serialization
Now it seems to work.
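If I understand the classloading docs correctly, that pattern forces the
Kafka serialization classes to be loaded parent-first, so only a single
copy of Deserializer exists at runtime instead of one per classloader. The
full submission then looks roughly like this (a sketch; command shape and
file names as in my scripts):

$FLINK_HOME/bin/flink run-application -t yarn-application \
  -Dclassloader.parent-first-patterns.additional=org.apache.kafka.common.serialization \
  myjar.jar myconf.conf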
Why do application mode and per-job cluster mode behave differently when
it comes to classloading?
Is it a bug, or is it intended?
Best,
Hi,
I have an artifact which works perfectly fine with Per-Job Cluster Mode
with the following bash script:
#!/bin/env bash
export FLINK_CONF_DIR=./conf
export HADOOP_CLASSPATH=`hadoop classpath`
$FLINK_HOME/bin/flink run -t yarn-per-job myjar.jar myconf.conf
I tried Application Mode [1] using the same artifact, but the job fails
with a ClassCastException involving
org.apache.kafka.common.serialization.Deserializer.
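The submission script is the same except for the deployment target,
roughly along these lines (a sketch; run-application with the
yarn-application target is the standard application-mode invocation, file
names as above):

#!/bin/env bash
export FLINK_CONF_DIR=./conf
export HADOOP_CLASSPATH=`hadoop classpath`
$FLINK_HOME/bin/flink run-application -t yarn-application myjar.jar myconf.conf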