My code uses "com.typesafe.config" to read configuration values. Our code
currently uses v1.3.0, whereas Spark bundles v1.2.1 internally.

When I submit a job, the worker process invokes a method in my code but
fails, because that method is abstract in v1.2.1 whereas in v1.3.0 it is not.
The exception message is:

java.lang.IllegalAccessError: tried to access class
com.typesafe.config.impl.SimpleConfig from class
com.typesafe.config.impl.ConfigBeanImpl
        at
com.typesafe.config.impl.ConfigBeanImpl.createInternal(ConfigBeanImpl.java:40)

My spark-submit command is as follows:

spark-submit \
  --driver-class-path "config-1.3.0.jar:hadoop-aws-2.7.1.jar:aws-java-sdk-1.10.62.jar" \
  --driver-java-options "-Dconfig.file=/classes/application.conf -Dlog4j.configurationFile=/classes/log4j2.xml -XX:+UseG1GC -XX:+UseStringDeduplication" \
  --conf "spark.streaming.backpressure.enabled=true" \
  --conf "spark.executor.memory=5g" \
  --conf "spark.driver.memory=5g" \
  --conf "spark.executor.extraClassPath=./config-1.3.0.jar:./hadoop-aws-2.7.1.jar:./aws-java-sdk-1.10.62.jar" \
  --conf "spark.executor.extraJavaOptions=-Dconfig.file=./application.conf -Dlog4j.configurationFile=./classes/log4j2.xml -XX:+UseG1GC -XX:+UseStringDeduplication" \
  --class SparkRunner spark-job-0.1.1.jar
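
Since the executor JVM already has config 1.2.1 on its classpath, another
route besides classpath ordering is to shade the newer config classes into
the application jar, so the two versions can no longer collide. A minimal
sketch, assuming the job is built with sbt and the sbt-assembly plugin (the
"myshaded" prefix is illustrative):

```scala
// build.sbt -- hypothetical sketch, assuming sbt-assembly is in use.
// Rename com.typesafe.config.* inside the fat jar so the application's
// 1.3.0 classes never resolve against Spark's bundled 1.2.1 copy.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.typesafe.config.**" -> "myshaded.config.@1").inAll
)
```

Because shading only renames packages, string-valued settings such as
-Dconfig.file are unaffected, and Spark's own 1.2.1 jar can stay untouched.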

This still fails, even though the Spark worker does fetch and load the jar.
This is what I see on the worker node:

Fetching http://***/jars/config-1.3.0.jar to
/tmp/spark-cac4dfb9-bf59-49bc-ab81-e24923051c86/executor-1104e522-3fa5-4eff-8e0c-43b3c3b24c65/spark-1ce3b6bd-92f2-4652-993e-4f7054d07d21/fetchFileTemp3242938794803852920.tmp
16/04/03 13:57:20 INFO util.Utils: Copying
/tmp/spark-cac4dfb9-bf59-49bc-ab81-e24923051c86/executor-1104e522-3fa5-4eff-8e0c-43b3c3b24c65/spark-1ce3b6bd-92f2-4652-993e-4f7054d07d21/-13916990391459691836506_cache
to /var/run/spark/work/app-20160403135716-0040/1/./config-1.3.0.jar
16/04/03 13:57:20 INFO executor.Executor: Adding
file:/var/run/spark/work/app-20160403135716-0040/1/./config-1.3.0.jar to
class loader

I have tried setting "spark.executor.userClassPathFirst" to true, but then
it blows up with an error saying a different SLF4J binding was already
loaded, and the worker process crashes.
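
The usual culprit for that SLF4J clash is a second SLF4J binding packaged
into the fat jar. A hedged sketch of the build-side fix, assuming sbt (the
version numbers here are illustrative, not taken from my build):

```scala
// build.sbt -- hypothetical sketch; keep logging and Spark itself out of
// the fat jar so userClassPathFirst does not load a second SLF4J binding.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.1"  % "provided",
  "org.slf4j"        %  "slf4j-api"  % "1.7.16" % "provided"
)
```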

Has anyone run into something similar and found a way around it?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Evicting-a-lower-version-of-a-library-loaded-in-Spark-Worker-tp26664.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
