Hi everyone,

I have a standalone Spark 2.4.0 cluster (the without-hadoop build) with both
*spark.executor.userClassPathFirst* and *spark.driver.userClassPathFirst*
set to true. The cluster runs on HDP (v3.1.0) and has SPARK_DIST_CLASSPATH
set to $(hadoop classpath). My application fails to run because of the slf4j
classes that I myself pass to the driver and executors.
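
For reference, the relevant configuration looks roughly like this (a sketch;
on my cluster these live in the standard conf/ files):

# conf/spark-defaults.conf
spark.driver.userClassPathFirst    true
spark.executor.userClassPathFirst  true

# conf/spark-env.sh
export SPARK_DIST_CLASSPATH=$(hadoop classpath)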


*How I submit my job:*

./spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0 \
  --master $SPARK_MASTER \
  --class $MAIN_CLASS \
  --driver-class-path $FAT_JAR \
  $FAT_JAR

*Exception:*

Exception in thread "main" java.lang.LinkageError: loader constraint violation: when resolving method "org.slf4j.impl.StaticLoggerBinder.getLoggerFactory()Lorg/slf4j/ILoggerFactory;" the class loader (instance of org/apache/spark/util/ChildFirstURLClassLoader) of the current class, org/slf4j/LoggerFactory, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for the method's defining class, org/slf4j/impl/StaticLoggerBinder, have different Class objects for the type org/slf4j/ILoggerFactory used in the signature
        at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:418)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
        at com.tap30.combine.Launcher$.<init>(Launcher.scala:17)
        at com.tap30.combine.Launcher$.<clinit>(Launcher.scala)
        at com.tap30.combine.Launcher.main(Launcher.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

As I read the error, the slf4j-api classes in my fat jar are loaded by
Spark's child-first class loader, while *StaticLoggerBinder* comes in from
the Hadoop lib directory via the JVM's app class loader, so each loader
defines its own *org.slf4j.ILoggerFactory*. I've used *jinfo* to extract the
slf4j jars loaded on both the driver and the executor (command sketched
after the list), which are:


/opt/spark-2.4.0-bin-without-hadoop/jars/jcl-over-slf4j-1.7.16.jar
/opt/spark-2.4.0-bin-without-hadoop/jars/jul-to-slf4j-1.7.16.jar

/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar
/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar
/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar
/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar
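
The command was along these lines (<pid> being the driver or executor JVM):

# sketch: dump the JVM's system properties and pull the slf4j entries
# out of java.class.path
jinfo <pid> | tr ':' '\n' | grep slf4j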

But spark-sql-kafka depends on kafka-clients v2.0.0, which in turn uses
slf4j v1.7.25, and with child-first loading that makes things go wrong. How
can I get around this issue?
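
In case it helps, a quick way to check whether the fat jar bundles its own
slf4j classes (a sketch):

# if this prints anything, the fat jar carries its own slf4j copy, which the
# child-first loader prefers over the cluster's slf4j-log4j12 binding
unzip -l "$FAT_JAR" | grep -i slf4j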
-- 

Moein Hosseini
Data Engineer
mobile: +98 912 468 1859
site: www.moein.xyz
email: moein...@gmail.com
linkedin: https://www.linkedin.com/in/moeinhm
twitter: https://twitter.com/moein7tl
