Thanks for the prompt response, Jeff.

Yep, I was able to build Zeppelin without -Dhadoop.version=2.6.0-cdh5.12.0
.. interesting. I previously had to pass that flag, as Zeppelin wouldn't
build without it (on older versions).
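
In case it's useful to others, the invocation that now works for me is
essentially the one from my original mail below, just minus the
-Dhadoop.version override (the module exclusion list stays elided, as in
the original):

  /opt/maven/maven-latest/bin/mvn clean package -DskipTests -Pspark-2.2 \
      -Phadoop-2.6 -Pvendor-repo -Pscala-2.10 -Psparkr \
      -pl '!*..excluded certain modules..*' -e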

---
Might be unrelated, but now, to start a Spark interpreter, I have to have a
valid Kerberos ticket. Previously, --keytab and --principal in
SPARK_SUBMIT_OPTIONS were always sufficient.
That might be a Spark 2.2 change though, and not Zeppelin.

I had to have a valid local Kerberos ticket (on the interpreter's servers);
without one, SparkContext initialization fails with "Delegation Token can
be issued only with kerberos or web authentication" [1].
Possibly related: https://issues.apache.org/jira/browse/SPARK-19038
I can see this being very inconvenient for users.
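
For now my workaround is to make sure the user running the interpreter has
a ticket in its local credential cache before the interpreter starts. A
rough sketch (the keytab path and principal below are placeholders for
illustration; the SPARK_SUBMIT_OPTIONS line is just the same
--keytab/--principal setup mentioned above):

  # obtain a ticket in the local cache of the user that runs the interpreter
  kinit -kt /etc/security/keytabs/zeppelin.keytab zeppelin@EXAMPLE.COM

  # sanity-check that a valid ticket is now present
  klist

  # conf/zeppelin-env.sh continues to pass the same credentials to spark-submit
  export SPARK_SUBMIT_OPTIONS="--keytab /etc/security/keytabs/zeppelin.keytab --principal zeppelin@EXAMPLE.COM"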



[1]

ERROR [2017-08-27 19:46:17,597] ({pool-2-thread-5} Logging.scala[logError]:91) - Error initializing SparkContext.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:7501)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:548)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getDelegationToken(AuthorizationProviderProxyClientProtocol.java:663)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:981)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)

        at org.apache.hadoop.ipc.Client.call(Client.java:1502)
        at org.apache.hadoop.ipc.Client.call(Client.java:1439)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy12.getDelegationToken(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:928)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:260)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy13.getDelegationToken(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1085)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1508)
        at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:546)
        at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:524)
        at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2292)
        at org.apache.spark.deploy.yarn.security.HadoopFSCredentialProvider$$anonfun$obtainCredentials$1.apply(HadoopFSCredentialProvider.scala:53)
        at org.apache.spark.deploy.yarn.security.HadoopFSCredentialProvider$$anonfun$obtainCredentials$1.apply(HadoopFSCredentialProvider.scala:50)
        at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
        at org.apache.spark.deploy.yarn.security.HadoopFSCredentialProvider.obtainCredentials(HadoopFSCredentialProvider.scala:50)
        at org.apache.spark.deploy.yarn.security.ConfigurableCredentialManager$$anonfun$obtainCredentials$2.apply(ConfigurableCredentialManager.scala:82)
        at org.apache.spark.deploy.yarn.security.ConfigurableCredentialManager$$anonfun$obtainCredentials$2.apply(ConfigurableCredentialManager.scala:80)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
        at org.apache.spark.deploy.yarn.security.ConfigurableCredentialManager.obtainCredentials(ConfigurableCredentialManager.scala:80)
        at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:371)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:816)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:40)
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:35)
        at org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:399)
        at org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:277)
        at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:870)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
        at org.apache.zeppelin.spark.PySparkInterpreter.getSparkInterpreter(PySparkInterpreter.java:586)
        at org.apache.zeppelin.spark.PySparkInterpreter.createGatewayServerAndStartScript(PySparkInterpreter.java:218)
        at org.apache.zeppelin.spark.PySparkInterpreter.open(PySparkInterpreter.java:163)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:499)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:181)
        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)






-- 
Ruslan Dautkhanov

On Sun, Aug 27, 2017 at 6:34 PM, Jeff Zhang <zjf...@gmail.com> wrote:

>
> Were you able to run this command before?
>
> It looks like a CDH issue; I can build it successfully without specifying
> hadoop.version, using only -Phadoop-2.6
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com> wrote on Mon, Aug 28, 2017 at 4:26 AM:
>
>> Building from a current Zeppelin snapshot fails with an
>> org.apache.maven.plugins.enforcer.DependencyConvergence error;
>> see details below.
>>
>> Build command:
>> /opt/maven/maven-latest/bin/mvn clean package -DskipTests -Pspark-2.2
>> -Dhadoop.version=2.6.0-cdh5.12.0 -Phadoop-2.6 -Pvendor-repo -Pscala-2.10
>> -Psparkr -pl '!*..excluded certain modules..*' -e
>>
>> maven 3.5.0
>>> jdk 1.8.0_141
>>> RHEL 7.3
>>> npm.x86_64                       1:3.10.10-1.6.11.1.1.el7
>>> nodejs.x86_64                    1:6.11.1-1.el7             @epel
>>> latest zeppelin snapshot
>>
>>
>> Any ideas? This is my first attempt to build on RHEL 7 / JDK 8 .. I've
>> never seen this problem before.
>>
>> Thanks,
>> Ruslan
>>
>>
>>
>> [INFO] Scanning for projects...
>> [WARNING]
>> [WARNING] Some problems were encountered while building the effective model for org.apache.zeppelin:zeppelin-spark-dependencies_2.10:jar:0.8.0-SNAPSHOT
>> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin com.googlecode.maven-download-plugin:download-maven-plugin @ line 940, column 15
>> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin com.googlecode.maven-download-plugin:download-maven-plugin @ line 997, column 15
>> [WARNING]
>> [WARNING] Some problems were encountered while building the effective model for org.apache.zeppelin:zeppelin-spark_2.10:jar:0.8.0-SNAPSHOT
>> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin org.scala-tools:maven-scala-plugin @ line 467, column 15
>> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin org.apache.maven.plugins:maven-surefire-plugin @ line 475, column 15
>> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin org.apache.maven.plugins:maven-compiler-plugin @ line 486, column 15
>> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin org.scala-tools:maven-scala-plugin @ line 496, column 15
>> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin org.apache.maven.plugins:maven-surefire-plugin @ line 504, column 15
>> [WARNING]
>> [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
>> [WARNING]
>> [WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
>> [WARNING]
>> [WARNING] The project org.apache.zeppelin:zeppelin-web:war:0.8.0-SNAPSHOT uses prerequisites which is only intended for maven-plugin projects but not for non maven-plugin projects. For such purposes you should use the maven-enforcer-plugin. See https://maven.apache.org/enforcer/enforcer-rules/requireMavenVersion.html
>>
>>
>> ... [skip]
>>
>> [INFO] ------------------------------------------------------------------------
>> [INFO] Building Zeppelin: Zengine 0.8.0-SNAPSHOT
>> [INFO] ------------------------------------------------------------------------
>> [INFO]
>> [INFO] --- maven-clean-plugin:2.6.1:clean (default-clean) @ zeppelin-zengine ---
>> [INFO]
>> [INFO] --- flatten-maven-plugin:1.0.0:clean (flatten.clean) @ zeppelin-zengine ---
>> [INFO]
>> [INFO] --- maven-checkstyle-plugin:2.13:check (checkstyle-fail-build) @ zeppelin-zengine ---
>> [INFO]
>> [INFO]
>> [INFO] --- maven-resources-plugin:2.7:copy-resources (copy-resources) @ zeppelin-zengine ---
>> [INFO] Using 'UTF-8' encoding to copy filtered resources.
>> [INFO] Copying 17 resources
>> [INFO]
>> [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce) @ zeppelin-zengine ---
>> [WARNING]
>> Dependency convergence error for com.fasterxml.jackson.core:jackson-core:2.5.3
>> paths to dependency are:
>> +-org.apache.zeppelin:zeppelin-zengine:0.8.0-SNAPSHOT
>>   +-com.amazonaws:aws-java-sdk-s3:1.10.62
>>     +-com.amazonaws:aws-java-sdk-core:1.10.62
>>       +-com.fasterxml.jackson.core:jackson-databind:2.5.3
>>         +-com.fasterxml.jackson.core:jackson-core:2.5.3
>> and
>> +-org.apache.zeppelin:zeppelin-zengine:0.8.0-SNAPSHOT
>>   +-org.apache.hadoop:hadoop-client:2.6.0-cdh5.12.0
>>     +-org.apache.hadoop:hadoop-aws:2.6.0-cdh5.12.0
>>       +-com.fasterxml.jackson.core:jackson-core:2.2.3
>>
>> [WARNING]
>> Dependency convergence error for org.codehaus.jackson:jackson-mapper-asl:1.9.13
>> paths to dependency are:
>> +-org.apache.zeppelin:zeppelin-zengine:0.8.0-SNAPSHOT
>>   +-com.github.eirslett:frontend-maven-plugin:1.3
>>     +-com.github.eirslett:frontend-plugin-core:1.3
>>       +-org.codehaus.jackson:jackson-mapper-asl:1.9.13
>> and
>> +-org.apache.zeppelin:zeppelin-zengine:0.8.0-SNAPSHOT
>>   +-org.apache.hadoop:hadoop-client:2.6.0-cdh5.12.0
>>     +-org.apache.hadoop:hadoop-common:2.6.0-cdh5.12.0
>>       +-org.codehaus.jackson:jackson-mapper-asl:1.8.8
>> and
>> +-org.apache.zeppelin:zeppelin-zengine:0.8.0-SNAPSHOT
>>   +-org.apache.hadoop:hadoop-client:2.6.0-cdh5.12.0
>>     +-org.apache.hadoop:hadoop-hdfs:2.6.0-cdh5.12.0
>>       +-org.codehaus.jackson:jackson-mapper-asl:1.9.13
>>
>> ... [skipped a number of other version convergence errors for dependencies]
>>
>>
>>
