Hi,
I am trying to set up Spark 1.5.2 on Apache Hadoop 2.6 with Hive and YARN. My configuration:

spark-env.sh:
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
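The run below also warns that the hostname resolves to a loopback address and suggests setting SPARK_LOCAL_IP; a hedged variant of spark-env.sh with that addition (10.0.0.1 is just the address the warning itself reports, so treat it as an assumption):

```shell
# spark-env.sh -- existing line plus a hedged, commented-out addition
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
# Sketch only: pin the driver bind address, per the SPARK_LOCAL_IP
# warning in the log; 10.0.0.1 is the address reported there.
# export SPARK_LOCAL_IP=10.0.0.1
```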

bash_profile:
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export HADOOP_USER_CLASSPATH_FIRST=true
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export YARN_CONF_DIR=/usr/local/hadoop/etc/hadoop
#HADOOP VARIABLES END

export SPARK_HOME=/usr/local/spark
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
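One hedged observation on the profile above: Spark's bin directory is never added to the PATH, unlike Hadoop's and Hive's. That isn't strictly needed when launching ./bin/spark-shell from $SPARK_HOME, but for symmetry a sketch would be:

```shell
# Sketch: expose Spark's launcher scripts the same way as Hadoop/Hive
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
```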


When I run spark-shell:
./bin/spark-shell --master yarn-client
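The report below shows the AM container allocated with only 896 MB (including 384 MB overhead) and the driver MemoryStore at 530 MB; a variant of the same invocation with explicit sizes would look like the following sketch (the values are placeholders I have not verified, not a known fix):

```
# Hedged sketch: the same yarn-client launch with explicit memory
# settings; the sizes below are placeholders, not tested values.
./bin/spark-shell --master yarn-client \
  --driver-memory 2g \
  --executor-memory 2g \
  --num-executors 1
```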

Output:
15/12/17 22:22:07 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
15/12/17 22:22:07 INFO spark.SecurityManager: Changing view acls to: hduser
15/12/17 22:22:07 INFO spark.SecurityManager: Changing modify acls to:
hduser
15/12/17 22:22:07 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hduser); users with modify permissions: Set(hduser)
15/12/17 22:22:07 INFO spark.HttpServer: Starting HTTP Server
15/12/17 22:22:07 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/12/17 22:22:08 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:38389
15/12/17 22:22:08 INFO util.Utils: Successfully started service 'HTTP class
server' on port 38389.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java
1.8.0_66)
Type in expressions to have them evaluated.
Type :help for more information.
15/12/17 22:22:11 WARN util.Utils: Your hostname, eranw-Lenovo-Yoga-2-Pro
resolves to a loopback address: 127.0.1.1; using 10.0.0.1 instead (on
interface wlp1s0)
15/12/17 22:22:11 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind
to another address
15/12/17 22:22:11 INFO spark.SparkContext: Running Spark version 1.5.2
15/12/17 22:22:11 INFO spark.SecurityManager: Changing view acls to: hduser
15/12/17 22:22:11 INFO spark.SecurityManager: Changing modify acls to:
hduser
15/12/17 22:22:11 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hduser); users with modify permissions: Set(hduser)
15/12/17 22:22:11 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/12/17 22:22:11 INFO Remoting: Starting remoting
15/12/17 22:22:12 INFO util.Utils: Successfully started service
'sparkDriver' on port 36381.
15/12/17 22:22:12 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriver@10.0.0.1:36381]
15/12/17 22:22:12 INFO spark.SparkEnv: Registering MapOutputTracker
15/12/17 22:22:12 INFO spark.SparkEnv: Registering BlockManagerMaster
15/12/17 22:22:12 INFO storage.DiskBlockManager: Created local directory at
/tmp/blockmgr-139fac31-5f21-4c61-9575-3110d5205f7d
15/12/17 22:22:12 INFO storage.MemoryStore: MemoryStore started with
capacity 530.0 MB
15/12/17 22:22:12 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-955ef002-a802-49c6-b440-0656861f737c/httpd-2127cbe1-97d7-40a5-a96f-75216f115f00
15/12/17 22:22:12 INFO spark.HttpServer: Starting HTTP Server
15/12/17 22:22:12 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/12/17 22:22:12 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:36760
15/12/17 22:22:12 INFO util.Utils: Successfully started service 'HTTP file
server' on port 36760.
15/12/17 22:22:12 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/12/17 22:22:12 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/12/17 22:22:12 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
15/12/17 22:22:12 INFO util.Utils: Successfully started service 'SparkUI'
on port 4040.
15/12/17 22:22:12 INFO ui.SparkUI: Started SparkUI at http://10.0.0.1:4040
15/12/17 22:22:12 WARN metrics.MetricsSystem: Using default name
DAGScheduler for source because spark.app.id is not set.
15/12/17 22:22:12 INFO client.RMProxy: Connecting to ResourceManager at /
0.0.0.0:8032
15/12/17 22:22:12 INFO yarn.Client: Requesting a new application from
cluster with 1 NodeManagers
15/12/17 22:22:12 INFO yarn.Client: Verifying our application has not
requested more than the maximum memory capability of the cluster (8192 MB
per container)
15/12/17 22:22:12 INFO yarn.Client: Will allocate AM container, with 896 MB
memory including 384 MB overhead
15/12/17 22:22:12 INFO yarn.Client: Setting up container launch context for
our AM
15/12/17 22:22:12 INFO yarn.Client: Setting up the launch environment for
our AM container
15/12/17 22:22:12 INFO yarn.Client: Preparing resources for our AM container
15/12/17 22:22:13 INFO yarn.Client: Uploading resource
file:/usr/local/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar ->
hdfs://localhost:54310/user/hduser/.sparkStaging/application_1450383657924_0003/spark-assembly-1.5.2-hadoop2.6.0.jar
15/12/17 22:22:15 INFO yarn.Client: Uploading resource
file:/tmp/spark-955ef002-a802-49c6-b440-0656861f737c/__spark_conf__3605678380745697769.zip
->
hdfs://localhost:54310/user/hduser/.sparkStaging/application_1450383657924_0003/__spark_conf__3605678380745697769.zip
15/12/17 22:22:15 INFO spark.SecurityManager: Changing view acls to: hduser
15/12/17 22:22:15 INFO spark.SecurityManager: Changing modify acls to:
hduser
15/12/17 22:22:15 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hduser); users with modify permissions: Set(hduser)
15/12/17 22:22:15 INFO yarn.Client: Submitting application 3 to
ResourceManager
15/12/17 22:22:15 INFO impl.YarnClientImpl: Submitted application
application_1450383657924_0003
15/12/17 22:22:16 INFO yarn.Client: Application report for
application_1450383657924_0003 (state: ACCEPTED)
15/12/17 22:22:16 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1450383735512
final status: UNDEFINED
tracking URL:
http://eranw-Lenovo-Yoga-2-Pro:8088/proxy/application_1450383657924_0003/
user: hduser
15/12/17 22:22:17 INFO yarn.Client: Application report for
application_1450383657924_0003 (state: ACCEPTED)
15/12/17 22:22:18 INFO yarn.Client: Application report for
application_1450383657924_0003 (state: ACCEPTED)
15/12/17 22:22:19 INFO yarn.Client: Application report for
application_1450383657924_0003 (state: ACCEPTED)
15/12/17 22:22:20 INFO yarn.Client: Application report for
application_1450383657924_0003 (state: ACCEPTED)
15/12/17 22:22:21 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://
sparkYarnAM@10.0.0.1:34719/user/YarnAM#-313490512])
15/12/17 22:22:21 INFO cluster.YarnClientSchedulerBackend: Add WebUI
Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,
Map(PROXY_HOSTS -> eranw-Lenovo-Yoga-2-Pro, PROXY_URI_BASES ->
http://eranw-Lenovo-Yoga-2-Pro:8088/proxy/application_1450383657924_0003),
/proxy/application_1450383657924_0003
15/12/17 22:22:21 INFO ui.JettyUtils: Adding filter:
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/12/17 22:22:21 INFO yarn.Client: Application report for
application_1450383657924_0003 (state: RUNNING)
15/12/17 22:22:21 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.0.0.1
ApplicationMaster RPC port: 0
queue: default
start time: 1450383735512
final status: UNDEFINED
tracking URL:
http://eranw-Lenovo-Yoga-2-Pro:8088/proxy/application_1450383657924_0003/
user: hduser
15/12/17 22:22:21 INFO cluster.YarnClientSchedulerBackend: Application
application_1450383657924_0003 has started running.
15/12/17 22:22:21 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 33155.
15/12/17 22:22:21 INFO netty.NettyBlockTransferService: Server created on
33155
15/12/17 22:22:21 INFO storage.BlockManagerMaster: Trying to register
BlockManager
15/12/17 22:22:21 INFO storage.BlockManagerMasterEndpoint: Registering
block manager 10.0.0.1:33155 with 530.0 MB RAM, BlockManagerId(driver,
10.0.0.1, 33155)
15/12/17 22:22:21 INFO storage.BlockManagerMaster: Registered BlockManager
15/12/17 22:22:22 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster has disassociated: 10.0.0.1:34719
15/12/17 22:22:22 WARN remote.ReliableDeliverySupervisor: Association with
remote system [akka.tcp://sparkYarnAM@10.0.0.1:34719] has failed, address
is now gated for [5000] ms. Reason: [Disassociated]
15/12/17 22:22:22 INFO scheduler.EventLoggingListener: Logging events to
hdfs://localhost:54310/spark-log/application_1450383657924_0003
15/12/17 22:22:22 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster has disassociated: 10.0.0.1:34719
15/12/17 22:22:25 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://
sparkYarnAM@10.0.0.1:39353/user/YarnAM#1861872417])
15/12/17 22:22:25 INFO cluster.YarnClientSchedulerBackend: Add WebUI
Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,
Map(PROXY_HOSTS -> eranw-Lenovo-Yoga-2-Pro, PROXY_URI_BASES ->
http://eranw-Lenovo-Yoga-2-Pro:8088/proxy/application_1450383657924_0003),
/proxy/application_1450383657924_0003
15/12/17 22:22:25 INFO ui.JettyUtils: Adding filter:
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/12/17 22:22:30 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster has disassociated: 10.0.0.1:39353
15/12/17 22:22:30 WARN remote.ReliableDeliverySupervisor: Association with
remote system [akka.tcp://sparkYarnAM@10.0.0.1:39353] has failed, address
is now gated for [5000] ms. Reason: [Disassociated]
15/12/17 22:22:30 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster has disassociated: 10.0.0.1:39353
15/12/17 22:22:30 ERROR cluster.YarnClientSchedulerBackend: Yarn
application has already exited with state FINISHED!
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/metrics/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/api,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/static,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/threadDump,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/environment/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/environment,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/rdd,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/pool/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/pool,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/job/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/job,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/json,null}
15/12/17 22:22:30 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs,null}
15/12/17 22:22:31 INFO ui.SparkUI: Stopped Spark web UI at
http://10.0.0.1:4040
15/12/17 22:22:31 INFO scheduler.DAGScheduler: Stopping DAGScheduler
15/12/17 22:22:31 INFO cluster.YarnClientSchedulerBackend: Shutting down
all executors
15/12/17 22:22:31 INFO cluster.YarnClientSchedulerBackend: Asking each
executor to shut down
15/12/17 22:22:31 INFO cluster.YarnClientSchedulerBackend: Stopped
15/12/17 22:22:36 INFO spark.MapOutputTrackerMasterEndpoint:
MapOutputTrackerMasterEndpoint stopped!
15/12/17 22:22:36 INFO storage.MemoryStore: MemoryStore cleared
15/12/17 22:22:36 INFO storage.BlockManager: BlockManager stopped
15/12/17 22:22:36 INFO storage.BlockManagerMaster: BlockManagerMaster
stopped
15/12/17 22:22:36 INFO spark.SparkContext: Successfully stopped SparkContext
15/12/17 22:22:36 INFO
scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
OutputCommitCoordinator stopped!
15/12/17 22:22:36 INFO remote.RemoteActorRefProvider$RemotingTerminator:
Shutting down remote daemon.
15/12/17 22:22:36 INFO remote.RemoteActorRefProvider$RemotingTerminator:
Remote daemon shut down; proceeding with flushing remote transports.
15/12/17 22:22:36 INFO remote.RemoteActorRefProvider$RemotingTerminator:
Remoting shut down.
15/12/17 22:22:42 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend
is ready for scheduling beginning after waiting
maxRegisteredResourcesWaitingTime: 30000(ms)
15/12/17 22:22:42 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
15/12/17 22:22:49 INFO hive.HiveContext: Initializing execution hive,
version 1.2.1
15/12/17 22:22:49 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
15/12/17 22:22:49 INFO client.ClientWrapper: Loaded
org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
15/12/17 22:22:49 INFO metastore.HiveMetaStore: 0: Opening raw store with
implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/12/17 22:22:49 INFO metastore.ObjectStore: ObjectStore, initialize called
15/12/17 22:22:50 INFO DataNucleus.Persistence: Property
hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/12/17 22:22:50 INFO DataNucleus.Persistence: Property
datanucleus.cache.level2 unknown - will be ignored
15/12/17 22:22:50 WARN DataNucleus.Connection: BoneCP specified but not
present in CLASSPATH (or one of dependencies)
15/12/17 22:22:50 WARN DataNucleus.Connection: BoneCP specified but not
present in CLASSPATH (or one of dependencies)
15/12/17 22:22:59 INFO metastore.ObjectStore: Setting MetaStore object pin
classes with
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/12/17 22:23:00 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:00 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:01 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:01 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:02 INFO metastore.MetaStoreDirectSql: Using direct SQL,
underlying DB is DERBY
15/12/17 22:23:02 INFO metastore.ObjectStore: Initialized ObjectStore
15/12/17 22:23:02 WARN metastore.ObjectStore: Version information not found
in metastore. hive.metastore.schema.verification is not enabled so
recording the schema version 1.2.0
15/12/17 22:23:02 WARN metastore.ObjectStore: Failed to get database
default, returning NoSuchObjectException
15/12/17 22:23:02 INFO metastore.HiveMetaStore: Added admin role in
metastore
15/12/17 22:23:02 INFO metastore.HiveMetaStore: Added public role in
metastore
15/12/17 22:23:03 INFO metastore.HiveMetaStore: No user is added in admin
role, since config is empty
15/12/17 22:23:03 INFO metastore.HiveMetaStore: 0: get_all_databases
15/12/17 22:23:03 INFO HiveMetaStore.audit: ugi=hduser ip=unknown-ip-addr
cmd=get_all_databases
15/12/17 22:23:03 INFO metastore.HiveMetaStore: 0: get_functions:
db=default pat=*
15/12/17 22:23:03 INFO HiveMetaStore.audit: ugi=hduser
ip=unknown-ip-addr cmd=get_functions:
db=default pat=*
15/12/17 22:23:03 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:09 INFO session.SessionState: Created local directory:
/tmp/eae419f8-7dad-417b-bc3e-77c048c88337_resources
15/12/17 22:23:09 INFO session.SessionState: Created HDFS directory:
/tmp/hive/hduser/eae419f8-7dad-417b-bc3e-77c048c88337
15/12/17 22:23:09 INFO session.SessionState: Created local directory:
/tmp/hduser/eae419f8-7dad-417b-bc3e-77c048c88337
15/12/17 22:23:09 INFO session.SessionState: Created HDFS directory:
/tmp/hive/hduser/eae419f8-7dad-417b-bc3e-77c048c88337/_tmp_space.db
15/12/17 22:23:09 INFO hive.HiveContext: default warehouse location is
/user/hive/warehouse
15/12/17 22:23:09 INFO hive.HiveContext: Initializing
HiveMetastoreConnection version 1.2.1 using Spark classes.
15/12/17 22:23:09 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
15/12/17 22:23:09 INFO client.ClientWrapper: Loaded
org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
15/12/17 22:23:10 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
15/12/17 22:23:10 INFO metastore.HiveMetaStore: 0: Opening raw store with
implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/12/17 22:23:10 INFO metastore.ObjectStore: ObjectStore, initialize called
15/12/17 22:23:10 INFO DataNucleus.Persistence: Property
hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/12/17 22:23:10 INFO DataNucleus.Persistence: Property
datanucleus.cache.level2 unknown - will be ignored
15/12/17 22:23:10 WARN DataNucleus.Connection: BoneCP specified but not
present in CLASSPATH (or one of dependencies)
15/12/17 22:23:10 WARN DataNucleus.Connection: BoneCP specified but not
present in CLASSPATH (or one of dependencies)
15/12/17 22:23:11 INFO metastore.ObjectStore: Setting MetaStore object pin
classes with
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/12/17 22:23:12 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:12 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:12 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:12 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:12 INFO DataNucleus.Query: Reading in results for query
"org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is
closing
15/12/17 22:23:12 INFO metastore.MetaStoreDirectSql: Using direct SQL,
underlying DB is DERBY
15/12/17 22:23:12 INFO metastore.ObjectStore: Initialized ObjectStore
15/12/17 22:23:13 INFO metastore.HiveMetaStore: Added admin role in
metastore
15/12/17 22:23:13 INFO metastore.HiveMetaStore: Added public role in
metastore
15/12/17 22:23:13 INFO metastore.HiveMetaStore: No user is added in admin
role, since config is empty
15/12/17 22:23:13 INFO metastore.HiveMetaStore: 0: get_all_databases
15/12/17 22:23:13 INFO HiveMetaStore.audit: ugi=hduser ip=unknown-ip-addr
cmd=get_all_databases
15/12/17 22:23:13 INFO metastore.HiveMetaStore: 0: get_functions:
db=default pat=*
15/12/17 22:23:13 INFO HiveMetaStore.audit: ugi=hduser
ip=unknown-ip-addr cmd=get_functions:
db=default pat=*
15/12/17 22:23:13 INFO DataNucleus.Datastore: The class
"org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as
"embedded-only" so does not have its own datastore table.
15/12/17 22:23:18 INFO session.SessionState: Created local directory:
/tmp/7f892b9d-38ef-4fab-bc44-c1afaf07f50f_resources
15/12/17 22:23:18 INFO session.SessionState: Created HDFS directory:
/tmp/hive/hduser/7f892b9d-38ef-4fab-bc44-c1afaf07f50f
15/12/17 22:23:18 INFO session.SessionState: Created local directory:
/tmp/hduser/7f892b9d-38ef-4fab-bc44-c1afaf07f50f
15/12/17 22:23:18 INFO session.SessionState: Created HDFS directory:
/tmp/hive/hduser/7f892b9d-38ef-4fab-bc44-c1afaf07f50f/_tmp_space.db
15/12/17 22:23:18 INFO repl.SparkILoop: Created sql context (with Hive
support)..
SQL context available as sqlContext.

scala>

When I then run the following code in the shell:
val data = sc.wholeTextFiles("hdfs://localhost:54310/spark-log/*")

I get the following exception:
java.lang.IllegalStateException: Cannot call methods on a stopped
SparkContext
at org.apache.spark.SparkContext.org
$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:104)
at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:2063)
at
org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:2076)
at
org.apache.spark.SparkContext.wholeTextFiles$default$2(SparkContext.scala:864)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:21)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:26)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:28)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
at $iwC$$iwC$$iwC.<init>(<console>:34)
at $iwC$$iwC.<init>(<console>:36)
at $iwC.<init>(<console>:38)
at <init>(<console>:40)
at .<init>(<console>:44)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at
org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
at org.apache.spark.repl.SparkILoop.org
$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
at
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
at
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at
scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org
$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

*What is wrong here? How should I configure Spark to use YARN and Hive
correctly? The part that puzzles me is the "Yarn application has already
exited with state FINISHED!" error during shell startup, after which the
SparkContext seems to be stopped.*
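For what it's worth, the YARN-side logs for the failed attempt (which should show why the ApplicationMaster exited) can be pulled with the command below; this assumes log aggregation is enabled and reuses the application id from the run above:

```
# Fetch the aggregated AM/executor logs for the application shown above
yarn logs -applicationId application_1450383657924_0003
```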

Eran
