Great stuff.

 

Built the source code for 1.3.1 and generated 
spark-assembly-1.3.1-hadoop2.4.0.jar
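
For reference, a build along these lines would produce that jar (a sketch; the Maven profiles and the Hadoop version are inferred from the jar name, and -Phive is deliberately left out so no Hive classes end up in the assembly):

mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package

The assembly jar should then be under assembly/target/scala-2.10/.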

 

jar tvf spark-assembly-1.3.1-hadoop2.4.0.jar | grep hive | grep -i -v Archive

 

so no Hive classes there

 

Downloaded prebuilt spark 1.3.1 and started master and slave OK
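
For completeness, the standalone daemons were brought up with the scripts from the prebuilt tarball, roughly as follows (a sketch; the install path matches the log directory shown further down, and the master URL spark://rhes564:7077 is an assumption based on the defaults):

cd /usr/lib/spark-1.3.1-bin-hadoop2.6
./sbin/start-master.sh
./sbin/start-slaves.sh    # reads conf/slaves and attaches the workers to the master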

 

Started Hive as usual in debug mode and ran a simple select count(1) from t;
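
In case it matters, "as usual" means roughly this (the debug logger flag and the engine setting are the standard ones; t is just a small test table):

hive --hiveconf hive.root.logger=DEBUG,console
hive> set hive.execution.engine=spark;
hive> select count(1) from t;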

 

Spark app started OK 

 

hduser@rhes564::/usr/lib/spark-1.3.1-bin-hadoop2.6/logs >

 

-rw-r--r-- 1 hduser hadoop  31562 Dec  5 21:18 
spark-hduser-org.apache.spark.deploy.master.Master-1-rhes564.out

-rw-r--r-- 1 hduser hadoop  19684 Dec  5 21:18 
spark-hduser-org.apache.spark.deploy.worker.Worker-1-rhes564.out

-rwxrwx--- 1 hduser hadoop  60491 Dec  5 21:18 
app-20151205211814-0005.inprogress

 

Now I get a native library error:

 

5/12/05 21:18:16 [stderr-redir-1]: INFO client.SparkClientImpl: Caused by: 
java.lang.UnsatisfiedLinkError: /tmp/snappy-1.0.5-libsnappyjava.so: 
/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by 
/tmp/snappy-1.0.5-libsnappyjava.so)
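
As I read it, snappy-java extracts its bundled native library to /tmp, and that .so needs GLIBCXX_3.4.9 (the GCC 4.4 C++ runtime), which the libstdc++ on this box does not provide. Assuming the snappy load comes from Spark's IO/shuffle compression, one possible interim workaround is to switch Spark away from snappy, e.g. in conf/spark-defaults.conf:

spark.io.compression.codec    lzf

The proper fix is a libstdc++.so.6 that carries GLIBCXX_3.4.9, i.e. the GCC 4.4 (or later) runtime.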

 

 

strings /usr/lib/libstdc++.so.6 | grep GLIBC

GLIBCXX_3.4

GLIBCXX_3.4.1

GLIBCXX_3.4.2

GLIBCXX_3.4.3

GLIBCXX_3.4.4

GLIBCXX_3.4.5

GLIBCXX_3.4.6

GLIBCXX_3.4.7

GLIBCXX_3.4.8

GLIBC_2.3

GLIBC_2.0

GLIBC_2.3.2

GLIBC_2.4

GLIBC_2.1

GLIBC_2.1.3

GLIBC_2.2

GLIBCXX_FORCE_NEW
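
Note that the error refers to /usr/lib64/libstdc++.so.6 while the check above was against /usr/lib/libstdc++.so.6, so it is worth repeating it against the 64-bit library:

strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX

If that shows the same list, the highest entry is GLIBCXX_3.4.8, which as far as I know corresponds to GCC 4.3; GLIBCXX_3.4.9 first appears with GCC 4.4, hence the "version not found" error.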

 

Looking into sorting this out. 

 

Mich Talebzadeh

 

Sybase ASE 15 Gold Medal Award 2008

A Winning Strategy: Running the most Critical Financial Data on ASE 15

http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf

Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE 15", 
ISBN 978-0-9563693-0-7. 

co-author "Sybase Transact SQL Guidelines Best Practices", ISBN 
978-0-9759693-0-4

Publications due shortly:

Complex Event Processing in Heterogeneous Environments, ISBN: 978-0-9563693-3-8

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one 
out shortly

 

http://talebzadehmich.wordpress.com

 

NOTE: The information in this email is proprietary and confidential. This 
message is for the designated recipient only, if you are not the intended 
recipient, you should destroy it immediately. Any information in this message 
shall not be understood as given or endorsed by Peridale Technology Ltd, its 
subsidiaries or their employees, unless expressly so stated. It is the 
responsibility of the recipient to ensure that this email is virus free, 
therefore neither Peridale Ltd, its subsidiaries nor their employees accept any 
responsibility.

 

From: Xuefu Zhang [mailto:xzh...@cloudera.com] 
Sent: 04 December 2015 17:47
To: user@hive.apache.org
Subject: Re: FW: Getting error when trying to start master node after building 
spark 1.3

 

1.3.1 is what is officially supported by Hive 1.2.1. 1.3.0 might be okay too.

 

On Fri, Dec 4, 2015 at 9:34 AM, Mich Talebzadeh <m...@peridale.co.uk> wrote:

Appreciated the response. Just to clarify: the build will be Spark 1.3 and the pre-built download will be 1.3. This is the version I am attempting to make work.

 

Thanks

 

Mich

 

From: Xuefu Zhang [mailto:xzh...@cloudera.com]
Sent: 04 December 2015 17:03
To: user@hive.apache.org
Subject: Re: FW: Getting error when trying to start master node after building 
spark 1.3

 

My last attempt:

1. Make sure the spark-assembly.jar from your own build doesn't contain Hive classes, using the "jar -tf spark-assembly.jar | grep hive" command. Copy it to Hive's lib directory. After this, you can forget everything about this build.

2. Download prebuilt tarball from Spark download site and deploy it. Forget 
about Hive for a moment. Make sure the cluster comes up and functions.

3. Unset the environment variable SPARK_HOME before you start Hive. Start Hive, and run the "set spark.home=/path/to/spark/dir" command. Then run the other commands as you did previously when trying Hive on Spark.
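
Put concretely, the three steps amount to something like this (a sketch only; the paths and the test table t are taken from the setup described earlier in the thread, and $HIVE_HOME stands in for the Hive installation directory):

jar -tf spark-assembly-1.3.1-hadoop2.4.0.jar | grep hive    # should return nothing
cp spark-assembly-1.3.1-hadoop2.4.0.jar $HIVE_HOME/lib/

# prebuilt tarball deployed separately; master and workers started from its sbin/ scripts

unset SPARK_HOME
hive
hive> set spark.home=/usr/lib/spark-1.3.1-bin-hadoop2.6;
hive> set hive.execution.engine=spark;
hive> select count(1) from t;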

 

 

 
