I may have found the answer in the SqlParser.scala file.

Looks like the syntax Spark uses for INSERT is different from what we normally 
use for MySQL; if I read the parser right, it only accepts the 
INSERT INTO TABLE <table> <select> form, not a VALUES list.

I hope someone can confirm this. I would also appreciate it if there is a SQL 
reference list available.

Sent from my iPhone

On 21 Jul 2015, at 9:21 pm, "Jack Yang" <j...@uow.edu.au> wrote:

No, I did not use HiveContext at this stage.

I am talking about the embedded SQL syntax for plain Spark SQL.

Thanks, mate.

On 21 Jul 2015, at 6:13 pm, "Terry Hole" <hujie.ea...@gmail.com> wrote:

Jack,

You can refer to the Hive SQL syntax if you use HiveContext: 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML
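
For reference, a minimal sketch of how that looks (assuming your Spark build 
includes Hive support; the table names are taken from your example below):

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc) // sc is your existing SparkContext
// HiveContext uses the HiveQL parser, so the DML in the manual above applies:
hiveContext.sql("INSERT INTO TABLE newStu SELECT * FROM otherStu")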

Thanks!
-Terry

That works! Thanks.
Can I ask you one further question?
How does Spark SQL support insertion?

That is to say, if I run:
sqlContext.sql("insert into newStu values ('10', 'aa', 1)")

the error is:
failure: ``table'' expected but identifier newStu found
insert into newStu values ('10', aa, 1)

but if I did:
sqlContext.sql(s"insert into Table newStu select * from otherStu")
that works.
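
Incidentally, a workaround that seems to follow from this (a minimal, untested 
sketch against the 1.x API; the Stu schema and the temp table name are my own 
invention) is to wrap the literal row in a DataFrame, register it as a 
temporary table, and go through the INSERT INTO TABLE ... SELECT form that the 
parser does accept:

case class Stu(id: String, name: String, score: Int) // hypothetical schema
import sqlContext.implicits._

// Put the literal values into a one-row DataFrame and expose it as a temp table.
val oneRow = sc.parallelize(Seq(Stu("10", "aa", 1))).toDF()
oneRow.registerTempTable("newRow")

// This is the form the parser accepts, as noted above.
sqlContext.sql("insert into table newStu select * from newRow")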

Is there any document addressing that?


Best regards,
Jack


From: Terry Hole <hujie.ea...@gmail.com>
Sent: Tuesday, 21 July 2015 4:17 PM
To: Jack Yang; user@spark.apache.org
Subject: Re: standalone to connect mysql

Maybe you can try: spark-submit --class "sparkwithscala.SqlApp"  --jars 
/home/lib/mysql-connector-java-5.1.34.jar --master spark://hadoop1:7077 
/home/myjar.jar
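
As I understand it, --driver-class-path only puts the connector on the 
driver's classpath; in local[4] the driver and executors share one JVM, so 
that was enough, but in standalone mode the executors are separate JVMs on 
the worker machines, and --jars (or spark.executor.extraClassPath) is what 
gets the jar onto their classpath. If that alone does not help, another thing 
to try (a sketch I have not verified on your version; the table name is an 
assumption) is naming the driver class explicitly in the JDBC options, so the 
lookup does not go through DriverManager on the executors:

val stuDF = sqlContext.read.format("jdbc").options(Map(
  "url"     -> "jdbc:mysql://hadoop1:3306/sparkMysqlDB?user=root&password=root",
  "dbtable" -> "newStu", // assumed table name
  "driver"  -> "com.mysql.jdbc.Driver" // register the driver class explicitly
)).load()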

Thanks!
-Terry
Hi there,

I would like to use Spark to access the data in MySQL. So first I tried to 
run the program using:
spark-submit --class "sparkwithscala.SqlApp" --driver-class-path 
/home/lib/mysql-connector-java-5.1.34.jar --master local[4] /home/myjar.jar

That returned the correct results. Then I tried the standalone version using:
spark-submit --class "sparkwithscala.SqlApp" --driver-class-path 
/home/lib/mysql-connector-java-5.1.34.jar --master spark://hadoop1:7077 
/home/myjar.jar
(I have mysql-connector-java-5.1.34.jar on all worker nodes.)
and the error is:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to 
stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost 
task 0.3 in stage 0.0 (TID 3, 192.168.157.129): java.sql.SQLException: No 
suitable driver found for 
jdbc:mysql://hadoop1:3306/sparkMysqlDB?user=root&password=root

I also found a similar problem reported before at 
https://jira.talendforge.org/browse/TBD-2244.

Is this a bug to be fixed later, or am I missing something?



Best regards,
Jack
