Solved the problem by creating the directory on HDFS before executing the SQL.
But I hit a new error when I use:
INSERT OVERWRITE LOCAL DIRECTORY '/search/odin/test' row format delimited
FIELDS TERMINATED BY '\t' select vrid, query, url, loc_city from
custom.common_wap_vr where logdate >= '2018073000' an
Spark needs the directory to be created first, while Hive creates the
directory automatically.
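Based on the workaround described above, the fix might look like the sketch below: pre-create the target directory on HDFS before submitting the query file. The directory path and the query file name here are illustrative assumptions, not taken from the original thread.

```shell
# Sketch of the workaround, assuming HDFS CLI access on the client node.
# Pre-create the output directory that INSERT OVERWRITE DIRECTORY will write to
# (Hive would create it implicitly; Spark as described above does not).
hdfs dfs -mkdir -p /tmp/test-insert-spark

# Then run the SQL file (hypothetical file name) as in the original report.
spark-sql --master yarn --deploy-mode client -f insert_overwrite.sql
```

For a LOCAL DIRECTORY target, the equivalent step would be `mkdir -p /search/odin/test` on the driver node instead of `hdfs dfs -mkdir -p`.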
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
Spark 2.3.0 supports INSERT OVERWRITE DIRECTORY to write query results
directly to the filesystem.
I have hit a problem with this SQL:
"INSERT OVERWRITE DIRECTORY '/tmp/test-insert-spark' select vrid, query,
url, loc_city from custom.common_wap_vr where logdate >= '2018073000' and
logdate <= '20
I run "spark-sql --master yarn --deploy-mode client -f 'SQLs'" in a shell.
The application gets stuck when the AM goes down and restarts on another node.
It seems the driver waits for the next SQL statement. Is this a bug? In my
opinion, the application should either re-execute the failed SQL or exit with
a failure when the