> as a table alias (which you are doing). Change the alias or
> place it between backticks and you should be fine.
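A minimal sketch of the suggested fix, assuming a HiveContext named
sqlContext and the table names from the failing query in this thread
(the dept alias is added for clarity):

// backtick-escape the reserved word `user` where it is used as an alias
val df = sqlContext.sql("""
  SELECT `user`.uid, dept.name
  FROM userdb.user `user`, deptdb.dept dept
  WHERE `user`.dept_id = dept.id
""")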
>
>
> 2016-05-18 23:51 GMT+02:00 JaeSung Jun :
>
>> It's spark 1.6.1 and hive 1.2.1 (spark-sql saying "SET
>> spark.sql.hive.version=1.2.1").
>
It's spark 1.6.1 and hive 1.2.1 (spark-sql saying "SET
spark.sql.hive.version=1.2.1").
Thanks
On 18 May 2016 at 23:31, Ted Yu wrote:
> Which release of Spark / Hive are you using?
>
> Cheers
>
> On May 18, 2016, at 6:12 AM, JaeSung Jun wrote:
>
> Hi,
Hi,
I'm working on a custom data source provider, and I'm using a fully
qualified table name in the FROM clause, like the following:
SELECT user.uid, dept.name
FROM userdb.user user, deptdb.dept
WHERE user.dept_id = dept.id
and I've got the following error:
MismatchedTokenException(279!=26)
  at org.antlr.
Hi All,
I'm developing a custom data source & relation provider based on Spark 1.6.1.
Every unit test has its own SparkContext, and each runs successfully when
run one by one.
But when running in sbt (sbt test), an error pops up when initializing the
SparkContext, like the following:
org.apache.spark.rpc.
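This usually happens because sbt runs test suites in parallel inside a
single JVM, while Spark 1.6 allows only one active SparkContext per JVM.
The usual fix is to run suites sequentially (parallelExecution in Test :=
false in build.sbt) and to have each suite stop its context when done. A
minimal sketch of the latter, assuming ScalaTest (suite and app names are
illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, FunSuite}

class MyProviderSuite extends FunSuite with BeforeAndAfterAll {
  @transient private var sc: SparkContext = _

  override def beforeAll(): Unit = {
    // local[2] keeps the test self-contained; no cluster required
    sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("provider-test"))
  }

  override def afterAll(): Unit = {
    // release the context so the next suite can create its own
    if (sc != null) sc.stop()
  }

  test("smoke test") {
    assert(sc.parallelize(1 to 3).count() === 3)
  }
}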
Hi,
I'm currently working on an Iterable-valued RDD, like this:
val keyValueIterableRDD: RDD[(CaseClass1, Iterable[CaseClass2])] = buildRDD(...)
If there is only one unique key and the Iterable is big enough, would this
Iterable be partitioned across all executors, like the following?
(executor1)
(xxx, it
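For what it's worth, if buildRDD is based on groupByKey (the thread doesn't
show its body, so this shape is an assumption), the answer is no: all values
for a given key are materialized in a single partition on a single executor,
so one huge key is not spread across the cluster. A sketch:

import org.apache.spark.rdd.RDD

case class CaseClass1(id: Int)
case class CaseClass2(value: Int)

// hypothetical buildRDD: groupByKey collects ALL values of a key into one
// partition, so a single giant key lives entirely on one executor
def buildRDD(pairs: RDD[(CaseClass1, CaseClass2)]): RDD[(CaseClass1, Iterable[CaseClass2])] =
  pairs.groupByKey()

When per-key data may not fit on one executor, reduceByKey or aggregateByKey,
which combine values without materializing the whole Iterable, are the usual
alternatives.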
Hi,
I'm working on custom SQL processing on top of Spark SQL, and I'm upgrading
it to Spark 1.4.1.
I've got an error when multiple test suites access the Hive metastore at
the same time, like:
Cause: org.apache.derby.impl.jdbc.EmbedSQLException: Another instance of
Derby may have already booted the database
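One workaround (my assumption, not from this thread) is to give each JVM its
own embedded-Derby metastore directory before the HiveContext is created,
since HiveConf picks up system properties that match Hive configuration names:

import java.nio.file.Files
import org.apache.spark.sql.hive.HiveContext

// point this JVM's embedded Derby metastore at a unique temp directory
val metastorePath = Files.createTempDirectory("metastore_db").toFile.getAbsolutePath
System.setProperty("javax.jdo.option.ConnectionURL",
  s"jdbc:derby:;databaseName=$metastorePath;create=true")

val hiveContext = new HiveContext(sc) // assumes an existing SparkContext sc

Running the suites sequentially also avoids the collision, since embedded
Derby allows only one process (and class loader) to boot a given database.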
As long as a JDBC driver is provided, any database can be used with the
JDBC data source provider.
You can provide the driver class in the OPTIONS field, like the following:
CREATE TEMPORARY TABLE jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:oracle:thin:@myhost:1521:orcl",
  driver "oracle.jdbc.driver.OracleDriver"
)
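For completeness, the same source can be loaded through the DataFrame
reader API in 1.4+ (the dbtable value below is a made-up placeholder):

val jdbcDF = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@myhost:1521:orcl")
  .option("driver", "oracle.jdbc.driver.OracleDriver")
  .option("dbtable", "mytable") // hypothetical table name
  .load()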
> https://eradiating.wordpress.com/2015/04/17/using-spark-data-sources-to-load-data-from-postgresql/
>
> --- Original Message ---
>
> From: "JaeSung Jun"
> Sent: April 21, 2015 1:05 AM
> To: dev@spark.apache.org
> Subject: Can't find postgresql jdbc driver when using --jars
Hi,
I tried to get an external database table running on PostgreSQL.
I've got a java.lang.ClassNotFoundException even though I added the driver
jar using the --jars option, like the following:
Is it a class loader hierarchy problem, or does anyone have an idea?
thanks
-
spark-sql --jars ../lib/postgresql-9.
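A frequently-cited cause: java.sql.DriverManager loads drivers through the
primordial class loader, which cannot see jars added via --jars, so the
driver jar must also be on the driver's class path. A sketch of the
workaround (the jar name is a placeholder, abbreviated as in the original
command):

spark-sql --driver-class-path ../lib/postgresql-9.x.jar --jars ../lib/postgresql-9.x.jar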
spark/sql/hive/HiveQl.scala>
>
> On Tue, Apr 14, 2015 at 7:13 AM, JaeSung Jun wrote:
>
>> Hi,
>>
>> While I've been walking through the spark-sql source code, I typed the
>> following
>> HiveQL:
>>
>> CREATE EXTERNAL TABLE user (uid STRING, age INT
Hi,
While I've been walking through the spark-sql source code, I typed the following
HiveQL:
CREATE EXTERNAL TABLE user (uid STRING, age INT, gender STRING, job STRING,
ts STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION
'/hive/user';
, and I finally came across ddl.scala after analysing