Hi,
How do I use a custom Spark Scala UDF in the spark-sql CLI or Beeline client?
With sqlContext we can register a UDF like this:
sqlContext.udf.register("sample_fn", sample_fn _)
What is the way to use a UDF in the spark-sql CLI or Beeline client?
Thanks
Pooja
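For reference, a minimal, self-contained version of the sqlContext registration above (a sketch, assuming Spark 1.3+; sample_fn's body here is a placeholder):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object UdfExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("udf-example"))
    val sqlContext = new SQLContext(sc)

    // A plain Scala function; the body here is a placeholder.
    def sample_fn(s: String): Int = s.length

    // Register it under a SQL-visible name, then call it by that name.
    sqlContext.udf.register("sample_fn", sample_fn _)
    sqlContext.sql("SELECT sample_fn('hello')").show()
  }
}

From the spark-sql CLI or Beeline there is no sqlContext to call; the usual route there, hedged and assuming Hive support is enabled, is a Hive UDF packaged in a jar and registered with ADD JAR plus CREATE TEMPORARY FUNCTION.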
Thank you, I got it.
From: Mich Talebzadeh
Sent: 30 June 2016 14:52
To: Saisai Shao
Cc: Huang Meilong; user@spark.apache.org
Subject: Re: deploy-mode flag in spark-sql cli
Yes, I forgot that anything with a REPL, and both spark-sql and spark-shell
are simple convenience wrappers around one, has to run its driver locally, so
cluster deploy mode does not apply.
http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any loss,
damage or destruction of data or any other property which may arise from
relying on this email's technical content is explicitly disclaimed. The
author will in no case be liable for any monetary damages arising from such
loss, damage or destruction.
On 30 June 2016 at 05:16, Huang Meilong wrote:
Hello,
I added the --deploy-mode flag to the spark-sql CLI like this:
$ spark-sql --deploy-mode cluster --master yarn -e "select * from mx"
It showed an error saying "Cluster deploy mode is not applicable to Spark SQL
shell", but "spark-sql --help" lists a "--deploy-mode" option. Is this a bug?
I do not know Postgres, but that sounds like a system table, much like
Oracle's v$instance?
Why should running a Hive schema script against a Hive schema/DB in Postgres
impact the system schema?
Mine is Oracle:

s...@mydb12.mich.LOCAL> SELECT version FROM v$instance;

VERSION
-----------------
12.1.0.2.0
Hi all,
I use PostgreSQL to store the Hive metadata.
First, I imported a SQL script into the metastore database as follows:
psql -U postgres -d metastore -h 192.168.50.30 -f
hive-schema-1.2.0.postgres.sql
Then, when I started $SPARK_HOME/bin/spark-sql, PostgreSQL gave the
following error
No, that is not my case. Actually I am running spark-sql, i.e. the spark-sql
CLI mode, and executing SQL queries against my Hive tables. In the spark-sql
CLI there seems to be no existing sqlContext or sparkContext; I can only run
select/create/insert/delete operations.
Best,
Sun.
fightf
In spark-shell, I can do:
scala> sqlContext.clearCache()
Is that not the case for you ?
On Wed, Feb 3, 2016 at 7:35 PM, fightf...@163.com wrote:
Hi, Ted
Yes, I had seen that issue, but it seems that the spark-sql CLI cannot run a
command like:
sqlContext.clearCache()
Is this right? In the spark-sql CLI I can only run SQL queries, so I want to
see if there are any available options to achieve this.
Best,
Sun.
fightf...@163.com
Have you looked at
SPARK-5909 Add a clearCache command to Spark SQL's cache manager
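For reference, SPARK-5909's title indicates a SQL-level statement, so in builds that include it the following should work; a sketch, assuming Spark 1.3+, where table t is a placeholder:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("clear-cache-example"))
val sqlContext = new SQLContext(sc)

// At the spark-sql prompt the bare statements would be typed directly,
// e.g. CACHE TABLE t; and CLEAR CACHE;
sqlContext.sql("CACHE TABLE t")                  // pin placeholder table t in memory
sqlContext.sql("SELECT count(*) FROM t").show()  // served from the cache
sqlContext.sql("CLEAR CACHE")                    // drop every cached table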
On Wed, Feb 3, 2016 at 7:16 PM, fightf...@163.com wrote:
Hi,
How could I clear the cache (execute a SQL query without any cache) using the
spark-sql CLI?
Is there any command available?
Best,
Sun.
fightf...@163.com
Can I set the Parquet block size (parquet.block.size) in spark-sql? We are
loading about 80 table partitions in parallel on 1.5.2 and running out of
memory (OOM).
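No answer survives in this thread, but since parquet.block.size is a Hadoop configuration key, one hedged possibility is to set it there before writing; a sketch against Spark 1.4+, where the 128 MB value and the path are purely illustrative:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("parquet-block-size"))
val sqlContext = new SQLContext(sc)

// Parquet writers pick parquet.block.size up from the job's Hadoop
// configuration; smaller blocks mean less writer memory held per open file.
sc.hadoopConfiguration.setInt("parquet.block.size", 128 * 1024 * 1024)

val df = sqlContext.range(0, 1000)              // placeholder data
df.write.parquet("file:///tmp/parquet_example") // honours the Hadoop conf

From the spark-sql prompt the analogous, equally hedged form would be SET parquet.block.size=134217728;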
Well, sorry for the late response, and thanks a lot for pointing out the clue.
fightf...@163.com
From: Akhil Das
Date: 2015-12-03 14:50
To: Sahil Sareen
CC: fightf...@163.com; user
Subject: Re: spark sql cli query results written to file ?
Oops 3 mins late. :)
Thanks
Best Regards
You can read more information over here:
http://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes
Thanks
Best Regards
On Thu, Dec 3, 2015 at 11:35 AM, fightf...@163.com
wrote:
Did you see: http://spark.apache.org/docs/latest/sql-programming-guide.html
-Sahil
On Thu, Dec 3, 2015 at 11:35 AM, fightf...@163.com
wrote:
Hi,
How could I save the results of queries run in the spark-sql CLI and write
them to some local file?
Is there any available command?
Thanks,
Sun.
fightf...@163.com
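The save-modes link above refers to the DataFrame writer; a hedged sketch of that route, where my_table and the output path are placeholders, with the CLI-side alternatives noted in comments:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

val sc = new SparkContext(new SparkConf().setAppName("save-results"))
val sqlContext = new SQLContext(sc)

// Run the query, then write the result out with an explicit save mode.
val result = sqlContext.sql("SELECT * FROM my_table")   // my_table is a placeholder
result.write.mode(SaveMode.Overwrite).json("file:///tmp/query_results")

// From the CLI itself, two common (equally hedged) alternatives:
//   $ spark-sql -e "SELECT * FROM my_table" > /tmp/results.txt
//   spark-sql> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/results' SELECT * FROM my_table;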
Hey guys,
I have CDH 5.3.3 with Spark 1.2.0 (on YARN).
This does not work:
/opt/cloudera/parcels/CDH/lib/spark/bin/spark-sql --deploy-mode client
--master yarn --driver-memory 1g -e "select j.person_id, p.first_name,
p.last_name, count(*) from (select person_id from cdr.cdr_mjp_joborder where
pers
A workaround has been found and posted in the ticket
https://issues.apache.org/jira/browse/SPARK-4854. Hope this is useful.
Can you add this information to the JIRA?
On Mon, Dec 15, 2014 at 10:54 AM, shenghua wrote:
Hello,
I met a problem when using the Spark SQL CLI. A custom UDTF with lateral view
throws a ClassNotFoundException. I did a couple of experiments in the same
environment (Spark version 1.1.1):
select + same custom UDTF (passed)
select + lateral view + custom UDTF (ClassNotFoundException)
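For reference, the two query shapes being compared look roughly like this; a sketch where the function name, class, jar path, table t, and column col are all placeholders:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("udtf-example"))
val hiveContext = new HiveContext(sc)

// Placeholder registration of a custom UDTF packaged in a jar.
hiveContext.sql("ADD JAR /path/to/my_udtf.jar")
hiveContext.sql("CREATE TEMPORARY FUNCTION my_udtf AS 'com.example.MyUDTF'")

// Shape 1: UDTF alone in the select list (reported as passing).
hiveContext.sql("SELECT my_udtf(col) FROM t").show()

// Shape 2: UDTF through LATERAL VIEW (reported as throwing
// ClassNotFoundException in the Spark SQL CLI; see SPARK-4854 above).
hiveContext.sql(
  "SELECT t.col, v.item FROM t LATERAL VIEW my_udtf(t.col) v AS item").show()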
The JDBC server is what you are looking for:
http://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server
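For context, the Thrift JDBC server accepts ordinary HiveServer2 connections, so many users can share one long-running set of executors. A hedged sketch of a client, assuming the server was started with sbin/start-thriftserver.sh and listens on the default port 10000; the query is a placeholder:

import java.sql.DriverManager

object ThriftClientExample {
  def main(args: Array[String]): Unit = {
    // Standard HiveServer2 JDBC driver and the server's default port.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000", "user", "")
    val rs = conn.createStatement().executeQuery("SELECT 1")  // placeholder query
    while (rs.next()) println(rs.getInt(1))
    conn.close()
  }
}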
On Wed, Oct 22, 2014 at 11:10 AM, Sadhan Sood wrote:
We want to run multiple instances of the spark-sql CLI on our YARN cluster,
each instance used by a different user. This looks non-optimal if each user
brings up a different CLI, given how Spark works on YARN by running executor
processes (and hence consuming resources) on worker nodes.
On Mon, Sep 22, 2014 at 6:30 PM, Yin Huai wrote:

Hi Gaurav,

Can you put hive-site.xml in conf/ and try again?

Thanks,

Yin
On Mon, Sep 22, 2014 at 4:02 PM, gtinside wrote:
Hi,
I have been using spark shell to execute all SQLs. I am connecting to
Cassandra, converting the data to JSON and then running queries on it. I
am using HiveContext (and not SQLContext) because of the "explode"
functionality in it.
I want to see how I can use the Spark SQL CLI for directly running the
queries on the saved table. I see metastore and metastore_db getting
created in the spark bin directory (my hi
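For context, the "explode" usage that motivates HiveContext in Spark 1.x looks roughly like this; a sketch assuming Spark 1.4+, where the JSON path, table name, and skills column are placeholders:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("explode-example"))
val hiveContext = new HiveContext(sc)

// Placeholder JSON source with an array column named "skills".
val df = hiveContext.read.json("file:///tmp/people.json")
df.registerTempTable("people")

// explode() flattens one array element per output row; it is Hive
// functionality, hence HiveContext rather than plain SQLContext.
hiveContext.sql(
  "SELECT name, skill FROM people LATERAL VIEW explode(skills) s AS skill").show()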