I created a JIRA and a pull request for this issue.
https://issues.apache.org/jira/browse/SPARK-11100
On 2015-10-13 16:36, Xiaoyu Wang wrote:
I have the same issue.
I think the Spark Thrift Server does not support HA with ZooKeeper yet.
On 2015-09-01 18:10, sreeramvenkat wrote:
Hi,
I am trying to set up dynamic service discovery for HiveThriftServer in a
two-node cluster.
In the thrift server logs, I am not seeing it registering itself with zoo
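For context, dynamic service discovery in HiveServer2 itself is enabled with hive-site.xml properties like the following (standard Hive settings; whether the Spark Thrift Server honors them in 1.x is exactly what is in question here, and the quorum hosts below are hypothetical):

```xml
<!-- hive-site.xml: HiveServer2 dynamic service discovery via ZooKeeper -->
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2</value>
</property>
```

With these set, each HiveServer2 instance registers an ephemeral znode under the namespace, and JDBC clients discover a live server through the quorum.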
Hi all!
My SQL case is:
insert overwrite table test1 select * from test;
At the end of the job I get a move-file error.
I see that Hive 0.13.1's support for ViewFS is not good; it improved in Hive 1.1.0+.
How can I upgrade the Hive version used by Spark? Or how can I fix the bug in
"org.spark-project.hive"?
My version:
Spark versio
> to be turned on explicitly. Try spark.sql.parquet.filterPushdown=true.
> It's off by default.
>
> On Mon, Jan 19, 2015 at 3:46 AM, Xiaoyu Wang wrote:
>
Yes, it works!
But the filter can't be pushed down!
What if a custom ParquetInputFormat only implements the data source API?
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala
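The flag discussed above can be set either per session or in spark-defaults.conf; e.g. from a Beeline/Thrift Server session (standard Spark SQL configuration key):

```sql
-- Enable Parquet predicate pushdown for this session (off by default in 1.2):
SET spark.sql.parquet.filterPushdown=true;
```

Note that pushdown applies to Spark SQL's native Parquet scan path; a table bound to a custom input format may fall back to the Hive scan path, where this flag has no effect.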
2015-01-16 21:51 GMT+08:00 Xiaoyu Wang:
> Thanks Yana!
>
>
>
> Original message
> From: Xiaoyu Wang
> Date:01/16/2015 5:09 AM (GMT-05:00)
> To: user@spark.apache.org <mailto:user@spark.apache.org>
> Subject: Why custom parquet format hive table execute "ParquetTableScan"
> physi
Hi all!
In Spark SQL 1.2.0,
I create a Hive table with a custom parquet inputformat and outputformat,
like this:
CREATE TABLE test(
  id string,
  msg string)
CLUSTERED BY (
  id)
SORTED BY (
  id ASC)
INTO 10 BUCKETS
ROW FORMAT SERDE
  'com.a.MyParquetHiveSerDe'
STORED AS INPUTFORMAT
  'com.
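For comparison, the same table declared with Hive's built-in Parquet SerDe and input/output formats (standard Hive class names; the com.a.* classes above are the poster's own custom implementations) would look like:

```sql
CREATE TABLE test_builtin_parquet(
  id  string,
  msg string)
CLUSTERED BY (id)
SORTED BY (id ASC)
INTO 10 BUCKETS
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
```

Spark SQL 1.2 recognizes these built-in classes and can substitute its native ParquetTableScan; with a custom SerDe/input format it cannot, which is why the scan behavior differs.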
> On Wed, Dec 31, 2014 at 3:55 PM, Xiaoyu Wang wrote:
Hi all!
I use Spark SQL 1.2 to start the thrift server on YARN.
I want to use the fair scheduler in the thrift server.
I set the properties in spark-defaults.conf like this:
spark.scheduler.mode FAIR
spark.scheduler.allocation.file /opt/spark-1.2.0-bin-2.4.1/conf/fairscheduler.xml
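The allocation file referenced above follows Spark's standard fair-scheduler format; a minimal sketch (the pool name and share values are illustrative):

```xml
<?xml version="1.0"?>
<allocations>
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```

Jobs are assigned to a pool via the spark.scheduler.pool local property; unassigned jobs go to the default pool.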
In the thrift server, stages can be killed, but the job can't be killed!
Is there any way to kill a SQL job in the thrift server?
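The 1.2 Thrift Server exposes no documented way to kill an individual SQL statement, but when you control the SparkContext directly, Spark's job-group API is the usual mechanism for cancelling a set of related jobs; a minimal sketch assuming a local context (the group id and description are arbitrary), not anything Thrift-Server-specific:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object KillDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("kill-demo").setMaster("local[*]"))

    // Tag all jobs submitted from this thread with a group id.
    sc.setJobGroup("adhoc-query-1", "long-running SQL", interruptOnCancel = true)

    // ... launch jobs on this thread ...

    // From another thread, cancel every job tagged with that group id:
    sc.cancelJobGroup("adhoc-query-1")

    sc.stop()
  }
}
```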
Xiaoyu Wang