What kind of OOM? Driver or executor side? You can use a core dump to find what
caused the OOM.
Thanks.
Zhan Zhang
On Apr 18, 2016, at 9:44 PM, 李明伟
<kramer2...@126.com> wrote:
Hi Samaga
Thanks very much for your reply, and sorry for the delayed reply.
Cassandra or Hive is a good sugg
You can try something like the code below if you only have one column.
val rdd = parquetFile.javaRDD().map(row => row.getAs[String](0))
Thanks.
Zhan Zhang
On Apr 18, 2016, at 3:44 AM, Ramkumar V
<ramkumar.c...@gmail.com> wrote:
Hi,
Any idea on this ?
Thanks,
.
Thanks.
Zhan Zhang
On Apr 20, 2016, at 1:38 AM, 李明伟
<kramer2...@126.com> wrote:
Hi
The input data size is less than 10 MB. The task result size should be less, I
think, because I am doing aggregation & reduce on the data.
At 2016-04-20 16:18:31, "Jeff Zhang"
mai
You can define your own UDF; the following is one example:
Thanks
Zhan Zhang
val foo = udf((a: Int, b: String) => a.toString + b)
checkAnswer(
// SELECT *, foo(key, value) FROM testData
testData.select($"*", foo('key, 'value)).limit(3),
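If you want to call the same function from SQL as well, it can also be registered on the SQLContext, e.g. (just a sketch, assuming testData is registered as a temp table):
sqlContext.udf.register("foo", (a: Int, b: String) => a.toString + b)
sqlContext.sql("SELECT *, foo(key, value) FROM testData").show()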
On Apr 21, 2016, at 8:51
INSERT OVERWRITE will overwrite any existing data in the table or partition,
unless IF NOT EXISTS is provided for a partition (as of Hive 0.9.0,
https://issues.apache.org/jira/browse/HIVE-2612).
Thanks.
Zhan Zhang
On Apr 21, 2016, at 3:20 PM, Bijay Kumar Pathak
mailto:bkpat...@m
You can try this
https://github.com/hortonworks/shc.git
or here
http://spark-packages.org/package/zhzhan/shc
Currently it is in the process of being merged into HBase.
Thanks.
Zhan Zhang
On Apr 21, 2016, at 8:44 AM, Benjamin Kim
<bbuil...@gmail.com> wrote:
Hi Ted,
Can this mod
uct(1, 2). Please check how the Ordering is
implemented in InterpretedOrdering.
The output itself does not have any ordering. I am not sure why the unit test
and the real environment behave differently.
Xiao,
I do see the difference between unit test and local cluster run. Do you know
the reaso
make sense to add this feature. It may seem to make users worry about more
configuration, but by default we can still do 1 core per task, and only advanced
users need to be aware of this feature.
Thanks.
Zhan Zhang
As Sean mentioned, you cannot refer to a local file on your remote machines
(executors). One workaround is to copy the file to all machines under the same
directory.
Thanks.
Zhan Zhang
On Dec 11, 2015, at 10:26 AM, Lin, Hao
<hao@finra.org> wrote:
of the master node
set if you want to do some performance benchmark.
Thanks.
Zhan Zhang
On Dec 11, 2015, at 9:34 AM, Wei Da <xwd0...@qq.com>
wrote:
Hi, all
I have done a test in different HW configurations of Spark 1.5.0. A KMeans
algorithm has been run in four different Spark environments, the
I think you are fetching too many results to the driver. Typically, it is not
recommended to collect much data to the driver. But if you have to, you can
increase the driver memory when submitting the job.
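If increasing the driver memory is not enough, a sketch of avoiding the large collect (resultRdd and the output path are made-up names):
// Bring back only a bounded sample instead of the full result
val sample = resultRdd.take(1000)
// Or keep the result distributed and write it out instead of collecting it
resultRdd.saveAsTextFile("hdfs:///tmp/results")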
Thanks.
Zhan Zhang
On Dec 11, 2015, at 6:14 AM, Tom Seddon
mailto:mr.tom.sed...@gmail.com
I noticed that it is configurable at the job level via spark.task.cpus. Is there
any way to support it at the task level?
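At the job level it is just a SparkConf setting, e.g. (a minimal sketch):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("multi-core-tasks")
  .set("spark.task.cpus", "2")  // every task of this application reserves 2 cores
val sc = new SparkContext(conf)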
Thanks.
Zhan Zhang
On Dec 11, 2015, at 10:46 AM, Zhan Zhang wrote:
> Hi Folks,
>
> Is it possible to assign multiple cores per task, and how? Suppose we have some
> scenario, i
If you want DataFrame support, you can refer to https://github.com/zhzhan/shc,
which I am working on integrating into HBase upstream with the existing support.
Thanks.
Zhan Zhang
On Dec 15, 2015, at 4:34 AM, censj
<ce...@lotuseed.com> wrote:
Hi, fight fate
Did I can in bulkPut() fu
You should be able to get the logs from YARN by “yarn logs -applicationId xxx”,
where you can possibly find the cause.
Thanks.
Zhan Zhang
On Dec 15, 2015, at 11:50 AM, Eran Witkon wrote:
> When running
> val data = sc.wholeTextFile("someDir/*") data.count()
>
> I get
There are two cases here. If the container is killed by YARN, you can increase
the JVM memory overhead. Otherwise, you have to increase the executor memory,
provided there is no memory leak happening.
Thanks.
Zhan Zhang
On Dec 15, 2015, at 9:58 PM, Eran Witkon
<eranwit...@gmail.com> wrote:
In what situation do you have such cases? If there is no shuffle, you can
collapse all these functions into one, right? In the meantime, it is not
recommended to collect all the data to the driver.
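For example, narrow transformations can simply be chained (or merged into one function) and Spark will pipeline them into a single stage, with no collect in between (paths are placeholders):
val result = sc.textFile("hdfs:///tmp/input")
  .map(_.trim)                      // narrow
  .filter(_.nonEmpty)               // narrow
  .map(line => line.split(",")(0))  // narrow: all three run pipelined in one stage
result.saveAsTextFile("hdfs:///tmp/output")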
Thanks.
Zhan Zhang
On Dec 21, 2015, at 3:44 AM, Zhiliang Zhu
<zchl.j...@yahoo.com.INVALID>
application.
Thanks.
Zhan Zhang
On Dec 21, 2015, at 10:43 AM, Zhiliang Zhu
<zchl.j...@yahoo.com.INVALID> wrote:
What is the difference between repartition / collect and collapse ...
Is collapse as costly as collect or repartition?
Thanks in advance ~
On Tuesday, December 22,
application run time, you can log into the container's box and check the
container's local cache to find whether the log file exists or not (after the
app terminates, these local cache files will be deleted as well).
Thanks.
Zhan Zhang
On Dec 18, 2015, at 7:23 AM, Kalpesh Jadhav
BTW, this is not only a YARN web UI issue. In the capacity scheduler, vcores are
ignored. If you want YARN to honor vcore requests, you have to use the
DominantResourceCalculator, as Saisai suggested.
Thanks.
Zhan Zhang
On Dec 21, 2015, at 5:30 PM, Saisai Shao
<sai.sai.s...@gmail.com> wrote:
SQLContext lives on the driver side, and I don’t think you can use it in executors.
How to provide lookup functionality in executors really depends on how you
would use it.
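If the lookup data is small, one common pattern (just a sketch, not from the original thread; the table and RDD names are made up) is to collect it on the driver and broadcast it so every executor has a local copy:
// Driver side: build a small lookup map and broadcast it
val lookup = sqlContext.sql("SELECT key, value FROM lookup_table")
  .map(r => (r.getString(0), r.getString(1)))
  .collectAsMap()
val lookupBc = sc.broadcast(lookup)

// Executor side: use the broadcast copy inside mapPartitions
val enriched = someRdd.mapPartitions { iter =>
  val table = lookupBc.value
  iter.map(k => (k, table.getOrElse(k, "unknown")))
}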
Thanks.
Zhan Zhang
On Dec 22, 2015, at 4:44 PM, SRK wrote:
> Hi,
>
> Can SQL Context be used inside mapParti
Currently json, parquet, orc (in HiveContext), and text are natively supported.
If you use Avro or other formats, you have to include the corresponding package,
which is not built into the Spark jar.
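For example, with the Spark 1.5/1.6 reader API (paths are placeholders; Avro is shown only to illustrate the external-package case):
val jsonDF    = sqlContext.read.json("hdfs:///tmp/data.json")
val parquetDF = sqlContext.read.parquet("hdfs:///tmp/data.parquet")
val orcDF     = hiveContext.read.orc("hdfs:///tmp/data.orc")   // ORC needs a HiveContext
// Avro is not built in; it needs the external spark-avro package on the classpath
val avroDF    = sqlContext.read.format("com.databricks.spark.avro").load("hdfs:///tmp/data.avro")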
Thanks.
Zhan Zhang
On Dec 23, 2015, at 8:57 AM, Christopher Brady
<christopher.br...@oracle.com>
You are using embedded mode, which will create the db locally (in your case,
maybe the db has been created, but you do not have the right permission?).
To connect to a remote metastore, hive-site.xml has to be correctly configured.
Thanks.
Zhan Zhang
On Dec 23, 2015, at 7:24 AM, Soni spark
materialized in each partition, because
some partitions may not have enough records; sometimes a partition is even
empty.
I didn’t see any straightforward workaround for this.
Thanks.
Zhan Zhang
On Dec 23, 2015, at 5:32 PM, 汪洋
<tiandiwo...@icloud.com> wrote:
It is an application runn
Hi Patcharee,
Did you enable the predicate pushdown in the second method?
Thanks.
Zhan Zhang
On Oct 8, 2015, at 1:43 AM, patcharee wrote:
> Hi,
>
> I am using spark sql 1.5 to query a hive table stored as partitioned orc
> file. The total number of files is about 6000 files a
rsions of OrcInputFormat. The Hive path may use NewOrcInputFormat,
but the Spark path uses OrcInputFormat.
Thanks.
Zhan Zhang
On Oct 8, 2015, at 11:55 PM, patcharee wrote:
> Yes, the predicate pushdown is enabled, but it still takes a longer time than the
> first method
>
> BR,
> P
In your case, you manually set an AND pushdown, and the predicate is right
based on your setting: leaf-0 = (EQUALS x 320).
The right way is to enable the predicate pushdown as follows:
sqlContext.setConf("spark.sql.orc.filterPushdown", "true")
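A minimal end-to-end sketch (table and column names are illustrative, assuming a HiveContext):
hiveContext.setConf("spark.sql.orc.filterPushdown", "true")
// The x = 320 predicate can now be turned into an ORC SearchArgument and
// evaluated inside the ORC reader instead of in Spark
hiveContext.sql("SELECT * FROM orc_table WHERE x = 320").count()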
Thanks.
Zhan Zhang
On Oct 9
That is weird. Unfortunately, there is no debug info available on this part.
Can you please open a JIRA to add some debug information on the driver side?
Thanks.
Zhan Zhang
On Oct 9, 2015, at 10:22 AM, patcharee
<patcharee.thong...@uni.no> wrote:
I set hiveContext.s
the
JIRA number?
Thanks.
Zhan Zhang
On Oct 13, 2015, at 1:01 AM, Patcharee Thongtra
<patcharee.thong...@uni.no> wrote:
Hi Zhan Zhang,
Could my problem (the ORC predicate is not generated from the WHERE clause even
though spark.sql.orc.filterPushdown=true) be related to some f
It looks like some JVM got killed or hit an OOM. You can check the log to see
the real cause.
Thanks.
Zhan Zhang
On Nov 3, 2015, at 9:23 AM, YaoPau
<jonrgr...@gmail.com> wrote:
java.io.FileNotFoun
Spark is a client library. You can just download the latest release or build it
on your own, and replace your existing one without changing your existing cluster.
Thanks.
Zhan Zhang
On Nov 3, 2015, at 3:58 PM, roni
<roni.epi...@gmail.com> wrote:
Hi Spark experts,
This may be
If your assembly jar has the Hive jar included, the HiveContext will be used.
Typically, HiveContext has more functionality than SQLContext. In what case do
you have to use SQLContext for something that cannot be done by HiveContext?
Thanks.
Zhan Zhang
On Nov 6, 2015, at 10:43 AM, Jerry Lam
mailto:chiling
1:9083
HW11188:spark zzhang$
By the way, I don’t know whether there is any caveat for this workaround.
Thanks.
Zhan Zhang
On Nov 6, 2015, at 2:40 PM, Jerry Lam
<chiling...@gmail.com> wrote:
Hi Zhan,
I don’t use HiveContext features at all. I use mostly DataFrame API. I
I agree, with a minor change: add a config to provide the option to initialize
SQLContext or HiveContext, with HiveContext as the default, instead of bypassing
it when hitting the exception.
Thanks.
Zhan Zhang
On Nov 6, 2015, at 2:53 PM, Ted Yu
<yuzhih...@gmail.com> wrote:
I would suggest ad
Hi Jerry,
https://issues.apache.org/jira/browse/SPARK-11562 is created for the issue.
Thanks.
Zhan Zhang
On Nov 6, 2015, at 3:01 PM, Jerry Lam
<chiling...@gmail.com> wrote:
Hi Zhan,
Thank you for providing a workaround!
I will try this out but I agree with Ted, there shoul
Hi Folks,
Has anybody met the following issue? I use "mvn package -Phive -DskipTests"
to build the package.
Thanks.
Zhan Zhang
bin/spark-shell
...
Spark context available as sc.
error: error while loading QueryExecution, Missing dependency 'bad symbolic
reference. A si
Thanks, Ted. I am using the latest master branch. I will give your build command
a try.
Thanks.
Zhan Zhang
On Nov 9, 2015, at 10:46 AM, Ted Yu
<yuzhih...@gmail.com> wrote:
Which branch did you perform the build with?
I used the following command yesterday:
mvn -Phive
In the hive-site.xml, you can remove all configuration related to tez and give
it a try again.
Thanks.
Zhan Zhang
On Nov 10, 2015, at 10:47 PM, DaeHyun Ryu
<ry...@kr.ibm.com> wrote:
Hi folks,
I configured Tez as the execution engine of Hive. After doing that, whenever I
started
When you have the following query, 'account === "acct1" will be pushed down to
generate a new query with "where account = acct1".
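For example (a sketch assuming a JDBC source; connection details and column names are made up):
val accounts = sqlContext.read.format("jdbc")
  .options(Map(
    "url"     -> "jdbc:postgresql://dbhost:5432/mydb",
    "dbtable" -> "accounts"))
  .load()

// Both forms produce the same plan; the predicate is pushed to the source
// as "WHERE account = 'acct1'"
accounts.filter(accounts("account") === "acct1").count()
accounts.filter("account = 'acct1'").count()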
Thanks.
Zhan Zhang
On Nov 18, 2015, at 11:36 AM, Eran Medan
<eran.me...@gmail.com> wrote:
I understand that the following are equivalent
df.filt
If you run it on YARN with a Kerberos setup, you authenticate yourself by kinit
before launching the job.
Thanks.
Zhan Zhang
On Jul 28, 2015, at 8:51 PM, Anh Hong
<hongnhat...@yahoo.com.INVALID> wrote:
Hi,
I'd like to remotely run spark-submit from a local machine to subm
If you are using spark-1.4.0, it is probably caused by SPARK-8458
(https://issues.apache.org/jira/browse/SPARK-8458).
Thanks.
Zhan Zhang
On Aug 23, 2015, at 12:49 PM, lostrain A
<donotlikeworkingh...@gmail.com> wrote:
Ted,
Thanks for the suggestions. Actually I tried bot
It looks complicated, but I think it would work.
Thanks.
Zhan Zhang
From: Richard Eggert
Sent: Saturday, September 19, 2015 3:59 PM
To: User
Subject: PrunedFilteredScan does not work for UDTs and Struct fields
I defined my own relation (extending BaseRela
Hi Krishna,
For the time being, you can download from upstream, and it should run OK on
HDP 2.3. For HDP-specific problems, you can ask in the Hortonworks forum.
Thanks.
Zhan Zhang
On Sep 22, 2015, at 3:42 PM, Krishna Sankar
<ksanka...@gmail.com> wrote:
Guys,
* We ha
It should be similar to other Hadoop jobs. You need the Hadoop configuration on
your client machine, and to point HADOOP_CONF_DIR in Spark to that
configuration.
Thanks
Zhan Zhang
On Sep 22, 2015, at 6:37 PM, Zhiliang Zhu
<zchl.j...@yahoo.com.INVALID> wrote:
Dear Experts,
Spark
.
Zhan Zhang
On Sep 22, 2015, at 7:49 PM, Zhiliang Zhu
<zchl.j...@yahoo.com> wrote:
Hi Zhan,
Thanks very much for your help comment.
I also figured it would be similar to a Hadoop job submit; however, I was not
sure whether it is like that when it comes to Spark.
Have you ever tried th
former is used
to access HDFS, and the latter is used to launch applications on top of YARN.
Then in spark-env.sh, you add export HADOOP_CONF_DIR=/etc/hadoop/conf.
Thanks.
Zhan Zhang
On Sep 22, 2015, at 8:14 PM, Zhiliang Zhu
<zchl.j...@yahoo.com> wrote:
Hi Zhan,
Yes, I get
You can put hive-site.xml in your conf/ directory. It will connect to Hive when
HiveContext is initialized.
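A minimal sketch (Spark 1.x, assuming sc already exists as in the shell):
import org.apache.spark.sql.hive.HiveContext

// Picks up conf/hive-site.xml and connects to the configured metastore
val hiveContext = new HiveContext(sc)
hiveContext.sql("SHOW TABLES").show()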
Thanks.
Zhan Zhang
On Jan 21, 2015, at 12:35 PM, YaoPau wrote:
> Is this possible, and if so what steps do I need to take to make this happen?
>
>
>
>
> --
>
You are running in yarn-client mode. How about increasing the --driver-memory
and giving it a try?
Thanks.
Zhan Zhang
On Jan 29, 2015, at 6:36 PM, QiuxuanZhu
<ilsh1...@gmail.com> wrote:
Dear all,
I have no idea why it raises an error when I run the following code.
def getRo
I think it is expected. Refer to the comment in saveAsTable: "Note that this
currently only works with SchemaRDDs that are created from a HiveContext". If I
understand correctly, here SchemaRDD means those generated by HiveContext.sql,
instead of by applySchema.
Thanks.
Zhan Zhang
O
I think you can configure Hadoop/Hive to do impersonation. There is no
difference between a secure and an insecure Hadoop cluster when using kinit.
Thanks.
Zhan Zhang
On Feb 2, 2015, at 9:32 PM, Koert Kuipers
<ko...@tresata.com> wrote:
yes jobs run as the user that launched them.
if yo
I am not sure about Spark standalone mode, but on Spark-on-YARN it should work.
You can check the following link:
http://hortonworks.com/hadoop-tutorial/using-apache-spark-hdp/
Thanks.
Zhan Zhang
On Feb 5, 2015, at 5:02 PM, Cheng Lian
<lian.cs@gmail.com> wrote:
Please note that Spark
Yes. You need to create xiaobogu under /user and provide the right permission to
xiaobogu.
Thanks.
Zhan Zhang
On Feb 7, 2015, at 8:15 AM, guxiaobo1982
<guxiaobo1...@qq.com> wrote:
Hi Zhan Zhang,
With the pre-built version 1.2.0 of Spark against the YARN cluster installed by
ambari
You need to have the right HDFS account, e.g., hdfs, to create the directory and
assign permissions.
Thanks.
Zhan Zhang
On Feb 11, 2015, at 4:34 AM, guxiaobo1982
<guxiaobo1...@qq.com> wrote:
Hi Zhan,
My single-node Hadoop cluster was installed by Ambari 1.7.0. I tried to
create the
When you log in, you have root access. Then you can do “su hdfs” or switch to
any other account, and then create the HDFS directory, change permissions, etc.
Thanks
Zhan Zhang
On Feb 11, 2015, at 11:28 PM, guxiaobo1982
<guxiaobo1...@qq.com> wrote:
Hi Zhan,
Yes, I found there is
Hi Mate,
When you initialize the JavaSparkContext, you don’t need to specify the mode
“yarn-cluster”. I suspect that is the root cause.
Thanks.
Zhan Zhang
On Feb 25, 2015, at 10:12 AM, gulyasm
<mgulya...@gmail.com> wrote:
JavaSparkContext.
spark context initiates
YarnClusterSchedulerBackend instead of YarnClientSchedulerBackend, which I
think is the root cause.
Thanks.
Zhan Zhang
On Feb 25, 2015, at 1:53 PM, Zhan Zhang
<zzh...@hortonworks.com> wrote:
Hi Mate,
When you initialize the JavaSparkContext, you don’t need to
cores sitting idle.
OOM: increasing the memory size and the JVM memory overhead may help here.
Thanks.
Zhan Zhang
On Feb 26, 2015, at 2:03 PM, Yana Kadiyska
<yana.kadiy...@gmail.com> wrote:
Imran, I have also observed the phenomenon of reducing the cores helping with
OOM. I wanted
You don’t need to know the RDD dependencies to maximize concurrency. Internally
the scheduler will construct the DAG and trigger the execution if there are no
shuffle dependencies between the RDDs.
Thanks.
Zhan Zhang
On Feb 26, 2015, at 1:28 PM, Corey Nolet wrote:
> Let's say I'm
When you use SQL (or the SchemaRDD/DataFrame API) to read data from Parquet,
the optimizer will do column pruning, predicate pushdown, etc. Thus you get
the benefits of Parquet's columnar format. After that, you can operate on the
SchemaRDD (DataFrame) like a regular RDD.
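For example (Spark 1.2-era API; table, column, and path names are illustrative):
val events = sqlContext.parquetFile("hdfs:///tmp/events.parquet")
events.registerTempTable("events")
// Only user_id and ts are read from disk, and the ts predicate is pushed into the Parquet reader
val recent = sqlContext.sql("SELECT user_id, ts FROM events WHERE ts > 1000")
// From here on it behaves like a regular RDD
recent.map(row => row.getString(0)).take(10)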
Thanks.
Zhan Zhang
On Feb 26
What confused me is the statement "The final result is that rdd1 is
calculated twice." Is that the expected behavior?
Thanks.
Zhan Zhang
On Feb 26, 2015, at 3:03 PM, Sean Owen
<so...@cloudera.com> wrote:
To distill this a bit further, I don't think you actually want
.saveAsHadoopFile(…)]
In this way, rdd1 will be calculated once, and the two saveAsHadoopFile calls
will happen concurrently.
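A sketch of that pattern (rdd1 and the output paths are placeholders; saveAsTextFile is used here for brevity):
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

rdd1.cache()   // rdd1 is materialized once and reused by both jobs
val save1 = Future { rdd1.map(_.toString).saveAsTextFile("hdfs:///tmp/out1") }
val save2 = Future { rdd1.filter(_ != null).saveAsTextFile("hdfs:///tmp/out2") }
// Block until both concurrent jobs finish before the driver exits
Await.result(Future.sequence(Seq(save1, save2)), Duration.Inf)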
Thanks.
Zhan Zhang
On Feb 26, 2015, at 3:28 PM, Corey Nolet
<cjno...@gmail.com> wrote:
> What confused me is the statement of "The final result is that rdd1 is
remove it from the graph, and clean
up the cache.
Taking yours as the example, the graph is constructed as below:
RDD1 ——> output
 |
 |__ RDD2 ___ output
Thanks.
Zhan Zhang
On Feb 26, 2015, at 4:20 PM, Corey Nolet
<cjno...@gmail.com> wrote:
Ted. That one I know. I
Currently in Spark, it looks like there is no easy way to know the
dependencies; they are resolved at run time.
Thanks.
Zhan Zhang
On Feb 26, 2015, at 4:20 PM, Corey Nolet
<cjno...@gmail.com> wrote:
Ted. That one I know. It was the dependency part I was curious about
On Feb 26, 201
In YARN (cluster or client mode), you can access the Spark UI while the app is
running. After the app is done, you can still access it, but you need some extra
setup for the history server.
Thanks.
Zhan Zhang
On Mar 3, 2015, at 10:08 AM, Ted Yu
<yuzhih...@gmail.com> wrote:
bq. changing the a
Do you have enough resources in your cluster? You can check your resource
manager to see the usage.
Thanks.
Zhan Zhang
On Mar 3, 2015, at 8:51 AM, abhi
<abhishek...@gmail.com> wrote:
I am trying to run the below Java class on a YARN cluster, but it hangs in
ACCEPTED state. I don
It uses HashPartitioner to distribute the records to different partitions, and
the integer keys hash evenly across the output partitions.
From the code, each resulting partition will get a very similar number of
records.
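For example, with integer keys the hash is the key itself, so the records spread almost uniformly (a sketch):
import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(0 until 100000).map(i => (i, i))
val partitioned = pairs.partitionBy(new HashPartitioner(8))
// Each of the 8 partitions holds roughly 100000 / 8 records
partitioned.mapPartitions(it => Iterator(it.size)).collect()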
Thanks.
Zhan Zhang
On Mar 4, 2015, at 3:47 PM, Du Li
mailto:l...
om
broadcast at TableReader.scala:68
m: org.apache.spark.sql.SchemaRDD =
SchemaRDD[3] at RDD at SchemaRDD.scala:108
== Query Plan ==
== Physical Plan ==
Filter Contains(value#5, Restaurant)
HiveTableScan [key#4,value#5], (MetastoreRelation default, testtable, None),
None
scala>
Thanks.
Zhan Zhang
On Mar 4, 2015, a
/
Thanks.
Zhan Zhang
On Mar 5, 2015, at 11:09 AM, Marcelo Vanzin
<van...@cloudera.com> wrote:
It seems from the excerpt below that your cluster is set up to use the
Yarn ATS, and the code is failing in that path. I think you'll need to
apply the following patch to your Spark
k the link to see why the shell
failed in the first place.
Thanks.
Zhan Zhang
On Mar 6, 2015, at 9:59 AM, Todd Nist
<tsind...@gmail.com> wrote:
First, thanks to everyone for their assistance and recommendations.
@Marcelo
I applied the patch that you recommended and am now able to g
Do you mean “--hiveConf” (two dashes), instead of -hiveconf (one dash)?
Thanks.
Zhan Zhang
On Mar 6, 2015, at 4:20 AM, James wrote:
> Hello,
>
> I want to execute a hql script through `spark-sql` command, my script
> contains:
>
> ```
> ALTER TABLE xxx
>
Sorry, my misunderstanding. It looks like it already worked. If you still hit
some hdp.version problem, you can try it :)
Thanks.
Zhan Zhang
On Mar 6, 2015, at 11:40 AM, Zhan Zhang
<zzh...@hortonworks.com> wrote:
You are using 1.2.1 right? If so, please add java-opts in conf directo
You are using 1.2.1, right? If so, please add java-opts in the conf directory
and give it a try.
[root@c6401 conf]# more java-opts
-Dhdp.version=2.2.2.0-2041
Thanks.
Zhan Zhang
On Mar 6, 2015, at 11:35 AM, Todd Nist
<tsind...@gmail.com> wrote:
-Dhdp.version=2.2.0.0-2041
essing one partition.
iterPartition += 1
}
You can refer to RDD.take as an example.
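A rough sketch of driving one partition at a time from the driver (modeled loosely on how RDD.take walks partitions; rdd is a placeholder):
var iterPartition = 0
while (iterPartition < rdd.partitions.length) {
  val p = iterPartition
  // Launch a job that only keeps the current partition's data
  val count = rdd.mapPartitionsWithIndex { (idx, it) =>
    if (idx == p) it else Iterator.empty
  }.count()
  println(s"partition $p has $count records")
  iterPartition += 1
}
// RDD.take itself uses sc.runJob on an explicit partition list, which avoids launching tasks for the other partitions.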
Thanks.
Zhan Zhang
On Mar 9, 2015, at 3:41 PM, Shuai Zheng
<szheng.c...@gmail.com> wrote:
Hi All,
I am processing some time series data. For one day, it might have 500 GB; then
for each hour,
It is during function evaluation in the line search that the value is either
infinite or NaN, which may be caused by too large a step size. In the code, the
step is then reduced by half.
Thanks.
Zhan Zhang
On Mar 13, 2015, at 2:41 PM, cjwang wrote:
> I am running LogisticRegressionWithLBFGS. I got th
Each RDD has multiple partitions, and each of them will produce one HDFS file
when saving the output. I don’t think you are allowed to have multiple file
handles writing to the same HDFS file. You can still load multiple files into
Hive tables, right?
Thanks.
Zhan Zhang
On Mar 15, 2015, at 7:31 AM
Hi Patcharee,
It is an alpha feature in the HDP distribution, integrating ATS with the Spark
history server. If you are using upstream Spark, you can configure Spark as
usual without these settings. But the other related configurations are still
mandatory, such as the hdp.version ones.
Thanks.
Zhan Zhang
Probably the port is already used by another process, e.g., Hive. You can change
the port as shown below:
./sbin/start-thriftserver.sh --master yarn --executor-memory 512m --hiveconf
hive.server2.thrift.port=10001
Thanks.
Zhan Zhang
On Mar 23, 2015, at 12:01 PM, Neil Dev
mailto:neilk...@gmail.com
You can try to set it in spark-env.sh.
# - SPARK_LOG_DIR Where log files are stored. (Default:
${SPARK_HOME}/logs)
# - SPARK_PID_DIR Where the pid file is stored. (Default: /tmp)
Thanks.
Zhan Zhang
On Mar 24, 2015, at 12:10 PM, Anubhav Agarwal
<anubha...@gmail.com>
I solved this by increasing the PermGen memory size in the driver:
-XX:MaxPermSize=512m
Thanks.
Zhan Zhang
On Mar 25, 2015, at 10:54 AM, ÐΞ€ρ@Ҝ (๏̯͡๏)
<deepuj...@gmail.com> wrote:
I am facing the same issue and posted a new thread. Please respond.
On Wed, Jan 14, 2015 at 4:38 AM, Zhan
You can do it in $SPARK_HOME/conf/spark-defaults.conf:
spark.driver.extraJavaOptions -XX:MaxPermSize=512m
Thanks.
Zhan Zhang
On Mar 25, 2015, at 7:25 PM, ÐΞ€ρ@Ҝ (๏̯͡๏)
<deepuj...@gmail.com> wrote:
Where and how do I pass this or other JVM arguments?
-XX:MaxPermSize=512m
On Wed,
at <console>:27 []
| ShuffledRDD[2] at reduceByKey at <console>:25 []
+-(8) MapPartitionsRDD[1] at map at <console>:23 []
| ParallelCollectionRDD[0] at parallelize at <console>:21 []
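That lineage corresponds to something like the following in the shell (a sketch):
val wordCounts = sc.parallelize(Seq("a", "b", "a"))
  .map(w => (w, 1))
  .reduceByKey(_ + _)
println(wordCounts.toDebugString)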
Thanks.
Zhan Zhang
while keeping the key part untouched. Then mapValues may not
be able to do this.
Changing the code to allow this is trivial, but I don’t know whether there is
some special reason behind it.
Thanks.
Zhan Zhang
On Mar 26, 2015, at 2:49 PM, Jonathan Coveney
<jcove...@gmail.com> wrote:
Thanks all for the quick response.
Thanks.
Zhan Zhang
On Mar 26, 2015, at 3:14 PM, Patrick Wendell wrote:
> I think we have a version of mapPartitions that allows you to tell
> Spark the partitioning is preserved:
>
> https://github.com/apache/spark/blob/master/core/src/main/scal
Hi Rares,
The number of partitions is controlled by the HDFS input format, and one file
may have multiple partitions if it consists of multiple blocks. In your case, I
think there is one file with 2 splits.
Thanks.
Zhan Zhang
On Mar 27, 2015, at 3:12 PM, Rares Vernica
mailto:rvern...@gmail.com
Probably a Guava version conflict. Which Spark version did you use, and which
Hadoop version was it compiled against?
Thanks.
Zhan Zhang
On Mar 27, 2015, at 12:13 PM, Johnson, Dale
<daljohn...@ebay.com> wrote:
Yes, I could recompile the hdfs client with more logging, but I don’
spark-defaults.conf, adding the following settings.
spark.driver.extraJavaOptions -Dhdp.version=x
spark.yarn.am.extraJavaOptions -Dhdp.version=x
3. In $SPARK_HOME/java-opts, add the following option.
-Dhdp.version=x
Thanks.
Zhan Zhang
On Mar 30, 2015, at 6:56 AM, Doug Balog
ersion=2.2.0.0–2041
spark.yarn.am.extraJavaOptions -Dhdp.version=2.2.0.0–2041
This is an HDP-specific question, and you can move the topic to the HDP forum.
Thanks.
Zhan Zhang
On Apr 13, 2015, at 3:00 AM, Zork Sail
<zorks...@gmail.com> wrote:
Hi Zhan,
Alas setting:
-Dhdp.version=2.2.0.0–20
For Spark 1.3, you can use the binary distribution from Apache.
Thanks.
Zhan Zhang
On Apr 17, 2015, at 2:01 PM, Udit Mehta
<ume...@groupon.com> wrote:
I followed the steps described above and I still get this error:
Error: Could not find or load main
You probably want to first try the basic configuration to see whether it works,
instead of setting SPARK_JAR to point to the HDFS location. This error is
caused by ExecutorLauncher not being found on the classpath, and it is not HDP
specific, I think.
Thanks.
Zhan Zhang
On Apr 17, 2015, at 2:26 PM, Udit
Hi Udit,
By the way, do you mind sharing the whole log trace?
Thanks.
Zhan Zhang
On Apr 17, 2015, at 2:26 PM, Udit Mehta
<ume...@groupon.com> wrote:
I am just trying to launch a spark shell and not do anything fancy. I got the
binary distribution from apache and put the
[root@c6402 conf]#
Thanks.
Zhan Zhang
On Apr 17, 2015, at 3:09 PM, Udit Mehta
<ume...@groupon.com> wrote:
Hi,
This is the log trace:
https://gist.github.com/uditmehta27/511eac0b76e6d61f8b47
On the YARN RM UI, I see:
Error: Could not find or load main
One optimization is to reduce the shuffle by first aggregating locally (keeping
only the max for each name) and then doing a reduceByKey.
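Note that reduceByKey already combines on the map side, but an explicit local aggregation could look like this (keys and values are illustrative):
val pairs = sc.parallelize(Seq(("A", 10), ("A", 20), ("B", 3)))
// Keep only the per-partition max for each name before shuffling
val localMax = pairs.mapPartitions { it =>
  val maxPerKey = scala.collection.mutable.Map.empty[String, Int]
  it.foreach { case (k, v) =>
    maxPerKey(k) = math.max(v, maxPerKey.getOrElse(k, Int.MinValue))
  }
  maxPerKey.iterator
}
// Then reduce across partitions
val globalMax = localMax.reduceByKey((a, b) => math.max(a, b))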
Thanks.
Zhan Zhang
On Apr 24, 2015, at 10:03 PM, ayan guha
<guha.a...@gmail.com> wrote:
Here you go
t =
[["A",10,"A10"],["
I tried a simple Spark-Hive select and insert, and it works. But to directly
manipulate the ORC file through RDDs, Spark has to be upgraded to support
Hive 0.13 first, because some ORC APIs are not exposed in Hive 0.12.
Thanks.
Zhan Zhang
On Aug 11, 2014, at 10:23 PM, vinay.kash
Yes, you are right, but I tried the old hadoopFile API with OrcInputFormat. In
Hive 0.12, OrcStruct does not expose its API, so Spark cannot access it. With
Hive 0.13, an RDD can read from an ORC file. BTW, I didn’t see OrcNewOutputFormat
in hive-0.13.
Direct RDD manipulation (Hive13)
val inputRead =
sc.hadoopFile
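For reference, a fuller sketch of that call might look like the following (assuming the Hive 0.13 jars are on the classpath; the path is a placeholder):
import org.apache.hadoop.hive.ql.io.orc.{OrcInputFormat, OrcStruct}
import org.apache.hadoop.io.NullWritable

val inputRead = sc.hadoopFile("hdfs:///apps/hive/warehouse/orc_table",
  classOf[OrcInputFormat], classOf[NullWritable], classOf[OrcStruct])
// Each record is a (NullWritable, OrcStruct) pair; pull the fields you need out of the OrcStruct
val firstFields = inputRead.map { case (_, row) => row.toString }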
I agree. We need support similar to the Parquet support for end users. That’s
the purpose of SPARK-2883.
Thanks.
Zhan Zhang
On Aug 14, 2014, at 11:42 AM, Yin Huai wrote:
> I feel that using hadoopFile and saveAsHadoopFile to read and write ORCFile
> are more towards developers becaus
String HBASE_TABLE_NAME = "hbase.table.name";
Thanks.
Zhan Zhang
On Aug 17, 2014, at 11:39 PM, Cesar Arevalo wrote:
> HadoopRDD
.
Zhan Zhang
On Aug 18, 2014, at 11:26 AM, Peng Cheng wrote:
> I'm curious to see that if you declare broadcasted wrapper as a var, and
> overwrite it in the driver program, the modification can have stable impact
> on all transformations/actions defined BEFORE the overwrite bu
reduceByKey, because it is not cached.
I agree with you that it is very confusing.
Thanks.
Zhan Zhang
The f
On Aug 20, 2014, at 2:28 PM, Patrick Wendell wrote:
> The reason is that some operators get pipelined into a single stage.
> rdd.map(XX).filter(YY) - this executes in a single stage since
I think it depends on your job. In my personal experience running TB-scale data,
Spark got lost-connection failures if I used a big JVM with large memory, but
with more executors with small memory it ran very smoothly. I was running Spark
on YARN.
Thanks.
Zhan Zhang
On Aug 21, 2014, at 3:42 PM
"a",10),("b",3),("c",5))).map(e=>Rec(e._1,e._2))
d2.saveAsParquetFile("p2.parquet")
val d1=sqlContext.parquetFile("p1.parquet")
val d2=sqlContext.parquetFile("p2.parquet")
d1.registerAsTable("logs")
d2.insertInto("
ounts.saveAsTextFile("hdfs://sandbox.hortonworks.com:8020/tmp/wordcount")
Thanks.
Zhan Zhang
On Aug 26, 2014, at 12:35 AM, motte1988
wrote:
> Hello,
> it's me again.
> Now I've got an explanation for the behaviour. It seems that the driver
> memory is not large