Re: Understanding Spark Memory distribution

2015-03-30 Thread giive chen
Hi Ankur

If you are using standalone mode, your config is wrong. You should use "export SPARK_DAEMON_MEMORY=xxx" in conf/spark-env.sh. At least it works on my Spark 1.3.0 standalone-mode machine.

BTW, SPARK_DRIVER_MEMORY is used in YARN mode, and it looks like standalone mode doesn't use this c
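A minimal conf/spark-env.sh sketch of the setting described above. The 2g values are placeholders, not recommendations; SPARK_WORKER_MEMORY is included as the companion setting that caps what a worker can hand to executors:

```shell
# conf/spark-env.sh -- standalone-mode memory settings
# (2g values are placeholders; tune for your machines)

# Heap for the standalone master/worker daemons themselves
export SPARK_DAEMON_MEMORY=2g

# Total heap a worker may allocate to executors on that node
export SPARK_WORKER_MEMORY=2g
```

Restart the master and workers after editing this file so the daemons pick up the new values.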

RE: Announcing Spark 1.0.0

2014-05-30 Thread giive chen
Great work!

On May 30, 2014 10:15 PM, "Ian Ferreira" wrote:
> Congrats
>
> Sent from my Windows Phone
> --
> From: Dean Wampler
> Sent: 5/30/2014 6:53 AM
> To: user@spark.apache.org
> Subject: Re: Announcing Spark 1.0.0
>
> Congratulations!!
>
> On Fri, May 30,

Re: Java RDD structure for Matrix predict?

2014-05-27 Thread giive chen
Hi Sandeep

I think you should use testRatings.mapToPair instead of testRatings.map. So the code should be

JavaPairRDD usersProducts = training.mapToPair(
    new PairFunction() {
        public Tuple2 call(Rating r) throws Exception {
            re
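The archive has stripped the generic type parameters from the snippet above. As a runnable sketch of the same idea, the code below performs the (user, product) key extraction that the PairFunction does, but over a plain List so it runs without a Spark cluster; the nested Rating record is a local stand-in for org.apache.spark.mllib.recommendation.Rating:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PairSketch {
    // Local stand-in for org.apache.spark.mllib.recommendation.Rating
    record Rating(int user, int product, double rating) {}

    // Same (user, product) extraction the PairFunction performs,
    // expressed over a plain List instead of a JavaRDD
    static List<Map.Entry<Integer, Integer>> toUserProductPairs(List<Rating> ratings) {
        return ratings.stream()
                .map(r -> (Map.Entry<Integer, Integer>) new SimpleEntry<>(r.user(), r.product()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Rating> test = List.of(new Rating(1, 42, 4.5), new Rating(2, 7, 3.0));
        System.out.println(toUserProductPairs(test));
    }
}
```

In the actual Spark program the same transformation would be testRatings.mapToPair(...) with a PairFunction&lt;Rating, Integer, Integer&gt; returning a Tuple2&lt;Integer, Integer&gt;, yielding the JavaPairRDD that MatrixFactorizationModel.predict expects.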

Re: Error reading HDFS file using spark 0.9.0 / hadoop 2.2.0 - incompatible protobuf 2.5 and 2.4.1

2014-04-15 Thread giive chen
Hi Prasad

Sorry for missing your reply. Here it is:
https://gist.github.com/thegiive/10791823

Wisely Chen

On Fri, Apr 4, 2014 at 11:57 PM, Prasad wrote:
> Hi Wisely,
> Could you please post your pom.xml here.
>
> Thanks
>
> --
> View this message in context:
> http://apache-spark-user-list

Re: Lost an executor error - Jobs fail

2014-04-14 Thread giive chen
Hi Praveen

What is your config for "spark.local.dir"? Do all your workers have this dir, and does each worker have the right permissions on it? I think this is the cause of your error.

Wisely Chen

On Mon, Apr 14, 2014 at 9:29 PM, Praveen R wrote:
> Had below error while running shark queries on
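A quick sketch of the check suggested above, to run on each worker node as the user that runs the Spark worker; /tmp/spark-local is a placeholder, so substitute whatever your spark.local.dir is actually set to:

```shell
# Placeholder path -- substitute the value of spark.local.dir
DIR=/tmp/spark-local

# Create the dir if missing, then verify it is writable by this user
mkdir -p "$DIR"
if [ -d "$DIR" ] && [ -w "$DIR" ]; then
    echo "spark.local.dir OK: $DIR"
else
    echo "spark.local.dir missing or not writable: $DIR" >&2
fi
```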

Re: Calling Spark enthusiasts in NYC

2014-03-31 Thread giive chen
Hi Andy

We are from Taiwan and are already planning a Spark meetup. We already have some resources, like a venue and a food budget, but we do need some other resources. Please contact me offline.

Thanks
Wisely Chen

On Tue, Apr 1, 2014 at 1:28 AM, Andy Konwinski wrote:
> Hi folks,
>
> We hav

Re: Distributed running in Spark Interactive shell

2014-03-26 Thread giive chen
This response is for Sai.

The easiest way to verify your current spark-shell setting is just to type "sc.master". If your setting is correct, it should return

scala> sc.master
res0: String = spark://master.ip.url.com:5050

If your SPARK_MASTER_IP is not set correctly, it will respond

scala> sc.mast

Re: Error reading HDFS file using spark 0.9.0 / hadoop 2.2.0 - incompatible protobuf 2.5 and 2.4.1

2014-03-25 Thread giive chen
Hi

I am quite a beginner in Spark, and I had a similar issue last week. I don't know if my issue is the same as yours. I found that my program's jar contained protobuf; when I removed this dependency from my program's pom.xml and rebuilt, it worked. Here is how I solved my own issue.

Environ
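The author's fix was to delete the offending dependency outright. An alternative, when you still need the dependency itself, is a Maven exclusion that keeps its transitive protobuf-java out of your jar so it cannot clash with the protobuf 2.5 that Hadoop 2.2.0 uses. The groupId/artifactId of the outer dependency below are hypothetical placeholders; only the com.google.protobuf:protobuf-java coordinates are real:

```xml
<!-- Illustrative sketch: exclude a transitive protobuf-java pulled in by
     some dependency, so Hadoop 2.2.0's protobuf 2.5 is the only copy -->
<dependency>
  <groupId>some.group</groupId>                 <!-- hypothetical -->
  <artifactId>brings-in-old-protobuf</artifactId> <!-- hypothetical -->
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running `mvn dependency:tree` before and after is a quick way to confirm which dependency drags in the old protobuf and that the exclusion took effect.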