> Does the spark-submit process start out using ~60GB of memory right away, or
> does it start out smaller and slowly build up to that high? If so, how long
> does it take to get that high?
>
> Also, which version of Spark are you using?
>
>
> SameerF
>
>> On Fri, Oct 24, 2014 at 8:07 AM, marylucy wrote
>> I used standalone Spark and set spark.driver.memory=5g, but the spark-submit
>> process uses 57g of memory. Is this normal? How can I decrease it?
ays, and
> report back.
>
>
> Sincerely,
>
> DB Tsai
> ---
> My Blog: https://www.dbtsai.com
> LinkedIn: https://www.linkedin.com/in/dbtsai
>
>> On Sat, Oct 18, 2014 at 6:22 PM, marylucy wrote:
>>
> .set("spark.akka.frameSize","50")
>
>
> Thanks
> Best Regards
>
>> On Sun, Oct 19, 2014 at 6:52 AM, marylucy wrote:
>> When doing a groupBy on big data, maybe 500g, some partition tasks
>> succeed and some fail with a FetchFailed error.
> The RDD will not show up in the storage tab until it has actually been
> cached. Remember that certain operations in Spark are
> lazy, and caching is one of them.
>
> Nick
>
>> On Mon, Oct 20, 2014 at 9:19 AM, marylucy wrote:
>> In spark-shell, I do as follows:
>> val input = sc.textFile("hdfs://192.168.1.10/people/testinput/")
In spark-shell, I do as follows:
val input = sc.textFile("hdfs://192.168.1.10/people/testinput/")
input.cache()
In the webui, I cannot see any RDD in the storage tab. Can anyone tell me how
to show the RDD size? Thank you.
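As the reply above notes, cache() is lazy: nothing appears in the Storage tab until an action forces the RDD to be computed. A minimal spark-shell sketch (reusing the HDFS path from the question; `sc` is the SparkContext that spark-shell provides):

```scala
// cache() only marks the RDD for caching; it does not compute anything
val input = sc.textFile("hdfs://192.168.1.10/people/testinput/")
input.cache()   // nothing is cached yet; the Storage tab stays empty

// Running any action materializes the RDD and populates the cache
input.count()
// After this, the RDD and its in-memory size should appear in the Storage tab
```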
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
When doing a groupBy on big data, maybe 500g, some partition tasks succeed and
some partition tasks fail with a FetchFailed error. Spark retries the previous
stage, but it always fails.
Cluster: 6 computers, 384g
Workers: 40g * 7 per computer
Can anyone tell me why the fetch failed???
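FetchFailed errors during a huge groupBy are often shuffle pressure: groupByKey pulls every value for a key across the network to a single task. A hedged sketch of two common mitigations, assuming the data is already an RDD of key/value pairs (the name `pairs` and the partition count are illustrative):

```scala
// Assumed input for illustration: pairs: RDD[(String, Int)]

// 1) If the end goal is an aggregate, prefer reduceByKey over groupByKey:
//    values are combined map-side, so far less data crosses the shuffle.
val sums = pairs.reduceByKey(_ + _)

// 2) If a full groupBy is really needed, raise the shuffle parallelism so
//    each reduce task fetches a smaller slice (2000 is an illustrative value).
val grouped = pairs.groupByKey(2000)
```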
---
I set it to 200, but it still fails in the second step (map and mapPartitions
in the webui).
In the stable Spark 1.0.2 release it works well in the first step, with the
same configuration as 1.1.0.
I am building spark-1.0-rc4 with Maven, following
http://spark.apache.org/docs/latest/building-with-maven.html
But when running the GraphX edgeListFile example, some tasks failed:
error: requested array size exceeds VM limit
error: executor lost
Can anyone tell me how to fix it?
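"Requested array size exceeds VM limit" usually means a single task tried to build an array larger than one JVM allows, which can happen when an edge file is read with too few partitions. A sketch, assuming Spark 1.x GraphX, where GraphLoader.edgeListFile accepts a minimum-partitions argument (the parameter name and the path here are illustrative; check the API docs for your exact version):

```scala
import org.apache.spark.graphx.GraphLoader

// Spread the edge file across more, smaller partitions so no single task
// has to materialize an enormous array (100 is an illustrative count).
val graph = GraphLoader.edgeListFile(sc, "hdfs:///path/to/edges.txt",
  minEdgePartitions = 100)
```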
I see, it works well, thank you!
But in the following situation, how do I do it?
var a = sc.textFile("/sparktest/1/").map((_,"a"))
var b = sc.textFile("/sparktest/2/").map((_,"b"))
How do I get (3,"a") and (4,"a")?
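One way to keep the tag while intersecting by key is a join, which returns only keys present in both RDDs along with both values. A sketch using the RDDs from the question (keys come out of textFile as strings):

```scala
val a = sc.textFile("/sparktest/1/").map((_, "a"))
val b = sc.textFile("/sparktest/2/").map((_, "b"))

// join keeps only keys present in both RDDs; take a's tag back out
val common = a.join(b).map { case (k, (va, _)) => (k, va) }
common.collect() // for fileA=1..4 and fileB=3..6: ("3","a") and ("4","a"), in some order
```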
On Aug 28, 2014, at 19:54,
fileA = 1 2 3 4, one number per line, saved in /sparktest/1/
fileB = 3 4 5 6, one number per line, saved in /sparktest/2/
I want to get 3 and 4:
var a = sc.textFile("/sparktest/1/").map((_,1))
var b = sc.textFile("/sparktest/2/").map((_,1))
a.filter(param => b.lookup(param._1).length > 0).map(_._1).foreach(println)
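Note that calling b.lookup(...) inside a.filter's closure invokes an RDD action from within another RDD's task, which Spark does not support (nested RDD operations typically fail with a NullPointerException or a serialization error). For a plain key intersection, a sketch that avoids the nesting (RDD.intersection requires Spark 1.1+):

```scala
// Distinct lines present in both files, computed as a single distributed job
val common = sc.textFile("/sparktest/1/")
  .intersection(sc.textFile("/sparktest/2/"))
common.foreach(println) // prints 3 and 4 for the sample files
```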