-20150209210707-0007/0 is:
spark-etl-0.0.1-SNAPSHOT.jar stderr stdout
Is there any way we can disallow the system to copy the jar file?
Ey-Chih Chow

To: 2dot7kelvin@gmail.com
CC: gen.tan...@gmail.com; user@spark.apache.org
Subject: RE: no space left at worker node
Date: Mon, 9 Feb 2015 12:07:17

From: eyc...@hotmail.com
To: 2dot7kel...@gmail.com
CC: gen.tan...@gmail.com; user@spark.apache.org
Subject: RE: no space left at worker node
Date: Mon, 9 Feb 2015 10:59:00 -0800
Thanks. But, in spark-submit, I specified the jar file in the form of
local:/spark-etl-0.0.1-SNAPSHOT.jar. It comes
[...]
DriverRunner$$anon$1.run(DriverRunner.scala:74)
Subject: Re: no space left at worker node
From: 2dot7kel...@gmail.com
To: eyc...@hotmail.com
CC: gen.tan...@gmail.com; user@spark.apache.org
Maybe try with "local:" under the heading of Advanced Dependency Management
here: https://s
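For readers of the archive: a sketch of what a "local:" submission looks like. The jar path is the one from this thread; the --class and --master values below are placeholders, not from the thread. A "local:" URI tells Spark the jar already exists at that absolute path on every node, so workers should not need to copy it into their work directories.

```shell
# Sketch only: --class and --master are placeholders; the local: URI must
# be an absolute path that already exists on every worker node.
SUBMIT_CMD="spark-submit \
  --class com.example.EtlMain \
  --master spark://your-master:7077 \
  local:/spark-etl-0.0.1-SNAPSHOT.jar"
echo "$SUBMIT_CMD"
```

Note that "local:/spark-etl-0.0.1-SNAPSHOT.jar" means the file must sit at the filesystem root, /spark-etl-0.0.1-SNAPSHOT.jar, on each node; a jar that only exists on the submitting machine will not be found.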
---
Date: Sun, 8 Feb 2015 20:09:32 -0800
Subject: Re: no space left at worker node
From: 2dot7kel...@gmail.com
To: eyc...@hotmail.com
CC: gen.tan...@gmail.com; user@spark.apache.org
I guess you may set the parameters below to clean the directories
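[The message is cut off before the parameters themselves. The standard standalone-worker cleanup settings are the likely candidates; this is a reconstruction, not necessarily the original poster's exact list, and the values are illustrative. They go in spark-env.sh on each worker:]

```shell
# Reconstruction (not the original poster's exact list): enable periodic
# cleanup of finished applications' directories on a standalone worker.
# interval and appDataTtl are in seconds; values here are illustrative.
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
 -Dspark.worker.cleanup.interval=1800 \
 -Dspark.worker.cleanup.appDataTtl=86400"
```

Workers must be restarted for the options to take effect, and cleanup only removes directories of applications that have stopped.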
./driver-20150208173156-
1649876 ./app-20150208173200-/0
1649880 ./app-20150208173200-
5152036 .
Any suggestion how to resolve it? Thanks.

Ey-Chih Chow

From: eyc...@hotmail.com
To: gen.tan...@gmail.com
CC: user@spark.apache.org
Subject: RE: no space left at worker node
Date: Sun, 8 Feb 2015 15:25:43 -0800
By the way, the input and output paths of the job are all in S3. I did not
use HDFS paths as input or output.
Best regards,
Ey-Chih Chow
From: eyc...@hotmail.com
To: gen.tan...@gmail.com
CC: user@spark.apache.org
Subject: RE: no space left at worker node
Date: Sun, 8 Feb 2015 14:57:15 -0800
Hi Gen,
Thanks. I save my logs in a file under /var/log. This is the only place to
save data. Will the problem go away if I use a better machine?
Best regards,
Ey-Chih Chow
Date: Sun, 8 Feb 2015 23:32:27 +0100
Subject: Re: no space left at worker node
From: gen.tan...@gmail.com
To: eyc...@hotmail.com
To: gen.tan...@gmail.com; eyc...@hotmail.com
CC: user@spark.apache.org
Subject: Re: no space left at worker node
You might want to take a look in core-site.xml, and see what is listed as
usable directories (hadoop.tmp.dir, fs.s3.buffer.dir).
It seems that on S3, the root disk is relatively small (8G), but the config
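[For context: both properties live in core-site.xml. A sketch of pointing them at the larger /mnt volume instead of the small root disk; the /mnt paths are illustrative, not from the thread:]

```xml
<!-- core-site.xml fragment; the /mnt paths are illustrative -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/hadoop-tmp</value>
</property>
<property>
  <name>fs.s3.buffer.dir</name>
  <value>/mnt/s3-buffer</value>
</property>
```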
Thanks Gen. How can I check if /dev/sdc is well mounted or not? In general,
the problem shows up when I submit the second or third job. The first job I
submit most likely will succeed.
Ey-Chih Chow
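[To answer the "is it mounted" question generically: compare the mount table with the device you expect. Device names such as /dev/sdc or /dev/xvdb vary by instance type; the helper below is a sketch, not from the thread.]

```shell
# Sketch: report whether a block device is currently mounted.
is_mounted() {
  # `mount` prints lines like "/dev/xvdb on /mnt type ext3 (rw)".
  mount | grep -q "^$1 on "
}

if is_mounted /dev/xvdb; then
  echo "/dev/xvdb is mounted"
else
  echo "/dev/xvdb is NOT mounted"
fi
```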
Date: Sun, 8 Feb 2015 18:18:03 +0100
Subject: Re: no space left at worker node
From: gen.tan...@gmail.com
> mnt2 auto
> defaults,noatime,nodiratime,comment=cloudconfig 0 0
>
> There is no entry of /dev/xvdb.
>
> Ey-Chih Chow
>
Date: Sun, 8 Feb 2015 12:09:37 +0100
Subject: Re: no space left at worker node
From: gen.tan...@gmail.com
To: eyc...@hotmail.com
CC: user@spark.apache.org
Hi,
In fact, I met this problem before. It is a bug of AWS. Which type of machine
do you use?
If I guess right, you can check the file /etc/fstab.
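[A generic way to do that check: list what /etc/fstab says should be mounted and compare with what df reports. The device name is whatever your instance exposes; the helper is a sketch.]

```shell
# Sketch: list non-comment entries of an fstab-style file (defaults to
# /etc/fstab), then compare with what df reports as actually mounted.
fstab_entries() {
  grep -v '^[[:space:]]*#' "${1:-/etc/fstab}" 2>/dev/null
}

fstab_entries || true   # configured mounts (may be empty on some systems)
df -h                   # what is actually mounted, with free space
# If the large ephemeral volume (e.g. /dev/xvdb) appears in neither,
# mounting it by hand is one workaround:
#   sudo mount /dev/xvdb /mnt
```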
/dev/xvdb 30963708 1729652 27661192 6% /mnt
Does anybody know how to fix this? Thanks.
Ey-Chih Chow
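[When this happens, the quickest way to see what is eating the disk is du over the worker's work directory, as shown earlier in the thread. A reusable sketch; the work-dir path varies per install, and /root/spark/work below is only an assumed example of the standalone default $SPARK_HOME/work.]

```shell
# Sketch: print the largest entries under a directory, biggest last.
largest_entries() {
  du -sk "${1:-.}"/* 2>/dev/null | sort -n | tail -5
}

largest_entries /root/spark/work  # assumed path; standalone default is $SPARK_HOME/work
df -h                             # overall free space per filesystem
```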
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/no-space-left-at-worker-node-tp21545.html
Sent from the Apache Spark User List mailing list