> On 9 Feb 2016, at 07:19, lmk wrote:
>
> Hi Dhimant,
> As I had indicated in my follow-up mail, my problem was due to the disk
> getting full with log messages (these were dumped onto the slaves) and did
> not have anything to do with the content pushed to S3. So it looks like this
> error message is very generic and is thrown for various reasons. [...]
Hi Dhimant,
As I had indicated in my follow-up mail, my problem was due to the disk
getting full with log messages (these were dumped onto the slaves) and did
not have anything to do with the content pushed to S3. So it looks like this
error message is very generic and is thrown for various reasons. You may ...
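A minimal sketch of one way to cut down the volume of log messages in the first
place, assuming Spark 1.x with the bundled log4j 1.2 (the logger names and level
below are illustrative, not taken from this thread):

import org.apache.log4j.{Level, Logger}

// Raise the logging threshold in the driver JVM; for the executors on the
// slaves the same levels would normally go into conf/log4j.properties on each node.
Logger.getRootLogger.setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)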
I had similar problems with multipart uploads. In my case the real error
was something else, which was being masked by this issue:
https://issues.apache.org/jira/browse/SPARK-6560. In the end this Bad
Digest exception was a side effect and not the original issue. For me it
was some library version conflict ...
> On 7 Feb 2016, at 07:57, Dhimant wrote:
>
> at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.uploadSinglePart(MultipartUploadOutputStream.java:245)
> ... 15 more
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The
> Content-MD5 you specified did not match what we received. [...]
Hi, I am getting the following error while reading huge data from S3 and,
after processing, writing the data back to S3.
Did you find any solution for this?
16/02/07 07:41:59 WARN scheduler.TaskSetManager: Lost task 144.2 in stage
3.0 (TID 169, ip-172-31-7-26.us-west-2.compute.internal):
java.i ...
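For context, a minimal sketch of the job shape being described here, reading
from S3, transforming, and writing back; the bucket names, paths, and the
transformation are placeholders, not the poster's actual code, and the s3n://
credentials are assumed to be configured:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("s3-read-process-write"))

val input     = sc.textFile("s3n://my-input-bucket/huge-data/")      // hypothetical input path
val processed = input.filter(_.nonEmpty).map(_.toUpperCase)          // stand-in for the real processing
processed.saveAsTextFile("s3n://my-output-bucket/processed/")        // hypothetical output path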
This was a completely misleading error message.
The problem was due to log messages getting dumped to stdout. These were
accumulating on the workers, and hence there was no space left on the
device after some time.
When I re-tested with spark-0.9.1, the saveAsTextFile API threw "no space
left on device" ...
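A minimal sketch of one way to keep executor stdout/stderr from filling the
worker disks, assuming a Spark standalone deployment recent enough to honour
the spark.executor.logs.rolling.* properties (the sizes below are placeholders):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("s3-put-job")
  // Roll executor stdout/stderr by size instead of letting them grow unbounded.
  .set("spark.executor.logs.rolling.strategy", "size")
  .set("spark.executor.logs.rolling.maxSize", "134217728")    // 128 MB per log file; pre-1.4 releases named this spark.executor.logs.rolling.size.maxBytes
  .set("spark.executor.logs.rolling.maxRetainedFiles", "5")   // keep at most 5 rolled files per executor
val sc = new SparkContext(conf)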
Is it possible that the Content-MD5 changes during a multipart upload to S3?
But even then, it succeeds if I increase the cluster configuration.
For example:
it throws a Bad Digest error after writing 48/100 files when the cluster has
3 m3.2xlarge slaves;
it throws a Bad Digest error after writing 64/100 files ...
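For reference, a minimal sketch of what the Bad Digest check compares, assuming
the AWS SDK for Java v1 and Java 8; the bucket, key, and file path are
placeholders, and a multipart upload applies the same per-part check. The client
sends a Content-MD5 computed over the bytes it intends to send, S3 recomputes
the MD5 over the bytes it actually receives, and if the data changes in between
the digests differ and the PUT is rejected.

import java.io.{File, FileInputStream}
import java.nio.file.Files
import java.security.MessageDigest
import java.util.Base64
import com.amazonaws.services.s3.AmazonS3Client
import com.amazonaws.services.s3.model.{ObjectMetadata, PutObjectRequest}

val file  = new File("/tmp/part-00000")                        // hypothetical part file
val bytes = Files.readAllBytes(file.toPath)
val md5   = Base64.getEncoder.encodeToString(
  MessageDigest.getInstance("MD5").digest(bytes))              // digest of what we intend to send

val meta = new ObjectMetadata()
meta.setContentLength(bytes.length.toLong)
meta.setContentMD5(md5)                                        // header that S3 verifies on receipt

val s3 = new AmazonS3Client()                                  // default credential chain
s3.putObject(new PutObjectRequest("my-bucket", "spark_test/part-00000",
  new FileInputStream(file), meta))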
Thanks Patrick.
But why am I getting a Bad Digest error when I am saving a large amount of
data to S3?
Loss was due to org.apache.hadoop.fs.s3.S3Exception
org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException:
S3 PUT failed for
'/spark_test%2Fsmaato_one_day_phase_2%2Fsmaato_ ...
You are hitting this issue:
https://issues.apache.org/jira/browse/SPARK-2075
On Mon, Jul 28, 2014 at 5:40 AM, lmk
wrote:
> Hi
> I was using saveAsTextFile earlier. It was working fine. When we migrated
> to
> spark-1.0, I started getting the following error:
> java.lang.ClassNotFoundException:
> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1 [...]
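A hedged illustration of the usual remedy for this class of
ClassNotFoundException, where an application is compiled against one Spark build
and run on another: pin the build to the cluster's exact Spark version and mark
it provided. The names and versions below are examples, not taken from the
thread, and whether this resolves SPARK-2075 in a given setup depends on how the
cluster assembly was built.

// build.sbt
name := "s3-put-job"
scalaVersion := "2.10.4"
// Match the Spark version actually deployed on the cluster, and use "provided"
// so the cluster's own assembly supplies these classes at runtime.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.2" % "provided"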
Does anyone have any thoughts on this?
Regards,
lmk
Hi
I was using saveAsTextFile earlier. It was working fine. When we migrated to
spark-1.0, I started getting the following error:
java.lang.ClassNotFoundException:
org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1
java.net.URLClassLoader$1.run(URLClassLoader.java:366)
java.net.URLC ...
A Bad Digest error means the file you are trying to upload actually changed
while it was being uploaded. If you make a temporary copy of the file before
uploading, then you won't face this problem.
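A minimal sketch of that copy-first-then-upload idea, assuming the AWS SDK for
Java v1 is on the classpath; the paths, bucket, and key are placeholders:

import java.nio.file.{Files, Paths, StandardCopyOption}
import com.amazonaws.services.s3.AmazonS3Client

val source   = Paths.get("/data/output/part-00000")                  // hypothetical file that may still be written to
val snapshot = Files.createTempFile("s3-upload-", ".part")
Files.copy(source, snapshot, StandardCopyOption.REPLACE_EXISTING)    // freeze the bytes before the PUT

val s3 = new AmazonS3Client()                                         // default credential chain
s3.putObject("my-bucket", "spark_test/part-00000", snapshot.toFile)   // digest is computed over the stable copy
Files.delete(snapshot)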
Thanks
Best Regards
On Fri, Jul 25, 2014 at 5:34 PM, lmk
wrote:
> Can someone look into this and help me resolve this error, please?
Can someone look into this and help me resolve this error, please?