requirement for our data and wouldn’t solve
the problem for our data used outside of Redshift.
Hope that helps someone else out if you hit the same issue.
-Abbi
From: Gourav Sengupta
Date: Monday, September 11, 2017 at 6:32 AM
To: "Mcclintic, Abbi"
Cc: user
Subject: Re: CSV write to
Hi,
Can you please let me know the following:
1. Why are you using JAVA?
2. The way you are creating the SPARK cluster
3. The way you are initiating SPARK session or context
4. Are you able to query the data that is written to S3 using a SPARK
dataframe and validate that the number of rows in the
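For question 4, one way to do that read-back check would be a small PySpark job along these lines (the app name and S3 path are placeholders, not details from this thread):

from pyspark.sql import SparkSession

# Minimal sketch only: app name and output path are illustrative placeholders.
spark = SparkSession.builder.appName("validate-csv-output").getOrCreate()

# Read the CSV files that were written to S3 back into a DataFrame.
written = spark.read.csv("s3://my-bucket/output/", header=True)

# Check how many rows actually landed in S3, to compare against the source.
print("rows written:", written.count())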
On 7 Sep 2017, at 18:36, Mcclintic, Abbi <ab...@amazon.com> wrote:
Thanks all – couple notes below.
Generally all our partitions are of equal size (i.e., on a normal day in this
particular case I see 10 equally sized partitions of 2.8 GB). We see the
problem with repartitioning and without – in this example we are repartitioning
to 10, but we also see the problem without repartitioning.
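For illustration, the repartition-then-write step described above might look roughly like this in PySpark (the source, output path, and session setup are placeholders, not the actual job):

from pyspark.sql import SparkSession

# Minimal sketch: input path, output path, and partition count are placeholders.
spark = SparkSession.builder.appName("csv-to-s3").getOrCreate()
df = spark.read.parquet("s3://my-bucket/input/")

# Repartition into 10 roughly equal partitions, then write one CSV part file per partition.
df.repartition(10).write.mode("overwrite").csv("s3://my-bucket/output/", header=True)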
Sounds like an S3 bug. Can you replicate locally with HDFS?
Try using S3a protocol too; there is a jar you can leverage like so:
spark-submit --packages
com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3
my_spark_program.py
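A rough sketch of what my_spark_program.py could do with the s3a:// scheme once those packages are available (bucket paths are placeholders; credentials are assumed to come from the instance role or environment):

from pyspark.sql import SparkSession

# Minimal sketch: assumes the --packages jars above are on the classpath.
spark = SparkSession.builder.appName("s3a-csv-write").getOrCreate()

# With hadoop-aws available, the s3a:// scheme can be used directly for reads and writes.
df = spark.read.csv("s3a://my-bucket/input/", header=True)
df.write.mode("overwrite").csv("s3a://my-bucket/output/", header=True)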
EMR can sometimes be buggy. :/
You could also try le
Are you assuming that all partitions are of equal size? Did you try with more
partitions (like repartitioning)? Does the error always happen with the last
(or smaller) file? If you are sending to Redshift, why not use the JDBC driver?
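For reference, a JDBC write from a DataFrame to Redshift might look roughly like this (the cluster endpoint, table, credentials, and driver class are placeholders, and the Redshift JDBC driver jar is assumed to be on Spark's classpath):

from pyspark.sql import SparkSession

# Minimal sketch: connection details below are illustrative placeholders.
spark = SparkSession.builder.appName("redshift-jdbc-write").getOrCreate()
df = spark.read.csv("s3://my-bucket/input/", header=True)  # placeholder source

(df.write
    .format("jdbc")
    .option("url", "jdbc:redshift://example-cluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev")
    .option("driver", "com.amazon.redshift.jdbc42.Driver")  # class name depends on driver version
    .option("dbtable", "public.my_table")
    .option("user", "my_user")
    .option("password", "my_password")
    .mode("append")
    .save())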
-Original Message-
From: abbim [mailto:ab...@amazon.com]