Hi,
Seems like a known issue; see https://issues.apache.org/jira/browse/SPARK-4105
// maropu
On Sat, Sep 10, 2016 at 11:08 PM, 齐忠 wrote:
> Hi all
>
> When using the default compression codec (Snappy), I get an error while Spark is doing a shuffle:
>
> 16/09/09 08:33:15 ERROR executor.Executor: Managed memory leak detected
My suggestion is to change the Spark setting that controls the
compression codec used for internal data transfers: set
spark.io.compression.codec to lzf in your SparkConf.
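For example, a minimal sketch (the app name is a placeholder; the rest
of your conf stays the same):

    import org.apache.spark.{SparkConf, SparkContext}

    // Use LZF instead of the default Snappy codec for Spark-internal
    // data (shuffle outputs, spills, broadcasts). This does not change
    // how your input files are compressed.
    val conf = new SparkConf()
      .setAppName("my-app") // placeholder
      .set("spark.io.compression.codec", "lzf")
    val sc = new SparkContext(conf)

The same setting can also go in spark-defaults.conf, or be passed as
--conf spark.io.compression.codec=lzf on the spark-submit command line.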
On Mon, Jun 1, 2015 at 8:46 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
Hello Josh,
Are you suggesting storing the source data with LZF compression and
using the same Spark code as-is?
Currently it's stored in SequenceFile format and compressed with GZIP.
First line of the data:
(SEQorg.apache.hadoop.io.Textorg.apache.hadoop.io.Text'org.apache.hadoop.io.compress.GzipCodec
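For reference, we read it roughly like this (the path and app name
below are placeholders; the GZIP decompression itself is handled
transparently by the Hadoop input format):

    import org.apache.hadoop.io.Text
    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("read-seq")) // placeholder

    // Hadoop reuses the Text objects it hands back, so copy them to
    // Strings before caching or shuffling.
    val data = sc.sequenceFile("/path/to/input", classOf[Text], classOf[Text])
      .map { case (k, v) => (k.toString, v.toString) }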
If you can't run a patched Spark version, then you could also consider
using LZF compression instead, since that codec isn't affected by this bug.
On Mon, Jun 1, 2015 at 3:32 PM, Andrew Or wrote:
Hi Deepak,
This is a notorious bug that is being tracked at
https://issues.apache.org/jira/browse/SPARK-4105. We have fixed one source
of this bug (it turns out Snappy had a bug in buffer reuse that caused data
corruption). There are other known sources that are being addressed in
outstanding patches.