Hi,
Maybe sc.hadoopConfiguration.setInt("dfs.blocksize", blockSize) helps you.
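
For example, a rough, untested sketch (the 128MB target and the input path are
just placeholders; fs.local.block.size is the local-file setting mentioned
below):

    // Aim for ~128MB splits; each key only affects its own file system
    // (dfs.blocksize for HDFS, fs.local.block.size for the local FS).
    val targetSize = 128L * 1024 * 1024
    sc.hadoopConfiguration.setLong("dfs.blocksize", targetSize)
    sc.hadoopConfiguration.setLong("fs.local.block.size", targetSize)
    val rdd = sc.textFile("/path/to/input")  // placeholder path
    println(rdd.partitions.length)           // check the resulting partition count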

Best Regards,
Pavel

On Tue, Jan 26, 2016 at 7:13 AM Jia Zou <jacqueline...@gmail.com> wrote:

> Dear all,
>
> First to update that the local file system data partition size can be
> tuned by:
> sc.hadoopConfiguration().setLong("fs.local.block.size", blocksize)
>
> However, I also need to tune the Spark data partition size for input data
> that is stored in Tachyon (the default is 512MB), but the above method
> doesn't work for Tachyon data.
>
> Do you have any suggestions? Thanks very much!
>
> Best Regards,
> Jia
>
>
> ---------- Forwarded message ----------
> From: Jia Zou <jacqueline...@gmail.com>
> Date: Thu, Jan 21, 2016 at 10:05 PM
> Subject: Spark partition size tuning
> To: "user @spark" <user@spark.apache.org>
>
>
> Dear all!
>
> When using Spark to read from the local file system, the default partition
> size is 32MB. How can I increase the partition size to 128MB to reduce the
> number of tasks?
>
> Thank you very much!
>
> Best Regards,
> Jia
>
>
