I also tried
jsc.sparkContext().sc().hadoopConfiguration().set("dfs.replication", "2")
But it's still not working.
Any ideas why?
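For reference, this is roughly how that call sits in the setup; the app name,
batch interval, and checkpoint path below are simplified placeholders rather
than the real job:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class CheckpointReplicationTest {
  public static void main(String[] args) throws Exception {
    // Simplified setup; app name, batch interval and checkpoint path are placeholders.
    SparkConf sparkConfig = new SparkConf().setAppName("checkpoint-replication-test");
    JavaStreamingContext jsc =
        new JavaStreamingContext(sparkConfig, Durations.seconds(10));

    // The override tried above: set dfs.replication on the Hadoop Configuration
    // of the underlying SparkContext before the checkpoint directory is set.
    jsc.sparkContext().sc().hadoopConfiguration().set("dfs.replication", "2");

    // Checkpoint directory on HDFS (placeholder path).
    jsc.checkpoint("hdfs:///user/abhi/checkpoints");

    jsc.start();
    jsc.awaitTermination();
  }
}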
Abhi
On Tue, May 31, 2016 at 4:03 PM, Abhishek Anand wrote:
My Spark Streaming checkpoint directory is being written to HDFS with the
default replication factor of 3.
In my streaming application, where I am consuming from Kafka and setting
dfs.replication = 2 as shown below, the files are still being written with a
replication factor of 3.
SparkConf sparkConfig = new SparkConf()
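The rest of that SparkConf line is cut off; as a rough sketch of the idea, the
replication factor can be passed through SparkConf with the spark.hadoop.*
prefix, which Spark copies into the Hadoop Configuration it builds for the job.
The app name here is a placeholder:

import org.apache.spark.SparkConf;

public class ReplicationConfSketch {
  // Rough reconstruction of the truncated snippet: pass dfs.replication through
  // SparkConf using the spark.hadoop.* prefix, which Spark copies into the
  // Hadoop Configuration it creates for the job. App name is a placeholder.
  public static SparkConf buildConf() {
    return new SparkConf()
        .setAppName("kafka-streaming-app")
        .set("spark.hadoop.dfs.replication", "2");
  }
}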