I have an HDFS high-availability cluster with two NameNodes: one active and one standby. When I want to write data to HDFS, I use the active NameNode's address. My question is: what happens if the active NameNode fails while Spark is writing data? Is there any way to configure both the active and standby NameNodes in Spark for writing data?
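For reference, HDFS HA clients normally address the cluster through a logical nameservice rather than a physical NameNode host, so the client library fails over automatically when the active NameNode goes down. A minimal sketch of the client-side configuration, assuming a nameservice named `mycluster` with NameNode IDs `nn1`/`nn2` and illustrative hostnames (all of these names are placeholders, not values from this thread):

```xml
<!-- hdfs-site.xml (client side): map a logical nameservice onto both NameNodes -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<!-- Tells the HDFS client how to pick and fail over to the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place, Spark writes to the logical URI (e.g. `df.write.parquet("hdfs://mycluster/path")`) instead of a specific host, and a failover during the write is handled by the HDFS client rather than the application.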
- Writing data in HDFS high available cluster Soheil Pourbafrani
- Re: Writing data in HDFS high available cluster Subhash Sriram