Is there a way to remove Hadoop from Spark?
Considering the case where I don't need HDFS, is there a way to remove Hadoop completely from Spark? Is YARN the only Hadoop dependency in Spark? Is there really no Java or Scala (JVM-language) YARN-like library that can be embedded in a project, instead of having to call out to external servers? Is the YARN library difficult to customize? I have posted these as separate questions. What I am aiming for is something like the sketch below.
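A minimal sketch of what I mean, assuming Spark's own local or standalone cluster manager instead of YARN (the class name, app name, and master URL are just placeholders I made up):

```java
import org.apache.spark.sql.SparkSession;

public class NoYarnApp {
    public static void main(String[] args) {
        // Run Spark without YARN: "local[*]" keeps everything in one JVM,
        // while "spark://some-host:7077" would point at Spark's own
        // standalone master instead of a YARN ResourceManager.
        SparkSession spark = SparkSession.builder()
                .appName("no-yarn-app")
                .master("local[*]")
                .getOrCreate();

        // Trivial job just to show the session works without YARN.
        long n = spark.range(0, 1000).count();
        System.out.println("rows = " + n);

        spark.stop();
    }
}
```

My understanding is that the local and standalone modes never talk to YARN at all; what I am unsure about is whether the Hadoop client jars on the classpath can also be dropped, or whether Spark still needs them for its filesystem and I/O layer even without HDFS.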
How can I replace HDFS with a custom distributed filesystem?
Hi, I have my own distributed filesystem written in Java, and I would like to implement a class so that Spark can store and read data on it. How can I do this? Is there an example showing how? A rough skeleton of what I have in mind is below.
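A rough skeleton of what I think is needed, assuming the usual Hadoop FileSystem extension point that Spark reads and writes through (the class name, the "mydfs" scheme, and the idea of delegating to my own client are placeholders for my filesystem):

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

// Skeleton of a Hadoop FileSystem backed by my own distributed FS.
// Each TODO is where I would delegate to my filesystem's client.
public class MyDfsFileSystem extends FileSystem {

    private URI uri;
    private Path workingDir = new Path("/");

    @Override
    public void initialize(URI name, Configuration conf) throws IOException {
        super.initialize(name, conf);
        this.uri = name;
        // TODO: connect my filesystem client here using values from conf
    }

    @Override
    public URI getUri() {
        return uri;
    }

    @Override
    public FSDataInputStream open(Path f, int bufferSize) throws IOException {
        // TODO: wrap my filesystem's read stream in a seekable input stream
        throw new UnsupportedOperationException("TODO: delegate to my fs");
    }

    @Override
    public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite,
            int bufferSize, short replication, long blockSize, Progressable progress)
            throws IOException {
        throw new UnsupportedOperationException("TODO: delegate to my fs");
    }

    @Override
    public FSDataOutputStream append(Path f, int bufferSize, Progressable progress)
            throws IOException {
        throw new UnsupportedOperationException("append not supported");
    }

    @Override
    public boolean rename(Path src, Path dst) throws IOException {
        throw new UnsupportedOperationException("TODO: delegate to my fs");
    }

    @Override
    public boolean delete(Path f, boolean recursive) throws IOException {
        throw new UnsupportedOperationException("TODO: delegate to my fs");
    }

    @Override
    public FileStatus[] listStatus(Path f) throws IOException {
        throw new UnsupportedOperationException("TODO: delegate to my fs");
    }

    @Override
    public void setWorkingDirectory(Path newDir) {
        workingDir = newDir;
    }

    @Override
    public Path getWorkingDirectory() {
        return workingDir;
    }

    @Override
    public boolean mkdirs(Path f, FsPermission permission) throws IOException {
        throw new UnsupportedOperationException("TODO: delegate to my fs");
    }

    @Override
    public FileStatus getFileStatus(Path f) throws IOException {
        throw new UnsupportedOperationException("TODO: delegate to my fs");
    }
}
```

If this is the right approach, I would then register the class through the Hadoop configuration that Spark passes along, e.g. set spark.hadoop.fs.mydfs.impl to com.example.MyDfsFileSystem so that a path like mydfs://host/path resolves to my implementation. Is that correct, or is there a better extension point for a custom storage backend?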