Hello,
I wanted to set up HDFS as a kind of public file system where, aside
from a few core machines running the masters, you would have some
number of datanodes/computers located across the internet.

How do I set up the master servers, and then 3-65+ slave servers,
where each server can join or leave at any time?
How would I control how slave servers are added? Assuming they would
give me their IP and available disk size, what would I need to
provide them with in return?
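For context, my working assumption (untested at this scale, so please correct me) is that admission would be controlled with a dfs.hosts include file on the namenode, roughly like:

```xml
<!-- hdfs-site.xml on the namenode: only datanodes listed in the
     include file are allowed to register with the cluster -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
```

and then each new node's hostname/IP gets appended to /etc/hadoop/conf/dfs.include followed by `hdfs dfsadmin -refreshNodes`. Is that the right mechanism when nodes come and go this often?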
Should the SSH account that is used be created in some special way?
No shell access, or some other restrictions (a forced command?)
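On the SSH question, what I had in mind was a locked-down key with a forced command in authorized_keys, something like this sketch (hadoop-control.sh here is a hypothetical wrapper script of my own, not anything shipped with Hadoop):

```shell
# ~/.ssh/authorized_keys on each worker: this key can only run the
# forced command; no PTY, no port/agent/X11 forwarding
command="/usr/local/bin/hadoop-control.sh",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... hadoop@master
```

Is that the right direction, or is passwordless SSH even needed beyond the cluster start/stop scripts?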
Are there any specific differences that should be accounted for in
this "public" version of a Hadoop cluster?


Let me know.

Thanks,
Lucas
