On Jun 12, 2010 at 9:05 AM, Vikas Ashok Patil wrote:

Hello Allen,

Thanks for the reply.

You are right about trying to run two distributed filesystems. The reason
being, there are certain restrictions (in our cluster environment) on
including the local file system in lustre. At least the configs don't seem
to allow it.

Thanks,
Vikas A Patil

On Sat, Jun 12, 2010 at 12:32 AM, Allen Wittenauer wrote:
> On Jun 10, 2010, at 8:27 PM, Vikas Ashok Patil wrote:
>
> > Thanks for the replies.
> >
> > If I have fs.default.name = file://my_lustre_mount_point , then only
> > the lustre filesystem will be used.

On Jun 10, 2010, at 8:27 PM, Vikas Ashok Patil wrote:

Thanks for the replies.
If I have fs.default.name = file://my_lustre_mount_point , then only the
lustre filesystem will be used. I would like to have something like
fs.default.name=file://my_lustre_mount_point , hdfs://localhost:9123
so that both the local filesystem and lustre are in use.
Kindly comment.
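
For concreteness, a minimal core-site.xml sketch of the two variants being
discussed (the mount point and port are the placeholders used above; Hadoop
accepts only a single URI for fs.default.name, so a comma-separated list of
filesystems is not valid configuration):

    <!-- Variant 1: the Lustre mount is the default filesystem, addressed
         through the local file:// scheme -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>file:///my_lustre_mount_point</value>
      </property>
    </configuration>

    <!-- Variant 2: HDFS is the default filesystem; Lustre paths can still be
         reached from jobs with fully-qualified file:// URIs -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9123</value>
      </property>
    </configuration>
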
Hello All,
I would like to try out a hadoop configuration involving both lustre and
hdfs. Hence I would like to know any thoughts/criticisms on the idea.
In my cluster I have the lustre parallel file system, which mainly exposes
storage over a network. Also there is some local space on each node that
could be used for hdfs.
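
Even with a single fs.default.name, a client or job can talk to both stores
by opening each filesystem explicitly with a fully-qualified URI. A rough,
untested sketch of that approach (the class name, mount point, and namenode
address are placeholders, not something from this thread):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TwoFilesystems {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Lustre is a POSIX mount, so it is reachable through the ordinary
            // file:// scheme, assuming every node mounts it at the same path.
            FileSystem lustre =
                FileSystem.get(URI.create("file:///my_lustre_mount_point"), conf);

            // HDFS is reachable by its own URI regardless of fs.default.name.
            FileSystem hdfs =
                FileSystem.get(URI.create("hdfs://localhost:9123"), conf);

            // List the top level of each, just to show both handles coexist.
            for (FileStatus s : lustre.listStatus(new Path("/my_lustre_mount_point"))) {
                System.out.println("lustre: " + s.getPath());
            }
            for (FileStatus s : hdfs.listStatus(new Path("/"))) {
                System.out.println("hdfs:   " + s.getPath());
            }
        }
    }

In the same spirit, MapReduce input and output paths can point at different
filesystems as long as they are fully qualified, though tasks reading the
file:// side get no block-locality information from the scheduler.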