I think you can check core-site.xml or hdfs-site.xml under
/root/ephemeral-hdfs/etc/hadoop/. Look for the datanode data directory
property (dfs.data.dir on Hadoop 1.x, dfs.datanode.data.dir on 2.x); its
value is a comma-separated list of the volumes the datanode stores blocks on.
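
As a quick way to check, something like the loop below should print that
property from every worker. This is only a rough sketch: it assumes the usual
spark-ec2 layout, where the worker hostnames are listed in
/root/spark-ec2/slaves and the master has passwordless SSH to them; adjust
the paths and the property name for your Hadoop version.

    # Print the datanode data-dir setting from each worker's hdfs-site.xml.
    # Paths assume the spark-ec2 ephemeral-hdfs layout; adjust as needed.
    for host in $(cat /root/spark-ec2/slaves); do
      echo "== $host =="
      ssh -o StrictHostKeyChecking=no "$host" \
        "grep -A 1 -E 'dfs.data.dir|dfs.datanode.data.dir' /root/ephemeral-hdfs/etc/hadoop/hdfs-site.xml"
    done

You can also ask HDFS itself what capacity each datanode reports with
/root/ephemeral-hdfs/bin/hadoop dfsadmin -report (or hdfs dfsadmin -report on
Hadoop 2); if the extra volumes are in use, the configured capacity per node
should reflect them.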

Thanks
Best Regards

On Thu, Oct 30, 2014 at 5:21 AM, Daniel Mahler <dmah...@gmail.com> wrote:

> I started my ec2 spark cluster with
>
>     ./ec2/spark-ec2 --ebs-vol-size=100 --ebs-vol-num=8 --ebs-vol-type=gp2 \
>       -t m3.xlarge -s 10 launch mycluster
>
> I see the additional volumes attached, but they do not seem to be set up
> for HDFS.
> How can I check whether they are being used on all workers,
> and how can I get all workers to use the extra volumes for HDFS?
> I do not have experience using Hadoop directly, only through Spark.
>
> thanks
> Daniel
>
