Re: EMR 4.3.0 spark 1.6 shell problem

2016-03-02 Thread Daniel Siegmann
In the past I have seen this happen when I filled up HDFS and some core nodes became unhealthy. There was no longer anywhere to replicate the data. From your command it looks like you should have 1 master and 2 core nodes in your cluster. Can you verify that both core nodes are healthy? On Wed, Ma
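
One way to check this from the master node (a rough sketch, not from the original thread; assumes SSH access to the master):

    # Report HDFS capacity, remaining space, and per-datanode status
    hdfs dfsadmin -report

    # List all YARN NodeManagers, including any marked UNHEALTHY
    yarn node -list -all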

Re: EMR 4.3.0 spark 1.6 shell problem

2016-03-02 Thread Oleg Ruchovets
Here is my command: aws emr create-cluster --release-label emr-4.3.0 --name "ClusterJava8" --use-default-roles --applications Name=Ganglia Name=Hive Name=Hue Name=Mahout Name=Pig Name=Spark --ec2-attributes KeyName=CC-ES-Demo --instance-count 3 --instance-type m3.xlarge --use-default-role
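
To see what actually came up from that command, a minimal sketch with the AWS CLI might be (the cluster id j-XXXXXXXXXXXXX is a placeholder, not from the thread):

    # Overall cluster state and the reason for any termination
    aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX

    # Status of the nodes in the core instance group
    aws emr list-instances --cluster-id j-XXXXXXXXXXXXX --instance-group-types CORE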

Re: EMR 4.3.0 spark 1.6 shell problem

2016-03-01 Thread Gourav Sengupta
Hi, which region are you running the EMR clusters in? Are you tweaking any Hadoop parameters before starting the clusters? If you are using the AWS CLI to start the cluster, just send across the command. I have never, to date, faced any such issues in the Ireland region.
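
If it helps, the region can also be pinned explicitly when launching; a sketch, with eu-west-1 (Ireland) used purely as an example value:

    aws emr create-cluster --region eu-west-1 --release-label emr-4.3.0 ...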

Re: EMR 4.3.0 spark 1.6 shell problem

2016-03-01 Thread Alexander Pivovarov
EMR-4.3.0 and Spark-1.6.0 work fine for me. I use r3.2xlarge boxes (spot); even 3 slave boxes work fine. I use the following settings (in JSON): [ { "Classification": "spark-defaults", "Properties": { "spark.driver.extraJavaOptions": "-Dfile.encoding=UTF-8", "spark.executor
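
The JSON above is cut off; a minimal sketch of a complete spark-defaults classification in that format could look like the following (the spark.executor.extraJavaOptions value is an assumption mirroring the driver setting, not necessarily the original file), passed at launch via --configurations file://spark-config.json:

    [
      {
        "Classification": "spark-defaults",
        "Properties": {
          "spark.driver.extraJavaOptions": "-Dfile.encoding=UTF-8",
          "spark.executor.extraJavaOptions": "-Dfile.encoding=UTF-8"
        }
      }
    ]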

Re: EMR 4.3.0 spark 1.6 shell problem

2016-03-01 Thread Daniel Siegmann
How many core nodes does your cluster have? On Tue, Mar 1, 2016 at 4:15 AM, Oleg Ruchovets wrote: > Hi, I installed EMR 4.3.0 with Spark. I tried to enter the spark shell but > it looks like it doesn't work and throws exceptions. > Please advise: > > [hadoop@ip-172-31-39-37 conf]$ cd /usr/bin/ > [had

EMR 4.3.0 spark 1.6 shell problem

2016-03-01 Thread Oleg Ruchovets
Hi, I installed EMR 4.3.0 with Spark. I tried to enter the spark shell but it looks like it doesn't work and throws exceptions. Please advise: [hadoop@ip-172-31-39-37 conf]$ cd /usr/bin/ [hadoop@ip-172-31-39-37 bin]$ ./spark-shell OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; supp
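
The MaxPermSize warning by itself is harmless on Java 8 (the option was removed in JDK 8), so the real failure is in the truncated exception. A sketch of how one might dig it out on the master node (the application id is a placeholder):

    # Find the YARN application that spark-shell tried to launch
    yarn application -list -appStates ALL

    # Pull its aggregated logs to see the underlying exception
    yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX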