Well, I was able to run the SparkPi example, which does similar work,
successfully.
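
For reference, I launched it roughly like this (a sketch; the examples jar
path depends on the install):

./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  lib/spark-examples*.jar 10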


On Tue, Aug 5, 2014 at 11:52 AM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> For that UI to show any values, your job must actually perform some work,
> which is not happening here ( 14/08/05 18:03:13 WARN
> YarnClusterScheduler: Initial job has not accepted any resources; check
> your cluster UI to ensure that workers are registered and have sufficient
> memory )
>
> Can you open up a spark-shell and try some simple code? ( *val x =
> sc.parallelize(1 to 1000000).filter(_ < 100).collect()* )
>
> Just to make sure your cluster setup is correct and working.
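>
> If it helps, a minimal shell session along these lines might look like this
> (a sketch; depending on your Spark version you may need MASTER=yarn-client
> instead of the --master flag):
>
> $ ./bin/spark-shell --master yarn-client
> scala> val x = sc.parallelize(1 to 1000000).filter(_ < 100).collect()
> scala> x.length  // 99 if tasks actually ran on the executors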
>
> Thanks
> Best Regards
>
>
> On Wed, Aug 6, 2014 at 12:17 AM, Sunny Khatri <sunny.k...@gmail.com>
> wrote:
>
>> The only UI I have currently is the Application Master (cluster mode),
>> with the following status for the executor nodes:
>> Executors (3)
>>
>>    - *Memory:* 0.0 B Used (3.7 GB Total)
>>    - *Disk:* 0.0 B Used
>>
>> Executor ID | Address | RDD Blocks | Memory Used       | Disk Used | Active Tasks | Failed Tasks | Complete Tasks | Total Tasks | Task Time | Shuffle Read | Shuffle Write
>> 1           | <add1>  | 0          | 0.0 B / 1766.4 MB | 0.0 B     | 0            | 0            | 0              | 0           | 0 ms      | 0.0 B        | 0.0 B
>> 2           | <add2>  | 0          | 0.0 B / 1766.4 MB | 0.0 B     | 0            | 0            | 0              | 0           | 0 ms      | 0.0 B        | 0.0 B
>> <driver>    | <add3>  | 0          | 0.0 B / 294.6 MB  | 0.0 B     | 0            | 0            | 0              | 0           | 0 ms      | 0.0 B        | 0.0 B
>>
>>
>> On Tue, Aug 5, 2014 at 11:32 AM, Akhil Das <ak...@sigmoidanalytics.com>
>> wrote:
>>
>>> Are you able to see the job on the web UI (port 8080)? If yes, how much
>>> memory do you see there for this job specifically?
>>>
>>> [image: Inline image 1]
>>>
>>> Here you can see I have 11.8 GB RAM on both workers and my app is using
>>> 11 GB.
>>>
>>> 1. How much memory are you seeing in your case?
>>> 2. Make sure your application is using the same Spark master URI (as
>>> shown in the top left of the web UI) when creating the SparkContext.
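>>>
>>> For example, in standalone mode the master URL passed to the
>>> SparkContext must match the one at the top of the web UI. A minimal
>>> sketch (the host, port, and app name here are placeholders):
>>>
>>> import org.apache.spark.{SparkConf, SparkContext}
>>>
>>> val conf = new SparkConf()
>>>   .setAppName("MyApp")                   // placeholder app name
>>>   .setMaster("spark://master-host:7077") // must match the URI in the web UI
>>> val sc = new SparkContext(conf)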
>>>
>>>
>>>
>>> Thanks
>>> Best Regards
>>>
>>>
>>> On Tue, Aug 5, 2014 at 11:38 PM, Sunny Khatri <sunny.k...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm trying to run a Spark application with executor-memory set to 3G,
>>>> but I'm running into the following error:
>>>>
>>>> 14/08/05 18:02:58 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[5] at map at KMeans.scala:123), which has no missing parents
>>>> 14/08/05 18:02:58 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[5] at map at KMeans.scala:123)
>>>> 14/08/05 18:02:58 INFO YarnClusterScheduler: Adding task set 0.0 with 1 tasks
>>>> 14/08/05 18:02:59 INFO CoarseGrainedSchedulerBackend: Registered executor: Actor[akka.tcp://sparkexecu...@test-hadoop2.vpc.natero.com:54358/user/Executor#1670455157] with ID 2
>>>> 14/08/05 18:02:59 INFO BlockManagerInfo: Registering block manager test-hadoop2.vpc.natero.com:39156 with 1766.4 MB RAM
>>>> 14/08/05 18:03:13 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
>>>> 14/08/05 18:03:28 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
>>>> 14/08/05 18:03:43 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
>>>> 14/08/05 18:03:58 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
>>>>
>>>>
>>>> I tried tweaking executor-memory as well, but got the same result: it
>>>> always gets stuck after registering the block manager.
>>>>
>>>>
>>>> Are there any other settings that need to be adjusted?
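>>>>
>>>> In case the resource flags matter here, this is roughly how I'm
>>>> submitting the job (a sketch; the class and jar names are placeholders):
>>>>
>>>> # placeholder class/jar names; flags as in the Spark-on-YARN docs
>>>> ./bin/spark-submit --class com.example.KMeansApp \
>>>>   --master yarn-cluster \
>>>>   --num-executors 2 \
>>>>   --executor-memory 3g \
>>>>   --executor-cores 1 \
>>>>   my-app.jar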
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Sunny
>>>>
>>>>
>>>>
>>>
>>
>
