This sounds like a network issue, for example connecting to the remote server.
Try:
ping 172.21.242.26
telnet 172.21.242.26 596590
or nc -vz 172.21.242.26 596590
For example:
nc -vz rhes76 1521
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 50.140.197.230:1521.
Ncat: 0 bytes sent, 0 bytes received
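For completeness, here is a minimal sketch of the same reachability check done from Java: open a TCP socket to the host and port with a timeout, analogous to `nc -vz host port`. The host and port below are placeholders taken from the thread, not values known to be correct (note that a valid TCP port must be in the 0-65535 range).
```
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        String host = "172.21.242.26";  // placeholder host from the thread
        int port = 1521;                // placeholder port; valid TCP ports are 0-65535
        try (Socket socket = new Socket()) {
            // Equivalent of `nc -vz host port`: connect, then close without sending data.
            socket.connect(new InetSocketAddress(host, port), 5000);  // 5-second timeout
            System.out.println("Connected to " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Cannot reach " + host + ":" + port + " - " + e.getMessage());
        }
    }
}
```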
I want to use a YARN cluster with my current code. If I use
conf.set("spark.master", "local[*]") in place of
conf.set("spark.master", "yarn"), everything works fine, but when I try to use
yarn in setMaster, my code gives the error below.
```
package com.example.pocsparkspring;
import org.apache.hadoop.conf.Configuration;
```
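For reference, a minimal sketch of what setting the master to yarn from application code typically looks like. This assumes Spark 2.x with the spark-yarn module on the classpath and HADOOP_CONF_DIR (or YARN_CONF_DIR) pointing at the cluster configuration, which is usually the missing piece when "yarn" fails where local[*] works; class and application names here are illustrative, not from the original post.
```
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class YarnClientExample {
    public static void main(String[] args) {
        // HADOOP_CONF_DIR / YARN_CONF_DIR must point at the cluster's config
        // files so that "yarn" can resolve the ResourceManager address.
        SparkConf conf = new SparkConf()
                .setAppName("poc-spark-yarn")
                .setMaster("yarn")                         // instead of local[*]
                .set("spark.submit.deployMode", "client")  // driver stays in this JVM
                .set("spark.executor.memory", "2g")        // illustrative sizing
                .set("spark.executor.instances", "2");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();

        // Trivial job to confirm that executors were actually granted by YARN.
        long count = spark.range(0, 1000).count();
        System.out.println("count = " + count);

        spark.stop();
    }
}
```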
Subject: Re: problem running spark with yarn-client
Hi,
> Trying to run spark with yarn-client not using spark-submit here
What are you using to submit the job: spark-shell, spark-sql, or something else?
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Hello guys,
I am trying to run Spark with yarn-client, not using spark-submit, but the
jobs keep failing while the AM is launching executors.
The error collected by YARN is shown below.
It looks like some environment setting is missing.
Could someone help me out with this?
Thanks in advance!
HY Chung
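One way to launch on YARN without invoking spark-submit directly, as described above, is Spark's launcher API (org.apache.spark.launcher.SparkLauncher), which drives the same submission path programmatically. A rough sketch, assuming SPARK_HOME and HADOOP_CONF_DIR are set in the environment; the jar path, main class, and settings below are placeholders.
```
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class LaunchOnYarn {
    public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
                .setAppResource("/path/to/your-app.jar")   // placeholder jar
                .setMainClass("com.example.YourSparkJob")  // placeholder main class
                .setMaster("yarn")
                .setDeployMode("client")
                .setConf(SparkLauncher.EXECUTOR_MEMORY, "2g")
                .setAppName("yarn-client-example")
                .startApplication();

        // Poll until the application reaches a terminal state.
        while (!handle.getState().isFinal()) {
            Thread.sleep(1000);
        }
        System.out.println("Final state: " + handle.getState());
    }
}
```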
Check the doc - http://spark.apache.org/docs/latest/running-on-yarn.html
You can also start an EMR-4.2.0 or 4.3.0 cluster with the Spark application and
see how it's configured there.
On Fri, Mar 11, 2016 at 7:50 PM, Divya Gehlot
wrote:
> Hi,
> I am trying to understand the behaviour/configuration of Spark with the YARN
> client on a Hadoop cluster.
Hi,
I am trying to understand the behaviour/configuration of Spark with the YARN
client on a Hadoop cluster.
Can somebody help me, or point me to documents/blogs/books that give a deeper
understanding of the above two?
Thanks,
Divya
> and that Hadoop has to be set up for running Spark with YARN. My questions -
>
> 1. Do we have to set up a Hadoop cluster on EC2 and then build Spark on it?
> 2. Is there a way to modify the existing Spark cluster to work with YARN?
>
> Thanks in advance.
>
> Harika
roni wrote:
> Hi Harika,
> Did you get any solution for this?
> I want to use yarn, but the spark-ec2 script does not support it.
> Thanks
> -Roni
Hi Harika,
Did you get any solution for this?
I want to use yarn, but the spark-ec2 script does not support it.
Thanks
-Roni
Hi,
I want to set up a Spark cluster with a YARN dependency on Amazon EC2. I was
reading this <https://spark.apache.org/docs/1.2.0/running-on-yarn.html>
document and I understand that Hadoop has to be set up for running Spark with
YARN. My questions -
1. Do we have to set up a Hadoop cluster on EC2 and then build Spark on it?
2. Is there a way to modify the existing Spark cluster to work with YARN?
Thanks in advance.
Harika
If you launched the job in yarn-cluster mode, the tracking URL is
printed on the output of the launched process. That will lead you to
the Spark UI once the job is running.
If you're using CM, you can reach the same link by clicking on the
"Resource Manager UI" link on your Yarn service, then find
Yeah, I got the logs and they're reporting a memory issue.
14/09/25 00:08:26 WARN YarnClusterScheduler: Initial job has not accepted
any resources; check your cluster UI to ensure that workers are registered
and have sufficient memory
Now I shifted to a bigger cluster with more memory but here I'm not abl
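That warning usually means YARN cannot grant the executor containers being requested, so one thing to check is whether the requested memory and cores fit within the scheduler's maximum-allocation limits. A rough sketch of sizing the requests explicitly; the values are illustrative, and YARN also has to fit the per-executor memory overhead on top of spark.executor.memory.
```
import org.apache.spark.SparkConf;

public class ExecutorSizing {
    public static void main(String[] args) {
        // Keep each executor request small enough to fit inside a single
        // YARN container (i.e. below yarn.scheduler.maximum-allocation-mb,
        // including the executor memory overhead).
        SparkConf conf = new SparkConf()
                .setMaster("yarn")
                .setAppName("sizing-example")
                .set("spark.executor.memory", "1g")
                .set("spark.executor.cores", "1")
                .set("spark.executor.instances", "2");

        // Pass this conf to the SparkContext/SparkSession as usual;
        // printed here just to show the effective settings.
        System.out.println(conf.toDebugString());
    }
}
```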
You need to use the yarn command-line application that I mentioned
("yarn logs"). You can't look at the logs through the UI after the app
stops.
On Wed, Sep 24, 2014 at 11:16 AM, Raghuveer Chanda
wrote:
Thanks for the reply. This is the error in the logs obtained from the UI at
http://dml3:8042/node/containerlogs/container_1411578463780_0001_02_01/chanda
So how do I set the Log Server URL now?
Failed while trying to construct the redirect url to the log server. Log
Server url may not be configured
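That redirect failure generally points at the yarn.log.server.url and log-aggregation settings in yarn-site.xml. A small diagnostic sketch, assuming yarn-site.xml is on the classpath (e.g. via HADOOP_CONF_DIR), that simply prints what the client-side configuration resolves for those standard YARN keys; the check itself is illustrative, not part of the original thread.
```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LogServerCheck {
    public static void main(String[] args) {
        // YarnConfiguration loads yarn-site.xml (and yarn-default.xml)
        // from the classpath.
        Configuration conf = new YarnConfiguration();
        System.out.println("yarn.log.server.url = "
                + conf.get("yarn.log.server.url"));
        System.out.println("yarn.log-aggregation-enable = "
                + conf.get("yarn.log-aggregation-enable"));
    }
}
```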
The screenshot executors.8080.png is of the Executors tab itself, and only the
driver is listed, with no workers, even though I set the master to yarn-cluster.
On Wed, Sep 24, 2014 at 11:18 PM, Matt Narrell
wrote:
> This just shows the driver. Click the Executors tab in the Spark UI
>
> mn
>
> On Sep 24,
> your environment to point Spark to where your yarn configs are?
>
> Greg
>
> From: Raghuveer Chanda
> Date: Wednesday, September 24, 2014 12:25 PM
> To: "u...@spark.incubator.apache.org"
> Subject: Spark with YARN
>
> Hi,
>
> I'm new to spark
You'll need to look at the driver output to have a better idea of
what's going on. You can use "yarn logs --applicationId blah" after
your app is finished (e.g. by killing it) to look at it.
My guess is that your cluster doesn't have enough resources available
to service the container request you'
Subject: Spark with YARN
Hi,
I'm new to Spark and facing a problem running a job on the cluster using YARN.
Initially I ran jobs using the Spark master as --master spark://dml2:7077 and
it ran fine on 3 workers.
But now I'm shift
This just shows the driver. Click the Executors tab in the Spark UI
mn
On Sep 24, 2014, at 11:25 AM, Raghuveer Chanda
wrote:
> Hi,
>
> I'm new to Spark and facing a problem running a job on the cluster using YARN.
>
> Initially I ran jobs using the Spark master as --master spark://dml2:7077 and