Hi, I'm running Spark (1.6.1) on YARN (2.5.1) in cluster mode.
It's taking 20+ seconds for the application to move from ACCEPTED to RUNNING state.
Here are the logs:
16/04/21 09:06:56 INFO impl.YarnClientImpl: Submitted application
application_1461229289298_0001
16/04/21 09:06:57 INFO yarn.Client: Application
in yarn-client mode
I have updated all my nodes in the cluster to have 4 GB of RAM, but I still
face the same error when trying to launch spark-shell in yarn-client mode.
Any suggestions?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Running-Spark-on-Yarn-Client-Cluster-mode-tp26691p26739.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
A few more details on the nodes' memory and cores:
ptfhadoop01v - 4GB
ntpcam01v - 1GB
ntpcam03v - 2GB
Each of the VMs has only a 1-core CPU.
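For context, a rough sketch of why nodes this small can keep an application stuck in ACCEPTED: in Spark 1.6 the default executor memory is 1 GB, and YARN must also allocate the off-heap overhead, which defaults to max(384 MB, 10% of executor memory). The numbers below are just the defaults; adjust if you have overridden them.

```python
# Sketch: the memory YARN must find for one default Spark 1.6 executor.
# spark.yarn.executor.memoryOverhead defaults to max(384, 0.10 * executorMemory).

def yarn_container_request_mb(executor_memory_mb):
    """Executor heap plus the default off-heap overhead, in MB."""
    overhead = max(384, int(0.10 * executor_memory_mb))
    return executor_memory_mb + overhead

# Default 1 GB executor -> a 1408 MB container, already more than a 1 GB node has.
print(yarn_container_request_mb(1024))  # 1408
```

So at default settings, the 1 GB and 2 GB VMs may simply be too small to host an executor (or the application master) at all.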
quick thoughts
on this issue.
Regards
Ashesh
each node in the cluster?
How do I start the spark-shell in yarn-client mode?
Thanks in advance.
RM and NM logs are traced below.
RM -->
2016-03-30 14:59:15,498 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher:
Setting up container Container: [ContainerId:
container_1459326455972_0004_01_01, NodeId: myhost:60653,
NodeHttpAddress: myhost:8042, Resource: , Priority:
0, Toke
OK, start an EMR 4.3.0 or 4.2.0 cluster and look at how to configure Spark on
YARN properly.
:~/Downloads/package/spark-1.6.1-bin-hadoop2.6$ bin/spark-shell --master
yarn-client
16/03/30 03:24:43 DEBUG ipc.Client: IPC Client (111576772) connection to
myhost/192.168.1.108:8032 from myhost sending #138
16/03/30 03:24:43 DEBUG ipc.Client: IPC Client (111576772) connection to
myhost/192.168.1
Looks like it's still the same, while the other MR application is working fine.
On Wed, Mar 30, 2016 at 3:15 AM, Alexander Pivovarov
wrote:
For a small cluster, set the following:

yarn-site.xml:
  yarn.scheduler.minimum-allocation-mb = 32

capacity-scheduler.xml:
  yarn.scheduler.capacity.maximum-am-resource-percent = 0.5
  (maximum percent of resources in the cluster which can be used to run
  application masters)
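Spelled out as the actual XML property entries (a sketch; the values are the ones suggested above, and the description is paraphrased):

```xml
<!-- yarn-site.xml -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>32</value>
</property>

<!-- capacity-scheduler.xml -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
  <description>Maximum percent of cluster resources that can be used
  to run application masters.</description>
</property>
```

Raising maximum-am-resource-percent matters on tiny clusters because the default (0.1) can leave too little room for even one application master, which keeps apps stuck in ACCEPTED.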
YARN seems to be running fine; I have successful MR jobs completed on the same
cluster.
*Cluster Metrics*
*Apps Submitted  Apps Pending  Apps Running  Apps Completed  Containers Running
Memory Used  Memory Total  Memory Reserved  VCores Used  VCores Total  VCores
Reserved  Active Nodes  Decommissioned Nodes  Lost Nodes*
Check the ResourceManager and NodeManager logs.
Maybe you'll find something explaining why 1 app is pending.
Do you have any app that ran successfully? *Apps Completed is 0 on the UI*
On Tue, Mar 29, 2016 at 2:13 PM, Vineet Mishra
wrote:
Hi Alex/Surendra,
Hadoop is up and running fine and I am able to run the examples on the same
cluster.
*Cluster Metrics*
*Apps Submitted  Apps Pending  Apps Running  Apps Completed  Containers Running
Memory Used  Memory Total  Memory Reserved  VCores Used  VCores Total  VCores
Reserved  Active Nodes  Decommissioned Nodes*
Check the 8088 UI:
- how many cores and how much memory are available
- how many slaves are active
Run teragen or pi from the Hadoop examples to make sure that YARN works.
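For example (the jar path is an assumption; adjust it to wherever your distribution installs the examples jar):

```shell
# Estimate pi with 2 map tasks, 10 samples each -- a quick YARN smoke test
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10

# Or write 1M rows with teragen
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen 1000000 /tmp/teragen-out
```

If these also hang in ACCEPTED, the problem is in the YARN configuration rather than in Spark.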
On Tue, Mar 29, 2016 at 1:25 PM, Surendra , Manchikanti <
surendra.manchika...@gmail.com> wrote:
Hi Vineeth,
Can you please check resource (RAM, cores) availability in your local
cluster, and adjust accordingly?
Regards,
Surendra M
-- Surendra Manchikanti
On Tue, Mar 29, 2016 at 1:15 PM, Vineet Mishra
wrote:
Hi All,
While starting Spark on YARN on a local cluster (single-node Hadoop 2.6 YARN)
I am facing some issues.
As I try to start the Spark shell, it keeps iterating in an endless loop
while initializing:
*16/03/30 01:32:38 DEBUG ipc.Client: IPC Client (1782965120) connection to
myhost/192.168.1.108:8
Thanks for the reply.
I am now trying to configure yarn.web-proxy.address according to
https://issues.apache.org/jira/browse/SPARK-5837, but cannot start the
standalone web proxy server.
I am using CDH 5.0.1 and below is the error log:
sbin/yarn-daemon.sh: line 44:
/opt/cloudera/parcels/CDH/lib/
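For reference, a sketch of the configuration that SPARK-5837 points at (the host:port value is a placeholder): set yarn.web-proxy.address in yarn-site.xml so the proxy runs outside the ResourceManager.

```xml
<!-- yarn-site.xml: run the web proxy separately from the ResourceManager -->
<property>
  <name>yarn.web-proxy.address</name>
  <value>proxyhost.example.com:9046</value>
</property>
```

The standalone proxy is then started with `sbin/yarn-daemon.sh start proxyserver`.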
On 3 Mar 2016, at 09:17, Shady Xu <shad...@gmail.com> wrote:
Hi all,
I am running Spark in yarn-client mode, but every time I access the web UI,
the browser redirects me to one of the worker nodes and shows nothing. The
URL looks like
http://hadoop-node31.company.com:8088/proxy/application_1453797301246_120264
.
I googled a lot and found some possible bugs
For that you need SPARK-1537 and the patch to go with it.
It is still the Spark web UI; it just hands off storage and retrieval of the
history to the underlying YARN timeline server, rather than going through the
filesystem. You'll get to see things as they go along, too.
If you do want to try it, ple
Hi all,
I wonder if anyone has used the MapReduce Job History server to show Spark jobs.
I can see my Spark jobs (Spark running on a YARN cluster) on the ResourceManager
(RM).
I start the Spark History Server, and then through Spark's web-based user
interface I can monitor the cluster (and track cluster and job
Hello, folks.
We just recently switched to using YARN on our cluster (when upgrading to
Cloudera 5.4.1).
I'm trying to run a Spark job from within a broader application (a web
service running on Jetty), so I can't just start it using spark-submit.
Does anyone know of an instructions page on how t
but I think the Spark code has changed a lot since then.
Could anyone offer some guidance? Thanks.
Hi yuemeng,
Are you possibly running the Capacity Scheduler with the default resource
calculator?
-Sandy
On Sat, Dec 6, 2014 at 7:29 PM, yuemeng1 wrote:
Hi, all
When I run an app with this command: ./bin/spark-sql --master
yarn-client --num-executors 2 --executor-cores 3, I notice that the YARN
ResourceManager UI shows `vcores used` as 3 in the cluster metrics. It
seems `vcores used` shows the wrong number (should it be 7?), or am I
missing something?
Thanks
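One plausible explanation (an assumption, consistent with Sandy's question above about the default resource calculator): with the Capacity Scheduler's DefaultResourceCalculator, scheduling is done on memory only, and each container is accounted as a single vcore regardless of how many cores it requested. A small sketch of the arithmetic:

```python
# Containers for: 1 AM + 2 executors launched with --executor-cores 3.
# (name, requested cores) -- the AM's single core is an assumption.
containers = [("AM", 1), ("executor-1", 3), ("executor-2", 3)]

# What you might expect `vcores used` to show:
requested_vcores = sum(cores for _, cores in containers)  # 1 + 3 + 3 = 7

# What DefaultResourceCalculator accounts: 1 vcore per container.
accounted_vcores = len(containers)  # 3

print(requested_vcores, accounted_vcores)  # 7 3
```

Switching capacity-scheduler.xml to the DominantResourceCalculator makes the scheduler account for CPU as well as memory.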
I'm using the org.apache.spark.deploy.yarn.Client object to run my Spark job. I
guess this is what spark-submit really wraps.
- Amey
On Mon, Nov 3, 2014 at 5:25 PM, Tobias Pfeiffer wrote:
Hi,
On Mon, Nov 3, 2014 at 1:29 PM, Amey Chaugule wrote:
> I thought that only applied when you're trying to run a job using
> spark-submit or in the shell...
>
And how are you starting your Yarn job, if not via spark-submit?
Tobias
iguration that I pull from sc.hadoopConfiguration() is incorrect.
Archit,
We are using yarn-cluster mode, and calling Spark via the Client class
directly from the servlet server. It works fine.
To establish a communication channel for further requests:
it should be possible with the yarn client, but not with the yarn server. In
yarn-client mode, the Spark driver i
including user@spark.apache.org.
On Fri, Aug 29, 2014 at 2:03 PM, Archit Thakur
wrote:
Hi,
My requirement is to run Spark on YARN without using the spark-submit
script.
I have a servlet and a Tomcat server. As and when a request comes, it creates
a new SC and keeps it alive for further requests. I am setting my master in
SparkConf as sparkConf.setMaster("yarn-cluster"),
but the
I currently don't have plans to work on that.
-Sandy
Thanks I see. Do you guys have plan to port this to sbt?
On Wed, Apr 23, 2014 at 10:24 AM, Sandy Ryza wrote:
Right, it only works for Maven
On Tue, Apr 22, 2014 at 6:23 PM, Gordon Wang wrote:
Hi Sandy,
Thanks for your reply!
Does this work for sbt?
I checked the commit; it looks like only the Maven build has such an option.
On Wed, Apr 23, 2014 at 12:38 AM, Sandy Ryza wrote:
Hi Gordon,
We recently handled this in SPARK-1064. As of 1.0.0, you'll be able to
pass -Phadoop-provided to Maven and avoid including Hadoop and its
dependencies in the assembly jar.
-Sandy
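The build invocation for that profile might look like the following (a sketch; the Hadoop version flag is an assumption that depends on your cluster):

```shell
# Build Spark without bundling Hadoop and its dependencies in the assembly
mvn -Phadoop-provided -Dhadoop.version=2.4.0 -DskipTests clean package
```

The resulting assembly then expects the cluster's own Hadoop jars to be on the classpath at runtime.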
On Tue, Apr 22, 2014 at 2:43 AM, Gordon Wang wrote:
On this page, http://spark.apache.org/docs/0.9.0/running-on-yarn.html,
we have to use the Spark assembly to submit Spark apps to a YARN cluster.
I checked the assembly jars of Spark. They contain some YARN classes
which are added at compile time. The YARN classes are not what I want.
My question i