Thank you all, sirs.
Mich, your clarification is appreciated.
On Sunday, 19 June 2016, 19:31, Mich Talebzadeh
wrote:
Thanks Jonathan for your points.
I am aware of the fact that yarn-client and yarn-cluster are both deprecated
(they still work in 1.6.1), hence the new nomenclature.
Bear in mind this is what I stated in my notes:
"In YARN cluster mode, the Spark driver runs inside an application master
process which is managed by YARN on the cluster."
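A minimal launch in that mode might look like this (class and jar names are
made up for illustration):

# --deploy-mode cluster puts the driver inside the YARN application master
spark-submit \
--class com.example.MyApp \
--master yarn \
--deploy-mode cluster \
./lib/myapp.jar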
Mich, what Jacek is saying is not that you implied that YARN relies on two
masters. He's just clarifying that yarn-client and yarn-cluster modes are
really both using the same (type of) master (simply "yarn"). In fact, if
you specify "--master yarn-client" or "--master yarn-cluster", spark-submit
will just convert it to "--master yarn" with the corresponding --deploy-mode
(client or cluster).
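In other words (sketching from memory, class and jar names invented), these
two invocations end up equivalent:

spark-submit --master yarn-cluster --class com.example.MyApp ./lib/myapp.jar
# is converted by spark-submit to:
spark-submit --master yarn --deploy-mode cluster --class com.example.MyApp ./lib/myapp.jar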
Good points, but I am an experimentalist.
In local mode I have this:
--master local
This will start with one thread, equivalent to --master local[1]. You can
also start with more than one thread by specifying the number of threads k
in --master local[k], or use all available cores with --master local[*].
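For example (jar name invented):

spark-submit --master local ./lib/myapp.jar        # one worker thread, same as local[1]
spark-submit --master local[4] ./lib/myapp.jar     # four worker threads
spark-submit --master local[*] ./lib/myapp.jar     # one thread per available core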
On Sun, Jun 19, 2016 at 12:30 PM, Mich Talebzadeh
wrote:
> Spark Local - Spark runs on the local host. This is the simplest setup and
> best suited for learners who want to understand different concepts of Spark
> and for those performing unit testing.
There are also the less-common master URLs (from memory):
* local[N, F] - runs locally with N threads and allows tasks to fail up to F times
* local-cluster[N, C, M] - a testing mode that simulates a standalone cluster
with N workers, C cores per worker and M MB of memory per worker
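For instance, to tolerate a couple of task failures while testing locally
(numbers purely illustrative, jar invented):

spark-submit --master "local[4,2]" --class com.example.MyApp ./lib/myapp.jar   # 4 threads, tasks may fail up to 2 times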
Spark works in different modes: local (Spark, or anything else, does not
manage resources) and standalone (Spark itself manages resources), plus
others (see below).
These are from my notes, excluding Mesos, which I have not used:
- Spark Local - Spark runs on the local host. This is the simplest setup and
best suited for learners who want to understand different concepts of Spark
and for those performing unit testing.
There are many technical differences inside, though; how you use them is
almost the same.
Yes, in standalone mode, Spark runs as a cluster: see
http://spark.apache.org/docs/1.6.1/cluster-overview.html
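For example, starting a standalone master and a worker and pointing an app
at them looks roughly like this (host name and jar assumed):

./sbin/start-master.sh                       # logs the master URL, e.g. spark://myhost:7077
./sbin/start-slave.sh spark://myhost:7077    # register a worker with that master
spark-submit --master spark://myhost:7077 --class com.example.MyApp ./lib/myapp.jar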
// maropu
On Sun, Jun 19, 2016 at 6:14 PM, Ashok Kumar wrote:
> thank you
>
> What are the main differences between a local mode and standalone mode?
thank you
What are the main differences between a local mode and standalone mode? I
understand local mode does not support a cluster. Is that the only difference?
On Sunday, 19 June 2016, 9:52, Takeshi Yamamuro
wrote:
Hi,
In local mode, Spark runs in a single JVM that has a master and one
executor with `k` threads.
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/local/LocalSchedulerBackend.scala#L94
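A minimal Scala sketch of that (app name invented):

import org.apache.spark.{SparkConf, SparkContext}

// One JVM holds both the "master" and the single executor, which
// runs tasks on k (= 2 here) threads.
val conf = new SparkConf().setAppName("LocalDemo").setMaster("local[2]")
val sc = new SparkContext(conf)
println(sc.parallelize(1 to 10).sum())  // tasks execute on the 2 local threads
sc.stop()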
// maropu
On Sun, Jun 19, 2016 at 5:39 PM, Ashok Kumar
wrote:
Hi,
I have been told Spark in local mode is simplest for testing. The Spark
documentation covers little on local mode except the cores used in --master
local[k]. Where are the driver program, executor and resources? Do I need to
start worker threads, and how many apps can I run safely without exceeding
the available resources?
Hi,
Did you resolve this? I have the same questions.
Hi,

Is there anything special one must do, running locally and submitting a job
like so:

spark-submit \
--class "com.myco.Driver" \
--master local[*] \
./lib/myco.jar

In my logs, I'm only seeing log messages with the thread identifier of
"Executor task launch worker-0".

There are 4 cores on the machine so I expected 4 threads to be at play.
Running with local[32] did not yield 32 worker threads.

Any recommendations? Thanks.
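One thing worth checking: as far as I can tell, the local-mode task threads
come from a pool that grows lazily, so distinct "Executor task launch
worker-N" names only appear when that many tasks are actually in flight; a
job with a single partition will only ever show worker-0. A sketch to make
the parallelism visible (app name invented):

import org.apache.spark.{SparkConf, SparkContext}

// Run more partitions than threads and print which pool thread each
// partition lands on.
val sc = new SparkContext(new SparkConf().setAppName("ThreadCheck").setMaster("local[4]"))
println(s"defaultParallelism = ${sc.defaultParallelism}")  // 4 with local[4]
sc.parallelize(1 to 100, 8).foreachPartition { _ =>
  println(Thread.currentThread().getName)  // expect worker-0 .. worker-3
}
sc.stop()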
executor for each application)

2. Is there any way to set the max memory used by each worker thread/node?
I can only find how to set the memory for each executor (spark.executor.memory).

Thank you!
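On the memory question: in local mode the executor runs inside the driver
JVM, so (as I understand it) the single heap is governed by the driver's
memory setting rather than spark.executor.memory. For example (class and
jar invented):

spark-submit --master local[4] --driver-memory 4g --class com.example.MyApp ./lib/myapp.jar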
I'll be migrating from Spark 1.0.2 to 1.1.0
in the next day or so to see if that helps.
Does anyone have any experience on the matter?