I hope I can get the application by the driverId, but I can't find the
REST API in Spark. How can I get the application that belongs to one
driver?
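The closest I have found is the standalone master's REST submission server
on port 6066, which has a per-driver status endpoint, but it returns the
driver's status rather than its application. A minimal sketch (the master
host and driver ID below are made up):

import scala.io.Source

// Placeholder master host and driver ID; substitute your own values.
val masterRest = "http://spark-master:6066"
val driverId   = "driver-20161226120000-0001"

// The standalone REST submission server reports per-driver status here.
val status = Source.fromURL(s"$masterRest/v1/submissions/status/$driverId").mkString
println(status)

The master web UI also exposes a /json endpoint listing active drivers and
applications, so the two can be correlated by hand.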
Hi All,
I would like to know about Spark GraphX execution/processing with a
database. Yes, I understand Spark GraphX is in-memory processing, and to
some extent we can manage querying, but I would like to do much more
complex queries or processing. Please suggest a use case or steps for the
same; a sketch of what I mean is below.
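To illustrate the kind of in-memory querying I mean, a minimal sketch (the
data is made up; in practice the vertices and edges would be loaded from
the database, e.g. through JDBC):

import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.rdd.RDD

// Toy data; in a real setup these RDDs would be built from database rows.
val vertices: RDD[(Long, String)] =
  sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
val edges: RDD[Edge[String]] =
  sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows")))

val graph = Graph(vertices, edges)

// One example of a graph-wide computation: connected components.
graph.connectedComponents().vertices.collect().foreach(println)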
--
Vi
What is the expected effect of reducing mesosExecutor.cores to zero?
What functionality of the executor is impacted? Is the impact just that
it behaves like a regular process?
Regards
Sumit Chawla
On Mon, Dec 26, 2016 at 9:25 AM, Michael Gummelt
wrote:
> > Using 0 for spark.mesos.mesosExecutor.cores is better than dynamic
> > allocation
Thanks a LOT, Michael!
Regards,
Jacek Laskowski
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 https://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski
On Mon, Dec 26, 2016 at 10:04 PM, Michael Gummelt
wrote:
> In fine-grained mode (which is
In fine-grained mode (which is deprecated), Spark tasks (which are threads)
were implemented as Mesos tasks. When a Mesos task starts and stops, its
underlying cgroup, and therefore the resources it is consuming on the
cluster, grows or shrinks based on the resources allocated to the tasks,
which in
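For reference, a minimal sketch of enabling the (deprecated) fine-grained
mode under discussion; the Mesos master URL is a placeholder:

import org.apache.spark.{SparkConf, SparkContext}

// Fine-grained mode is selected by turning coarse-grained mode off.
val conf = new SparkConf()
  .setMaster("mesos://zk://mesos-master:2181/mesos")  // placeholder URL
  .setAppName("fine-grained-demo")
  .set("spark.mesos.coarse", "false")
val sc = new SparkContext(conf)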
Hi Michael,
That caught my attention...
Could you please elaborate on "elastically grow and shrink CPU usage"
and how it really works under the covers? It seems that CPU usage is
just a "label" for an executor on Mesos. Where's this in the code?
Regards,
Jacek Laskowski
https://medium.com/@jaceklaskowski/
Hi David,
Can you use persist instead? Perhaps with some other StorageLevel? It
worked with the Spark 2.2.0-SNAPSHOT I use; I don't remember how it
worked back in 1.6.2.
You could also check the Executors tab and see how many blocks you
have in their BlockManagers.
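Something along these lines (a sketch; the storage level is just an
example):

import org.apache.spark.storage.StorageLevel

val myrdd = sc.parallelize(1 to 100)
myrdd.setName("my_rdd")
// cache() is shorthand for persist(StorageLevel.MEMORY_ONLY);
// an explicit level lets you try alternatives.
myrdd.persist(StorageLevel.MEMORY_AND_DISK)
myrdd.count()  // force materialization so blocks show up in the Storage tab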
Regards,
Jacek Laskowski
I have tried the following code but didn't see anything on the storage tab.
val myrdd = sc.parallelize(1 to 100)
myrdd.setName("my_rdd")
myrdd.cache()     // only marks the RDD for caching; nothing is stored yet
myrdd.collect()   // materializes the RDD, which should populate the cache
The Storage tab is empty, though I can see the stage for collect().
I am using 1.6.2, HDP 2.5, Spark on YARN.
Thanks,
David
Conf
Hi,
I am running a couple of Docker hosts, each with an HDFS node and a Spark
worker in a Spark standalone cluster.
In order to get data-locality awareness, I would like to configure racks
for each host, so that a Spark worker container knows from which HDFS
node container it should load its data. Does
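For context, my understanding is that rack mapping for HDFS is configured
on the NameNode side via net.topology.script.file.name in core-site.xml,
pointing at a script that maps a host to a rack path such as /rack1. On
the Spark side I can at least inspect where partitions prefer to run (the
HDFS path is made up):

// Placeholder path; any HDFS-backed RDD works.
val data = sc.textFile("hdfs:///user/test/data")
data.partitions.foreach { p =>
  // preferredLocations reflects the block locations reported by HDFS.
  println(s"partition ${p.index}: ${data.preferredLocations(p).mkString(", ")}")
}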
> Using 0 for spark.mesos.mesosExecutor.cores is better than dynamic
allocation
Maybe for CPU, but definitely not for memory. Executors never shut down in
fine-grained mode, which means you only elastically grow and shrink CPU
usage, not memory.
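A sketch contrasting the two setups (the property names are the standard
ones; the values are illustrative):

import org.apache.spark.SparkConf

// Fine-grained mode with a zero-core executor: CPU scales with tasks,
// but executors (and their memory) stay up for the lifetime of the job.
val fineGrained = new SparkConf()
  .set("spark.mesos.coarse", "false")  // deprecated
  .set("spark.mesos.mesosExecutor.cores", "0")

// Coarse-grained mode with dynamic allocation: whole executors come and
// go, so memory is released too (requires the external shuffle service).
val dynamicAlloc = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")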
On Sat, Dec 24, 2016 at 10:14 PM, Davies Liu wrote: