I now need to integrate Spark into our own platform, which is built with Spring, to support task submission and task monitoring. The Spark jobs run on YARN in cluster mode, and our service may need to submit jobs to different YARN clusters.
According to the current method provided by …
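A minimal sketch of one way to do this from JVM code (e.g. a Spring service), assuming Spark 1.6+ where SparkLauncher and SparkAppHandle are available; the paths, class names and Hadoop config directory below are hypothetical, and pointing HADOOP_CONF_DIR at a different directory per target cluster is one possible way to handle submitting to multiple YARN clusters:

    import scala.collection.JavaConverters._
    import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

    // Hypothetical config directory for the target YARN cluster;
    // switch this per cluster when submitting to different clusters.
    val env = Map("HADOOP_CONF_DIR" -> "/etc/hadoop/conf-cluster-a").asJava

    val handle = new SparkLauncher(env)
      .setSparkHome("/opt/spark")                    // hypothetical path
      .setAppResource("/jobs/my-spark-job.jar")      // hypothetical artifact
      .setMainClass("com.example.MyJob")             // hypothetical class
      .setMaster("yarn")
      .setDeployMode("cluster")
      .setConf("spark.executor.instances", "4")
      .startApplication(new SparkAppHandle.Listener {
        // Called whenever the launched application changes state
        // (CONNECTED, SUBMITTED, RUNNING, FINISHED, FAILED, ...).
        override def stateChanged(h: SparkAppHandle): Unit =
          println(s"app ${h.getAppId}: ${h.getState}")
        override def infoChanged(h: SparkAppHandle): Unit = ()
      })

    // handle.getState / handle.getAppId can also be polled, and the YARN
    // application id can be used against the ResourceManager REST API
    // of the corresponding cluster for richer monitoring.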
Hello,
Run this (Russian) article through Google Translate:
https://mkdev.me/posts/ci-i-monitoring-spark-prilozheniy
Spark + Zabbix + JMX; an English translation of the article above:
https://translate.google.ru/translate?sl=ru&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fmkdev.me%2Fposts%2Fci-i-monitoring-spark-prilozheniy&edit-text=
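For the Zabbix route, the relevant piece on the Spark side is the built-in JMX metrics sink; a minimal conf/metrics.properties looks roughly like this (how Zabbix then scrapes the MBeans, e.g. via its Java gateway, is an assumption about your Zabbix setup):

    # Enable the JMX sink for all Spark instances
    # (master, worker, driver, executor).
    *.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

    # Optionally also expose the JVM source metrics.
    *.source.jvm.class=org.apache.spark.metrics.source.JvmSource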
Hi,
I would like to know the approach and tools, please, to get the full performance picture for a Spark app run through spark-shell and spark-submit:
- Through the Spark UI at port 4040?
- Through OS utilities such as top and SAR?
- Through Java tools like JBuilder etc.?
- Through integrating Spark with monitoring …
Does anyone have a link handy that describes configuring Ganglia on the Mac?
Hello,
Spark collects HDFS read/write metrics per application/job; see the details at http://spark.apache.org/docs/latest/monitoring.html.
I have connected the Spark metrics to Graphite and display nice graphs on top of them in Grafana.
BR,
Arek
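For reference, a minimal conf/metrics.properties for the Graphite sink looks roughly like this (the host, port and prefix below are placeholders for your own Graphite/Carbon endpoint):

    # Send all Spark metrics to a Graphite/Carbon endpoint.
    *.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
    *.sink.graphite.host=graphite.example.com
    *.sink.graphite.port=2003
    *.sink.graphite.period=10
    *.sink.graphite.unit=seconds
    *.sink.graphite.prefix=spark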
On 30 Dec 2015, at 13:19, alvarobrandon wrote:
> Hello,
> Is there any way of monitoring the number of bytes or blocks read and written by a Spark application? I'm running Spark on YARN and I want to measure how I/O-intensive a set of applications is. The closest thing I have seen is …
> Thanks in advance.
> Best regards.
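One hedged sketch of getting at those numbers programmatically is a SparkListener that sums the per-task input/output metrics, assuming a Spark 2.x-style TaskMetrics API where inputMetrics/outputMetrics are exposed directly (in older 1.x releases they are Options); the class and aggregation below are illustrative, not a library API:

    import java.util.concurrent.atomic.AtomicLong
    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

    // Accumulates bytes read/written across all finished tasks of the app.
    class IoByteListener extends SparkListener {
      val bytesRead = new AtomicLong(0L)
      val bytesWritten = new AtomicLong(0L)

      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        val metrics = taskEnd.taskMetrics
        if (metrics != null) {
          bytesRead.addAndGet(metrics.inputMetrics.bytesRead)
          bytesWritten.addAndGet(metrics.outputMetrics.bytesWritten)
        }
      }
    }

    // Register once, then read the counters when the job is done:
    //   val io = new IoByteListener(); sc.addSparkListener(io)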
… average response time increasing with the increase in the number of requests, in spite of increasing the number of cores in the cluster. I suspect there is a bottleneck somewhere else.
Regards,
Sam
Cool, great job ☺.
Thanks,
Jerry
From: Ryan Williams [mailto:ryan.blake.willi...@gmail.com]
Sent: Thursday, February 26, 2015 6:11 PM
To: user; d...@spark.apache.org
Subject: Monitoring Spark with Graphite and Grafana
If anyone is curious to try exporting Spark metrics to Graphite, I just
published a post about my experience doing that, building dashboards in
Grafana <http://grafana.org/>, and using them to monitor Spark jobs:
http://www.hammerlab.org/2015/02/27/monitoring-spark-with-graphite-and-grafana/
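If it helps anyone trying this with spark-submit, one common way to ship the metrics config to the driver and executors is shown below (the job class, jar name and metrics file name are placeholders):

    spark-submit \
      --master yarn --deploy-mode cluster \
      --files metrics.properties \
      --conf spark.metrics.conf=metrics.properties \
      --class com.example.MyJob my-spark-job.jar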
If you're only interested in a particular instant, a simpler way is to check the executors page of the Spark UI: http://spark.apache.org/docs/latest/monitoring.html. By default each executor runs one task per core, so you can see how many tasks are running at a given time, and this translates directly into the number of cores in use.
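For scripting the same check, a rough equivalent is the monitoring REST API served on the driver UI port (available in Spark 1.4+; the host and app id below are placeholders):

    # List applications known to this driver UI.
    curl http://driver-host:4040/api/v1/applications

    # Per-executor summary, including activeTasks (tasks currently running).
    curl http://driver-host:4040/api/v1/applications/<app-id>/executors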
Are you running Spark in Local or Standalone mode? In either mode, you
should be able to hit port 4040 (to see the Spark
Jobs/Stages/Storage/Executors UI) on the machine where the driver is
running. However, in local mode, you won't have a Spark Master UI on 7080
or a Worker UI on 7081.
You can ma…
Hello,
I'm running Spark on a standalone machine and I'm trying to view the event log after the run has finished. I turned on event logging as the site says (spark.eventLog.enabled set to true), but I can't find the log files or get the web UI to work. Any idea how to do this?
Thanks,
Isca
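In case it helps, a minimal setup that usually makes the finished-application UI work is to give the event log an explicit directory in conf/spark-defaults.conf and run the history server against that same directory (the /tmp path below is just an example):

    # conf/spark-defaults.conf
    spark.eventLog.enabled           true
    spark.eventLog.dir               file:///tmp/spark-events
    spark.history.fs.logDirectory    file:///tmp/spark-events

    # The directory must exist before the application starts:
    #   mkdir -p /tmp/spark-events
    # Then start the history server and browse it on port 18080:
    #   ./sbin/start-history-server.sh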
Hi Isca,
I think SPM can do that for you:
http://blog.sematext.com/2014/10/07/apache-spark-monitoring/
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
Hello,
I'm running Spark on a cluster and I want to monitor how many nodes/cores are active at different (specific) points of the program. Is there any way to do this?
Thanks,
Isca
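One hedged sketch, assuming a Spark version whose listener bus exposes executor add/remove events: register a SparkListener once, then query it at the specific points in the program. The class below is illustrative, not a built-in:

    import scala.collection.concurrent.TrieMap
    import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded, SparkListenerExecutorRemoved}

    // Tracks currently registered executors and their core counts.
    class ExecutorTracker extends SparkListener {
      private val coresByExecutor = TrieMap.empty[String, Int]

      override def onExecutorAdded(e: SparkListenerExecutorAdded): Unit =
        coresByExecutor(e.executorId) = e.executorInfo.totalCores

      override def onExecutorRemoved(e: SparkListenerExecutorRemoved): Unit =
        coresByExecutor.remove(e.executorId)

      def activeExecutors: Int = coresByExecutor.size
      def activeCores: Int = coresByExecutor.values.sum
    }

    // val tracker = new ExecutorTracker()
    // sc.addSparkListener(tracker)
    // ... later, at a specific point in the program:
    // println(s"executors=${tracker.activeExecutors} cores=${tracker.activeCores}")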
…tically write such a health check?
Thanks,
Allen
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Monitoring-spark-dis-associated-workers-tp7358.html