As far as I understand, even if I could register the custom source, there is
no way to have a cluster-wide variable to pass to it: an accumulator can be
modified by tasks but read back only on the driver, and a broadcast value is
constant (a read-only copy on the executors).
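To illustrate what I mean, here is a minimal sketch (Spark 1.x-style API; the
local master and object name are just for illustration): the accumulator can
only be added to from tasks and read on the driver, and the broadcast value is
a fixed snapshot, so neither gives a metrics Source a mutable cluster-wide
handle it could read from.

import org.apache.spark.{SparkConf, SparkContext}

object MetricsLimitationSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("metrics-limitation-sketch").setMaster("local[2]"))

    // Accumulator: tasks may only add to it; its value is readable on the driver only.
    val counter = sc.accumulator(0L)
    sc.parallelize(1 to 100).foreach(_ => counter += 1L)
    println(s"Driver sees: ${counter.value}")  // fine here, not inside a task

    // Broadcast: executors get a read-only copy, so it cannot be updated cluster-wide.
    val threshold = sc.broadcast(42)
    val hits = sc.parallelize(1 to 100).filter(_ > threshold.value).count()
    println(s"Values above broadcast threshold: $hits")

    sc.stop()
  }
}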
So it seems this custom metrics/sinks functionality was not really thought
through by the developers for this use case.


