The Spark UI lists a number of executor IDs for the cluster. I would like
to access both the executor ID and the task/attempt ID from code inside a
function running on a worker machine.
My current motivation is to examine parallelism and locality, but in
Hadoop the analogous IDs also allow code to write non-overlapping
temporary files, which I would like to replicate.
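For what it's worth, here is a minimal Scala sketch of one way to do this, assuming Spark 1.2+ where TaskContext.get() is available inside a task. Note that SparkEnv is marked as a developer API, so its behavior may change between releases:

    import org.apache.spark.{SparkConf, SparkContext, SparkEnv, TaskContext}

    object ExecutorTaskIds {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("executor-task-ids"))

        val info = sc.parallelize(1 to 8, 4).map { x =>
          // Executor ID as shown in the UI ("driver" in local mode,
          // a numeric string such as "3" on a real cluster).
          val executorId = SparkEnv.get.executorId
          // Per-task metadata: taskAttemptId is unique within the app;
          // attemptNumber counts retries of this partition (0 on first try).
          val ctx = TaskContext.get()
          (x, executorId, ctx.stageId, ctx.partitionId,
            ctx.taskAttemptId, ctx.attemptNumber)
        }.collect()

        info.foreach(println)
        sc.stop()
      }
    }

Combining executorId with taskAttemptId yields a string unique to each running task, which is the same trick Hadoop's task attempt IDs enable for writing non-overlapping temporary files.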
