Have you tried adding the hprof option below through spark.executor.extraJavaOptions?
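
For example, something along these lines (a rough sketch only: the class name, jar, and output file name are placeholders, and the hprof string is the one from your mail):

    # profile the executor JVMs; each executor writes its .hprof file
    # into its own working directory on the worker node
    spark-submit \
      --conf "spark.executor.extraJavaOptions=-Xrunhprof:cpu=samples,depth=100,interval=20,lineno=y,thread=y,file=./executor.hprof" \
      --class com.example.YourApp \
      your-app.jar

The same setting can also go into conf/spark-defaults.conf. There is a corresponding spark.driver.extraJavaOptions (or the --driver-java-options flag of spark-submit) if you also want to profile the driver side, and you will need to collect the executor .hprof files from the worker nodes afterwards.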

Cheers

> On Dec 13, 2015, at 3:36 AM, Jia Zou <jacqueline...@gmail.com> wrote:
> 
> My goal is to use hprof to profile where the bottleneck is.
> Is there any way to do this without modifying and rebuilding the Spark source code?
> 
> I've tried adding
> "-Xrunhprof:cpu=samples,depth=100,interval=20,lineno=y,thread=y,file=/home/ubuntu/out.hprof"
> to the spark-class script, but that only profiles the CPU usage of the
> org.apache.spark.deploy.SparkSubmit class and gives no insight into
> other classes such as BlockManager or user classes.
> 
> Any suggestions? Thanks a lot!
> 
> Best Regards,
> Jia
