Greetings,
*Env:*
I am working on an *HTTP tool* that manages Spark jobs (like Livy) against a
*standalone* master, submitting jobs in *cluster mode*.

*Issue:*
I could not attach a *unique identifier* to refer to a job submitted by
the tool. I have to parse the logs for the *driverId*, which I do not think
is a production-grade solution.
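Concretely, the log parsing I mean looks roughly like the sketch below. The
"driver-<timestamp>-<seq>" pattern and the sample log line are assumptions
based on the standalone master's / *deploy.Client*'s output format, not a
stable contract, which is exactly why this feels fragile:

```python
# Rough sketch of parsing spark-submit output for the driver ID (fragile).
# The ID pattern "driver-<14-digit timestamp>-<4-digit seq>" is an
# assumption about the standalone master's log format.
import re

DRIVER_ID_RE = re.compile(r"\b(driver-\d{14}-\d{4})\b")

def parse_driver_id(submit_output: str):
    """Scan submission output for the first driver ID, if any."""
    match = DRIVER_ID_RE.search(submit_output)
    return match.group(1) if match else None

# Illustrative log line (wording assumed from deploy.Client's output):
log = "... Driver successfully submitted as driver-20190101123000-0007 ..."
print(parse_driver_id(log))  # driver-20190101123000-0007
```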

If I use *appName* as a unique identifier, there is no *one-to-one mapping*
between *appName* and *driverId* (or *appId*). Even the */json* endpoint in
the Spark master UI and *deploy.Client* offer no utility to map these two
identifiers.


Any help would be appreciated regarding an existing solution. If there is
none, I am open to working on this issue if it is a meaningful task for the
Apache Spark open source project.

Please forgive me if I missed any protocol in mailing this list, since it is
my first time.

Thank you
Meivenkatkumar
