Hi Robert,
Thanks for the answer. 
My code does in fact depend on both the MapR Streams and MapR-DB jars. Here
are the steps I followed based on your suggestion:
1. I copied only mapr-streams-*.jar and maprdb*.jar into Flink's lib folder.
2. Then I tried to run my jar, but I got a java.lang.NoClassDefFoundError
for a maprfs class.
3. I added maprfs*.jar to lib and tried submitting my jar again.
4. This time I got a java.lang.NoClassDefFoundError for a Hadoop FS class.
5. At this point I just created a symlink in the lib folder pointing to the
MapR lib folder, which effectively means that ALL the MapR-related jars get
deployed into the system classloader.
6. That did the trick, and my job is now running. Also, I have not
encountered the error I mentioned earlier since cancelling and resubmitting
the job.
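For reference, step 5 can be sketched roughly like this. The paths here are
placeholders (throwaway directories so the snippet is self-contained); on a
real cluster MAPR_LIB and FLINK_LIB would be something like /opt/mapr/lib
and the lib directory of the Flink installation:

```shell
# Stand-in directories; replace with e.g. /opt/mapr/lib and
# /opt/flink/lib on an actual cluster.
MAPR_LIB=$(mktemp -d)
FLINK_LIB=$(mktemp -d)

# Fake MapR jars, just to make the sketch runnable end to end.
touch "$MAPR_LIB"/maprfs-5.2.0.jar "$MAPR_LIB"/mapr-streams-5.2.0.jar

# Symlink every MapR jar into Flink's lib folder so the system
# classloader picks them all up when the cluster starts.
for jar in "$MAPR_LIB"/*.jar; do
  ln -s "$jar" "$FLINK_LIB/$(basename "$jar")"
done

ls -l "$FLINK_LIB"
```

The downside, as noted above, is that this drags in every jar under the
MapR lib folder rather than just the ones the job actually needs.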

My only question is: is this the expected behavior and the normal solution?
Do we really need to add ALL the jars? I could cherry-pick which jars to
copy by inspecting the dependency tree, but doing that for every job feels
cumbersome.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Issues-while-restarting-a-job-on-HA-cluster-tp11294p11332.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.