This has less to do with JNI and more to do with how to pass custom
environment variables.

We are using YARN and the data is in HDFS. I have run the JNI program in
local mode within Eclipse with no problem, since I can set the environment
variables easily using run configurations. I just don't know how to do it
when running inside YARN.

The JNI program relies on LD_LIBRARY_PATH since the native side dynamically
loads other libraries (all copied to each node).

public final class NativeProcessor {

        static
        {
                // Environment variables must have been set before this!
                System.loadLibrary("nativeprocessor");
        }
...
}

For example, when loading libnativeprocessor.so, the .so may use
LD_LIBRARY_PATH or other custom environment variables to initialize, say,
some static native variables.
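Since the only real requirement is that the variable exists before the static initializer runs, one way to fail fast with a clear message is a guard like the sketch below. (`EnvGuard` and `requireEnv` are names made up for illustration, not any Flink or Hadoop API.)

```java
public final class EnvGuard {

    /**
     * Returns the value of an environment variable, or throws with a
     * clear message if it is unset or empty. Java cannot modify its own
     * process environment, so if this fails the variable must be
     * supplied by whatever launches the JVM (e.g. the YARN container
     * launch context) -- it cannot be patched in afterwards from Java.
     */
    static String requireEnv(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("environment variable " + name
                    + " must be set before the native library is loaded");
        }
        return value;
    }

    // Usage in the class from the post, before loadLibrary:
    //
    //     static {
    //         EnvGuard.requireEnv("LD_LIBRARY_PATH");
    //         System.loadLibrary("nativeprocessor");
    //     }
}
```

This does not solve the delivery problem, but it turns a confusing UnsatisfiedLinkError deep inside the container into an explicit error naming the missing variable.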

The env.java.opts setting reaches the JVM but not native libraries - and in
the case above the library is loaded in a class-loading static block, so the
variables need to be ready before the native library is loaded. In a Hadoop
MapReduce job we can use -Dmapreduce.map.env and -Dmapreduce.reduce.env to
do this. In Spark we can use --conf 'spark.executorEnv.XXX=blah'. I just
cannot find an equivalent in Flink yet.

thanks
jackie

--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/passing-environment-variables-to-flink-program-tp3337p3340.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.