jpivar...@gmail.com wrote:
> P.S. Concerning Java/C++ bindings, there are many. I tried JNI, JNA,
> BridJ, and JavaCPP personally, but in the end picked JNA because of its
> (comparatively) large user base. If Spark will be using Djinni, that could
> be a symmetry-breaking consideration and I'll start using it for
> consistency, maybe even interoperability.
I think I misunderstood what Djinni is. JNA, BridJ, and JavaCPP provide access to untyped bytes (except for common cases like java.lang.String), but it looks like Djinni goes further and provides a type mapping --- exactly the "serialization format" or "layout of bytes" that I was asking about.

Is it safe to say that when Spark has off-heap caching, it will be in the format specified by Djinni? If I work to integrate ROOT with Djinni, will that be a major step toward integrating it with Spark 2.0?

Even if the above answers my first question, I'd still like to know whether the new Spark API will allow RDDs to be /filled/ from the C++ side, as a data source, rather than as a derived dataset.

--
View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/Tungsten-off-heap-memory-access-for-C-libraries-tp13898p17388.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
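To make the "untyped bytes" point concrete, here is a minimal sketch of what I mean. It uses a plain direct ByteBuffer as a stand-in for the raw off-heap memory that JNA/BridJ-style bindings expose (not actual JNA or Djinni API); the layout (an int64 followed by a double) is a made-up example. Without an agreed type mapping, the Java side and the C++ side must each hand-code this layout, which is exactly what a Djinni-style mapping would specify for you:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OffHeapBytes {
    public static void main(String[] args) {
        // 16 bytes of off-heap memory; the JVM sees only raw bytes.
        ByteBuffer buf = ByteBuffer.allocateDirect(16).order(ByteOrder.LITTLE_ENDIAN);

        // Hypothetical layout, agreed on by convention only:
        // bytes 0-7: int64 id, bytes 8-15: double value.
        buf.putLong(0, 42L);       // as if a C++ writer stored an int64 here
        buf.putDouble(8, 3.14);    // and a double here

        // The reader must know the same offsets and types out of band.
        long id = buf.getLong(0);
        double value = buf.getDouble(8);
        System.out.println(id + " " + value);
    }
}
```

If the offsets, endianness, or types drift between the two sides, this fails silently, which is why a generated, shared type mapping matters for off-heap caching.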