How do I specify a specific truststore in my Spark config? Do I just give
-D flags via JAVA_OPTS?
Thx
--
-eric ho
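[Editorial note: JAVA_OPTS isn't read by spark-submit; the usual route for JVM
-D flags is spark.driver.extraJavaOptions / spark.executor.extraJavaOptions,
either in spark-defaults.conf or set programmatically. A minimal PySpark
sketch, assuming a hypothetical truststore at /etc/pki/client.truststore.jks:]

    from pyspark import SparkConf, SparkContext

    # JVM -D flags travel through Spark conf keys, not JAVA_OPTS;
    # the truststore path below is a placeholder
    jvm_flags = "-Djavax.net.ssl.trustStore=/etc/pki/client.truststore.jks"

    conf = (SparkConf()
            # note: in client mode the driver JVM is already running by this
            # point, so the driver flag must instead come from
            # spark-defaults.conf or spark-submit --conf
            .set("spark.driver.extraJavaOptions", jvm_flags)
            .set("spark.executor.extraJavaOptions", jvm_flags))
    sc = SparkContext(conf=conf)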
I'm trying to pass a trustStore pathname into pyspark.
What env variable and/or config file or script do I need to change to do this?
I've tried setting the JAVA_OPTS env var but to no avail...
Any pointer much appreciated... thx
--
-eric ho
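[Editorial note: for the env-var route specifically, PySpark reads
PYSPARK_SUBMIT_ARGS before it launches the JVM, so the driver flag can be
injected there. A sketch, with the same placeholder truststore path; the
trailing "pyspark-shell" token is required:]

    import os

    # must be set BEFORE the SparkContext (and its JVM) is created
    os.environ["PYSPARK_SUBMIT_ARGS"] = (
        "--driver-java-options "
        "-Djavax.net.ssl.trustStore=/etc/pki/client.truststore.jks "
        "pyspark-shell")

    from pyspark import SparkContext
    sc = SparkContext(appName="truststore-demo")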
I'm interested in what I should put into the trustStore file, not just for
Spark but also for the Kafka and Cassandra sides.
The ways I generated self-signed certs for the Kafka and Cassandra sides are
slightly different...
On Thu, Sep 1, 2016 at 1:09 AM, Eric Ho wrote:
> A working example
A working example would be great...
Thx
--
-eric ho
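[Editorial note: not a full working example, but as far as the truststore's
contents go, it only needs the certificates (or the CAs that signed them)
that the client must trust; with self-signed certs on both sides, you import
each side's cert into one truststore, and it doesn't matter that the certs
were generated differently. A sketch that shells out to keytool; every file
name and the password are placeholders:]

    import subprocess

    TRUSTSTORE = "client.truststore.jks"   # hypothetical output file
    STOREPASS = "changeit"                 # placeholder password

    # import each cluster's self-signed cert (or its signing CA)
    # under its own alias
    for alias, pem in [("kafka-ca", "kafka-ca.pem"),
                       ("cassandra-ca", "cassandra-ca.pem")]:
        subprocess.run(
            ["keytool", "-importcert", "-noprompt",
             "-alias", alias, "-file", pem,
             "-keystore", TRUSTSTORE, "-storepass", STOREPASS],
            check=True)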
...
at org.apache.spark.deploy.master.Master.main(Master.scala)
=====
--
-eric ho
I can't find anything in Spark 1.6.2's docs on how to turn on encryption for
Spark-to-Kafka communication... I think the Spark docs only tell you how to
turn on encryption for inter-Spark-node communication. Am I wrong?
Thanks.
--
-eric ho
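[Editorial note: the intuition above may well be right for 1.6.2: the
spark-streaming-kafka 0.8 integration it ships predates Kafka's SSL support,
so its docs only cover Spark-internal encryption. If upgrading is an option,
the Spark 2.x Kafka source accepts SSL settings as kafka.-prefixed options.
A hedged sketch, assuming Spark 2.x with the spark-sql-kafka package and
Kafka 0.9+ brokers; the broker address, topic, and paths are placeholders:]

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-ssl-demo").getOrCreate()

    # kafka.-prefixed options are passed straight through to the Kafka consumer
    df = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9093")
          .option("subscribe", "mytopic")
          .option("kafka.security.protocol", "SSL")
          .option("kafka.ssl.truststore.location",
                  "/etc/pki/client.truststore.jks")
          .option("kafka.ssl.truststore.password", "changeit")
          .load())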
I heard that Kryo will get phased out at some point, but I'm not sure in
which Spark release.
I'm using PySpark; does anyone have any docs on how to call / use the Kryo
serializer in PySpark?
Thanks.
--
-eric ho
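[Editorial note: no comment on Kryo being phased out, but enabling it from
PySpark is just a conf key. Note it only affects JVM-side serialization
(shuffles, cached data); Python objects themselves are still pickled. A
minimal sketch:]

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("kryo-demo")
            # swaps the JVM-side serializer; Python rows are still pickled
            .set("spark.serializer",
                 "org.apache.spark.serializer.KryoSerializer")
            .set("spark.kryoserializer.buffer.max", "128m"))
    sc = SparkContext(conf=conf)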
> ...you're asking about.
>
> I would personally use something like cogroup or join between the two
> RDDs. If index matters, you can use zipWithIndex on both before you join
> and then see which indexes match up.
>
> On Mon, Aug 15, 2016 at 1:15 PM Eric Ho wrote:
>
>>
this RDD would contain
elements in array B as well as array A.
Same argument for RDD(B).
Any pointers much appreciated.
Thanks.
--
-eric ho
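[Editorial note: a sketch of the zipWithIndex-then-join approach suggested
above, with toy data standing in for A and B:]

    from pyspark import SparkContext

    sc = SparkContext(appName="zip-join-demo")
    a = sc.parallelize(["a", "b", "c"])
    b = sc.parallelize(["a", "x", "c"])

    # (element, index) -> (index, element), so position becomes the join key
    ia = a.zipWithIndex().map(lambda kv: (kv[1], kv[0]))
    ib = b.zipWithIndex().map(lambda kv: (kv[1], kv[0]))

    # (index, (elem_from_a, elem_from_b)); keep positions where they differ
    mismatches = ia.join(ib).filter(lambda kv: kv[1][0] != kv[1][1])
    print(mismatches.collect())   # [(1, ('b', 'x'))]

[If both RDDs are known to have the same number of partitions and the same
number of elements per partition, a.zip(b) pairs them up directly and skips
the join.]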
I couldn't find any RDD functions that would do this for me efficiently. I
don't really want elements of RDD(A) and RDD(B) flying all over the network
piecemeal...
Thanks.
--
-eric ho
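[Editorial note: on the "flying all over the network" worry: a join or
cogroup does shuffle, but if both RDDs are keyed and partitioned with the
same partitioner up front, the cogroup itself adds no further shuffle. A
sketch with placeholder data and partition count:]

    from pyspark import SparkContext

    sc = SparkContext(appName="copartition-demo")
    n = 8   # placeholder partition count

    # one shuffle each to co-partition; the cogroup below is partition-local
    ka = sc.parallelize(range(1000)).map(lambda x: (x, None)).partitionBy(n)
    kb = sc.parallelize(range(500, 1500)).map(lambda x: (x, None)).partitionBy(n)

    # elements of A with no counterpart in B
    only_in_a = ka.cogroup(kb).filter(lambda kv: not list(kv[1][1])).keys()
    print(only_in_a.take(5))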
I'm submitting this via 'dse spark-submit' but somehow, I don't see any
log output on my cluster or worker machines...
How can I find it?
My cluster is running DSE 4.6.1 with Spark enabled.
My source is running Kafka 0.8.2.0
I'm launching my program on one of my DSE machines.
Any insights much appreciated.
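[Editorial note: two places to look, in case it helps: anything the code
print()s on executors lands in each worker's stderr under that application's
work directory (standalone mode), and driver-side messages can be routed
through Spark's own log4j via the py4j gateway. A sketch; the logger name is
arbitrary and `sc` is an existing SparkContext:]

    # sc._jvm is the py4j gateway into the driver JVM
    log4j = sc._jvm.org.apache.log4j
    logger = log4j.LogManager.getLogger("MyStreamingJob")
    logger.info("stream starting")   # appears in the driver's log4j output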
Can I specify this in my build file ?
Thanks.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/com-datastax-spark-spark-streaming-2-10-1-1-0-in-my-build-sbt-tp22758.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.