What exactly do you need?
Basically, you need to add the Spark libraries to your pom.
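For example, a minimal set of Maven dependencies (the version and Scala suffix below are assumptions; match them to the Spark and Scala versions on your cluster):

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.2.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.2.0</version>
</dependency>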
Mon, 24 Jul 2017 at 6:22, amit kumar singh :
> Hello everyone
>
> I want to use spark with java API
>
> Please let me know how can I configure it
>
>
> Thanks
> A
>
>
Yes, using the new Spark structured streaming you can keep
submitting streaming jobs against the same SparkContext in different
requests (or you can create a new SparkContext if required in a
request). The SparkJob implementation will get a handle to the
SparkContext, which
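For illustration, a rough sketch of what such a SparkJob implementation can look like (the job name is hypothetical, and the exact trait and validation types vary across spark-jobserver versions):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object MyQueryJob extends SparkJob {
  // called by the jobserver to sanity-check the request before running
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  // receives the SparkContext managed by the jobserver (shared or per-request)
  override def runJob(sc: SparkContext, config: Config): Any =
    sc.parallelize(1 to 100).sum()
}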
This is a standard practice used for chaining, to support
a.setStepSize(..)
 .setRegParam(...)
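A minimal, self-contained sketch of the pattern (the Optimizer class here is hypothetical):

class Optimizer {
  private var stepSize = 0.01
  private var regParam = 0.0
  // each setter mutates the object and returns it, so calls can be chained
  def setStepSize(step: Double): this.type = { stepSize = step; this }
  def setRegParam(reg: Double): this.type = { regParam = reg; this }
}

val a = new Optimizer()
a.setStepSize(0.1).setRegParam(0.01)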
On Sun, Jul 23, 2017 at 8:47 PM, tao zhan wrote:
> Thank you for replying.
> But I do not get it completely. Why is the "this.type" necessary?
> Why could it not be like:
>
> def setStepSize(
Hello everyone
I want to use spark with java API
Please let me know how can I configure it
Thanks
A
It means the same object ("this") is returned.
On Sun, Jul 23, 2017 at 8:16 PM, tao zhan wrote:
> Hello,
>
> I am new to scala and spark.
> What is the "this.type" in the set function for?
>
>
>
> https://github.com/apache/spark/blob/481f0792944d9a77f0fe8b5e2596da1d600b9d0a/mllib/src/main/sca
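The reason the return type is this.type rather than the class name is that the singleton type of "this" survives subclassing, so chaining keeps the concrete subtype. An illustrative sketch (Base and Derived are hypothetical):

class Base { def setA(x: Int): this.type = this }
class Derived extends Base { def setB(x: Int): this.type = this }

// setA still returns Derived here, so the chain type-checks:
new Derived().setA(1).setB(2)
// If setA were declared to return Base, calling .setB(2) would not compile.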
Hi all
I want to convert the binary values from Kafka to strings. Could you help me, please?
val df = ss.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "")
  .option("subscribe", "")
  .load()
val value = df.select("value")
value.writeStream
  .outputMode("append")
  .format("console")
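The "value" column that comes back from the Kafka source is binary, so cast it to a string before writing. A minimal sketch, continuing from the snippet above (and assuming you also want to start the query):

import org.apache.spark.sql.functions.col

// cast the binary Kafka value to a readable string
val stringValue = df.select(col("value").cast("string").alias("value"))

stringValue.writeStream
  .outputMode("append")
  .format("console")
  .start()
  .awaitTermination()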
@Sumedh Can I run streaming jobs on the same context with spark-jobserver ?
so there is no waiting for results, since each Spark SQL job is expected to
stream forever and the results of each streaming job are captured through a
message queue.
In my case each Spark SQL query will be a streaming job.
On Sat
Cool thanks. Will give that a try...
--Ron
On Friday, July 21, 2017 8:09 PM, Keith Chapman wrote:
You could also enable it with --conf spark.logLineage=true if you do not want
to change any code.
Regards,
Keith.
http://keith-chapman.com
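If you prefer to set it in code instead, a minimal sketch (the app name is illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("lineage-demo")
  .set("spark.logLineage", "true") // same effect as --conf spark.logLineage=true
val sc = new SparkContext(conf)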
On Fri, Jul 21, 2017 at 7:57 PM, Keith Chapman
>
> left.join(right, my_fuzzy_udf(left("cola"), right("cola")))
>
While this could work, the problem will be that we'll have to check every
possible combination of tuples from left and right using your UDF. It
would be best if you could somehow partition the problem so that we could
reduce the number of pairs that need to be checked.
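For example, one common way to partition the problem is a "blocking" key: join on an exact key first so Spark can use a hash join, then apply the fuzzy UDF only within each block. A sketch reusing left, right, and my_fuzzy_udf from the snippet above (the prefix length and column names are illustrative):

import org.apache.spark.sql.functions.{col, substring}

// block on the first 3 characters so only plausible pairs are compared
val l = left.withColumn("block", substring(col("cola"), 1, 3)).alias("l")
val r = right.withColumn("block", substring(col("cola"), 1, 3)).alias("r")
val candidates = l.join(r, col("l.block") === col("r.block"))
val matches = candidates.filter(my_fuzzy_udf(col("l.cola"), col("r.cola")))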
I am facing an issue while connecting Apache Spark to the Apache Cassandra
datastore.