Hello,
We are also experiencing the same error. Could you please share the steps
that resolved the issue?
Thanks
Satya
Please let me know if there is any update or a JIRA on this.
Thanks,
Satya
Hi, is anyone able to use --jars with spark-submit in Mesos cluster mode?
We have tried a local file, an HDFS file, and a file served over HTTP; --jars
didn't work with any of these approaches.
I saw a couple of similar open questions with no answers, e.g.
http://stackoverflow.com/questions/33978672/spark-mesos-
Creating a new thread for this; the question is the same as above.
Can you run the Spark KMeans algorithm multiple times and see if the centers
remain stable? I am guessing the difference is related to the random
initialization of the centers.
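A rough sketch of such a check using spark.ml's KMeans (the toy data, column name,
and k below are placeholders, so adjust them to your dataset):

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kmeans-stability").getOrCreate()
import spark.implicits._

// Tiny placeholder dataset; replace with your real feature DataFrame.
val features = Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 8.9)
).map(Tuple1.apply).toDF("features")

// Refit with a few different seeds and compare the resulting centers.
for (seed <- Seq(1L, 42L, 2017L)) {
  val model = new KMeans().setK(2).setSeed(seed).setFeaturesCol("features").fit(features)
  println(s"seed=$seed centers=${model.clusterCenters.mkString("; ")}")
}

If the centers (and cluster sizes) move noticeably between seeds, initialization is
likely the cause of the difference you are seeing.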
On Mon, Jan 2, 2017 at 1:34 AM, Saroj C wrote:
> Dear Felix,
> Thanks. Please find the differences
>
> Cluster | Spark Size | R Size
> 0       | 69         |
It looks like the default algorithm used by R's kmeans function is Hartigan-Wong,
whereas Spark appears to use Lloyd's algorithm.
Can you rerun your R kmeans code with algorithm = "Lloyd" and see if the
results match?
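If you also want to pin down the Spark side of the comparison, something like the
following should remove most of the non-determinism (a sketch only; `features` is
your feature DataFrame and k = 5 is a placeholder):

import org.apache.spark.ml.clustering.KMeans

val kmeans = new KMeans()
  .setK(5)
  .setSeed(42L)              // fixed seed for reproducibility
  .setInitMode("k-means||")  // default init; "random" is the other option
  .setMaxIter(100)           // allow the algorithm to converge fully
  .setTol(1e-6)              // tight convergence tolerance
val model = kmeans.fit(features)

With the seed, init mode, and tolerance fixed, any remaining difference from R
should come from the algorithm itself rather than from initialization.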
On Tue, Jan 3, 2017 at 12:18 AM, Saroj C wrote:
> Thanks S
Gaurav,
I would suggest Elasticsearch.
> On Mar 3, 2017, at 3:27 AM, Gaurav1809 wrote:
>
> Hello All,
> One small question, if you can help me out. I am working on server log
> processing in Spark for my organization. I am using regular expressions
> (regex) for pattern matching and then do fu
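In case it helps, here is a minimal sketch of regex-based log parsing in Spark; the
log format, pattern, field names, and path below are made-up placeholders:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("log-parsing").getOrCreate()
import spark.implicits._

// Hypothetical log line format: "<timestamp> <level> <message>"
val logPattern = """^(\S+ \S+) (INFO|WARN|ERROR) (.*)$""".r

val parsed = spark.read.textFile("/path/to/server.log")
  .flatMap {
    case logPattern(ts, level, msg) => Some((ts, level, msg))
    case _                          => None  // drop lines that do not match
  }
  .toDF("timestamp", "level", "message")

parsed.groupBy("level").count().show()

From there, the parsed records could be indexed into Elasticsearch (e.g. with the
es-hadoop connector) to make them searchable.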
Hi Shashi,
Based on your requirement for securing data, you could use Kerberos, or you
could use the built-in security features in Spark.
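To sketch the Spark side (illustrative settings only; the exact options depend on
your Spark version and cluster manager, and Kerberos for HDFS access is normally
configured at submit time with a principal and keytab rather than in SparkConf):

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Illustrative security-related settings: RPC authentication/encryption
// and encryption of shuffle/spill files written to local disk.
val conf = new SparkConf()
  .set("spark.authenticate", "true")            // shared-secret RPC authentication
  .set("spark.network.crypto.enabled", "true")  // encrypt RPC traffic
  .set("spark.io.encryption.enabled", "true")   // encrypt shuffle and spill files

val spark = SparkSession.builder().config(conf).appName("secured-app").getOrCreate()

This only covers the in-flight and local-disk pieces; authentication against the
cluster and encryption of data at rest in HDFS would still go through
Kerberos/HDFS configuration.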
> On Apr 28, 2017, at 8:45 AM, Shashi Vishwakarma
> wrote:
>
> Hi All
>
> I was dealing with one of the Spark requirements here where a client (like a banking
> client