Hi everyone,
I am trying to run the FP-growth example. I have tried to compile it with the
following POM file:
<groupId>com.oreilly.learningsparkexamples.mini</groupId>
<artifactId>learning-spark-mini-example</artifactId>
<modelVersion>4.0.0</modelVersion>
<name>example</name>
<packaging>jar</packaging>
<version>0.0.1</version>
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
Thanks for the answer. Any example?
On Jun 13, 2015 2:13 PM, "Sonal Goyal" wrote:
> I think you need to add a dependency on Spark MLlib too.
> On Jun 13, 2015 11:10 AM, "masoom alam" wrote:
>
>> Hi every one,
>>
>> I am trying to run the FP growth ex
These two imports are missing, and thus FP-growth is not compiling:
import org.apache.spark.mllib.fpm.FPGrowth;
import org.apache.spark.mllib.fpm.FPGrowthModel;
How do I include the dependency in the POM file?
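For reference, a dependency block along these lines should pull MLlib in (a sketch: the 1.4.0 version and the _2.10 Scala suffix are assumptions chosen to match the spark-core_2.10 artifact above; use whatever version your spark-core entry declares):

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-mllib_2.10</artifactId>
  <version>1.4.0</version>
  <!-- "provided" assumes the job is run via spark-submit, which already ships MLlib -->
  <scope>provided</scope>
</dependency>

With that in place, the org.apache.spark.mllib.fpm imports should resolve.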
On Sat, Jun 13, 2015 at 4:26 AM, masoom alam wrote:
> Thanks for the answer.
This is not working:

<dependency>
  <groupId>org.apache.spark.mlib</groupId>
  <artifactId>spark-mlib</artifactId>
  <scope>provided</scope>
</dependency>
On Sat, Jun 13, 2015 at 11:56 PM, masoom alam wrote:
> These two imports are missing and thus FP-growth is not compiling...
>
> import org.apa
http://repo.maven.apache.org/maven2/org/apache/spark/spark-mllib_2.10/1.4.0/spark-mllib_2.10-1.4.0.pom
*which is available*
Any pointers?
Thanks for the help.
On Sun, Jun 14, 2015 at 5:24 AM, masoom alam wrote:
> Thanks a lot. Will try in a while and update.
>
> Thanks again
> On Jun 14, 2015
<dependency>
  <groupId>org.scalatest</groupId>
  <artifactId>scalatest_2.10</artifactId>
  <version>2.2.1</version>
  <scope>test</scope>
</dependency>
mllib
Any clues?
On Sun, Jun 14, 2015 at 8:20 PM, masoom alam wrote:
> *Getting the followi
Is it possible to run FP-growth on streaming data in its current version, or is there a way around it?
I mean, is it possible to use/augment the old tree with the new incoming data and find the new set of frequent patterns?
Thanks
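As far as I know, MLlib's FP-growth (as of 1.4) is a batch algorithm with no built-in way to update an existing tree with new data. One crude workaround is simply to re-run it over each micro-batch of a stream. A rough Java sketch under that assumption, written against the 1.4-era streaming API (host, port, batch interval, and minSupport are all made up for illustration):

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.fpm.FPGrowth;
import org.apache.spark.mllib.fpm.FPGrowthModel;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingFPGrowthSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("StreamingFPGrowthSketch");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(30));

    // One transaction per line, items separated by spaces (hypothetical source).
    JavaDStream<List<String>> transactions = jssc
        .socketTextStream("localhost", 9999)
        .map(line -> Arrays.asList(line.split(" ")));

    // Re-mine the frequent itemsets from scratch on every batch; nothing is
    // carried over from the previous tree.
    transactions.foreachRDD(new Function<JavaRDD<List<String>>, Void>() {
      @Override
      public Void call(JavaRDD<List<String>> rdd) {
        if (!rdd.isEmpty()) {
          FPGrowthModel<String> model = new FPGrowth()
              .setMinSupport(0.2)   // arbitrary example threshold
              .run(rdd);
          for (FPGrowth.FreqItemset<String> itemset
              : model.freqItemsets().toJavaRDD().collect()) {
            System.out.println(itemset.javaItems() + " -> " + itemset.freq());
          }
        }
        return null;
      }
    });

    jssc.start();
    jssc.awaitTermination();
  }
}

This rebuilds the model every batch, so it only approximates what a truly incremental FP-tree over the full history would give you.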
Dear all,
I want to set up Spark in cluster mode. The problem is that each worker node is looking for the file to process in its local directory. Is it possible to set up something like HDFS so that each worker node takes its part of the file from HDFS? Any good tutorials for this?
Thanks
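In case it helps: yes, that is the usual setup. If the file is in HDFS (or any shared storage) rather than on a local path, sc.textFile splits it into partitions and each worker processes its share; only file:// paths have to exist on every node. A tiny sketch, with a made-up namenode address and path:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HdfsReadSketch {
  public static void main(String[] args) {
    JavaSparkContext sc =
        new JavaSparkContext(new SparkConf().setAppName("HdfsReadSketch"));
    // HDFS blocks become RDD partitions, so each worker reads and processes
    // its part of the file (preferring blocks stored on that node).
    JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/data/input.txt");
    System.out.println("line count: " + lines.count());
    sc.stop();
  }
}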
I am getting an error that I am not able to receive data from Spark in my Spark Streaming application. Please help with any pointers.
9 - java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at
java.net.AbstractPlainSocketImpl.doCon
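The trace is cut off, but "Connection refused" from a receiver usually means nothing is listening on the host/port the streaming source connects to (or a firewall is in the way), rather than a Spark problem. A quick check from the driver machine, with a placeholder host and port:

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
  public static void main(String[] args) throws Exception {
    // If this also fails with "Connection refused", fix the source side first
    // (start the server, e.g. nc -lk 9999 for socketTextStream tests, or open
    // the port in the firewall) before debugging the streaming job itself.
    try (Socket socket = new Socket()) {
      socket.connect(new InetSocketAddress("spark-host", 9999), 5000);
      System.out.println("port is reachable");
    }
  }
}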
Hi everyone,
I am trying to run the KDD data set, basically chapter 5 of the Advanced Analytics with Spark book. The data set is 789 MB, but Spark is taking some 3 to 4 hours. Is this normal behaviour, or is some tuning required?
The server RAM is 32 GB, but we can only give 4 GB of RAM on 64-bit Ubuntu.
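Three to four hours for 789 MB does sound slow. With only 4 GB per executor, the usual first suspects are the cached data not fitting in memory and Java serialization overhead, since the chapter's K-means job iterates over the same data many times. A sketch of the first tuning steps to try (the configuration values and HDFS path are placeholders, not the book's code):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class KddTuningSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("KddTuningSketch")
        // Kryo keeps cached records much smaller than default Java serialization.
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<String> raw = sc.textFile("hdfs://namenode:8020/kdd/kddcup.data");
    // Serialized, spill-to-disk caching avoids re-reading and re-parsing the
    // 789 MB input on every iteration when executor memory is tight.
    raw.persist(StorageLevel.MEMORY_AND_DISK_SER());
    System.out.println("records: " + raw.count());
    sc.stop();
  }
}

Beyond that, check in the web UI whether stages are spilling or recomputing, and give the executors as much of the 32 GB as the box can spare (--executor-memory on spark-submit).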
Hi everyone,
I am new to Scala. I have written an application in Spark using Scala.
Now we want to interface it through REST API endpoints. What is the best choice for us? Please share your experiences.
Thanks
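For what it is worth, the usual answers here are a lightweight HTTP layer in front of the existing code, e.g. Play, Akka HTTP, or spark-jobserver. Just to make the idea concrete, a minimal sketch using only the JDK's built-in HttpServer; runJob() is a hypothetical stand-in for whatever the existing Scala/Spark code computes:

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

import com.sun.net.httpserver.HttpServer;

public class RestEndpointSketch {
  // Hypothetical stand-in for the existing Spark/Scala computation.
  static String runJob(String dataset) {
    return "{\"dataset\":\"" + dataset + "\",\"status\":\"done\"}";
  }

  public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
    server.createContext("/jobs/run", exchange -> {
      byte[] body = runJob("kdd").getBytes(StandardCharsets.UTF_8);
      exchange.getResponseHeaders().add("Content-Type", "application/json");
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();
    System.out.println("Listening on http://localhost:8080/jobs/run");
  }
}

Whatever framework you pick, create the SparkContext once at startup and share it across requests rather than per call.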