Hi syepes,
Are you running the application in standalone mode?
Regards
On 23/06/2015 22:48, "syepes [via Apache Spark User List]" <
ml-node+s1001560n23456...@n3.nabble.com> wrote:
> Hello,
>
> I am trying to use the new Kafka consumer
> "KafkaUtils.createDirectStream" but I am having some issues m
Why? I tried this solution and it works fine.
On Tuesday, June 9, 2015, codingforfun [via Apache Spark User List] <
ml-node+s1001560n23218...@n3.nabble.com> wrote:
> Hi drarse, thanks for replying, but the approach you suggested, using a
> singleton object, does not work
>
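For reference, the per-JVM singleton pattern usually meant here can be sketched in plain Scala: the expensive resource lives in an `object`, so each JVM (driver or executor) creates it lazily on first access instead of serializing it with a closure. The names `ConnectionHolder` and `Conn` below are illustrative, not from this thread:

```scala
// Minimal sketch of the singleton pattern often suggested for Spark
// Streaming: the resource is held in an object and initialized lazily,
// once per JVM. `ConnectionHolder` and `Conn` are hypothetical names.
object ConnectionHolder {
  final class Conn {
    def send(msg: String): String = s"sent: $msg" // stand-in for real I/O
  }
  // `lazy val` initializes exactly once per JVM, on first access.
  lazy val conn: Conn = new Conn
}

object SingletonDemo {
  def main(args: Array[String]): Unit = {
    val a = ConnectionHolder.conn
    val b = ConnectionHolder.conn
    println(a eq b)                               // same instance in this JVM
    println(ConnectionHolder.conn.send("hello"))
  }
}
```

Inside something like `rdd.foreachPartition`, one would call `ConnectionHolder.conn` so each executor reuses its own instance rather than shipping a connection from the driver.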
When I run my program with spark-submit everything is OK. But when I try to
run it in standalone mode I get the following exception:
(This happens with
val df = sqlContext.jsonFile("./datos.json")
)
java.io.EOFException
[error] at
java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputSt
Can you post the code? Maybe the problem is in the foreachRDD body. :)
On Tuesday, April 28, 2015, CH.KMVPRASAD [via Apache Spark User List] <
ml-node+s1001560n22681...@n3.nabble.com> wrote:
> When I run my Spark Streaming application, the print method prints the
> result fine, but I used foreachRDD on that
Hello!
I have had a question for a few days now. I am working with DataFrames, and
with Spark SQL I imported a JSON file:
val df = sqlContext.jsonFile("file.json")
In this JSON I have the label and the features. I selected them:
val features = df.select("feature1", "feature2", "feature3", ...)
val labe
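Without a Spark cluster at hand, the same feature/label split can be sketched over plain Scala collections; the field names (`feature1` … `label`) mirror the snippet above, and the `Record` case class is a hypothetical stand-in for the parsed JSON rows:

```scala
// Plain-Scala sketch of splitting parsed records into features and a
// label, mirroring the DataFrame `select` above. `Record` is hypothetical;
// with Spark one would use df.select(...) on the DataFrame instead.
case class Record(feature1: Double, feature2: Double,
                  feature3: Double, label: Double)

object SplitDemo {
  val data = Seq(
    Record(1.0, 2.0, 3.0, 0.0),
    Record(4.0, 5.0, 6.0, 1.0)
  )
  // Features as one column of vectors, labels as a separate column.
  val features: Seq[Array[Double]] =
    data.map(r => Array(r.feature1, r.feature2, r.feature3))
  val labels: Seq[Double] = data.map(_.label)

  def main(args: Array[String]): Unit = {
    println(features.map(_.mkString(",")).mkString("; "))
    println(labels.mkString(","))
  }
}
```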
I am testing Random Forest in Spark, but I have a question... If I train a
second time, will the decision trees already created be updated, or are
they created anew? That is, will the system keep learning from each
dataset, or only from the first one?
Thanks for everything