Hello,
I have noticed that the random forest implementation crashes when too many trees or too large a maxDepth is used.
I'm guessing that this has something to do with the number of nodes that need to be kept in the driver's memory during the run, but when I examined the node structure it seemed rather small.
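For reference, this is roughly the shape of the call I mean; a minimal sketch with placeholder data and parameter values, not my actual program:

import java.util.HashMap;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.RandomForest;
import org.apache.spark.mllib.tree.model.RandomForestModel;
import org.apache.spark.mllib.util.MLUtils;

public class RandomForestDepthTest {
  public static void main(String[] args) {
    JavaSparkContext sc =
        new JavaSparkContext(new SparkConf().setAppName("rf-depth-test"));
    // sample_libsvm_data.txt ships with the Spark distribution; the path is a placeholder
    JavaRDD<LabeledPoint> data = MLUtils
        .loadLibSVMFile(sc.sc(), "data/mllib/sample_libsvm_data.txt")
        .toJavaRDD();

    int numTrees = 500; // large values here are what seem to trigger the crash
    int maxDepth = 25;  // each extra level roughly doubles the nodes per tree
                        // (MLlib caps maxDepth at 30)
    RandomForestModel model = RandomForest.trainClassifier(
        data, 2, new HashMap<Integer, Integer>(), numTrees,
        "auto", "gini", maxDepth, 32, 12345);
    System.out.println("Trained " + model.numTrees() + " trees");
    sc.stop();
  }
}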
I have written a custom receiver for converting the tuples in the Dynamic Queue/EventGen to a DStream, but I don't know why it only processes data for a short time (3-4 sec.) and then shows the queue as empty. Any suggestions, please?
--code //
public class JavaCustomReceiver extends Receiver<String>
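For context, here is a minimal sketch of the kind of queue-draining receiver I mean (the static BlockingQueue stands in for the Dynamic Queue/EventGen; note that a receiver is deserialized onto an executor, so a driver-side queue is only visible to it when everything runs in one JVM, e.g. master = local[*]):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.receiver.Receiver;

public class JavaCustomReceiver extends Receiver<String> {
  // Shared queue the generator thread writes into; static so that the
  // receiver sees the same instance in a single-JVM (local[*]) test.
  public static final BlockingQueue<String> QUEUE =
      new LinkedBlockingQueue<String>();

  public JavaCustomReceiver() {
    super(StorageLevel.MEMORY_AND_DISK_2());
  }

  @Override
  public void onStart() {
    new Thread(() -> {
      try {
        // take() blocks until data arrives; a loop that exits as soon as
        // the queue is momentarily empty stops feeding the stream, which
        // matches the "empty after 3-4 sec." symptom.
        while (!isStopped()) {
          store(QUEUE.take());
        }
      } catch (InterruptedException e) {
        restart("Interrupted while polling the queue", e);
      }
    }).start();
  }

  @Override
  public void onStop() {
    // The polling thread exits via isStopped(); nothing else to clean up.
  }
}

It would be wired up with ssc.receiverStream(new JavaCustomReceiver()), with the generator thread offering tuples into JavaCustomReceiver.QUEUE.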
In the conf/slaves file, I have hostnames.
Before 1.4.0 it was okay. I looked at the code in the class org.apache.spark.util.Utils; after I altered the functions localHostName and localHostNameForURI, it went back to hostnames again.
I just don't understand why these basic functions were changed. Hostnames are nicer.
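If I remember right, the same change that made Utils.localHostName return an IP also added a SPARK_LOCAL_HOSTNAME override, so it may be possible to get hostnames back without patching Spark. This is unverified, so please check that Utils.scala in your build actually reads the variable:

# conf/spark-env.sh on every node
export SPARK_LOCAL_HOSTNAME=$(hostname)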
JavaDStream inputStream = ssc.queueStream(rddQueue);
Can this rddQueue be dynamic in nature? If yes, how do I keep it running until rddQueue is finished?
Is there any other way to get an rddQueue from a dynamically updatable normal Queue?
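For comparison, here is a minimal self-contained sketch of feeding queueStream from a queue that keeps growing while the stream runs (class name, numbers, and timings are arbitrary):

import java.util.Arrays;
import java.util.LinkedList;
import java.util.Queue;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class DynamicQueueStream {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("queue-test");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(1));
    JavaSparkContext sc = ssc.sparkContext();

    // queueStream polls this queue once per batch interval, so RDDs added
    // after the context starts are still picked up.
    final Queue<JavaRDD<Integer>> rddQueue = new LinkedList<JavaRDD<Integer>>();
    JavaDStream<Integer> inputStream = ssc.queueStream(rddQueue);
    inputStream.count().print();

    ssc.start();

    // Feed the queue while the stream is already running.
    for (int i = 0; i < 30; i++) {
      synchronized (rddQueue) {
        rddQueue.add(sc.parallelize(Arrays.asList(i, i + 1, i + 2)));
      }
      Thread.sleep(1000);
    }
    ssc.stop();
  }
}

Note that by default queueStream dequeues one RDD per batch interval, so the stream only reports empty batches once the queue drains faster than it is refilled.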
--
Thanks & Regards,
SERC-IISC
Anshu Shukla
Hi everyone:
I am having several problems with an algorithm for MLlib that I am developing. It uses large broadcast variables over many iterations, with Breeze vectors as RDDs. The problem is that in some stages the Spark program freezes without notification. I have tried to reduce the use of
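A minimal sketch of per-iteration broadcast cleanup (made-up names and a toy update, not the real algorithm), in case the freeze relates to old broadcasts accumulating on the executors:

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

public class BroadcastHygiene {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("broadcast-hygiene"));
    final double[] model = new double[1 << 20]; // stand-in for a large vector

    for (int iter = 0; iter < 100; iter++) {
      // Re-broadcast the current model for this iteration's tasks.
      final Broadcast<double[]> bcModel = sc.broadcast(model);
      double delta = sc.parallelize(Arrays.asList(1, 2, 3, 4))
          .map(i -> bcModel.value()[i] + i) // executors read the broadcast
          .reduce((a, b) -> a + b);
      model[0] += delta; // toy driver-side update

      // Release the old copy; otherwise each iteration leaves another
      // multi-megabyte broadcast cached on every executor.
      bcModel.unpersist(true); // blocking = true
    }
    sc.stop();
  }
}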
In the conf/slaves file, do you have the IP addresses or the hostnames?
Thanks
Best Regards
On Sat, Jun 13, 2015 at 9:51 PM, Sea <261810...@qq.com> wrote:
> In Spark 1.4.0, I find that the Address is an IP (it was a hostname in
> v1.3.0). Why? Who changed it?
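For reference, conf/slaves is just one worker per line, either hostnames or IPs (made-up example):

spark-worker-1.example.com
spark-worker-2.example.com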
I use the attached program to test checkpointing. It's quite simple.
When I run the program a second time, it loads the checkpoint data; that's expected. However, I see an NPE in the driver log.
Do you have any idea about the issue? I'm on Spark 1.4.0, thank you very much!
== logs ==
15/
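For comparison, the recovery pattern I am trying to follow is the getOrCreate one below (the socket source, port, and checkpoint path are placeholders, not the attached program). As far as I understand, anything the streaming graph references that is initialized outside the factory is not recreated on recovery and can surface as a driver NPE:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.api.java.JavaStreamingContextFactory;

public class CheckpointRecovery {
  private static final String CHECKPOINT_DIR = "/tmp/checkpoint-test";

  public static void main(String[] args) throws Exception {
    JavaStreamingContextFactory factory = new JavaStreamingContextFactory() {
      @Override
      public JavaStreamingContext create() {
        SparkConf conf = new SparkConf()
            .setMaster("local[2]").setAppName("checkpoint-test");
        JavaStreamingContext ssc =
            new JavaStreamingContext(conf, Durations.seconds(5));
        ssc.checkpoint(CHECKPOINT_DIR);
        // All DStream setup must happen inside this factory. On the second
        // run, getOrCreate() restores the graph from the checkpoint and
        // never calls create(), so state initialized only outside this
        // method comes back null and shows up as a driver-side NPE.
        JavaDStream<String> lines = ssc.socketTextStream("localhost", 9999);
        lines.count().print();
        return ssc;
      }
    };
    JavaStreamingContext ssc =
        JavaStreamingContext.getOrCreate(CHECKPOINT_DIR, factory);
    ssc.start();
    ssc.awaitTermination();
  }
}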