Hi,
Well, I finally was able to figure it out. I was using VectorIndexer with
maxCategories set to 2 (the minimum allowed value is also 2) for my features,
and with an increased dimension of the feature vector I ran into the "no such
element found" problem in VectorIndexer.
It sounds a bit straightforward now, but I wa
Any suggestions, anyone?
Using version 1.5.1.
Regards
Ankush Khanna
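For anyone hitting the same thing, here is a minimal sketch of the situation
described above. It is not Ankush's actual code: the column names, the toy data
and the spark-shell setup (sqlContext from a 1.5.x shell) are all assumptions.
As far as I understand, with setMaxCategories(2) a feature with at most two
distinct training values is treated as categorical, and in 1.5.x transforming a
row whose value for that feature was never seen during fit() throws the
"key not found" NoSuchElementException.

import org.apache.spark.ml.feature.VectorIndexer
import org.apache.spark.mllib.linalg.Vectors

// Toy training data: feature 0 has three distinct values (treated as continuous),
// feature 1 has only 0.0 and 1.0, so with maxCategories = 2 it becomes categorical.
val train = sqlContext.createDataFrame(Seq(
  (0.0, Vectors.dense(1.5, 0.0)),
  (1.0, Vectors.dense(2.5, 1.0)),
  (0.0, Vectors.dense(3.5, 1.0))
)).toDF("label", "features")

val indexerModel = new VectorIndexer()
  .setInputCol("features")
  .setOutputCol("indexedFeatures")
  .setMaxCategories(2)   // 2 is also the smallest value this param accepts
  .fit(train)

// A row whose categorical feature holds a value never seen during fit():
// in 1.5.x this transform fails with
// java.util.NoSuchElementException: key not found: 2.0
val test = sqlContext.createDataFrame(Seq(
  (0.0, Vectors.dense(4.0, 2.0))
)).toDF("label", "features")

indexerModel.transform(test).show()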
On Nov 10, 2015, at 11:37 AM, Ankush Khanna wrote:
Hi,
I was working on a simple task (running locally): just reading a file (35 MB)
with about 200 features and building a random forest with 5 trees of depth 5.
While saving the output with:
predictions.select("VisitNumber", "probability")
  .write.format("json") // tried different formats
  .
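Filling in the truncated call above for readability only; the save mode and the
output path below are my guesses, not anything taken from the original mail:

predictions.select("VisitNumber", "probability")
  .write
  .format("json")            // other formats were reportedly tried as well
  .mode("overwrite")         // assumed; the original mode, if any, was cut off
  .save("/tmp/predictions")  // assumed output path; the real one was cut off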
Hi,
I get exactly the same problem here. Have you found the problem?
Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/NoSuchElementException-key-not-found-when-changing-the-window-lenght-and-interval-in-Spark-Streaming-tp9010p9283.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi all,
I'm writing a Spark Streaming program that uses reduceByKeyAndWindow(), and
when I change the window length or sliding interval I get the following
exceptions, running in local mode:
14/07/06 13:03:46 ERROR actor.OneForOneStrategy: key not found:
1404677026000 ms
java.util.NoSuchElementException
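For context, a minimal reduceByKeyAndWindow() setup looks roughly like this; it
is not the poster's code, and the socket source, batch interval and checkpoint
directory are placeholders. The window length and sliding interval are the two
durations being changed above, and both must be multiples of the batch interval.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._  // pair ops on older Spark versions

val conf = new SparkConf().setMaster("local[2]").setAppName("window-sketch")
val ssc = new StreamingContext(conf, Seconds(5))   // 5 s batch interval
ssc.checkpoint("/tmp/streaming-checkpoint")        // required for the inverse-reduce variant

val counts = ssc.socketTextStream("localhost", 9999)
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  // 30 s window length, 10 s sliding interval -- both multiples of the batch interval
  .reduceByKeyAndWindow(_ + _, _ - _, Seconds(30), Seconds(10))

counts.print()
ssc.start()
ssc.awaitTermination()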
I am not sure what DStream operations you are using, but some operation is
internally creating CoalescedRDDs. That is causing the race condition. I
might be able to help if you can tell me what DStream operations you are using.
TD
On Tue, Jun 3, 2014 at 4:54 PM, Michael Chang wrote:
Hi Tathagata,
Thanks for your help! By not using coalesced RDDs, do you mean not
repartitioning my DStream?
Thanks,
Mike
On Tue, Jun 3, 2014 at 12:03 PM, Tathagata Das
wrote:
I think I know what is going on! This is probably a race condition in the
DAGScheduler. I have added a JIRA for this; the fix is not trivial though.
https://issues.apache.org/jira/browse/SPARK-2002
A "not-so-good" workaround for now would be to not use coalesced RDDs, which
avoids the race condition.
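To make the workaround concrete, here is a purely illustrative sketch; "pairs"
stands in for whatever keyed DStream the job actually builds, and the partition
count is arbitrary. An explicit coalesce inside transform() is one way
CoalescedRDDs end up in a streaming job, and the suggestion above amounts to
leaving that step out until SPARK-2002 is fixed.

import org.apache.spark.streaming.dstream.DStream

// Each batch of this stream contains a CoalescedRDD.
def withCoalesce(pairs: DStream[(String, Int)]): DStream[(String, Int)] =
  pairs.transform(rdd => rdd.coalesce(4))

// Workaround sketch: keep the default partitioning and skip the coalesce entirely.
def withoutCoalesce(pairs: DStream[(String, Int)]): DStream[(String, Int)] =
  pairs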
I only had the warning-level logs, unfortunately. There were no other
references to 32855 (except a repeated stack trace, I believe). I'm using
Spark 0.9.1.
On Mon, Jun 2, 2014 at 5:50 PM, Tathagata Das
wrote:
Do you have the info-level logs of the application? Can you grep for the
value "32855" to find any references to it? Also, what version of Spark are
you using (so that I can match the stack trace; it does not seem to match
Spark 1.0)?
TD
On Mon, Jun 2, 2014 at 3:27 PM, Michael Chang wrote:
Hi all,
Seeing a random exception kill my Spark Streaming job. Here's a stack
trace:
java.util.NoSuchElementException: key not found: 32855
        at scala.collection.MapLike$class.default(MapLike.scala:228)
        at scala.collection.AbstractMap.default(Map.scala:58)
        at scala.collectio