It seems you are doing an incorrect type conversion somewhere; it would help if you could paste the relevant piece of code.
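If you followed that guide and saved the cluster centers with saveAsObjectFile, the usual cause of an ArrayStoreException is calling objectFile without a type parameter, so the elements come back as plain Objects. A sketch of the load side (assuming the saved RDD contained MLlib Vectors; the path here is just a placeholder):

```scala
import org.apache.spark.mllib.clustering.KMeansModel
import org.apache.spark.mllib.linalg.Vector

// Read the centers back with an explicit element type; without [Vector],
// the inferred type is too loose and collecting into a typed array
// throws java.lang.ArrayStoreException.
val centers = sc.objectFile[Vector]("/path/to/data/file/directory/").collect()

// Rebuild the model from the recovered cluster centers.
val model = new KMeansModel(centers)
```

This assumes the Spark 1.x MLlib API, where KMeansModel can be constructed directly from an Array[Vector].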

Thanks
Best Regards

On Mon, Mar 9, 2015 at 12:24 PM, Xi Shen <davidshe...@gmail.com> wrote:

> Hi,
>
> I used the method described on this page
> http://databricks.gitbooks.io/databricks-spark-reference-applications/content/twitter_classifier/train.html
> to save my k-means model.
>
> But now I have no idea how to load it back. I tried:
>
> sc.objectFile("/path/to/data/file/directory/")
>
>
> But I got this error:
>
> org.apache.spark.SparkDriverExecutionException: Execution error
>         at
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:997)
>         at
> org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1417)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>         at
> org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>         at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ArrayStoreException: [Ljava.lang.Object;
>         at scala.runtime.ScalaRunTime$.array_update(ScalaRunTime.scala:88)
>         at
> org.apache.spark.SparkContext$$anonfun$runJob$3.apply(SparkContext.scala:1339)
>         at
> org.apache.spark.SparkContext$$anonfun$runJob$3.apply(SparkContext.scala:1339)
>         at
> org.apache.spark.scheduler.JobWaiter.taskSucceeded(JobWaiter.scala:56)
>         at
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:993)
>         ... 12 more
>
> Any suggestions?
>
>
> Thanks,
>
> Xi Shen
> about.me/davidshen
>
