There is RowMatrix implemented in Spark.
I checked for a while but failed to find any matrix operations (like
multiplication etc.) defined in the class yet.
So, my question is: if I want to do matrix multiplication (vector x
matrix multiplication, to be precise), do I need to convert the vec
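For illustration only (not from the original thread), here is a minimal workaround sketch, assuming Spark 1.x MLlib; all data and variable names below are made up. Since v^T * M is just a weighted sum of the matrix rows, you can broadcast the local vector and fold over the RowMatrix's rows:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

object VecTimesRowMatrix {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("vec-x-mat").setMaster("local"))

    // A tiny 2 x 2 RowMatrix: rows (1, 2) and (3, 4).
    val mat = new RowMatrix(sc.parallelize(Seq(
      Vectors.dense(1.0, 2.0),
      Vectors.dense(3.0, 4.0))))

    // Local vector v of length numRows, broadcast to the workers.
    val bv = sc.broadcast(Array(0.5, 0.5))

    // v^T * M = sum_i v_i * row_i: scale each row by its vector entry, then sum.
    val vTimesM = mat.rows.zipWithIndex().map { case (row, i) =>
      row.toArray.map(_ * bv.value(i.toInt))
    }.reduce((a, b) => a.zip(b).map { case (x, y) => x + y })

    println(vTimesM.mkString(","))   // expect 2.0,3.0
    sc.stop()
  }
}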
I am currently facing the same problem. Error snapshot as below:
14-07-24 19:15:30 WARN [pool-3-thread-1] SendingConnection: Error
finishing connection to r64b22034.tt.net/10.148.129.84:47525
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Nativ
val result = model.predict(prdctpairs)
result.cache()
result.map(x =>
x.user+","+x.product+","+x.rating).saveAsTextFile(output)
it succeeds.
Could anyone help explain why the cache() is necessary?
thanks
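For illustration, a self-contained sketch of that pattern (not from the original mail; the tiny ratings data and output path are made up): cache() only marks the RDD to be kept in memory after it is first materialized, so retried or later actions reuse the cached partitions instead of re-running predict() and its whole lineage.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.recommendation.{ALS, Rating}

object CacheBeforeSave {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cache-demo").setMaster("local"))

    // Tiny made-up ratings just so the example runs end to end.
    val ratings = sc.parallelize(Seq(
      Rating(1, 10, 5.0), Rating(1, 20, 1.0),
      Rating(2, 10, 4.0), Rating(2, 20, 2.0)))
    val model = ALS.train(ratings, /* rank */ 2, /* iterations */ 5, /* lambda */ 0.01)

    val prdctpairs = ratings.map(r => (r.user, r.product))
    val result = model.predict(prdctpairs)
    result.cache()   // materialize once; retries and later actions reuse it

    result.map(x => x.user + "," + x.product + "," + x.rating)
          .saveAsTextFile("/tmp/predictions")   // hypothetical output path
    sc.stop()
  }
}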
On Fri, May 9, 2014 at 6:45 PM, phoenix bai wrote:
> Hi all,
>
> My spark code
Hi all,
My Spark code is running on yarn-standalone.
The last three lines of the code are as below:
val result = model.predict(prdctpairs)
result.map(x =>
x.user+","+x.product+","+x.rating).saveAsTextFile(output)
sc.stop()
The same code sometimes is able to run successfully and could g
I used spark-submit to run the MovieLensALS example from the examples
module.
Here is the command:
$spark-submit --master local
/home/phoenix/spark/spark-dev/examples/target/scala-2.10/spark-examples-1.0.0-SNAPSHOT-hadoop1.0.4.jar
--class org.apache.spark.examples.mllib.MovieLensALS u.data
Also, check whether the jar file that includes your example code is under
examples/target/scala-2.10/.
On Sat, May 3, 2014 at 5:58 AM, SK wrote:
> I am using Spark 0.9.1 in standalone mode. In the
> SPARK_HOME/examples/src/main/scala/org/apache/spark/ folder, I created my
> directory called "mycode" in w
Hi all,
I am reading the Spark documentation (
http://spark.apache.org/docs/0.9.0/mllib-guide.html#gradient-descent-primitive).
I am trying to translate the doc into Chinese, and it talks about the
gradient descent primitive, but I am not quite sure what is meant by
"primitive"?
I know gradient desce
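Not part of the original mail, only an illustration: in that guide, "primitive" roughly means a low-level building block that the higher-level MLlib algorithms (logistic regression, SVM, etc.) reuse, namely the update w := w - alpha * gradient(w). A tiny plain-Scala sketch of that update on made-up least-squares data:

// Sketch only (plain Scala, not MLlib): gradient descent on y ~ w * x
// with least-squares loss L(w) = sum_i (w * x_i - y_i)^2 / 2.
object GradientDescentSketch {
  val data = Seq((1.0, 3.0), (2.0, 6.0), (3.0, 9.0))   // made-up data, y = 3 * x

  def gradient(w: Double): Double =
    data.map { case (x, y) => (w * x - y) * x }.sum

  def main(args: Array[String]): Unit = {
    var w = 0.0
    val alpha = 0.01
    for (step <- 1 to 200) w -= alpha * gradient(w)   // w := w - alpha * grad
    println(s"learned w = $w")                        // should be close to 3.0
  }
}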
The total memory of your machine is 2G, right?
Then how much memory is left free? Wouldn't Ubuntu take up quite a big
portion of the 2G?
Just a guess!
On Sat, May 3, 2014 at 8:15 PM, Carter wrote:
> Hi, thanks for all your help.
> I tried your setting in the sbt file, but the problem is still there
According to the code, SPARK_YARN_APP_JAR is retrieved from system
variables, and the key-value pairs you pass to JavaSparkContext are
isolated from system variables.
So, maybe you should try setting it through System.setProperty().
thanks
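A minimal sketch of that suggestion (not from the original mail; the jar path and app name are hypothetical placeholders), setting the property before the JavaSparkContext is constructed:

import org.apache.spark.api.java.JavaSparkContext

object YarnAppJarSketch {
  def main(args: Array[String]): Unit = {
    // Set SPARK_YARN_APP_JAR as a JVM system property before the context
    // is created, so the YARN client code can pick it up.
    System.setProperty("SPARK_YARN_APP_JAR", "/path/to/your-app.jar")
    val jsc = new JavaSparkContext("yarn-client", "my-app")
    // ... build RDDs with jsc ...
    jsc.stop()
  }
}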
On Wed, Apr 23, 2014 at 6:05 PM, 肥肥 <19934...@qq.
Hi all,
I searched around, but failed to find anything about running SparkR
on YARN.
So, is it possible to run SparkR with YARN, either in yarn-standalone or
yarn-client mode?
If so, is there any document that could guide me through the build & setup
processes?
I am desperate for some