Hi Everyone !!
I'm trying to get an on-premises GPU instance of Spark 3 running on my Ubuntu
box, and I am following:
https://nvidia.github.io/spark-rapids/docs/get-started/getting-started-on-prem.html#example-join-operation
Does anyone have any insight into why a Spark job isn't being run on the GPU?
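For what it's worth, the usual first check is whether the RAPIDS plugin is actually loaded and whether it reports which operators fell back to the CPU. A minimal sketch, assuming the rapids-4-spark jar is already on the classpath (note spark.plugins has to be set before the session starts, typically via --conf on spark-shell/spark-submit):

import org.apache.spark.sql.SparkSession

// Config keys below are from the spark-rapids getting-started guide linked above;
// spark.rapids.sql.explain=NOT_ON_GPU logs why each operator stayed on the CPU.
val spark = SparkSession.builder()
  .appName("gpu-join-test") // hypothetical app name
  .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
  .config("spark.rapids.sql.enabled", "true")
  .config("spark.rapids.sql.explain", "NOT_ON_GPU")
  .getOrCreate()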
I'm having trouble loading data from an S3 bucket.
Currently DC/OS is running Spark 2, so I'm not sure if the code needs a
modification with the upgrade.
My code at the moment looks like this:
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "xxx")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "xxx")
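In case it helps: with Spark 2 the s3n connector is deprecated in favor of s3a. A sketch of the equivalent setup, assuming the hadoop-aws jar matching your Hadoop version is on the classpath (the bucket path here is hypothetical):

// s3a property names come from hadoop-aws; the fs.s3n.* ones do not apply to it
sc.hadoopConfiguration.set("fs.s3a.access.key", "xxx")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "xxx")
val lines = sc.textFile("s3a://my-bucket/path/") // note the s3a:// scheme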
Thanks,
M
> You'll want all of the various spark versions to be the same.
>
> On Tue, Jul 26, 2016 at 12:34 PM, Michael Armbrust wrote:
>
>> If you are using %% (double) then you do not need _2.11.
>>
>> On Tue, Jul 26, 2016 at 12:18 PM, Martin Somers wrote:
>>
My build file looks like:

libraryDependencies ++= Seq(
  // other dependencies here
  "org.apache.spark" %% "spark-core" % "1.6.2" % "provided",
  "org.apache.spark" %% "spark-mllib_2.11" % "1.6.0",
  "org.scalanlp" % "breeze_2.11" % "0.7"
)
Just wondering: what is the correct way of building a Spark job using Scala,
and are there any changes coming with Spark v2?
I've been following this post:
http://www.infoobjects.com/spark-submit-with-sbt/
Then again, I've been mainly using Docker locally. What is a decent container
for submitting these jobs?
Just looking at a comparison between Matlab and Spark for SVD with an input
matrix N.
This is Matlab code - yes, a very small matrix:
N =

    2.5903   -0.0416    0.6023
   -0.1236    2.5596    0.7629
    0.0148   -0.0693    0.2490

U =

   -0.3706   -0.9284    0.0273
   -0.9287    0.3708         0
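For reference, the Spark side of the same computation could be written with MLlib's RowMatrix. A sketch against the 1.6 API (note that SVD implementations can flip the sign of singular-vector columns, so U may differ from Matlab's by a factor of -1 per column):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// The same 3x3 input matrix N as in the Matlab snippet above
val rows = sc.parallelize(Seq(
  Vectors.dense(2.5903, -0.0416, 0.6023),
  Vectors.dense(-0.1236, 2.5596, 0.7629),
  Vectors.dense(0.0148, -0.0693, 0.2490)
))
val mat = new RowMatrix(rows)

// Full SVD with k = 3, keeping U as well as the singular values and V
val svd = mat.computeSVD(3, computeU = true)
println(svd.s)                         // singular values
svd.U.rows.collect().foreach(println)  // left singular vectors (distributed)
println(svd.V)                         // right singular vectors (local matrix)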