I figured out the cause... brew updated to Scala 2.11, so I had picked up the
latest Scala version...
Once I reverted to 2.10.4, HEAD and the 1.0.1 tag compile fine...
On Sat, Jul 19, 2014 at 1:24 PM, Chester Chen wrote:
Works for me as well:
git branch
branch-0.9
branch-1.0
* master
Chesters-MacBook-Pro:spark chester$ git pull --rebase
remote: Counting objects: 578, done.
remote: Compressing objects: 100% (369/369), done.
remote: Total 578 (delta 122), reused 418 (delta 71)
Receiving objects: 100% (5
> project mllib
...
> clean
...
> compile
...
> test
...all works fine for me @2a732110d46712c535b75dd4f5a73761b6463aa8
On Sat, Jul 19, 2014 at 11:10 AM, Debasish Das wrote:
I am at the reservoir sampling commit:
commit 586e716e47305cd7c2c3ff35c0e828b63ef2f6a8
Author: Reynold Xin
Date: Fri Jul 18 12:41:50 2014 -0700
sbt/sbt -Dhttp.nonProxyHosts=132.197.10.21
> project mllib
[info] Set current project to spark-mllib (in build
file:/Users/v606014/spark-master/)
See here for a similar issue:
http://mail-archives.apache.org/mod_mbox/spark-user/201401.mbox/%3CCALNFXi2hBSyCkPpnBJBYJnPv3dSLNw8VpL_6caEn3yfXCykO=w...@mail.gmail.com%3E
On Apr 6, 2014 4:10 PM, "Sean Owen" wrote:
scala.None certainly isn't new in 2.10.4; it's ancient:
http://www.scala-lang.org/api/2.10.3/index.html#scala.None$
Surely this is some other problem?
On Sun, Apr 6, 2014 at 6:46 PM, Koert Kuipers wrote:
I suggest we stick to 2.10.3, since otherwise it seems that (surprisingly)
everyone is forced to upgrade.
On Sun, Apr 6, 2014 at 1:46 PM, Koert Kuipers wrote:
Also, I thought Scala 2.10 was binary compatible, but that does not seem to be
the case. The Spark artifacts built for Scala 2.10.4 don't work for me, since we
are still on Scala 2.10.3; when I recompiled and published Spark with
Scala 2.10.3, everything was fine again.
Errors I see:
java.lang.ClassNotFoun
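One way to keep the artifacts consistent is to pin the downstream build to the exact Scala patch release the cluster runs. A minimal sbt sketch (the dependency and version strings here are illustrative, not the actual build from this thread):

```scala
// build.sbt (sketch): pin the exact Scala patch release the cluster runs,
// rather than whatever brew/sbt resolves by default.
scalaVersion := "2.10.3"

// Spark 1.0.x artifacts are published against the 2.10 binary version.
// If patch releases turn out not to interoperate in practice, recompile
// Spark locally against the pinned scalaVersion and `sbt publish-local`.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1"
```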
Patrick,
this has happened before: a commit introduced Java 7 code/dependencies and your
build didn't fail. I think it was when Reynold upgraded to Jetty 9.
It must be that your entire build infrastructure runs Java 7...
On Sat, Apr 5, 2014 at 6:06 PM, Patrick Wendell wrote:
Yeah, the Spark builds are fine...
For solvers we are planning to use Breeze optimization, since it has most of
the core functions we will need and we can enhance it further (a QP solver,
for example).
Right now sparse k-means in Spark MLlib uses Breeze, and that might not even
need this line of code. But
That's a Breeze question, no? You should not need to compile Breeze
yourself to compile Spark -- why do that?
That method indeed only exists in Java 7. But Breeze seems to target
Java 6 as expected:
https://github.com/scalanlp/breeze/blob/master/build.sbt#L59
I see this particular line of code w
That's confusing. It seems to me the Breeze dependency has been compiled
with Java 6, since the MLlib tests passed fine for me with Java 6.
On Sun, Apr 6, 2014 at 12:00 PM, Debasish Das wrote:
Hi Koert,
How do I specify that in sbt ?
Is this the correct way ?
javacOptions ++= Seq("-target", "1.6", "-source","1.6")
The Breeze project, for example, compiles fine with JDK 7 but fails with JDK 6;
the function it fails on:
[error] /home/debasish/github/breeze/src/main/scala/breeze/util/package
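For reference, a minimal sbt fragment along the lines being asked about (a sketch; note that -source/-target alone do not stop javac running on JDK 7 from resolving Java-7-only APIs -- that needs -bootclasspath or an actual JDK 6):

```scala
// build.sbt fragment (sketch): ask javac and scalac to emit
// Java 6 compatible bytecode.
javacOptions ++= Seq("-source", "1.6", "-target", "1.6")
scalacOptions += "-target:jvm-1.6"
```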
Classes compiled with Java 7 run fine on Java 6 if you specified "-target
1.6". However, if that's the case, generally you should also be able to
compile it with Java 6 just fine.
Something compiled with Java 7 with "-target 1.7" will not run on Java 6.
On Sat, Apr 5, 2014 at 9:10 PM, Debasi
With JDK 7 I could compile it fine:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
What happens if, say, I take the jar and try to deploy it on the ancient
CentOS 6 default Java on the cluster?
java -version
java versi
Will do. I'm just finishing a recompile to check for anything else like this.
The reason is that the tests run with Java 7 (as lots of us do,
including me), so it used the Java 7 classpath and found the class.
It's possible to use Java 7 with the Java 6 -bootclasspath. Or just
use Java 6.
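A sketch of that combination in sbt (the rt.jar path below is hypothetical; point it at a real JDK 6 install):

```scala
// build.sbt fragment (sketch): compile on JDK 7 but resolve classes
// against the JDK 6 runtime, so Java-7-only APIs fail at compile time
// instead of with ClassNotFoundException / NoSuchMethodError on a
// Java 6 cluster. The rt.jar path is hypothetical.
javacOptions ++= Seq(
  "-source", "1.6",
  "-target", "1.6",
  "-bootclasspath", "/usr/lib/jvm/java-6-openjdk/jre/lib/rt.jar"
)
```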
@patrick our cluster still has Java 6 deployed... and I compiled using JDK 6...
Sean is looking into it... this API is in Java 7 but not Java 6...
On Sat, Apr 5, 2014 at 3:06 PM, Patrick Wendell wrote:
If you want to submit a hot fix for this issue specifically please do. I'm
not sure why it didn't fail our build...
On Sat, Apr 5, 2014 at 2:30 PM, Debasish Das wrote:
I verified this is happening for both CDH 4.5 and 1.0.4... My deploy
environment is Java 6... so Java 7 compilation is not going to help...
Is this the PR which caused it?
Andre Schumacher
fbebaed Spark parquet improvements: A few improvements to the Parquet
support for SQL queries: - Instea
I can compile with Java 7...let me try that...
On Sat, Apr 5, 2014 at 2:19 PM, Sean Owen wrote:
That method was added in Java 7. The project is on Java 6, so I think
this was just an inadvertent error in a recent PR (it was the 'Spark
parquet improvements' one).
I'll open a hot-fix PR after looking for other stuff like this that
might have snuck in.
--
Sean Owen | Director, Data Science | Lo