24 test failures for sql/test:
https://gist.github.com/ssimeonov/89862967f87c5c497322
Matt,
Thanks for the email. Are you just asking whether these scenarios should work, or
reporting that they don't?
Internally, the way we track physical data distribution should make the
scenarios you describe work. If they don't, we should make them work.
On Sat, Feb 6, 2016 at 6:49 AM, Matt Cheah wrote:
Yeah, I'm not sure what's going on either. You can just run the unit tests
through "build/sbt sql/test" without running MiMa (the binary compatibility checker).
On Mon, Feb 8, 2016 at 3:47 PM, sim wrote:
> Same result with both caches cleared.
Same result with both caches cleared.
-
Not 100% sure what's going on, but you can try wiping your local Ivy
(~/.ivy2) and Maven (~/.m2/repository) caches.
On Mon, Feb 8, 2016 at 12:05 PM, sim wrote:
> Reynold, I just forked + built master and I'm getting lots of binary
> compatibility errors when running the tests.
>
> https://gist.github.com/ssimeonov/69cb0b41750be776
Reynold, I just forked + built master and I'm getting lots of binary
compatibility errors when running the tests.
https://gist.github.com/ssimeonov/69cb0b41750be776
Nothing in the dev tools section of the wiki on this. Any advice on how to
get green before I work on the PRs?
Thanks,
Sim
Sure.
-
Correct :)
From: Sun, Rui
Sent: Sunday, February 7, 2016 5:19 AM
Subject: RE: Fwd: Writing to jdbc database from SparkR (1.5.2)
To: Felix Cheung, Andrew Holway
This should be solved by your pending PR
https://github.com/apache/spark/pull/10480, right?
Both of these make sense to add. Can you submit a pull request?
On Sun, Feb 7, 2016 at 3:29 PM, sim wrote:
> The more Spark code I write, the more I hit the same use cases where the
> Scala APIs feel a bit awkward. I'd love to understand if there are
> historical reasons for these and whether there is opportunity + interest to…
The more Spark code I write, the more I hit the same use cases where the
Scala APIs feel a bit awkward. I'd love to understand if there are
historical reasons for these and whether there is opportunity + interest to
improve the APIs. Here are my top two (a sketch of the first point follows below):
1. registerTempTable() returns Unit
def cach…
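To make the first point concrete, here is a minimal sketch against the
1.6-era DataFrame API of the chaining-friendly variant being asked for. The
implicit class and the registerTempTableAnd name are illustrative only, not
an existing Spark API:

import org.apache.spark.sql.DataFrame

// Hypothetical enrichment (not part of Spark): a variant of
// registerTempTable that returns the DataFrame so calls can chain.
object ChainingImplicits {
  implicit class RichDataFrame(val df: DataFrame) extends AnyVal {
    def registerTempTableAnd(name: String): DataFrame = {
      df.registerTempTable(name) // existing API; returns Unit
      df
    }
  }
}

// Usage, assuming a SQLContext named sqlContext is in scope:
// import ChainingImplicits._
// val events = sqlContext.read.json("events.json")
//   .registerTempTableAnd("events")
//   .cache()

With the stock API, registerTempTable() has to sit on its own statement,
which is exactly the awkwardness described above.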
This should be solved by your pending PR
https://github.com/apache/spark/pull/10480, right?
From: Felix Cheung [mailto:felixcheun...@hotmail.com]
Sent: Sunday, February 7, 2016 8:50 PM
To: Sun, Rui; Andrew Holway; dev@spark.apache.org
Subject: RE: Fwd: Writing to jdbc database from SparkR (1.5.2)
I mean not exposed from the SparkR API.
Calling it from R without a SparkR API would require either a serializer change
or a JVM wrapper function (see the sketch after this message).
On Sun, Feb 7, 2016 at 4:47 AM -0800, "Felix Cheung" wrote:
That does but it's a bit hard to call from R since it is not exposed.
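To illustrate the "JVM wrapper function" option on the Scala side, a rough
sketch follows. The object and method names are hypothetical; the idea is a
single static entry point that SparkR could invoke (for example, via its
internal callJStatic mechanism), forwarding to the existing
DataFrameWriter.jdbc():

import java.util.Properties
import org.apache.spark.sql.DataFrame

// Hypothetical wrapper (names illustrative): one static method that
// SparkR could reach from R, delegating to DataFrameWriter.jdbc().
object RJdbcWrapper {
  def saveToJdbc(df: DataFrame, url: String, table: String,
                 user: String, password: String): Unit = {
    val props = new Properties()
    props.setProperty("user", user)
    props.setProperty("password", password)
    df.write.jdbc(url, table, props) // existing JVM-side API
  }
}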
That does, but it's a bit hard to call from R since it is not exposed. (A
usage sketch follows below.)
On Sat, Feb 6, 2016 at 11:57 PM -0800, "Sun, Rui" wrote:
DataFrameWriter.jdbc() does not work?
From: Felix Cheung [mailto:felixcheun...@hotmail.com]
Sent: Sunday, February 7, 2016 9:54 AM
To: Andrew Holway; dev@spark.apache.org
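For reference, this is roughly what the call looks like on the JVM side in
Spark 1.5+. The method exists on DataFrameWriter; the gap discussed above is
only that SparkR does not expose it. The connection URL, table name, and
credentials below are placeholders, and df is assumed to be a DataFrame
already in scope:

import java.util.Properties

val props = new Properties()
props.setProperty("user", "spark")      // placeholder credentials
props.setProperty("password", "secret")

df.write
  .mode("append") // or "overwrite", "error", "ignore"
  .jdbc("jdbc:postgresql://dbhost:5432/analytics", "events", props)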