GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/145
SPARK-1254. Consolidate, order, and harmonize repository declarations in
Maven/SBT builds
This change addresses a few small issues with how repositories
are handled.
1) Use
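For illustration, a minimal sketch of the kind of consolidation proposed, on
the SBT side: one ordered, de-duplicated list of resolvers declared in a single
place. The repository names and URLs here are illustrative, not the exact set
in the patch.

    // Hypothetical SBT (Scala) excerpt: declare resolvers once, in priority
    // order, instead of repeating ad-hoc declarations across subprojects.
    resolvers ++= Seq(
      "Maven Repository"    at "https://repo.maven.apache.org/maven2",
      "Apache Repository"   at "https://repository.apache.org/content/repositories/releases",
      "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos"
    )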
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/131#issuecomment-37532494
I do mean dgemm, since it is in jblas, although dsyrk would be even better
as it is specialized for this case. gemm can treat its args as transposed and
apply scalars, so
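For context, a minimal jblas sketch of the update under discussion, assuming
SimpleBlas is the entry point; the matrix sizes are arbitrary. The underlying
BLAS dgemm also accepts transpose flags, and dsyrk computes C <- alpha*A*A' +
beta*C while exploiting the symmetry of the result.

    import org.jblas.{DoubleMatrix, SimpleBlas}

    val a = DoubleMatrix.randn(4, 3)
    val b = DoubleMatrix.randn(3, 5)
    val c = DoubleMatrix.zeros(4, 5)
    // In-place accumulation: c <- 1.0 * a * b + 1.0 * c, avoiding the
    // temporary that a.mmul(b).add(c) would allocate.
    SimpleBlas.gemm(1.0, a, b, 1.0, c)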
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/131#issuecomment-37493745
Yes, I like where this is going. I had started on a similar change that
used gemm instead, since it can compute C <- A*B + C.
In fact I had wanted to add an ex
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37394958
@pwendell Yes, that's the thing. While the repo was down I could still build
the whole project from an empty local repository. For artifacts like paho, where it's found
no
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37327013
Aha, right. Sorry, I don't have the code in front of me. Yeah, the tension here
is between making this available in both places and duplicating the setup. Does
the Maven p
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37324720
Agreed, this is normally deployed as a Maven plugin rather than as a manually
run Java program. I can provide that config if anyone is interested. Of course we have
this wrinkle with
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37232830
Yeah, the Cloudera repo is down today. I know the ops guys are figuring out
why, but I don't see an ETA. While the repo ought to be, ahem, pretty reliable, it
looks li
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/25#issuecomment-37161832
What is new here? You say your environment requires proxy settings and
that you successfully identified them. Here, you fail to set them.
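For reference, these are the stock java.net proxy properties a build JVM
(Maven's or SBT's) would need; the host and port are placeholders, and passing
them as -D flags on the command line is equivalent.

    // Placeholder host/port; nothing here is Spark-specific.
    System.setProperty("http.proxyHost", "proxy.example.com")
    System.setProperty("http.proxyPort", "3128")
    System.setProperty("https.proxyHost", "proxy.example.com")
    System.setProperty("https.proxyPort", "3128")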
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/79#issuecomment-36719653
I'm looking forward to this. I have one question, based just on the
description and not on reading the code: why only binary classification? RDF is
inherently amenable to
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36686522
OK, that works: package and then test. In the canonical Maven lifecycle,
packaging comes after testing, so test would not depend on packaging. In practice
this is at worst a
Github user srowen closed the pull request at:
https://github.com/apache/spark/pull/77
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/77
SPARK-1181. 'mvn test' fails out of the box since sbt assembly does not
necessarily exist
The test suite requires that "sbt assembly" has been run in order for some
tests (like
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/32#issuecomment-36453964
Done, rebased.
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/31
SPARK-1084.1 (resubmitted)
(Ported from https://github.com/apache/incubator-spark/pull/637 )
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/srowen
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/32
SPARK-1084.2 (resubmitted)
(Ported from https://github.com/apache/incubator-spark/pull/650 )
This adds one more change, though, to fix the Scala version warning
recently introduced by json4s
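A guess at the shape of that fix, for illustration only: cross-built
dependencies like json4s are normally declared in SBT with %%, which appends
the Scala version suffix matching scalaVersion instead of hard-coding one. The
version number here is illustrative.

    // Hypothetical SBT (Scala) line: %% resolves json4s-jackson_2.10 (or
    // whatever matches scalaVersion), avoiding a cross-version warning.
    libraryDependencies += "org.json4s" %% "json4s-jackson" % "3.2.6"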
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/26#issuecomment-36225852
FWIW I agree. The tendency is almost always to include a bunch of modules
that are really separate, slightly-downstream projects. You could make similar
arguments for even
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/25#issuecomment-36217136
I'm still confused about why you are posting this pull request. You found
that this was a problem with your local proxy. This change does not fix that at all. Nor
would any c