The “Contributing to Spark” guide is a good place to start:
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
On August 11, 2014 at 10:36:25 PM, crigano (chris.p.rig...@gmail.com) wrote:
I am new to contributing. What is the best way to start out?
Thanks!
Chris
Hi folks,
I hit several Spark SQL unit test failures when sort-based shuffle is enabled.
It seems Spark SQL uses GenericMutableRow, which leaves all the entries in
ExternalSorter's internal buffer referring to the same object. I guess
GenericMutableRow uses only one mutable object to represent different rows, t
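To illustrate the aliasing problem being described (a minimal standalone sketch, not Spark's actual code; the Array-based "row" and ArrayBuffer stand in for GenericMutableRow and ExternalSorter's buffer):

    import scala.collection.mutable.ArrayBuffer

    val row = new Array[Any](1)          // one mutable object reused for every "row"
    val buffer = ArrayBuffer[Array[Any]]()
    for (i <- 1 to 3) {
      row(0) = i
      buffer += row                      // stores a reference, not a snapshot
    }
    println(buffer.map(_(0)))            // ArrayBuffer(3, 3, 3): every entry aliases one object
    // buffer += row.clone inside the loop would have preserved 1, 2, 3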
Thanks, Sean,
I changed both the API and the version because there are some incompatibilities
with hive-0.13, and I can actually do some basic operations against a real Hive
environment. But the test suite always complains with a "no default database"
message. No clue yet.
I don't think this will work just by changing the version. Have a look
at: https://issues.apache.org/jira/browse/SPARK-2706
On Tue, Aug 12, 2014 at 12:17 AM, Zhan Zhang wrote:
> I am trying to change Spark to support hive-0.13, but I always hit the following
> problem when running the tests. My feeling
Try setting IntelliJ to handle incremental compilation of Scala by itself
and to run its own compile server. This is in the global settings, under
the Scala settings. It seems to compile incrementally for me when I change
a file or two.
On Mon, Aug 11, 2014 at 8:57 PM, Ron's Yahoo! wrote:
> Hi,
I am trying to change Spark to support hive-0.13, but I always hit the following
problem when running the tests. My feeling is that the test setup may need to
change, but I don't know exactly how. Has anyone seen a similar issue, or can
anyone shed some light on it?
13:50:53.331 ERROR org.apache.hadoop.hive.ql.Driver: FAILED:
Thanks for looking into this. I think little tools like this are super
helpful.
Would it hurt to open a request with INFRA to install/configure the
JIRA-GitHub plugin while we continue to use the Python script we have? I
wouldn't mind opening that JIRA issue with them.
Nick
On Mon, Aug 11, 2014
Hi Ron,
A possible recommendation is to use Maven for the entire process
(avoiding the sbt artifacts/processing). IJ is pretty solid in its Maven
support.
a) mvn -DskipTests -Pyarn -Phive -Phadoop-2.3 compile package
b) Inside IJ: Open the parent/root pom.xml as a new maven project
c) Ins
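As a possible follow-up to the steps above, a single suite can then be run without a full rebuild. This is a sketch that assumes the scalatest-maven-plugin wiring in Spark's poms, and the suite name is only an example:

    mvn -pl core -Dtest=none -DwildcardSuites=org.apache.spark.broadcast.BroadcastSuite test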
Hi,
I've been able to get things compiled in my environment, but I'm noticing
that it's been quite difficult in IntelliJ. It always recompiles everything
when I try to run one test like BroadcastTest, for example, despite having
run make-distribution previously. In Eclipse, I have no such
If you don't want to build the entire thing, you can also run
mvn generate-sources in external/flume-sink
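For example, from the Spark source root, either of these should work (the module path is the one given above):

    cd external/flume-sink && mvn generate-sources
    mvn -pl external/flume-sink generate-sources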
Thanks,
Ron
Sent from my iPhone
> On Aug 11, 2014, at 8:32 AM, Hari Shreedharan
> wrote:
>
> Jay, running sbt compile or assembly should generate the sources.
>
>> On Monday, August 11,
I spent some time on this and I'm not sure either of these is an option,
unfortunately.
We typically can't use custom JIRA plug-ins because this JIRA is
controlled by the ASF and we don't have rights to modify most things about
how it works (it's a large shared JIRA instance used by more than 50
Jay, running sbt compile or assembly should generate the sources.
On Monday, August 11, 2014, Devl Devel wrote:
> Hi
>
> So far I've been managing to build Spark from source, but since a change in
> spark-streaming-flume I have no idea how to generate classes (e.g.
> SparkFlumeProtocol) from the a
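Concretely, from the top-level Spark directory that would be (a sketch assuming the sbt/sbt launcher script shipped in the repo at the time):

    sbt/sbt compile
or
    sbt/sbt assembly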
It looks like this script doesn't catch PRs that are opened and *then* have
the JIRA issue ID added to the name. Would it be easy to have the script
trigger on PR name changes as well as PR creation?
Alternatively, is there a reason we can't or don't want to use the plugin
mentioned below? (I
Hi
So far I've been managing to build Spark from source, but since a change in
spark-streaming-flume I have no idea how to generate classes (e.g.
SparkFlumeProtocol) from the Avro schema.
I have used sbt to run avro:generate (from the top-level Spark dir), but it
produces nothing; it just says:
>
an issue gets 3 - 4 PRs; the Spark dev community is really active :)
It seems spark-shell currently takes only some SUBMISSION_OPTS, but no
APPLICATION_OPTS.
Do you have plans to add some APPLICATION_OPTS or CLI_OPTS like
hive -e
hive -f
hive -hivevar
so that we can use our Scala code as scripts and run them directly
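For reference, the Hive CLI forms being mirrored, next to what such spark-shell options might look like (the spark-shell flags below are hypothetical, they are not existing options):

    hive -e 'SELECT count(*) FROM src'         # run an inline query
    hive -f query.hql                          # run a script file
    hive -hivevar day=2014-08-11 -f query.hql  # pass a variable to the script

    # hypothetical spark-shell equivalents being requested:
    spark-shell -e 'sc.textFile("data.txt").count()'
    spark-shell -f analysis.scala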