res state or config in a way
> that breaks something. I see there's already a reset() call in here to
> try to avoid that.
>
> Well, seems worth a PR, especially if you can demonstrate some
> performance gains.
>
> On Wed, Oct 24, 2018 at 3:09 PM Patrick Brown
> wro
ested in merging in, and how I
might go about that.
Thanks,
Patrick
Yep, that sounds reasonable to me!
On Fri, Mar 30, 2018 at 5:50 PM, Ted Yu wrote:
> +1
>
> Original message
> From: Ryan Blue
> Date: 3/30/18 2:28 PM (GMT-08:00)
> To: Patrick Woody
> Cc: Russell Spitzer , Wenchen Fan <
> cloud0...@gmail.com>, T
partition, but that doesn't make much
> sense because the partitioning would ensure that each partition has just
> one combination of the required clustering columns. Using a hash
> partitioner would make it so that the in-partition sort basically ignores
> the first few values,
e required.
>>
>> For the second point that ordering is useful for statistics and
>> compression, I completely agree. Our best practices doc tells users to
>> always add a global sort when writing because you get the benefit of a
>> range partitioner to handle skew, plus
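A minimal sketch of the global-sort-on-write practice described above (paths and column names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("sorted-write").getOrCreate()
val df = spark.read.parquet("/path/to/input")

// orderBy performs a global sort: a range partitioner splits the data
// (absorbing skew), then rows are sorted within each partition, which
// tightens per-file min/max statistics and improves compression.
df.orderBy("eventDate", "userId")
  .write
  .parquet("/path/to/output")
```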
> For your first use case, an explicit global ordering, the problem is that
> there can’t be an explicit global ordering for a table when it is populated
> by a series of independent writes. Each write could have a global order,
> but once those files are written, you have to deal wit
sing something?
>>>>>>
>>>>>> I think that we should design the write side independently based on
>>>>>> what data stores actually need, and take a look at the read side based on
>>>>>> what data stores can actually provide
e
>>>> passed?
>>>>
>>>> To your other questions, you might want to have a look at the recent
>>>> SPIP I’m working on to consolidate and clean up logical plans
>>>> <https://docs.google.com/document/d/1gYm5Ji2Mge3QBdOliFV5gSPTKlX4
Hey all,
I saw in some of the discussions around DataSourceV2 writes that we might
have the data source inform Spark of requirements for the input data's
ordering and partitioning. Has there been a proposed API for that yet?
Even one level up it would be helpful to understand how I should be
thin
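No API had been proposed at this point; purely as a sketch of the shape under discussion, a V2 writer might advertise its input requirements roughly like this (trait and method names are invented, not a real Spark API):

```scala
// Hypothetical sketch only -- names invented for illustration.
trait SupportsWriteRequirements {
  /** Columns Spark should cluster (partition) the incoming data by. */
  def requiredClustering: Seq[String]

  /** Sort order Spark should apply within each partition before writing. */
  def requiredOrdering: Seq[String]
}
```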
s-with-Shapeless
<https://benfradet.github.io/blog/2017/06/14/Deriving-Spark-Dataframe-schemas-with-Shapeless>)
Both use Shapeless to derive Datasets.
I hope it helps.
Patrick.
> On Nov 14, 2017, at 20:38, mlopez wrote:
>
> Hello everyone!
>
> I'm a developer at a
SPARK-22055 & SPARK-22054 to port the
> release scripts and allow injecting of the RM's key.
>
> On Mon, Sep 18, 2017 at 8:11 PM, Patrick Wendell
> wrote:
>
>> For the current release - maybe Holden could just sign the artifacts with
>> her own key manually, if
For the current release - maybe Holden could just sign the artifacts with
her own key manually, if this is a concern. I don't think that would
require modifying the release pipeline, except to just remove/ignore the
existing signatures.
- Patrick
On Mon, Sep 18, 2017 at 7:56 PM, Reynol
ark repo.
[1] https://github.com/apache/spark/tree/master/dev/create-release
- Patrick
On Mon, Sep 18, 2017 at 6:23 PM, Patrick Wendell
wrote:
> One thing we could do is modify the release tooling to allow the key to be
> injected each time, thus allowing any RM to insert their own key at
One thing we could do is modify the release tooling to allow the key to be
injected each time, thus allowing any RM to insert their own key at build
time.
Patrick
On Mon, Sep 18, 2017 at 4:56 PM Ryan Blue wrote:
> I don't understand why it is necessary to share a release key. If
themselves can do quite a
bit of nefarious things anyways.
It is true that we trust all previous release managers instead of only one.
We could probably rotate the jenkins credentials periodically in order to
compensate for this, if we think this is a nontrivial risk.
- Patrick
On Sun, Sep 17, 2017
JIRA
Patrick.
From: Katherine Prevost
To: Jörn Franke ; Katherine Prevost
Cc: dev@spark.apache.org
Sent: Wednesday, August 16, 2017, 11:55 AM
Subject: Re: Questions about the future of UDTs and Encoders
I'd say the quick summary of the problem is this:
The en
Hey all,
Just wondering if anyone has had issues with this or if it is expected that
the semantic around the memory management is different here.
Thanks
-Pat
On Tue, Apr 19, 2016 at 9:32 AM, Patrick Woody
wrote:
> Hey all,
>
> I had a question about the MemoryStore for the BlockMan
Hey all,
I had a question about the MemoryStore for the BlockManager with the
unified memory manager v.s. the legacy mode.
In the unified format, I would expect the max size of the MemoryStore to be
* *
in the same way that when using the StaticMemoryManager it is
* *
.
Instead it appea
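For reference, a rough sketch of the two sizing formulas being contrasted; the config names are real Spark settings, but the constants are era-specific defaults and worth verifying against UnifiedMemoryManager and StaticMemoryManager:

```scala
// Rough sketch; verify the constants against your Spark version's source.
val systemMemory = Runtime.getRuntime.maxMemory

// Unified memory manager (spark.memory.useLegacyMode=false):
// (heap - reserved) * spark.memory.fraction
val unifiedMax = ((systemMemory - 300L * 1024 * 1024) * 0.75).toLong

// Legacy StaticMemoryManager (spark.memory.useLegacyMode=true):
// heap * spark.storage.memoryFraction * spark.storage.safetyFraction
val legacyMax = (systemMemory * 0.6 * 0.9).toLong
```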
Hey Michael,
Any update on a first cut of the RC?
Thanks!
-Pat
On Mon, Feb 15, 2016 at 6:50 PM, Michael Armbrust
wrote:
> I'm not going to be able to do anything until after the Spark Summit, but
> I will kick off RC1 after that (end of week). Get your patches in before
> then!
>
> On Sat, Fe
+1
On Wed, Dec 16, 2015 at 6:15 PM, Ted Yu wrote:
> Ran test suite (minus docker-integration-tests)
> All passed
>
> +1
>
> [INFO] Spark Project External ZeroMQ .. SUCCESS [
> 13.647 s]
> [INFO] Spark Project External Kafka ... SUCCESS [
> 45.424 s]
> [INF
passing by has some idea of how things are going
and can chime in, etc.
Once an RC is cut then we do mostly rely on the mailing list for
discussion. At that point the number of known issues is small enough I
think to discuss in an all-to-all fashion.
- Patrick
On Wed, Dec 2, 2015 at 1:25 PM, Sean O
years, with minimal impact for users.
- Patrick
On Tue, Nov 10, 2015 at 3:35 PM, Nicholas Chammas <
nicholas.cham...@gmail.com> wrote:
> > For this reason, I would *not* propose doing major releases to break
> substantial API's or perform large re-architecting that prevent us
changes crop up fairly frequently.
My feeling is mostly pragmatic... if we can get things working to
standardize on Maven-style resolution by upgrading SBT, let's do it. If
that's not tenable, we can evaluate alternatives.
- Patrick
On Fri, Nov 6, 2015 at 3:07 PM, Marcelo Vanzin wrote:
>
> it is apparently very easy to change the maven resolution mechanism to the
> ivy one.
> Patrick, would this not help with the problems you described?
>
> On 5 November 2015 at 23:23, Patrick Wendell wrote:
>
>> Hey Jakob,
>>
>> The builds in Spark are lar
e to start, given that we need to continue to
support maven - the coupling is intentional. But getting involved in the
build in general would be completely welcome.
- Patrick
On Thu, Nov 5, 2015 at 10:53 PM, Sean Owen wrote:
> Maven isn't 'legacy', or supported for the benefit
I believe this is some bug in our tests. For some reason we are using way
more memory than necessary. We'll probably need to log into Jenkins and
heap dump some running tests and figure out what is going on.
On Mon, Nov 2, 2015 at 7:42 AM, Ted Yu wrote:
> Looks like SparkListenerSuite doesn't OO
I verified that the issue with build binaries being present in the source
release is fixed. Haven't done enough vetting for a full vote, but did
verify that.
On Sun, Oct 25, 2015 at 12:07 AM, Reynold Xin wrote:
> Please vote on releasing the following candidate as Apache Spark
> version 1.5.2. T
I think many of them are coming from the Spark 1.4 builds:
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/Spark-1.4-Maven-pre-YARN/3900/console
On Mon, Oct 19, 2015 at 1:44 PM, Patrick Wendell wrote:
> This is what I'm looking at:
>
ailures so i can look
> in to them more closely?
>
> On Mon, Oct 19, 2015 at 12:27 PM, Patrick Wendell
> wrote:
> > Hey Shane,
> >
> > It also appears that every Spark build is failing right now. Could it be
> > related to your changes?
> >
> >
Hey Shane,
It also appears that every Spark build is failing right now. Could it be
related to your changes?
- Patrick
On Mon, Oct 19, 2015 at 11:13 AM, shane knapp wrote:
> worker 05 is back up now... looks like the machine OOMed and needed
> to be kicked.
>
> On Mon, Oct 19,
Jakob this is now being tested by our harness. I've created a JIRA for the
issue, if you want to take a stab at fixing these, that would be great:
https://issues.apache.org/jira/browse/SPARK-0
- Patrick
On Wed, Oct 14, 2015 at 12:20 PM, Patrick Wendell
wrote:
> Hi Jakob,
>
&
rness today.
In terms of fixing the underlying issues, I am not sure whether there is a
JIRA for it yet, but we should make one if not. Does anyone know?
- Patrick
On Wed, Oct 14, 2015 at 12:13 PM, Jakob Odersky wrote:
> Hi everyone,
>
> I've been having trouble building Spark with
I would tend to agree with this approach. We should audit all
@Experimental labels before the 1.6 release and clear them out when
appropriate.
- Patrick
On Wed, Oct 14, 2015 at 2:13 AM, Sean Owen wrote:
> Someone asked, is "ML pipelines" stable? I said, no, most of the key
> c
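For context, the label in question is Spark's annotation for unstable APIs; a minimal usage sketch (the annotated class is hypothetical):

```scala
import org.apache.spark.annotation.Experimental

// APIs carrying this annotation may change or be removed in future releases;
// the audit discussed above is about clearing out stale instances of it.
@Experimental
class ShinyNewApi  // hypothetical example class
```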
It's really easy to create and modify those builds. If the issue is that we
need to add SBT or Maven to the existing one, it's a short change. We can
just have it build both of them. I wasn't aware of things breaking before
in one build but not another.
- Patrick
On Mon, Oct 12,
h maybe takes at most one
hour)... it's not worth it.
- Patrick
On Mon, Oct 12, 2015 at 8:24 AM, Sean Owen wrote:
> There are many Jenkins jobs besides the pull request builder that
> build against various Hadoop combinations, for example, in the
> background. Is there an obstacl
a little pedantic, but we ended up removing it from
our source tree and adding things to download it for the user.
- Patrick
On Sun, Oct 11, 2015 at 10:12 PM, Sean Owen wrote:
> No we are voting on the artifacts being released (too) in principle.
> Although of course the artifacts should be a
inside of the
source tree, including some effort to generate jars on the fly which a lot
of our tests use. I am not sure whether it's a firm policy that you can't
have jars in test folders, though. If it is, we could probably do some
magic to get rid of these few ones that have crept in
using the most current version of the build scripts. See related links:
https://issues.apache.org/jira/browse/SPARK-10511
https://github.com/apache/spark/pull/8774/files
I can update our build environment and we can repackage the Spark 1.5.1
source tarball. To not include sources.
- Patrick
On
*to not include binaries.
On Sun, Oct 11, 2015 at 9:35 PM, Patrick Wendell wrote:
> I think Daniel is correct here. The source artifact incorrectly includes
> jars. It is inadvertent and not part of our intended release process. This
> was something I noticed in Spark 1.5.0 and filed a
I would push back slightly. The reason we have the PR builds taking so long
is death by a million small things that we add. Doing a full 2.11 compile
is on the order of minutes... it's a nontrivial increase to the build times.
It doesn't seem that bad to me to go back post-hoc once in a while and fix
2.11 b
ture?
>
> Nick
>
>
> On Tue, Oct 6, 2015 at 1:13 AM Patrick Wendell wrote:
>
>> The missing artifacts are uploaded now. Things should propagate in the
>> next 24 hours. If there are still issues past then ping this thread. Thanks!
>>
>> - Patrick
>>
a look at the project.
In any case, getting some high level view of the functionality you imagine
would be helpful to give more detailed feedback.
- Patrick
On Tue, Oct 6, 2015 at 3:12 PM, Holden Karau wrote:
> Hi Spark Devs,
>
> So this has been brought up a few times before, and
The missing artifacts are uploaded now. Things should propagate in the next
24 hours. If there are still issues past then ping this thread. Thanks!
- Patrick
On Mon, Oct 5, 2015 at 2:41 PM, Nicholas Chammas wrote:
> Thanks for looking into this Josh.
>
> On Mon, Oct 5, 2015 at 5:3
BTW - the merge window for 1.6 is September+October. The QA window is
November and we'll expect to ship probably early December. We are on a
3 month release cadence, with the caveat that there is some
pipelining... as we finish release X we are already starting on
release X+1.
- Patrick
O
Ah - I can update it. Usually I do it after the release is cut. It's
just a standard 3 month cadence.
On Thu, Oct 1, 2015 at 3:55 AM, Sean Owen wrote:
> My guess is that the 1.6 merge window should close at the end of
> November (2 months from now)? I can update it but wanted to check if
> anyone
Hey Richard,
My assessment (just looked before I saw Sean's email) is the same as
his. The NOTICE file embeds other projects' licenses. If those
licenses themselves have pointers to other files or dependencies, we
don't embed them. I think this is standard practice.
- Patrick
other people are
supportive of this plan I can offer to help spend some time thinking
about any potential corner cases, etc.
- Patrick
On Wed, Sep 23, 2015 at 3:13 PM, Marcelo Vanzin wrote:
> Hey all,
>
> This is something that we've discussed several times internally, but
> never r
I just added snapshot builds for 1.5. They will take a few hours to
build, but once we get them working should publish every few hours.
https://amplab.cs.berkeley.edu/jenkins/view/Spark-Packaging
- Patrick
On Mon, Sep 21, 2015 at 10:36 PM, Bin Wang wrote:
> However I find some scripts in
.
I've documented this on the wiki:
https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools
- Patrick
There is already code in place that restricts which tests run
depending on which code is modified. However, changes inside of
Spark's core currently require running all dependent tests. If you
have some ideas about how to improve that heuristic, it would be
great.
- Patrick
On Tue, Aug 25,
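A toy sketch of the kind of file-to-test mapping heuristic described above (prefixes and module names are invented; Spark's real logic lives in the dev/ test scripts):

```scala
// Toy illustration only; module names are invented.
val moduleTests: Seq[(String, Seq[String])] = Seq(
  "sql/"   -> Seq("sql"),
  "mllib/" -> Seq("mllib"),
  "core/"  -> Seq("core", "sql", "mllib", "streaming")  // core fans out
)

def testsFor(changedFiles: Seq[String]): Seq[String] =
  changedFiles.flatMap { file =>
    moduleTests.collectFirst {
      case (prefix, tests) if file.startsWith(prefix) => tests
    }.getOrElse(Seq("core", "sql", "mllib", "streaming"))  // unknown: run all
  }.distinct
```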
other patches didn't
introduce problems.
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Test/
- Patrick
Hey Meihua,
If you are a user of Spark, one thing that is really helpful is to run
Spark 1.5 on your workload and report any issues, performance
regressions, etc.
- Patrick
On Mon, Aug 3, 2015 at 11:49 PM, Akhil Das wrote:
> I think you can start from here
> https://issues.apache.or
I have a follow up on this:
I see on JIRA that the idea of having a GLMNET implementation was more or
less abandoned, since an OWLQN implementation was chosen to construct a model
using L1/L2 regularization.
However, GLMNET has the property of "returning a multitude of models
(corresponding to
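For readers following along, a minimal sketch of the OWLQN-backed elastic-net API referenced here (spark.ml): each fit yields a single model, so sweeping regularization values is explicit, unlike GLMNET's full path:

```scala
import org.apache.spark.ml.regression.LinearRegression

// One estimator per regParam value -- the explicit sweep whose results
// GLMNET's regularization path would return from a single call.
val estimators = Seq(0.001, 0.01, 0.1).map { reg =>
  new LinearRegression()
    .setRegParam(reg)        // overall regularization strength
    .setElasticNetParam(0.5) // 0.0 = pure L2, 1.0 = pure L1
  // .fit(trainingDf)        // would fit against a training DataFrame
}
```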
Yeah the best bet is to use ./build/mvn --force (otherwise we'll still
use your system maven).
- Patrick
On Mon, Aug 3, 2015 at 1:26 PM, Sean Owen wrote:
> That statement is true for Spark 1.4.x. But you've reminded me that I
> failed to update this doc for 1.5, to say Maven 3
Hey All,
I got it up and running - it was a newly surfaced bug in the build scripts.
- Patrick
On Wed, Jul 29, 2015 at 6:05 AM, Bharath Ravi Kumar wrote:
> Hey Patrick,
>
> Any update on this front please?
>
> Thanks,
> Bharath
>
> On Fri, Jul 24, 2015 at 8:38 PM
th me. I would vouch for having user continuity, for instance still
have a "shim" ec2/spark-ec2 script that could perhaps just download
and unpack the real script from github.
- Patrick
On Fri, Jul 31, 2015 at 2:13 PM, Shivaram Venkataraman
wrote:
> Yes - It is still in progress, b
what the best behavior would be. Ideally in my mind if the same shortname
were registered twice we'd force the user to use a fully qualified name and
say the short name is ambiguous.
Patrick
On Jul 30, 2015 9:44 AM, "Joseph Batchik" wrote:
> Hi all,
>
> There are now starti
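For context, the short names under discussion come from the DataSourceRegister trait; a minimal sketch (provider class and format name are hypothetical):

```scala
import org.apache.spark.sql.sources.DataSourceRegister

// A provider advertises a short name; the question above is what Spark
// should do when two providers on the classpath return the same one.
class MyFormatProvider extends DataSourceRegister {
  override def shortName(): String = "myformat"  // hypothetical
}
```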
Thanks Ted for pointing this out. CC to Ryan and TD
On Tue, Jul 28, 2015 at 8:25 AM, Ted Yu wrote:
> Hi,
> I noticed that ReceiverTrackerSuite is failing in master Jenkins build for
> both hadoop profiles.
>
> The failure seems to start with:
> https://amplab.cs.berkeley.edu/jenkins/job/Spark-Mas
.
It's not worth waiting any time to try and figure out how to fix it,
or blocking on tracking down the commit author. This is because every
hour that we have the PRB broken is a major cost in terms of developer
productivity.
- Pa
I've disabled the test and filed a JIRA:
https://issues.apache.org/jira/browse/SPARK-9335
On Fri, Jul 24, 2015 at 4:05 PM, Steve Loughran wrote:
>
> Looks like Jenkins is hitting some AWS limits
>
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/38396/testReport/org.apache.sp
elp advise people on specific patches if they want
a sounding board to understand whether it makes sense to backport.
- Patrick
Hey Bharath,
There was actually an incompatible change to the build process that
broke several of the Jenkins builds. This should be patched up in the
next day or two and nightly builds will resume.
- Patrick
On Fri, Jul 24, 2015 at 12:51 AM, Bharath Ravi Kumar
wrote:
> I noticed the last (
I think we should just revert this patch on all affected branches. No
reason to leave the builds broken until a fix is in place.
- Patrick
On Sun, Jul 19, 2015 at 6:03 PM, Josh Rosen wrote:
> Yep, I emailed TD about it; I think that we may need to make a change to the
> pull request buil
about our
perspective and why you might sense some frustration.
[1]
https://web.archive.org/web/20061020220358/http://www.apache.org/dev/release.html
[2]
https://web.archive.org/web/20061231050046/http://www.apache.org/dev/release.html
- Patrick
On Tue, Jul 14, 2015 at 10:09 AM, Sean Busbey
Spark developer Wiki](link)". I think this would preserve
discoverability while also placing the information on the wiki, which
seems to be the main ask of the policy.
- Patrick
On Sun, Jul 19, 2015 at 2:32 AM, Sean Owen wrote:
> I am going to make an edit to the download page on the web s
+1 from me too
On Sat, Jul 18, 2015 at 3:32 AM, Ted Yu wrote:
> +1 to removing commit messages.
>
>
>
>> On Jul 18, 2015, at 1:35 AM, Sean Owen wrote:
>>
>> +1 to removing them. Sometimes there are 50+ commits because people
>> have been merging from master into their branch rather than rebasing
spark-release-1-4-1.html
Comprehensive list of fixes - http://s.apache.org/spark-1.4.1
Thanks to the 85 developers who worked on this release!
Please contact me directly for errata in the release notes.
- Patrick
Actually the java one is a concrete class.
On Wed, Jul 15, 2015 at 12:14 PM, Patrick Wendell wrote:
> One related note here is that we have a Java version of this that is
> an abstract class - in the doc it says that it exists more or less to
> allow for binary compatibility (it says
/main/java/org/apache/spark/JavaSparkListener.java#L23
I think it might be reasonable that the Scala trait provides only
source compatibility and the Java class provides binary compatibility.
- Patrick
On Wed, Jul 15, 2015 at 11:47 AM, Marcelo Vanzin wrote:
> Hey all,
>
> Just noticed thi
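A small sketch of the distinction being drawn (names are illustrative): adding a method to a Scala trait can break binary compatibility for previously compiled implementors, while an abstract class with no-op defaults keeps them linking:

```scala
// Illustrative only. Pre-2.12 Scala compiles a trait's default method bodies
// into each implementing class, so a method added to the trait later is
// missing from old binaries (AbstractMethodError at runtime).
trait ScalaStyleListener {
  def onAppStart(): Unit = {}
  // adding def onAppEnd(): Unit = {} later breaks old compiled subclasses
}

// Here the no-op bodies live in the superclass itself, so a method added
// later is inherited by old compiled subclasses without recompilation.
abstract class JavaStyleListener {
  def onAppStart(): Unit = {}
}
```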
This vote passes with 14 +1 (7 binding) votes and no 0 or -1 votes.
+1 (14):
Patrick Wendell
Reynold Xin
Sean Owen
Burak Yavuz
Mark Hamstra
Michael Armbrust
Andrew Or
York, Brennon
Krishna Sankar
Luciano Resende
Holden Karau
Tom Graves
Denny Lee
Sean McNamara
- Patrick
On Wed, Jul 8, 2015 at 10
somewhere that is really old and not actually
universally followed. It's difficult for us in such situations to know
how to proceed and how much autonomy we as a PMC have to make
decisions about our own project.
- Patrick
On Sun, Jul 12, 2015 at 7:52 PM, Sean Busbey wrote:
> Please no
I think we can close this vote soon. Any addition votes/testing would
be much appreciated!
On Fri, Jul 10, 2015 at 11:30 AM, Sean McNamara
wrote:
> +1
>
> Sean
>
>> On Jul 8, 2015, at 11:55 PM, Patrick Wendell wrote:
>>
>> Please vote on releasing the following can
Thanks Sean O. I was thinking something like "NOTE: Nightly builds are
meant for development and testing purposes. They do not go through
Apache's release auditing process and are not official releases."
- Patrick
On Sun, Jul 12, 2015 at 3:39 PM, Sean Owen wrote:
> (This sou
ormal policy asks us not to include links "that encourage
non-developers to download" the builds. Stating clearly that the
audience for those links is developers, in my interpretation that
would satisfy the letter and spirit of this policy.
- Patrick
On Sat, Jul 11, 2015 at 11:53 AM, S
+1
On Wed, Jul 8, 2015 at 10:55 PM, Patrick Wendell wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 1.4.1!
>
> This release fixes a handful of known issues in Spark 1.4.0, listed here:
> http://s.apache.org/spark-1.4.1
>
> The tag to be v
Please vote on releasing the following candidate as Apache Spark version 1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted on is v1.4.1-rc4 (commit dbaa5c2):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=co
This vote is cancelled in favor of RC4.
- Patrick
On Tue, Jul 7, 2015 at 12:06 PM, Patrick Wendell wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 1.4.1!
>
> This release fixes a handful of known issues in Spark 1.4.0, listed here:
> http
additional day in order to get that fix.
- Patrick
On Wed, Jul 8, 2015 at 12:00 PM, Josh Rosen wrote:
> I've filed https://issues.apache.org/jira/browse/SPARK-8903 to fix the
> DataFrameStatSuite test failure. The problem turned out to be caused by a
> mistake made while re
Yeah - we can fix the docs separately from the release.
- Patrick
On Wed, Jul 8, 2015 at 10:03 AM, Mark Hamstra wrote:
> HiveSparkSubmitSuite is fine for me, but I do see the same issue with
> DataFrameStatSuite -- OSX 10.10.4, java
>
> 1.7.0_75, -Phive -Phive-thriftserver -Phadoo
Please vote on releasing the following candidate as Apache Spark version 1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted on is v1.4.1-rc3 (commit 3e8ae38):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=co
Hey All,
This vote is cancelled in favor of RC3.
- Patrick
On Fri, Jul 3, 2015 at 1:15 PM, Patrick Wendell wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 1.4.1!
>
> This release fixes a handful of known issues in Spark 1.4.0, listed
Hi Tomo,
For now you can do that as a work around. We are working on a fix for
this in the master branch but it may take a couple of days since the
issue is fairly complicated.
- Patrick
On Sat, Jul 4, 2015 at 7:00 AM, tomo cocoa wrote:
> Hi all,
>
> I have the same error and it seems
://github.com/apache/spark/commit/bc51bcaea734fe64a90d007559e76f5ceebfea9e
On Fri, Jul 3, 2015 at 4:36 PM, Patrick Wendell wrote:
> Okay I did some forensics with Sean Owen. Some things about this bug:
>
> 1. The underlying cause is that we added some code to make the tests
> of sub modul
ckage build) so typical users
won't have this bug.
2. Add a profile that re-enables that setting.
3. Use the above profile when publishing release artifacts to maven central.
4. Hope that we don't hit this bug for publishing.
- Patrick
On Fri, Jul 3, 2015 at 3:51 PM, Tarek Auel
Let's continue the discussion on the other thread relating to the master build.
On Fri, Jul 3, 2015 at 4:13 PM, Patrick Wendell wrote:
> Thanks - it appears this is just a legitimate issue with the build,
> affecting all versions of Maven.
>
> On Fri, Jul 3, 2015 at 4:02 P
", version: "10.10.3", arch: "x86_64", family: "mac"
>
> Let me nuke it and reinstall maven.
>
> Cheers
>
>
> On Fri, Jul 3, 2015 at 3:41 PM, Patrick Wendell wrote:
>>
>> What if you use the built-in maven (i.e. build/mvn). It might be tha
t 23:44, Robin East wrote:
>
> I used the following build command:
>
> build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean
> package
>
> this also gave the ‘Dependency-reduced POM’ loop
>
> Robin
>
> On 3 Jul 2015, at 23:41, Patrick Wendell wrote:
>
Can you try using the built in maven "build/mvn..."? All of our builds
are passing on Jenkins so I wonder if it's a maven version issue:
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/
- Patrick
On Fri, Jul 3, 2015 at 3:14 PM, Ted Yu wrote:
> Please take a
What if you use the built-in maven (i.e. build/mvn). It might be that
we require a newer version of maven than you have. The release itself
is built with maven 3.3.3:
https://github.com/apache/spark/blob/master/build/mvn#L72
- Patrick
On Fri, Jul 3, 2015 at 3:19 PM, Krishna Sankar wrote:
>
Please vote on releasing the following candidate as Apache Spark version 1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted on is v1.4.1-rc2 (commit 07b95c7):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=co
the time of the RC voting is an
interesting topic, Sean I like your most recent proposal. Maybe we can
put that on the wiki or start a DISCUSS thread to cover that topic.
On Tue, Jun 23, 2015 at 10:37 PM, Patrick Wendell wrote:
> Please vote on releasing the following candidate as Apache Sp
Hey Sean - yes I think that is an issue. Our published poms need to
have the dependency versions inlined.
We probably need to revert that bit of the build patch.
- Patrick
On Thu, Jul 2, 2015 at 7:21 AM, vaquar khan wrote:
> +1
>
> On 2 Jul 2015 18:03, "shenyan zhen" wrote
Hey Krishna - this is still the current release candidate.
- Patrick
On Sun, Jun 28, 2015 at 12:14 PM, Krishna Sankar wrote:
> Patrick,
>Haven't seen any replies on test results. I will byte ;o) - Should I test
> this version or is another one in the wings ?
> Cheers
>
Hey Tom - no one voted on this yet, so I need to keep it open until
people vote. But I'm not aware of specific things we are waiting for.
Anyone else?
- Patrick
On Fri, Jun 26, 2015 at 7:10 AM, Tom Graves wrote:
> So is this open for vote then or are we waiting on other things?
argeted at this release means
we are targeting such that we get around 70% of issues merged. That
actually doesn't seem so bad to me since there is some uncertainty in
the process. B
- Patrick
On Wed, Jun 24, 2015 at 1:54 AM, Sean Owen wrote:
> There are 44 issues still targeted for 1.4.1
Please vote on releasing the following candidate as Apache Spark version 1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted on is v1.4.1-rc1 (commit 60e08e5):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=co
hings to
different people.
- Patrick
On Tue, Jun 16, 2015 at 8:09 AM, Josh Rosen wrote:
> Whatever you do, DO NOT use the built-in JIRA 'releases' feature to migrate
> issues from 1.4.0 to another version: the JIRA feature will have the
> side-effect of automatically changin
p
1? My feeling is that it's much more efficient for us as the Spark
maintainers to pay this cost rather than to force a lot of our users
to deal with painful upgrades.
On Sat, Jun 13, 2015 at 1:39 AM, Steve Loughran wrote:
>
>> On 12 Jun 2015, at 17:12, Patrick Wendell wrote:
>
rk for deciding about these upgrades is the
maintenance cost vs the inconvenience for users.
- Patrick
On Fri, Jun 12, 2015 at 8:45 AM, Nicholas Chammas
wrote:
> I'm personally in favor, but I don't have a sense of how many people still
> rely on Hadoop 1.
>
> Nick
>
>
Hi All,
I'm happy to announce the availability of Spark 1.4.0! Spark 1.4.0 is
the fifth release on the API-compatible 1.X line. It is Spark's
largest release ever, with contributions from 210 developers and more
than 1,000 commits!
A huge thanks go to all of the individuals and organizations invo