Hi,
I am facing the following error when running on EMR:
Caused by: java.lang.IllegalStateException: There is no space for new record
        at org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.insertRecord(UnsafeInMemorySorter.java:226)
        at org.apache.spark.sql.execution.Unsafe
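(No fix is confirmed anywhere in this thread; the following is only a hedged sketch of the first knobs one might try when UnsafeInMemorySorter reports no space, assuming the failure happens while sorting or shuffling large data. The sizes, the partition count, the class name and the jar name are all placeholders.)

  # placeholder values; com.example.MyJob and my-job.jar are hypothetical
  spark-submit \
    --conf spark.executor.memory=8g \
    --conf spark.sql.shuffle.partitions=2000 \
    --class com.example.MyJob \
    my-job.jar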
Hey all,
I just wanted to bring up Kay's old e-mail about this.
If you see a flaky test during a PR, don't just ask for a re-test.
File a bug so that we know that test is flaky and someone will
eventually take a look at it. A lot of them also make great newbie
bugs.
I've filed a bunch of these i
Thanks for the answer, but that doesn't solve my problem. Windows cmd doesn't
recognize ./build/sbt ("'.\build\sbt' is not recognized as an internal or
external command, operable program or batch file."), even when the full path
to the sbt script is specified.
I just realized that I haven't mentioned tha
wire compatibility is relevant if hadoop is included in the spark build.
for those of us that build spark without hadoop included, hadoop (binary)
api compatibility matters. i wouldn't want to build against hadoop 2.7 and
deploy on hadoop 2.6, but i am ok with the other way around. so to get the
compatibi
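(For context, a minimal sketch of the "hadoop not included" setup described above, assuming the standard hadoop-provided profile and the SPARK_DIST_CLASSPATH mechanism from the Spark docs; the extra -Pyarn profile is just an example.)

  # build Spark without bundling the Hadoop client jars
  ./build/sbt -Phadoop-provided -Pyarn package
  # at deploy time, point Spark at the cluster's own Hadoop jars
  export SPARK_DIST_CLASSPATH=$(hadoop classpath)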
I think it would make sense to drop one of them, but not necessarily 2.6.
It kinda depends on what wire compatibility guarantees the Hadoop
libraries have; can a 2.6 client talk to 2.7 (pretty certain it can)?
Is the opposite safe (not sure)?
If the answer to the latter question is "no", then kee
oh never mind, i am used to spark builds without hadoop included. but i
realize that if hadoop is included it matters if it's 2.6 or 2.7...
On Thu, Feb 8, 2018 at 5:06 PM, Koert Kuipers wrote:
> wouldn't a hadoop 2.7 profile mean someone could accidentally introduce
> usage of some hadoop apis that don't exist in hadoop 2.6?
wouldn't a hadoop 2.7 profile mean someone could accidentally introduce usage
of some hadoop apis that don't exist in hadoop 2.6?
why not keep 2.6 and ditch 2.7 given that hadoop 2.7 is backwards
compatible with 2.6? what is the added value of having a 2.7 profile?
On Thu, Feb 8, 2018 at 5:03 PM, Sean Owen wrote:
That would still work with a Hadoop-2.7-based profile, as there isn't
actually any code difference in Spark that treats the two versions
differently (nor, really, much difference between 2.6 and 2.7 to begin
with). This practice of different profile builds was pretty unnecessary
after 2.2; it's most
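(For reference, a sketch of the two profile builds being discussed, assuming the profiles as they existed in the Spark 2.x build; the extra flags are illustrative.)

  # build against the bundled Hadoop 2.6 client
  ./build/sbt -Phadoop-2.6 -Pyarn package
  # or against Hadoop 2.7
  ./build/sbt -Phadoop-2.7 -Pyarn package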
CDH 5 is still based on hadoop 2.6
On Thu, Feb 8, 2018 at 2:03 PM, Sean Owen wrote:
> Mostly just shedding the extra build complexity, and builds. The primary
> little annoyance is it's 2x the number of flaky build failures to examine.
> I suppose it allows using a 2.7+-only feature, but outside of YARN, not
> sure there is anything compelling.
Mostly just shedding the extra build complexity, and builds. The primary
little annoyance is it's 2x the number of flaky build failures to examine.
I suppose it allows using a 2.7+-only feature, but outside of YARN, not
sure there is anything compelling.
It's something that probably gains us virtu
Does it gain us anything to drop 2.6?
> On Feb 8, 2018, at 10:50 AM, Sean Owen wrote:
>
> At this point, with Hadoop 3 on deck, I think hadoop 2.6 is both fairly old,
> and actually, not different from 2.7 with respect to Spark. That is, I don't
> know if we are actually maintaining anything here but a separate profile
> and 2x the number of test builds.
At this point, with Hadoop 3 on deck, I think hadoop 2.6 is both fairly
old, and actually, not different from 2.7 with respect to Spark. That is, I
don't know if we are actually maintaining anything here but a separate
profile and 2x the number of test builds.
The cost is, by the same token, low.
Hi,
s,sbt ./build/sbt,./build/sbt
In other words, don't run ./build/sbt through a separately installed sbt;
execute ./build/sbt itself (you don't even have to install sbt to build
spark, as it's included in the repo and the script uses it internally)
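(A minimal sketch of the intended invocation from the repository root on Linux/macOS; the profile and target are just examples. The launcher is a bash script, so it won't run as-is under Windows cmd; using an installed sbt or a bash environment there is my assumption, not something confirmed in this thread.)

  cd spark   # repository root
  ./build/sbt -Phadoop-2.7 package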
Best regards,
Jacek Laskowski
https://about.me/JacekLaskowski
Masteri