Hi all,
I want to run a job on the Spark context, and since I am running the system
in a Java environment I cannot pass a closure to
the sparkContext().runJob. Instead, I am passing an AbstractFunction1
extension.
While the jobs run without an issue, I constantly get the following
WARN message.
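For reference, a minimal Scala sketch of the runJob call involved; it only
illustrates the Iterator[T] => U shape that the Java-side AbstractFunction1
extension has to implement (the RDD and the per-partition logic are made up
for the example):

  // spark-shell sketch: runJob applies a function to each partition's
  // Iterator and returns one result per partition. From Java, the same
  // Iterator[T] => U shape is supplied by extending
  // scala.runtime.AbstractFunction1 instead of writing a closure.
  val rdd = sc.parallelize(1 to 10, 2)
  val sizes: Array[Int] = sc.runJob(rdd, (it: Iterator[Int]) => it.size)
  // sizes has one entry per partition, e.g. Array(5, 5)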
> On 8 Oct 2015, at 19:31, sbiookag wrote:
>
> Thanks Ted for the reply.
>
> But this is not what I want. This would tell Spark to read the Hadoop
> dependency from the Maven repository, which is the original version of
> Hadoop. I am modifying the Hadoop code myself, and wanted to include my
> changes inside the
Sorry for not being clear, yes, that's about the Sbt build and treating
warnings as errors.
Warnings in 2.11 are useful, though; it'd be a pity to keep introducing
potential issues. As a stop-gap measure I can disable them in the Sbt
build. Is it hard to run the CI test with 2.11/sbt?
iulian
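As a concrete illustration of that stop-gap (not the actual Spark build
definition, and assuming the warnings are made fatal via -Xfatal-warnings in
scalacOptions), an sbt build could scope the flag by Scala version:

  // build.sbt sketch: keep -Xfatal-warnings on 2.10 but drop it on 2.11,
  // so 2.11 warnings stay visible in the log without failing the build.
  scalacOptions ++= {
    CrossVersion.partialVersion(scalaVersion.value) match {
      case Some((2, 11)) => Seq.empty[String]
      case _             => Seq("-Xfatal-warnings")
    }
  }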
On
How about just fixing the warning? I get it; it doesn't stop this from
happening again, but still seems less drastic than tossing out the
whole mechanism.
On Fri, Oct 9, 2015 at 3:18 PM, Iulian Dragoș
wrote:
> Sorry for not being clear, yes, that's about the Sbt build and treating
> warnings as errors.
Hi,
I am trying to build and test the current master. My system is Ubuntu
14.04 with 4 GB of physical memory and Oracle Java 8.
I have been running into various out-of-memory errors. I tried
building with Maven but couldn't get all the way through compile and
package. I'm having better luck with sbt.
>
> How about just fixing the warning? I get it; it doesn't stop this from
> happening again, but still seems less drastic than tossing out the
> whole mechanism.
>
+1
It also does not seem that expensive to test only compilation for Scala
2.11 on PR builds.
+1, much better than having a new PR to fix something for scala-2.11
every time a patch breaks it.
Thanks,
Hari Shreedharan
> On Oct 9, 2015, at 11:47 AM, Michael Armbrust wrote:
>
> How about just fixing the warning? I get it; it doesn't stop this from
> happening again, but still seems less drastic than tossing out the
> whole mechanism.
I would push back slightly. The reason we have the PR builds taking so long
is death by a million small things that we add. Doing a full 2.11 compile
is on the order of minutes... it's a nontrivial increase to the build times.
It doesn't seem that bad to me to go back post-hoc once in a while and fix
2.11 breakages.
Dear Spark developers,
I am trying to understand how the Spark UI displays operations on a cached RDD.
For example, the following code caches an RDD:
>> val rdd = sc.parallelize(1 to 5, 5).zipWithIndex.cache
>> rdd.count
The Jobs tab shows me that the RDD is evaluated:
Job 1: count at <console>:24
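As an aside, a minimal spark-shell sketch of the same experiment with a second
action added, so that the read from the cache also shows up in the UI (the RDD
should appear under the Storage tab once the first count has run):

  // spark-shell sketch: cache, then run two actions. The first count
  // materializes the cached partitions; the second should be served
  // from the cache rather than recomputing the RDD.
  val rdd = sc.parallelize(1 to 5, 5).zipWithIndex.cache()
  rdd.count()   // evaluates the RDD and populates the cache
  rdd.count()   // reads the cached partitions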
That is correct! I have thought about this a lot of times. The only
solution is to implement a "real" cross build for both versions. I am going
to think more about this. :)
Prashant Sharma
On Sat, Oct 10, 2015 at 2:04 AM, Patrick Wendell wrote:
> I would push back slightly. The reason we have the PR builds taking so long
> is death by a million small things that we add.