Hi guys,
I failed to launch Spark jobs on Mesos. I submitted the job to the cluster
successfully, but the job failed to run.
I1110 18:25:11.095507 301 fetcher.cpp:498] Fetcher Info:
{"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/1f8e621b-3cbf-4b86-a1c1-9e2cf77265ee-S7\/root","items":[
Hi Devs:
If I run sc.textFile(path, xxx) many times, will the elements be the
same (same elements, same order) in each partition?
My experiment shows that they are the same, but it may not cover all the
cases. Thank you!
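For context on why repeated runs can line up: sc.textFile computes its splits from fixed byte offsets over the input (via the underlying Hadoop input format), so for an unchanged file and the same minPartitions, the split boundaries, and hence the element order per partition, come out the same each run. A minimal plain-Python sketch of that offset-based splitting (a simplification for illustration, not actual Spark or Hadoop code; real split sizing also considers HDFS block size):

```python
# Simplified illustration of offset-based file splitting, loosely modeled on
# how Hadoop's FileInputFormat (used by sc.textFile) derives splits: the
# boundaries depend only on the file length and the requested split count,
# so repeated runs over the same input yield identical partitions.

def split_offsets(file_len, num_splits):
    """Compute (start, end) byte ranges; a pure function of its inputs."""
    size = file_len // num_splits
    offsets = []
    start = 0
    for i in range(num_splits):
        end = file_len if i == num_splits - 1 else start + size
        offsets.append((start, end))
        start = end
    return offsets

def partitioned_lines(data, num_splits):
    """Assign each line to the split containing its starting byte offset."""
    offsets = split_offsets(len(data), num_splits)
    parts = [[] for _ in offsets]
    pos = 0
    for line in data.splitlines(keepends=True):
        for i, (s, e) in enumerate(offsets):
            if s <= pos < e:
                parts[i].append(line)
                break
        pos += len(line)
    return parts

data = "".join(f"line-{i}\n" for i in range(10))
run1 = partitioned_lines(data, 3)
run2 = partitioned_lines(data, 3)
assert run1 == run2  # same elements, same order, in every "partition"
```

The point the sketch illustrates is that the partitioning is a pure function of the file contents and the split count; nondeterminism would only enter if the input files, their listing order, or the requested partition count changed between runs.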
Hello Spark Devs/Users,
I'm trying to solve a use case with Spark Streaming 1.6.2 where, for every
batch (say 2 mins), data needs to go to the same reducer node after
grouping by key.
The underlying storage is Cassandra and not HDFS.
This is a map-reduce job, where I am also trying to use the partitio
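One way to get per-key stickiness at the partition level: Spark's HashPartitioner maps a key to hash(key) mod numPartitions, so with a fixed partition count a given key lands in the same partition index in every batch (whether that partition is scheduled on the same physical node is a separate locality/scheduling question). A standalone plain-Python sketch of that routing idea (illustrative helper names, not the Spark API):

```python
import zlib

# Sketch of consistent key-to-partition routing, in the spirit of Spark's
# HashPartitioner (partition = hash(key) % numPartitions). With a fixed
# partition count, every batch routes a given key to the same partition index.

NUM_PARTITIONS = 8

def partition_for(key, num_partitions=NUM_PARTITIONS):
    # crc32 is stable across processes, unlike Python's builtin hash(),
    # which is randomized per interpreter run.
    return zlib.crc32(str(key).encode("utf-8")) % num_partitions

def route_batch(records, num_partitions=NUM_PARTITIONS):
    """Group one batch of (key, value) pairs by partition index."""
    parts = [[] for _ in range(num_partitions)]
    for key, value in records:
        parts[partition_for(key, num_partitions)].append((key, value))
    return parts

batch1 = route_batch([("user-1", 10), ("user-2", 20), ("user-1", 30)])
batch2 = route_batch([("user-1", 99)])

# "user-1" records land at the same partition index in both batches.
idx = partition_for("user-1")
assert [v for k, v in batch1[idx] if k == "user-1"] == [10, 30]
assert batch2[idx] == [("user-1", 99)]
```

In actual Spark code the analogous step would be partitioning the keyed RDDs with a fixed HashPartitioner (e.g. via partitionBy) before the reduce, so the shuffle target for each key stays stable across batches; colocating that partition with the matching Cassandra replica would additionally need the connector's locality support.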
Hi, All.
Recently, I have observed frequent failures of the `randomized aggregation test` in
ObjectHashAggregateSuite on SparkPullRequestBuilder.
SPARK-17982 https://github.com/apache/spark/pull/15546 (Today)
SPARK-18123 https://github.com/apache/spark/pull/15664 (Today)
SPARK-18169 https://github.
Hey Dongjoon,
Thanks for reporting. I'm looking into these OOM errors. Already
reproduced them locally but haven't figured out the root cause yet.
Gonna disable them temporarily for now.
Sorry for the inconvenience!
Cheng
On 11/10/16 8:48 AM, Dongjoon Hyun wrote:
Hi, All.
Recently, I obs
Great! Thank you so much, Cheng!
Bests,
Dongjoon.
On 2016-11-10 11:21 (-0800), Cheng Lian wrote:
> Hey Dongjoon,
>
> Thanks for reporting. I'm looking into these OOM errors. Already
> reproduced them locally but haven't figured out the root cause yet.
> Gonna disable them temporarily for now
OK, I understand your point, thanks. Let me see what can be done there. I
may come back if it doesn't work out :-)
On Wed, Nov 9, 2016 at 9:25 AM, Cody Koeninger wrote:
> Ok... in general it seems to me like effort would be better spent
> trying to help upstream, as opposed to us making a
JIRA: https://issues.apache.org/jira/browse/SPARK-18403
PR: https://github.com/apache/spark/pull/15845
Will merge it as soon as Jenkins passes.
Cheng
On 11/10/16 11:30 AM, Dongjoon Hyun wrote:
Great! Thank you so much, Cheng!
Bests,
Dongjoon.
On 2016-11-10 11:21 (-0800), Cheng Lian wrote:
That's a good question. Looking at
http://stackoverflow.com/tags/apache-spark/topusers shows a few
contributors who have already been active on SO, including some committers
and PMC members with very high overall SO reputations for any
administrative needs (as well as a number of other contributors
+1 (non-binding)
On 2016-11-08 15:09, Reynold Xin wrote:
Please vote on releasing the following candidate as Apache Spark
version 2.0.2. The vote is open until Thu, Nov 10, 2016 at 22:00 PDT
and passes if a majority of at least 3 +1 PMC votes are cast.
[ ] +1 Release this package as Apache S
+1 binding
On Thu, Nov 10, 2016 at 6:05 PM, Kousuke Saruta
wrote:
> +1 (non-binding)
>
>
> On 2016-11-08 15:09, Reynold Xin wrote:
>
>> Please vote on releasing the following candidate as Apache Spark version
>> 2.0.2. The vote is open until Thu, Nov 10, 2016 at 22:00 PDT and passes if
>> a maj
+1 (non-binding)
On Thu, Nov 10, 2016 at 6:06 PM, Tathagata Das
wrote:
> +1 binding
>
> On Thu, Nov 10, 2016 at 6:05 PM, Kousuke Saruta wrote:
>
>> +1 (non-binding)
>>
>>
>> On 2016-11-08 15:09, Reynold Xin wrote:
>>
>>> Please vote on releasing the following candidate as Apache Spark versio