Hello,
Sorry for the delay (again); we were busy upgrading our cluster from MapR
3.0.x to MapR 3.1.1.26113.GA.
I updated my builds to reference the native Hadoop libraries from this
distribution, and installed Snappy (I no longer see the 'unable
to load native hadoop libraries' and
On 10/10/2014 06:11 AM, Fairiz Azizi wrote:
> Hello,
>
> Sorry for the late reply.
>
> When I tried the LogQuery example this time, things now seem to be fine!
>
> ...
>
> 14/10/10 04:01:21 INFO scheduler.DAGScheduler: Stage 0 (collect at
> LogQuery.scala:80) finished in 0.429 s
>
> 14/10/10 04:01:21 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0,
> whose tasks have all completed, from pool
Hello,
Sorry for the late reply.
When I tried the LogQuery example this time, things now seem to be fine!
...
14/10/10 04:01:21 INFO scheduler.DAGScheduler: Stage 0 (collect at
LogQuery.scala:80) finished in 0.429 s
14/10/10 04:01:21 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0,
whose tasks have all completed, from pool
Yep! That's the example I was talking about.
Is an error message printed when it hangs? I get:
14/09/30 13:23:14 ERROR BlockManagerMasterActor: Got two different
block manager registrations on 20140930-131734-1723727882-5050-1895-1
On Tue, Oct 7, 2014 at 8:36 PM, Fairiz Azizi wrote:
> Sure, could you point me to the example?
Sure, could you point me to the example?
The only thing I could find was
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/LogQuery.scala
So do you mean running it like:
MASTER="mesos://xxx:5050" ./run-example LogQuery
I tried that and I can s
I was able to reproduce it on a small 4-node cluster (1 Mesos master and 3
Mesos slaves) with relatively low-end specs. As I said, I just ran the log
query examples with the fine-grained Mesos mode.
Spark 1.1.0 and Mesos 0.20.1.
Fairiz, could you try running the logquery example included with Spark?
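[Editor's note: fine-grained mode is the default on Mesos in Spark 1.1, but it can be pinned explicitly. A minimal sketch of a driver configured that way — the master host and the executor tarball location are assumptions for illustration, not taken from this thread:

```scala
// Sketch only: fine-grained Mesos mode (the Spark 1.1 default on Mesos).
// "master-host" and the spark.executor.uri path are hypothetical.
import org.apache.spark.{SparkConf, SparkContext}

object FineGrainedRepro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("mesos://master-host:5050")        // hypothetical master host
      .setAppName("LogQueryRepro")
      .set("spark.mesos.coarse", "false")           // fine-grained scheduling
      .set("spark.executor.uri",
           "hdfs:///tmp/spark-1.1.0-bin-mapr3.tgz") // hypothetical tarball location
    val sc = new SparkContext(conf)
    // ... run the workload here ...
    sc.stop()
  }
}
```

The same settings can equally go in conf/spark-defaults.conf instead of being hard-coded.]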
That's what's great about Spark, the community is so active! :)
I compiled Mesos 0.20.1 from the source tarball.
Using the Mapr3 Spark 1.1.0 distribution from the Spark downloads page
(spark-1.1.0-bin-mapr3.tgz).
I see no problems for the workloads we are trying.
However, the cluster is small (l
Ok I created SPARK-3817 to track this, will try to repro it as well.
Tim
On Mon, Oct 6, 2014 at 6:08 AM, RJ Nowling wrote:
> I've recently run into this issue as well. I get it from running Spark
> examples such as log query. Maybe that'll help reproduce the issue.
>
>
> On Monday, October 6, 2
I've recently run into this issue as well. I get it from running Spark
examples such as log query. Maybe that'll help reproduce the issue.
On Monday, October 6, 2014, Gurvinder Singh
wrote:
> The issue does not occur if the task at hand has a small number of map
> tasks. I have a task which has 9
The issue does not occur if the task at hand has a small number of map
tasks. I have a task with 978 map tasks, and I see this error:
14/10/06 09:34:40 ERROR BlockManagerMasterActor: Got two different block
manager registrations on 20140711-081617-711206558-5050-2543-5
Here is the log from th
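[Editor's note: a job at that scale is easy to generate for testing; a hypothetical stress job, where the partition count mirrors the 978 map tasks reported above and everything else is illustrative:

```scala
// Illustrative stress job: 978 partitions => 978 map tasks in the first
// stage, the scale at which the duplicate block manager registration was
// reported. A sketch, not a confirmed reproducer.
import org.apache.spark.{SparkConf, SparkContext}

object ManyMapTasks {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ManyMapTasks"))
    // Many small map tasks over a trivial workload
    val sum = sc.parallelize(1L to 10000000L, 978).map(_ * 2).reduce(_ + _)
    println(s"sum = $sum")
    sc.stop()
  }
}
```
]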
(Hit enter too soon...)
What is your setup and steps to repro this?
Tim
On Mon, Oct 6, 2014 at 12:30 AM, Timothy Chen wrote:
> Hi Gurvinder,
>
> I tried fine grain mode before and didn't get into that problem.
>
>
> On Sun, Oct 5, 2014 at 11:44 PM, Gurvinder Singh
> wrote:
>> On 10/06/2014 08:19 AM, Fairiz Azizi wrote:
Hi Gurvinder,
I tried fine grain mode before and didn't get into that problem.
On Sun, Oct 5, 2014 at 11:44 PM, Gurvinder Singh
wrote:
> On 10/06/2014 08:19 AM, Fairiz Azizi wrote:
>> The Spark online docs indicate that Spark is compatible with Mesos 0.18.1
>>
>> I've gotten it to work just fine on 0.18.1 and 0.18.2
Hi Gurvinder,
Is there a SPARK ticket tracking the issue you describe?
On Mon, Oct 6, 2014 at 2:44 AM, Gurvinder Singh
wrote:
> On 10/06/2014 08:19 AM, Fairiz Azizi wrote:
> > The Spark online docs indicate that Spark is compatible with Mesos 0.18.1
> >
> > I've gotten it to work just fine on 0.18.1 and 0.18.2
On 10/06/2014 08:19 AM, Fairiz Azizi wrote:
> The Spark online docs indicate that Spark is compatible with Mesos 0.18.1
>
> I've gotten it to work just fine on 0.18.1 and 0.18.2
>
> Has anyone tried Spark on a newer version of Mesos, i.e. Mesos v0.20.0?
>
> -Fi
>
Yeah we are using Spark 1.1.0 w