2) would be ideal, but given the velocity of the main branch, what Mesos
ended up doing was simply keeping a separate repo, since it would take
too long to merge back to main.
We ended up running it pre-release (or when a major PR was merged) rather than on
every PR; I will also comment asking users to run it.
We did
-- Forwarded message --
From: Timothy Chen
Date: Thu, Aug 17, 2017 at 2:48 PM
Subject: Re: SPIP: Spark on Kubernetes
To: Marcelo Vanzin
Hi Marcelo,
I agree with your points; I had the same thought around the resource
staging server and would like to share that with Spark on Mesos.
+1 (non-binding)
Tim
On Tue, Aug 15, 2017 at 9:20 AM, Kimoon Kim wrote:
> +1 (non-binding)
>
> Thanks,
> Kimoon
>
> On Tue, Aug 15, 2017 at 9:19 AM, Sean Suchter
> wrote:
>>
>> +1 (non-binding)
>>
>>
>>
only thing keeping it from being enabled is a timeout
> config and someone volunteering to do some testing?
>
>
> On Mon, Apr 3, 2017 at 2:19 PM Timothy Chen wrote:
>>
>> The only reason is that MesosClusterScheduler by design is long
>> running so we really neede
The only reason is that MesosClusterScheduler is long-running by design, so we
really needed it to have failover configured correctly.
I wanted to create a JIRA ticket to allow users to configure it for
each Spark framework, but just didn't remember to do so.
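For illustration, a rough sketch of what a per-framework setting could look like from PySpark; the property name spark.mesos.driver.failoverTimeout, the master URL, and the timeout value are assumptions for this sketch, not a setting that existed at the time:

from pyspark import SparkConf, SparkContext

# Hypothetical sketch only: the property name and values below are illustrative placeholders.
conf = (SparkConf()
        .setAppName("long-running-mesos-job")
        .setMaster("mesos://zk://zk1:2181/mesos")            # placeholder master URL
        .set("spark.mesos.driver.failoverTimeout", "3600"))  # placeholder: seconds the master would keep the framework after a disconnect
sc = SparkContext(conf=conf)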
Per another question that came up in t
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
>
>
>
> From: Timothy Chen
> Sent: Friday, March 31, 2017 5:13 AM
> To: Yu Wei
> Cc: us...@spark.apache.org; dev
> Subject: Re: [Spark
I think failover isn't enabled on the regular Spark job framework, since we assume
jobs are more ephemeral.
It could be a good setting to add to the Spark framework to enable failover.
Tim
> On Mar 30, 2017, at 10:18 AM, Yu Wei wrote:
>
> Hi guys,
>
> I encountered a problem about spark on mesos
rk.apache.org/docs/latest/running-on-mesos.html#fine-grained-deprecated
>>>>>>
>>>>>> Note that while Spark tasks in fine-grained will relinquish cores as
>>>>>> they terminate, they will not relinquish memory, as the JVM does not give
>>&
Hi Chawla,
One possible reason is that Mesos fine-grained mode also takes up cores
to run the executor on each host, so if you have 20 agents running the
fine-grained executor, it will take up 20 cores while it's still running.
Tim
On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit wrote:
> Hi
>
> I am using S
Congrats Felix!
Tim
On Mon, Aug 8, 2016 at 11:15 AM, Matei Zaharia wrote:
> Hi all,
>
> The PMC recently voted to add Felix Cheung as a committer. Felix has been a
> major contributor to SparkR and we're excited to have him join officially.
> Congrats and welcome, Felix!
>
> Matei
> --
Hi,
How did you package the spark.tgz, and are you running the same code that you
packaged when you ran spark-submit?
And what do your settings for Spark look like?
Tim
> On Jun 6, 2016, at 12:13 PM, thibaut wrote:
>
> Hi there,
>
> I am trying to configure Spark for running on top of Meso
This will also simplify things for Mesos users, as DCOS has to work around
this with our own proxying.
Tim
On Sun, May 22, 2016 at 11:53 PM, Gurvinder Singh
wrote:
> Hi Reynold,
>
> So if that's OK with you, can I go ahead and create JIRA for this. As it
> seems this feature is missing currently and c
I think it's just not implemented, +1 for adding it.
Tim
> On May 10, 2016, at 5:52 PM, Michael Gummelt wrote:
>
> Client mode doesn't seem to support remote JAR downloading, as reported here:
> https://issues.apache.org/jira/browse/SPARK-10643
>
> The docs here:
> http://spark.apache.org/d
Are you suggesting having the shuffle service persist and fetch data with HDFS, or
skipping the shuffle service altogether and just writing to HDFS?
Tim
> On Apr 26, 2016, at 11:20 AM, Michael Gummelt wrote:
>
> Has there been any thought or work on this (or any other networked file
> system)? It would
Hi Yang,
Can you share the master log/slave log?
Tim
> On Apr 12, 2016, at 2:05 PM, Yang Lei wrote:
>
> I have been able to run spark submission in docker container (HOST network)
> through Marathon on mesos and target to Mesos cluster (zk address) for at
> least Spark 1.6, 1.5.2 over Mesos
Yes, if you want to manually override what IP is used to be contacted by the master,
you can set LIBPROCESS_IP and LIBPROCESS_PORT.
These are Mesos-specific settings. We can definitely update the docs.
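To make this concrete, a minimal sketch assuming a PySpark driver launched in client mode (the IP, port, and master URL are placeholders):

import os
from pyspark import SparkConf, SparkContext

# libprocess reads these from the driver's environment, so they must be set
# before the driver JVM (and the Mesos native library) starts; values are placeholders.
os.environ["LIBPROCESS_IP"] = "10.0.0.5"    # IP the driver advertises to the Mesos master
os.environ["LIBPROCESS_PORT"] = "9000"      # port libprocess listens on

conf = SparkConf().setAppName("libprocess-example").setMaster("mesos://10.0.0.1:5050")
sc = SparkContext(conf=conf)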
Note that in the future, as we move to use the new Mesos HTTP API, these
configurations won't be needed (
Hi Adam,
Thanks for the graphs and the tests; definitely interested in digging a
bit deeper to find out what could be the cause of this.
Do you have the spark driver logs for both runs?
Tim
On Mon, Nov 30, 2015 at 9:06 AM, Adam McElwee wrote:
> To eliminate any skepticism around whether cpu is a
Hi Jo,
Thanks for the links. I would have expected the properties to be in the
scheduler properties, but I need to double-check.
I'll be looking into these problems this week.
Tim
On Tue, Nov 17, 2015 at 10:28 AM, Jo Voordeckers
wrote:
> On Tue, Nov 17, 2015 at 5:16 AM, Iulian Dragoș
> wrote:
>>
>> I t
Hi Chris,
How does coarse-grained mode give you less starvation in your overloaded
cluster? Is it just because it allocates all resources at once (which I think,
in an overloaded cluster, allows fewer things to run at once)?
Tim
> On Nov 4, 2015, at 4:21 AM, Heller, Chris wrote:
>
> We’ve been m
Fine-grained mode does reuse the same JVM, but perhaps with different placement or
different allocated cores compared to the same total memory allocation.
Tim
Sent from my iPhone
> On Nov 3, 2015, at 6:00 PM, Reynold Xin wrote:
>
> Soren,
>
> If I understand how Mesos works correctly, even the fine
I would also like to see data shared off-heap with a third-party C++
library via JNI. I think the complications would be how to manage the
memory and make sure the third-party libraries also adhere to the
access contracts.
Tim
On Sat, Aug 29, 2015 at 12:17 PM, Paul Weiss wrote:
> Hi,
>
> Wou
Hi Nik,
Bharath is mostly referring to Spark committers in this thread.
Tim
On Tue, Jun 9, 2015 at 9:51 PM, Niklas Nielsen wrote:
> Hi Bharath (and rest of Spark dev list!),
>
> Just a small shout out: I am an Apache Mesos Committer and would love to help
> out with anything you need to get this
+1
Been testing cluster mode and client mode with Mesos on a 6-node cluster.
Everything works so far.
Tim
> On Jun 4, 2015, at 5:47 PM, Andrew Or wrote:
>
> +1 (binding)
>
> Ran the same tests I did for RC3:
>
> Tested the standalone cluster mode REST submission gateway - submit / status
re related to your work on the recently merged Spark
> Cluster Mode for Mesos.
> Can you elaborate on how it works compared to the Standalone mode,
> and do you maintain the dynamic allocation of Mesos resources in the
> cluster mode, unlike the coarse-grained mode?
>
> On Tue,
So, to confirm - in this mode, when a Spark application/context runs a
series of tasks, each task will launch a full SparkExecutor process?
What is the CPU/mem cost of such a Spark executor process (resource
sizing passed in the Mesos task launch request)?
Hi Gidon,
1. Yes, each Spark application is wrapped in a new Mesos framework.
2. In fine-grained mode, what happens is that the Spark scheduler
specifies a custom Mesos executor per slave, and each Mesos task is a
Spark executor that will be launched by the Mesos executor. It's hard
to determine what
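For reference, a minimal sketch of how an application opts into fine-grained mode (the master URL is a placeholder; fine-grained mode was later deprecated):

from pyspark import SparkConf, SparkContext

# spark.mesos.coarse=false selects fine-grained mode (coarse-grained when true).
conf = (SparkConf()
        .setAppName("fine-grained-example")
        .setMaster("mesos://zk://zk1:2181/mesos")   # placeholder master URL
        .set("spark.mesos.coarse", "false"))
sc = SparkContext(conf=conf)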
+1 Tested on a 4-node Mesos cluster with fine-grained and coarse-grained modes.
Tim
On Wed, Apr 8, 2015 at 9:32 AM, Denny Lee wrote:
> The RC2 bits are lacking Hadoop 2.4 and Hadoop 2.6 - was that intended
> (they were included in RC1)?
>
>
> On Wed, Apr 8, 2015 at 9:01 AM Tom Graves
> wrote:
>
>> +1
+1 (non-binding)
Tested Mesos coarse/fine-grained mode on a 4-node Mesos cluster with a
simple shuffle/map task.
Will be testing with a more complete suite (i.e. spark-perf) once the
infrastructure is set up to do so.
Tim
On Thu, Feb 19, 2015 at 12:50 PM, Krishna Sankar wrote:
> Excellent. Explicit
Congrats all!
Tim
> On Feb 4, 2015, at 7:10 AM, Pritish Nawlakhe
> wrote:
>
> Congrats and welcome back!!
>
>
>
> Thank you!!
>
> Regards
> Pritish
> Nirvana International Inc.
>
> Big Data, Hadoop, Oracle EBS and IT Solutions
> VA - SWaM, MD - MBE Certified Company
> prit...@nirvana-int
What error are you getting?
Tim
Sent from my iPhone
> On Dec 24, 2014, at 8:59 PM, Naveen Madhire wrote:
>
> Hi All,
>
> I am starting to use Spark. I am having trouble getting the latest code
> from git.
> I am using Intellij as suggested in the below link,
>
> https://cwiki.apache.org/conf
re changing the Mesos scheduler. Is there a Jira where
this job is taking place?
-kr, Gerard.
On Mon, Dec 22, 2014 at 6:01 PM, Timothy Chen wrote:
> Hi Gerard,
>
> Really nice guide!
>
> I'm particularly interested in the Mesos scheduling side to more evenly
> distribute
Hi Gerard,
Really nice guide!
I'm particularly interested in the Mesos scheduling side, to more evenly
distribute cores across the cluster.
I wonder if you are using coarse-grained mode or fine-grained mode?
I'm making changes to the Spark Mesos scheduler, and I think we can propose a
best way to achi
Matei, that makes sense. +1 (non-binding)
Tim
On Wed, Nov 5, 2014 at 8:46 PM, Cheng Lian wrote:
> +1 since this is already the de facto model we are using.
>
> On Thu, Nov 6, 2014 at 12:40 PM, Wangfei (X) wrote:
>
>> +1
>>
>> Sent from my iPhone
>>
>> > On Nov 5, 2014, at 20:06, "Denny Lee" wrote:
>> >
>> > +1 g
Hi Matei,
Definitely in favor of moving to this model, for exactly the reasons
you mentioned.
From the module list though, the module that I'm mostly involved with
and that is not listed is the Mesos integration piece.
I believe we also need a maintainer for Mesos, and I wonder if there
is someone
spark, you will
>> see that it simply exits, saying in the comments "this should never
>> happen, let's just quit" :-)
>>
>> - Gurvinder
>> On 10/06/2014 09:30 AM, Timothy Chen wrote:
>> > (Hit enter too soon...)
>> >
>> > What is your setu
(Hit enter too soon...)
What is your setup and steps to repro this?
Tim
On Mon, Oct 6, 2014 at 12:30 AM, Timothy Chen wrote:
> Hi Gurvinder,
>
> I tried fine grain mode before and didn't get into that problem.
>
>
> On Sun, Oct 5, 2014 at 11:44 PM, Gurvinder Singh
>
Hi Gurvinder,
I tried fine grain mode before and didn't get into that problem.
On Sun, Oct 5, 2014 at 11:44 PM, Gurvinder Singh
wrote:
> On 10/06/2014 08:19 AM, Fairiz Azizi wrote:
>> The Spark online docs indicate that Spark is compatible with Mesos 0.18.1
>>
>> I've gotten it to work just fin
+1 make-distribution works, and I also tested simple Spark jobs with Spark
on Mesos on an 8-node Mesos cluster.
Tim
On Thu, Aug 28, 2014 at 8:53 PM, Burak Yavuz wrote:
> +1. Tested MLlib algorithms on Amazon EC2, algorithms show speed-ups between
> 1.5-5x compared to the 1.0.2 release.
>
> - Origin
If you think it does, it would be good to explain why it
> behaves like that.
>
> Matei
>
> On August 25, 2014 at 2:28:18 PM, Timothy Chen (tnac...@gmail.com) wrote:
>
> Hi Matei,
>
> I'm going to investigate from both the Mesos and Spark sides and will hopefully
> have a good
emory sizes is not exactly the
>> same as the total size of the machine). Then Mesos will be able to re-offer
>> that machine whenever CPUs free up.
>>
>> Matei
>>
>> On August 25, 2014 at 5:05:56 AM, Gary Malouf (malouf.g...@gmail.com)
>> wrote:
>>
&g
+1 to have the workaround in.
I'll be investigating from the Mesos side too.
Tim
On Sun, Aug 24, 2014 at 9:52 PM, Matei Zaharia wrote:
> Yeah, Mesos in coarse-grained mode probably wouldn't work here. It's too bad
> that this happens in fine-grained mode -- would be really good to fix. I'll