I have a properties file that is hosted at a URL. I would like to be able
to use that URL in the --properties-file parameter when submitting a job to
Mesos using spark-submit via Chronos.
I would rather do this than use a file on the local server.
This doesn't seem to work, though, when submitting fr
box. Then just use spark-submit --properties-file app.properties.
>
> On Thu, 11 Jun 2015 15:52 Marcelo Vanzin wrote:
>
>> That's not supported. You could use wget / curl to download the file to a
>> temp location before running spark-submit, though.
>>
>> On Thu,
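A minimal Java sketch of that workaround done inside the driver instead of
with wget/curl: fetch the hosted file from a hypothetical URL, then copy its
spark.* entries into the SparkConf by hand. Settings that spark-submit itself
reads (the master, driver memory) would still have to go on the command line.

import java.io.InputStream;
import java.net.URL;
import java.util.Properties;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class UrlPropertiesExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical location of the hosted properties file.
        URL url = new URL("http://config-host.example.com/app.properties");

        Properties props = new Properties();
        try (InputStream in = url.openStream()) {
            props.load(in);
        }

        // Copy every spark.* entry into the SparkConf, mirroring what
        // --properties-file would have provided.
        SparkConf conf = new SparkConf().setAppName("url-props-demo");
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith("spark.")) {
                conf.set(name, props.getProperty(name));
            }
        }

        JavaSparkContext sc = new JavaSparkContext(conf);
        System.out.println(sc.getConf().toDebugString());
        sc.stop();
    }
}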
our working directory to your sandbox. Try using "cd
> $MESOS_SANDBOX && spark-submit --properties-file props.properties"
>
> On Fri, Jun 12, 2015 at 12:32 PM Gary Ogden wrote:
>
>> That's a great idea. I did what you suggested and added the URL to the
>>
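If the properties file is listed as a URI on the Chronos job, Mesos fetches it
into the task sandbox before the command runs, which is what makes the
cd $MESOS_SANDBOX trick work. A small sketch, assuming the standard
MESOS_SANDBOX environment variable and a hypothetical file name, of reading
the fetched file back from Java:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

public class SandboxProperties {
    public static void main(String[] args) throws Exception {
        // Mesos exposes the task's sandbox directory via this variable.
        String sandbox = System.getenv("MESOS_SANDBOX");

        // "props.properties" stands in for whatever file the Chronos URI fetched.
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(sandbox + "/props.properties")) {
            props.load(in);
        }
        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}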
My Mesos cluster has 1.5 CPUs and 17GB free. If I set:
conf.set("spark.mesos.coarse", "true");
conf.set("spark.cores.max", "1");
in the SparkConf object, the job runs fine on the Mesos cluster.
But if I comment out those settings so that it defaults to fine-grained mode,
the task never finishes.
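For reference, the same toggle spelled out in a driver, with comments
summarizing what each mode means on Mesos in Spark 1.x; a sketch, not Gary's
actual code:

import org.apache.spark.SparkConf;

public class MesosModeToggle {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("mesos-mode-demo");

        // Coarse-grained: Spark grabs up to spark.cores.max cores up front
        // and keeps them for the lifetime of the application.
        conf.set("spark.mesos.coarse", "true");
        conf.set("spark.cores.max", "1");

        // Fine-grained (the Spark 1.x default on Mesos): each Spark task is
        // launched as its own Mesos task, so cores are acquired and released
        // per task as offers come in.
        // conf.set("spark.mesos.coarse", "false");

        System.out.println(conf.toDebugString());
    }
}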
I'm loading these settings from a properties file:
spark.executor.memory=256M
spark.cores.max=1
spark.shuffle.consolidateFiles=true
spark.task.cpus=1
spark.deploy.defaultCores=1
spark.driver.cores=1
spark.scheduler.mode=FAIR
Once the job is submitted to Mesos, I can go to the Spark UI for that job
--master etc).
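One way to confirm which of those settings actually reach the driver is to
dump the resolved SparkConf, the same information the Spark UI shows on its
Environment tab; a short sketch:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ConfCheck {
    public static void main(String[] args) {
        // Settings passed via --properties-file are merged into the default
        // SparkConf when the driver starts.
        SparkConf conf = new SparkConf().setAppName("conf-check");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Print every key/value the driver actually sees; compare this
        // against the properties file and the UI's Environment tab.
        System.out.println(sc.getConf().toDebugString());

        sc.stop();
    }
}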
On 16 June 2015 at 04:34, Akhil Das wrote:
> What's in your executor's (that .tgz file) conf/spark-defaults.conf file?
>
> Thanks
> Best Regards
>
> On Mon, Jun 15, 2015 at 7:14 PM, Gary Ogden wrote:
>
>> I'm loading these settings from a properties file:
On 16 June 2015 at 04:32, Akhil Das wrote:
> Did you look inside all logs? Mesos logs and executor logs?
>
> Thanks
> Best Regards
>
> On Mon, Jun 15, 2015 at 7:09 PM, Gary Ogden wrote:
>
>> My Mesos cluster has 1.5 CPUs and 17GB free. If I set:
>>
>> conf.set("spark.mesos.coarse", "true");
In our separate environments we run it with spark-submit, so I can give
that a try.
But we run unit tests differently in our build environment, and that is
what's throwing the error. It's set up like this:
helper = new CassandraHelper(settings.getCassandra().get());
SparkConf sparkConf = get
in wrote:
> On Tue, Oct 6, 2015 at 12:04 PM, Gary Ogden wrote:
> > But we run unit tests differently in our build environment, which is
> > throwing the error. It's setup like this:
> >
> > I suspect this is what you were referring to when you said I have a
>
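A minimal sketch of a SparkConf for a unit test that runs without
spark-submit, assuming the test only needs a local master; the class and
names here are illustrative, not the actual CassandraHelper setup:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalSparkTestSetup {
    // A SparkConf for an in-JVM test: nothing supplies --master or a
    // properties file here, so both master and app name go in code.
    static SparkConf testConf() {
        return new SparkConf()
                .setMaster("local[2]")              // run inside the test JVM
                .setAppName("unit-test")
                .set("spark.ui.enabled", "false");  // avoid port clashes in CI
    }

    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(testConf());
        long n = sc.parallelize(Arrays.asList(1, 2, 3)).count();
        System.out.println("count = " + n);
        sc.stop();
    }
}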