Thank you for the clarification. It works!
On Fri, Jan 29, 2016 at 6:36 PM Mao Geng wrote:
Sathish,
The constraint you described is Marathon's, not Mesos's :)
spark.mesos.constraints is applied to slave attributes like tachyon=true;us-east-1=false, as described in https://issues.apache.org/jira/browse/SPARK-6707.
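For example, a minimal sketch (master address and attribute name are illustrative, flags abbreviated):

  # Register a slave with a custom attribute...
  mesos-slave --master=mesosmaster:5050 --attributes="tachyon:true"

  # ...then tell Spark to accept only offers from slaves carrying it:
  spark-submit --master mesos://mesosmaster:5050 \
    --conf spark.mesos.constraints="tachyon:true" \
    ...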
Cheers,
-Mao
On Fri, Jan 29, 2016 at 2:51 PM, Sathish Kumaran Vairavelu wrote:
Hi
Quick question: how do I pass the constraint [["hostname", "CLUSTER",
"specific.node.com"]] to Mesos?
I tried --conf spark.mesos.constraints=hostname:specific.node.com, but it
didn't seem to work.
Please help
Thanks
Sathish
On Thu, Jan 28, 2016 at 6:52 PM Mao Geng wrote:
From my limited knowledge, only a limited set of options, such as network
mode, volumes, and port mappings, can be passed through. See
https://github.com/apache/spark/pull/3074/files.
https://issues.apache.org/jira/browse/SPARK-8734 is open for exposing all
Docker options to Spark.
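If I read that PR right, the supported ones look roughly like this (image name, paths, and ports are illustrative):

  spark-submit --master mesos://mesosmaster:5050 \
    --conf spark.mesos.executor.docker.image=myrepo/spark:1.6.0 \
    --conf spark.mesos.executor.docker.volumes=/host/logs:/var/log/spark:rw \
    --conf spark.mesos.executor.docker.portmaps=8888:8888:tcp \
    ...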
-Mao
On Thu, Jan 28, 2016, Sathish Kumaran Vairavelu wrote:
Thank you, I figured it out. I set the executor memory to the minimum and it
works.
Another issue has come up: I have to pass the --add-host option while running
containers on the slave nodes. Is there any option to pass docker run
parameters from Spark?
On Thu, Jan 28, 2016 at 12:26 PM Mao Geng wrote:
Sathish,
I guess the Mesos resources are not enough to run your job. You might want
to check the Mesos logs to figure out why.
I tried to run the Docker image with "--conf spark.mesos.coarse=false" and
"true". Both work fine.
Best,
Mao
On Wed, Jan 27, 2016 at 5:00 PM, Sathish Kumaran Vairavelu wrote:
Hi,
On the same Spark/Mesos/Docker setup, I am getting the warning "Initial Job has
not accepted any resources; check your cluster UI to ensure that workers
are registered and have sufficient resources". I am running in coarse-grained
mode. Any pointers on how to fix this issue? Please help.
Thanks a lot for your info! I will try this today.
On Wed, Jan 27, 2016 at 9:29 AM Mao Geng wrote:
Hi Sathish,
The Docker image is normal; no AWS profile is included.
When the driver container runs with --net=host, the driver host's AWS profile
takes effect, so the driver can access the protected S3 files.
Similarly, the Mesos slaves run the Spark executor Docker containers in
--net=host mode.
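A quick way to sanity-check this (image name illustrative, and assuming curl is available in the image): with --net=host the container shares the host's network stack, so the EC2 instance metadata service should answer exactly as it does on the host:

  docker run --net=host myrepo/spark:1.6.0 \
    curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/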
Hi Mao,
I want to check on accessing S3 from the Spark Docker containers on Mesos. The
EC2 instance that I am using has an AWS IAM profile attached. Should we build
the Docker image with any AWS profile settings, or does the --net=host Docker
option take care of it?
Please help
Thanks
Sathish
Thank you very much, Jerry!
I changed to "--jars
/opt/spark/lib/hadoop-aws-2.7.1.jar,/opt/spark/lib/aws-java-sdk-1.7.4.jar"
then it worked like a charm!
From the Mesos task logs below, I saw the Mesos executor downloaded the jars
from the driver, which is a bit unnecessary (as the Docker image already has
them).
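For reference, the working invocation was roughly (master address, class, and application jar are illustrative):

  spark-submit --master mesos://mesosmaster:5050 \
    --jars /opt/spark/lib/hadoop-aws-2.7.1.jar,/opt/spark/lib/aws-java-sdk-1.7.4.jar \
    --class com.example.MyApp my-app.jar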
Hi Mao,
Can you try --jars to include those jars?
Best Regards,
Jerry
Sent from my iPhone
On 26 Jan, 2016, at 7:02 pm, Mao Geng wrote:
Hi there,
I am trying to run Spark on Mesos using a Docker image as the executor, as
mentioned in
http://spark.apache.org/docs/latest/running-on-mesos.html#mesos-docker-support.
I built a Docker image using the following Dockerfile (which is based on
https://github.com/apache/spark/blob/master/docker/s
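The submission itself is just a matter of pointing Spark at the Mesos master and naming the image, roughly like this (addresses and image name illustrative):

  spark-submit --master mesos://mesosmaster:5050 \
    --conf spark.mesos.executor.docker.image=myrepo/spark:1.6.0 \
    --conf spark.mesos.executor.home=/opt/spark \
    examples/src/main/python/pi.py 10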