Re: spark on mesos gets killed by cgroups for too much memory
> http://prodmesosfileserver01/spark-dist/1.2.2/spark-dist-1.2.2.tgz
>
> We increased the cgroup limit to 6GB and the memory resources from 3000
> to 6000 for the startup of mesos, and now cgroups doesn't kill the job
> anymore.
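For anyone hitting the same thing: the Mesos backend requests spark.executor.memory plus a per-executor overhead (spark.mesos.executor.memoryOverhead), so both the agent's mem resource and the cgroup limit have to cover that sum, not just the heap. A minimal sketch of pinning the two values explicitly; the master URL and the numbers here are made up for illustration:

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical sizing: a 4 GB heap plus 512 MB of overhead means Mesos
    // accounts 4608 MB per executor, which a 6000 MB mem resource (and a
    // 6 GB cgroup limit) can satisfy.
    val conf = new SparkConf()
      .setMaster("mesos://prodmesosmaster01:5050") // hypothetical master URL
      .setAppName("cgroup-sizing-example")
      .set("spark.executor.memory", "4g")
      .set("spark.mesos.executor.memoryOverhead", "512") // in MB
    val sc = new SparkContext(conf)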
> How can I make sure it isn't trying to take 3GB, even if when running
> it's only using 512MB?
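The 3GB is the reservation, not the live footprint: the executor JVM is launched with its heap sized from spark.executor.memory, and Mesos accounts for the whole request up front, so cgroups enforces it even while the heap sits mostly empty. If I'm reading the defaults right (the overhead rule here is my assumption: the larger of 384MB and 10% of executor memory), the request works out roughly like this:

    // Approximate per-executor Mesos request, assuming the default
    // overhead rule; spark.mesos.executor.memoryOverhead overrides it.
    def mesosRequestMb(executorMemoryMb: Int): Int =
      executorMemoryMb + math.max(384, (executorMemoryMb * 0.10).toInt)

    println(mesosRequestMb(512))   // 896  -- a 512 MB heap still reserves ~896 MB
    println(mesosRequestMb(2700))  // 3084 -- roughly the 3 GB being requested

So the way to stop it taking 3GB is to lower spark.executor.memory (and/or the overhead), not only the cgroup limit.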