Hi Prabhu,

Sorry for the delayed response; it was indeed the maximum-allocation-vcores setting. I had interpreted the description of maximum-allocation-vcores as a per-container limit, when it actually seems to be the full allocation across the cluster.
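In case it helps anyone who hits the same thing later, this is roughly the shape of the per-queue override Prabhu pointed at (the queue name "default" here is just an example; substitute the path of the queue your Samza job runs in):

```xml
<!-- capacity-scheduler.xml: per-queue vcore cap ("default" is a placeholder queue name) -->
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-allocation-vcores</name>
  <value>8</value>
</property>
```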
Cheers,
Malcolm

On Tue, Apr 2, 2019 at 1:01 AM Prabhu Josephraj <pjos...@cloudera.com> wrote:

> Hi Malcolm,
>
> The scheduler sets the max vcores in RegisterApplicationMasterResponse
> from the queue's configured maximum value
> (yarn.scheduler.capacity.root.<queue-path>.maximum-allocation-vcores) in
> the scheduler configuration (capacity-scheduler.xml / fair-scheduler.xml).
> If that is not specified, it returns the value of
> yarn.scheduler.minimum-allocation-vcores from the YARN configuration.
> Can you check whether the queue where the Samza job runs has
> maximum-allocation-vcores specified?
>
> Thanks,
> Prabhu Joseph
>
> On Tue, Apr 2, 2019 at 1:19 AM Malcolm McFarland <mmcfarl...@cavulus.com> wrote:
>
>> Hey folks,
>>
>> (Apologies if this is a duplicate; I don't think my first message went
>> through.)
>>
>> I'm running Samza 0.14.1 with YARN 2.6.1 on Docker 18.06.1 in ECS. (I
>> know YARN on Docker is somewhat unorthodox, but it's how the ops team at
>> our company has things set up.) It's running quite well overall -- I have
>> 2 resource managers and 3 node managers communicating smoothly.
>>
>> The trouble is with the application container CPU allocation. I'm
>> running Samza on this cluster, and although it starts up and works fine
>> when it requests 1 CPU per container, it won't start any container with
>> more than 1 core. I see this message in the Samza log: "Got AM register
>> response. The YARN RM supports container requests with max-mem: 16384,
>> max-cpu: 1".
>>
>> Looking at the source code, this is derived from the
>> RegisterApplicationMasterResponse returned by
>> AMRMClientAsync.registerApplicationMaster(). I'm trying to trace how
>> YARN determines the result of
>> response.getMaximumResourceCapability().getVirtualCores(), and it's a
>> bit difficult. Does anybody have an overview of how this value is
>> determined, and what might be specific about a Docker container?
>> Here are some relevant YARN configuration values (these are available
>> on both the RM and the NM):
>>
>> yarn.nodemanager.resource.cpu-vcores=8
>> yarn.nodemanager.resource.memory-mb=16384
>> yarn.nodemanager.vmem-check-enabled=false
>> yarn.nodemanager.vmem-pmem-ratio=2.1
>> yarn.scheduler.minimum-allocation-mb=256
>> yarn.scheduler.maximum-allocation-mb=16384
>> yarn.scheduler.minimum-allocation-vcores=1
>> yarn.scheduler.maximum-allocation-vcores=16
>>
>> Thanks for the help,
>> Malcolm
>>
>> --
>> Malcolm McFarland
>> Cavulus

--
Malcolm McFarland
Cavulus
1-800-760-6915
mmcfarl...@cavulus.com
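For anyone tracing this later: here is a toy model of the resolution order as Prabhu describes it above. This is my simplified sketch, not the actual YARN scheduler source -- it just shows why, with no per-queue override, the AM register response advertised max-cpu: 1 (the value of yarn.scheduler.minimum-allocation-vcores).

```python
# Simplified sketch (an assumption based on Prabhu's description, not the
# real YARN code path) of how the scheduler resolves the max vcores it
# advertises in RegisterApplicationMasterResponse: use the queue's
# maximum-allocation-vcores if configured, otherwise fall back to
# yarn.scheduler.minimum-allocation-vcores.

def resolve_max_vcores(conf: dict, queue_path: str) -> int:
    queue_key = f"yarn.scheduler.capacity.{queue_path}.maximum-allocation-vcores"
    if queue_key in conf:
        return int(conf[queue_key])
    # No per-queue override: fall back to the cluster minimum-allocation value.
    return int(conf.get("yarn.scheduler.minimum-allocation-vcores", 1))

# Malcolm's settings, with no per-queue override configured:
conf = {
    "yarn.scheduler.minimum-allocation-vcores": 1,
    "yarn.scheduler.maximum-allocation-vcores": 16,
}
print(resolve_max_vcores(conf, "root.default"))  # -> 1, matching "max-cpu: 1"
```

Setting the per-queue key (as in Prabhu's suggestion) is what raises the advertised ceiling for that queue.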