> On May 5, 2020, 12:27 a.m., Greg Mann wrote:
> > I would recommend updating the description so that instead of saying we 
> > "don't need to add" the new reason, say that "it is not possible for Mesos 
> > to provide" the reason, so we must remove it.

Done.


- Qian


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72442/#review220599
-----------------------------------------------------------


On May 5, 2020, 3:53 p.m., Qian Zhang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/72442/
> -----------------------------------------------------------
> 
> (Updated May 5, 2020, 3:53 p.m.)
> 
> 
> Review request for mesos, Andrei Budnik and Greg Mann.
> 
> 
> Bugs: MESOS-10049
>     https://issues.apache.org/jira/browse/MESOS-10049
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> The method `MemorySubsystemProcess::oomWaited()` is only invoked when a
> container is OOM killed for exceeding its own hard memory limit (i.e., the
> task status reason `REASON_CONTAINER_LIMITATION_MEMORY`). It is NOT invoked
> when a burstable container is OOM killed because the agent host is running
> out of memory; in that case no OOM-kill notification arrives via the cgroups
> notification API. So it is not possible for Mesos to provide the task status
> reason `REASON_CONTAINER_MEMORY_REQUEST_EXCEEDED` for this case (a minimal
> sketch of the notification API in question is included after the quoted
> review below).
> 
> 
> Diffs
> -----
> 
>   include/mesos/mesos.proto 9412ed736231547b22abc89188316b08d5445e78 
>   include/mesos/v1/mesos.proto 194c42cf57e34d803a21cab03db17614855e8692 
>   src/common/protobuf_utils.cpp 8d1d5c4cb0af911d8dc13e37a1adb62947513d0d 
>   src/slave/containerizer/mesos/isolators/cgroups/subsystems/memory.cpp 60c7a89fb809582723eb50d22f54f4c8ce697584 
> 
> 
> Diff: https://reviews.apache.org/r/72442/diff/2/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Qian Zhang
> 
>

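To make the cgroups behavior in the description concrete, below is a minimal
sketch (not Mesos source) of how an OOM listener is registered through the
cgroups-v1 eventfd notification API. The cgroup path and the standalone
main() are placeholders for illustration. As the description states, this
channel only reports OOM kills triggered inside the watched cgroup (i.e. the
container exceeding its own `memory.limit_in_bytes`), so a host-level OOM
kill of a burstable container produces no event for `oomWaited()` to act on.

    // Minimal sketch of a cgroups-v1 OOM event listener (not Mesos code).
    #include <fcntl.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    #include <cstdint>
    #include <iostream>
    #include <string>

    int main()
    {
      // Placeholder path; a real container's memory cgroup would be used here.
      const std::string cgroup = "/sys/fs/cgroup/memory/mesos/test-container";

      int oomControl = ::open((cgroup + "/memory.oom_control").c_str(), O_RDONLY);
      int eventFd = ::eventfd(0, 0);
      int eventControl = ::open((cgroup + "/cgroup.event_control").c_str(), O_WRONLY);
      if (oomControl < 0 || eventFd < 0 || eventControl < 0) {
        std::cerr << "Failed to open cgroup control files" << std::endl;
        return 1;
      }

      // Register the eventfd for OOM events: "<eventfd> <oom_control fd>".
      const std::string control =
        std::to_string(eventFd) + " " + std::to_string(oomControl);
      if (::write(eventControl, control.c_str(), control.size()) < 0) {
        std::cerr << "Failed to register OOM event listener" << std::endl;
        return 1;
      }
      ::close(eventControl);

      // This read blocks until the kernel reports an OOM kill within this
      // cgroup, i.e. the container exceeded its own hard limit (surfaced by
      // Mesos as REASON_CONTAINER_LIMITATION_MEMORY). Per the description
      // above, a kill by the host-wide OOM killer is not delivered here.
      uint64_t count = 0;
      if (::read(eventFd, &count, sizeof(count)) == static_cast<ssize_t>(sizeof(count))) {
        std::cout << "OOM events observed in cgroup: " << count << std::endl;
      }
      return 0;
    }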