-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/39548/#review103660
-----------------------------------------------------------

src/tests/scheduler_tests.cpp (line 1001)
<https://reviews.apache.org/r/39548/#comment161723>

Hmm. Does this really fix the race? What's the guarantee that SUPPRESS and DECLINE are processed by the master/allocator before you advance the clock?

I think what you want to do instead here is to set up a FUTURE_DISPATCH() on HierarchicalAllocatorProcess::suppressOffers(), wait for that future, and then do a Clock::settle() before you advance the clock. Does that make sense? (A rough sketch of this pattern is appended after the quoted review below.)


- Vinod Kone


On Oct. 22, 2015, 9:12 a.m., Guangya Liu wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/39548/
> -----------------------------------------------------------
>
> (Updated Oct. 22, 2015, 9:12 a.m.)
>
>
> Review request for mesos and Vinod Kone.
>
>
> Bugs: MESOS-3733
>     https://issues.apache.org/jira/browse/MESOS-3733
>
>
> Repository: mesos
>
>
> Description
> -------
>
> Root cause: The DECLINE call sets a 1-hour filter, while the test advances the clock by 100 minutes. DECLINE and SUPPRESS are sent from different threads, so Clock::advance() may run before SUPPRESS has been processed. Because 100 minutes is longer than the 1-hour filter, the allocator starts allocating resources again and ASSERT_TRUE(event.isPending()) fails.
>
> Solution: Send SUPPRESS first; this ensures SUPPRESS is processed before DECLINE.
>
>
> Diffs
> -----
>
>   src/tests/scheduler_tests.cpp 7946cb48d62f4ed6d0fdbc771746518e31921f97
>
> Diff: https://reviews.apache.org/r/39548/diff/
>
>
> Testing
> -------
>
> Platform: Ubuntu 14.04
> make
> make check
> bin/mesos-tests.sh --gtest_filter="ContentType/SchedulerTest.Suppress/*" --gtest_repeat=-1 --gtest_break_on_failure
>
>
> Thanks,
>
> Guangya Liu
>
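
Below is a rough, untested sketch of the synchronization suggested in the comment above, written as a fragment of the existing ContentType/SchedulerTest.Suppress body. It assumes the surrounding test already provides mesos (the v1 scheduler interface), id (the framework id), an event future taken from the scheduler's event queue, and a paused clock; it also assumes the suppress call can be intercepted via FUTURE_DISPATCH on MesosAllocatorProcess::suppressOffers, as other Mesos tests do for allocator calls (the comment names HierarchicalAllocatorProcess, so the exact dispatch target may differ). This is an illustration of the pattern, not the committed fix.

    Future<Nothing> suppressOffers =
      FUTURE_DISPATCH(_, &MesosAllocatorProcess::suppressOffers);

    {
      Call call;
      call.mutable_framework_id()->CopyFrom(id);
      call.set_type(Call::SUPPRESS);

      mesos.send(call);
    }

    // Block until the allocator has actually dispatched suppressOffers(),
    // then settle so all pending allocator work completes before the clock
    // jumps past the 1-hour decline filter.
    AWAIT_READY(suppressOffers);

    Clock::settle();
    Clock::advance(Minutes(100));
    Clock::settle();

    // The framework is suppressed, so no offer should arrive even though
    // the decline filter has expired.
    ASSERT_TRUE(event.isPending());

Waiting on the dispatch removes the ordering race between DECLINE and SUPPRESS regardless of which call is sent first, which is more robust than relying on the order in which the calls happen to be sent.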
