-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/61069/#review181751
-----------------------------------------------------------


Fix it, then Ship it!




Looks good; I wasn't able to find any bugs, although my head hurts now :)


3rdparty/libprocess/src/semaphore.hpp
Lines 186 (patched)
<https://reviews.apache.org/r/61069/#comment257440>

    Hm.. interesting that these are `commissioned` (here and in the LIFO 
semaphore) rather than `decommissioned`. Any reason not to make it 
`decommissioned` directly?



3rdparty/libprocess/src/semaphore.hpp
Lines 191-198 (patched)
<https://reviews.apache.org/r/61069/#comment257399>

    Some elaboration here would be great for posterity. Some notes on 
longer-term improvements related to this problem would also be useful (e.g. 
splitting the work queue into several queues with work stealing, noting that 
the concurrent queue has support for producer tokens, etc.). Fun stuff :)
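    Not asking for it in this patch, but for reference, the work-stealing 
direction could look roughly like this (a minimal sketch with placeholder 
names and a simple mutex per queue, not the actual libprocess types):
    
    ```
    #include <cassert>
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <vector>
    
    // Placeholder task type, not the actual libprocess type.
    struct Task { int id; };
    
    class WorkStealingQueues {
    public:
      explicit WorkStealingQueues(size_t workers) : queues(workers) {}
    
      // Each worker pushes/pops at the front of its own queue (LIFO for
      // cache locality)...
      void push(size_t worker, Task t) {
        std::lock_guard<std::mutex> lock(queues[worker].mutex);
        queues[worker].tasks.push_front(t);
      }
    
      bool pop(size_t worker, Task* t) {
        {
          std::lock_guard<std::mutex> lock(queues[worker].mutex);
          if (!queues[worker].tasks.empty()) {
            *t = queues[worker].tasks.front();
            queues[worker].tasks.pop_front();
            return true;
          }
        }
        // ...and steals from the back of other workers' queues when its
        // own queue is empty.
        for (size_t i = 0; i < queues.size(); i++) {
          if (i == worker) continue;
          std::lock_guard<std::mutex> lock(queues[i].mutex);
          if (!queues[i].tasks.empty()) {
            *t = queues[i].tasks.back();
            queues[i].tasks.pop_back();
            return true;
          }
        }
        return false;
      }
    
    private:
      struct PerWorker {
        std::mutex mutex;
        std::deque<Task> tasks;
      };
      std::vector<PerWorker> queues;
    };
    ```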



3rdparty/libprocess/src/semaphore.hpp
Lines 199 (patched)
<https://reviews.apache.org/r/61069/#comment257443>

    Any reason why you're using THREAD_LOCAL here but thread_local elsewhere?
    
    https://github.com/apache/mesos/search?utf8=✓&q=thread_local&type=
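    
    (For context, I'm assuming `THREAD_LOCAL` is the stout macro that papers 
over toolchains lacking C++11 `thread_local`; roughly something like the 
following, though the exact definition may differ:)
    
    ```
    #include <cassert>
    
    // Illustrative only; the actual stout definition may differ.
    #ifdef __APPLE__
    // Older Apple toolchains lacked C++11 `thread_local`, but supported
    // the GCC/Clang `__thread` extension.
    #define THREAD_LOCAL __thread
    #else
    #define THREAD_LOCAL thread_local
    #endif
    
    // Usage: each thread gets its own instance of `counter`.
    THREAD_LOCAL int counter = 0;
    ```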



3rdparty/libprocess/src/semaphore.hpp
Lines 201-204 (patched)
<https://reviews.apache.org/r/61069/#comment257442>

    Hm.. can't we just do:
    
    ```
    THREAD_LOCAL KernelSemaphore* __semaphore__ = new KernelSemaphore();
    ```



3rdparty/libprocess/src/semaphore.hpp
Lines 218 (patched)
<https://reviews.apache.org/r/61069/#comment257447>

    I'm a little puzzled about how to think about `count`: why do we increment 
it early here (and leave it incremented even if we signaled someone) rather 
than only incrementing it when there's no one to signal?



3rdparty/libprocess/src/semaphore.hpp
Lines 221 (patched)
<https://reviews.apache.org/r/61069/#comment257445>

    It seems like there's an advantage to having the user of this semaphore 
provide the capacity at construction (which should be easy for us, given that 
the number of worker threads is currently fixed), because then we limit the 
amount of looping we do here. Perhaps add a TODO, or just implement it if it 
provides an improvement?
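    
    For example, something along these lines (a hypothetical sketch; the 
names don't match the patch):
    
    ```
    #include <atomic>
    #include <cassert>
    #include <cstddef>
    #include <vector>
    
    // Hypothetical: if the caller passes the worker thread count, the
    // waiter array can be sized exactly, so scans cover `capacity` slots
    // instead of a larger fixed bound.
    class BoundedWaiterSlots {
    public:
      explicit BoundedWaiterSlots(size_t capacity) : slots(capacity) {
        for (auto& slot : slots) {
          slot.store(nullptr);
        }
      }
    
      // Claim the first empty slot; returns its index, or `capacity`
      // if all slots are occupied.
      size_t claim(void* waiter) {
        for (size_t i = 0; i < slots.size(); i++) {
          void* expected = nullptr;
          if (slots[i].compare_exchange_strong(expected, waiter)) {
            return i;
          }
        }
        return slots.size();
      }
    
      // Take ownership of the first occupied slot, if any.
      void* take() {
        for (size_t i = 0; i < slots.size(); i++) {
          void* waiter = slots[i].exchange(nullptr);
          if (waiter != nullptr) {
            return waiter;
          }
        }
        return nullptr;
      }
    
    private:
      std::vector<std::atomic<void*>> slots;
    };
    ```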



3rdparty/libprocess/src/semaphore.hpp
Lines 222-223 (patched)
<https://reviews.apache.org/r/61069/#comment257444>

    Don't bother finishing the loop you mean?



3rdparty/libprocess/src/semaphore.hpp
Lines 228-231 (patched)
<https://reviews.apache.org/r/61069/#comment257446>

    Hm.. why the load + compare and exchange instead of just doing a single 
exchange?
    
    ```
    KernelSemaphore* semaphore = semaphores[i].exchange(nullptr);
    
    if (semaphore != nullptr) {
      semaphore->signal();
      waiters.fetch_sub(1);
      return;
    }
    ```
    
    That should be more performant? If load is faster than exchange, then maybe 
document that you're doing it as an optimization?



3rdparty/libprocess/src/semaphore.hpp
Lines 251 (patched)
<https://reviews.apache.org/r/61069/#comment257449>

    Why not just `while (true)` instead of `do {} while (true)`? The latter 
seems to suggest the presence of a condition when reading top to bottom.



3rdparty/libprocess/src/semaphore.hpp
Lines 275-293 (patched)
<https://reviews.apache.org/r/61069/#comment257450>

    Is it possible to tweak this into a single compare and exchange operation 
instead of the load + compare exchange?
    
    ```
    if ((old = count.load()) > 0) {
      waiters.fetch_sub(1);
      goto CAS;
    }
    
    KernelSemaphore* semaphore = nullptr;
    
    if (semaphores[i].compare_exchange_strong(semaphore, _semaphore_)) {
      done = true;
      break;
    }
    ```
    
    Seems more performant? If load is faster than compare exchange, maybe 
document that you're doing it as an optimization?



3rdparty/libprocess/src/semaphore.hpp
Lines 179-180 (original), 338-342 (patched)
<https://reviews.apache.org/r/61069/#comment257441>

    Can you document the implementation strategy a bit more? I had a hard time 
grasping this, in particular how this array is used.



3rdparty/libprocess/src/semaphore.hpp
Line 179 (original), 339 (patched)
<https://reviews.apache.org/r/61069/#comment257448>

    Ditto here, any reason not to store it as `decommissioned`?


- Benjamin Mahler


On July 24, 2017, 1:58 a.m., Benjamin Hindman wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/61069/
> -----------------------------------------------------------
> 
> (Updated July 24, 2017, 1:58 a.m.)
> 
> 
> Review request for mesos and Benjamin Mahler.
> 
> 
> Bugs: MESOS-7798
>     https://issues.apache.org/jira/browse/MESOS-7798
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Introduced an optimized fixed size last-in-first-out semaphore.
> 
> 
> Diffs
> -----
> 
>   3rdparty/libprocess/src/process.cpp 
> b268cdad776a3ca2a87cbe60eb098bde2a70667c 
>   3rdparty/libprocess/src/run_queue.hpp 
> 109c300b8292f109b699c096eff0c72d674f4f14 
>   3rdparty/libprocess/src/semaphore.hpp 
> 01438838f67e2b3093d95d49931f72888955f11c 
> 
> 
> Diff: https://reviews.apache.org/r/61069/diff/1/
> 
> 
> Testing
> -------
> 
> make check
> 
> 
> Thanks,
> 
> Benjamin Hindman
> 
>
