Could you use an applicative (i.e. persistent) data structure? (e.g., a 
balanced binary tree where you allocate a new spine for each 
insertion/deletion.) That has O(log N) overhead per read and O(log N) 
storage allocated per write, but I think if writers install the new root 
with a CAS, readers can proceed with just a lightweight barrier (an 
atomic load of the root pointer).
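
To make that concrete, here's a rough sketch of what I have in mind. It's 
illustrative only: I've used an unbalanced BST to keep it short (you'd 
substitute a red-black or AVL tree to actually get the O(log N) bounds), 
and the Map/Get/Put names are just mine:

    // Writers build a new spine and publish the new root with a CAS;
    // readers just atomically load the root and walk an immutable snapshot.
    package main

    import (
        "fmt"
        "sync/atomic"
    )

    type node struct {
        key, val    int
        left, right *node
    }

    // insert returns a new tree sharing all untouched subtrees with t.
    // Only the O(depth) spine from the root down is freshly allocated.
    func insert(t *node, key, val int) *node {
        if t == nil {
            return &node{key: key, val: val}
        }
        switch {
        case key < t.key:
            return &node{key: t.key, val: t.val, left: insert(t.left, key, val), right: t.right}
        case key > t.key:
            return &node{key: t.key, val: t.val, left: t.left, right: insert(t.right, key, val)}
        default:
            return &node{key: key, val: val, left: t.left, right: t.right}
        }
    }

    // lookup walks a snapshot of the tree; it never blocks.
    func lookup(t *node, key int) (int, bool) {
        for t != nil {
            switch {
            case key < t.key:
                t = t.left
            case key > t.key:
                t = t.right
            default:
                return t.val, true
            }
        }
        return 0, false
    }

    type Map struct {
        root atomic.Pointer[node] // requires Go 1.19+
    }

    // Get is the read path: one atomic load, then a walk. No lock.
    func (m *Map) Get(key int) (int, bool) {
        return lookup(m.root.Load(), key)
    }

    // Put is the write path: rebuild the spine, then CAS the root.
    // On CAS failure another writer won; retry against the new root.
    func (m *Map) Put(key, val int) {
        for {
            old := m.root.Load()
            if m.root.CompareAndSwap(old, insert(old, key, val)) {
                return
            }
        }
    }

    func main() {
        var m Map
        for i := 0; i < 10; i++ {
            m.Put(i, i*i)
        }
        fmt.Println(m.Get(7)) // 49 true
    }

The trade-off versus an RWMutex is that writers pay the spine allocation 
(and retry under write contention), in exchange for reads that never take 
a lock at all.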


On Sunday, February 5, 2023 at 11:45:38 AM UTC-5 Ian Lance Taylor wrote:

> On Sun, Feb 5, 2023, 4:34 AM Juliusz Chroboczek <j...@irif.fr> wrote:
>
>> >> I took some time to put this to a test. The Go program here
>> >> https://go.dev/play/p/378Zn_ZQNaz holds the lock for a VERY short
>> >> time - but spends a large % of its runtime holding the lock.
>>
>> > Thanks for the benchmark.  You're right: if you have hundreds of
>> > goroutines doing nothing but acquiring a read lock, then an RWMutex
>> > can be faster.  The key there is that there are always multiple
>> > goroutines waiting for the lock.
>>
>> Could you please explain that?  You previously implied that the
>> required atomic operation is going to make the difference between the
>> two kinds of mutex irrelevant; why does the argument not apply here?
>>
>
> If there are enough concurrent lockers to overwhelm the plain mutex spin 
> lock, then the read-write mutex will work better.  My argument is that in 
> real programs that is an unlikely case if the lock is held for a short 
> period of time.
>
> Ian
>
>>
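
For anyone who wants to reproduce the effect Ian describes, here's a rough 
benchmark sketch in the spirit of the playground link above (not the same 
code; the names and parameters are my own). Save it as lock_bench_test.go 
and run go test -bench . -cpu 1,4,16:

    // Many goroutines doing nothing but taking a read-side lock around a
    // very short critical section. With enough concurrent lockers,
    // RWMutex.RLock can beat Mutex.Lock because readers don't serialize
    // behind one another.
    package lockbench

    import (
        "sync"
        "testing"
    )

    var (
        mu   sync.Mutex
        rwmu sync.RWMutex
        data = map[int]int{1: 1}
    )

    func BenchmarkMutexRead(b *testing.B) {
        b.RunParallel(func(pb *testing.PB) {
            local := 0 // per-goroutine sink, avoids a data race
            for pb.Next() {
                mu.Lock()
                local += data[1] // VERY short critical section
                mu.Unlock()
            }
            _ = local
        })
    }

    func BenchmarkRWMutexRead(b *testing.B) {
        b.RunParallel(func(pb *testing.PB) {
            local := 0
            for pb.Next() {
                rwmu.RLock()
                local += data[1]
                rwmu.RUnlock()
            }
            _ = local
        })
    }

If Ian's argument holds, the gap should only open up once there are enough 
concurrent lockers to overwhelm the plain mutex's spin path.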
