I think that with server processes - possibly handling 100k+ connections - 
the contention on a “read mostly” cache is higher than you might expect. This 
test uses only 500 readers, each doing very little work per acquisition, to 
simulate the 100k case. 
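
For reference, the shape of that test is roughly the following (a minimal 
sketch, not the actual playground code - the reader count, iteration count, 
and map contents are illustrative):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        const readers = 500
        const iters = 200000

        var mu sync.RWMutex // the Mutex variant uses sync.Mutex with Lock/Unlock
        cache := map[string]int{"key": 1}

        start := time.Now()
        var wg sync.WaitGroup
        for i := 0; i < readers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < iters; j++ {
                    mu.RLock()
                    _ = cache["key"] // very short hold, but most of runtime is under the lock
                    mu.RUnlock()
                }
            }()
        }
        wg.Wait()
        fmt.Println(time.Since(start))
    }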

> On Feb 4, 2023, at 4:59 PM, Ian Lance Taylor <i...@golang.org> wrote:
> 
> On Sat, Feb 4, 2023 at 8:49 AM robert engels <reng...@ix.netcom.com> wrote:
>> 
>> I took some time to put this to a test. The Go program here 
>> https://go.dev/play/p/378Zn_ZQNaz holds the lock for a VERY short time per 
>> acquisition - but spends a large % of its runtime holding the lock.
>> 
>> (You can’t run it on the Playground because it runs for too long.) You can 
>> comment/uncomment lines 28-31 to test the different mutexes.
>> 
>> It simulates a common system scenario (most web services) - lots of readers 
>> of the cache, but the cache is updated infrequently.
>> 
>> On my machine the RWMutex version is more than twice as fast - taking 22 
>> seconds vs 47 seconds with a simple Mutex.
>> 
>> It is easy to understand why - you get no parallelization of the readers 
>> when using a simple Mutex.
> 
> Thanks for the benchmark.  You're right: if you have hundreds of
> goroutines doing nothing but acquiring a read lock, then an RWMutex
> can be faster.  The key there is that there are always multiple
> goroutines waiting for the lock.
> 
> I still stand by my statement for more common use cases.
> 
> Ian
> 
> 
>> On Jan 30, 2023, at 8:29 PM, Ian Lance Taylor <i...@golang.org> wrote:
>> 
>> On Mon, Jan 30, 2023 at 4:42 PM Robert Engels <reng...@ix.netcom.com> wrote:
>> 
>> 
>> Yes, but only for a single reader - any concurrent reader is going to 
>> park/deschedule.
>> 
>> 
>> If we are talking specifically about Go, then it's more complex than
>> that.  In particular, the code will spin briefly trying to acquire the
>> mutex, before queuing.
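
[Illustratively, that "spin briefly, then queue" shape looks something like 
the toy lock below. This is not the runtime's actual code - sync.Mutex spins 
with CPU pause instructions and parks on a runtime semaphore; a buffered 
channel stands in for the wait queue here.]

    package lockdemo

    import "runtime"

    // hybridLock: the token being present in the channel means "unlocked".
    type hybridLock struct {
        queue chan struct{}
    }

    func newHybridLock() *hybridLock {
        l := &hybridLock{queue: make(chan struct{}, 1)}
        l.queue <- struct{}{} // lock starts free
        return l
    }

    func (l *hybridLock) Lock() {
        for i := 0; i < 4; i++ { // brief active spinning first
            select {
            case <-l.queue:
                return // acquired without blocking
            default:
                runtime.Gosched()
            }
        }
        <-l.queue // slow path: queue up and block until released
    }

    func (l *hybridLock) Unlock() {
        l.queue <- struct{}{}
    }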
>> 
>> There’s a reason RW locks exist - and I think this use case is pretty 
>> common - but we can agree to disagree :)
>> 
>> 
>> Sure: read-write locks are fine and appropriate when the program holds
>> the read lock for a reasonably lengthy time.  As I said, my analysis
>> only applies when code holds the read lock briefly, as is often the
>> case for a cache.
>> 
>> Ian
>> 
>> 
>> On Jan 30, 2023, at 6:23 PM, Ian Lance Taylor <i...@golang.org> wrote:
>> 
>> On Mon, Jan 30, 2023 at 1:00 PM Robert Engels <reng...@ix.netcom.com> wrote:
>> 
>> 
>> Pure readers do not need a full mutex on the fast path - an atomic CAS 
>> suffices, and it is faster than a mutex because it allows readers to 
>> proceed concurrently. On the slow path - maintaining fairness with a 
>> waiting or active writer - its performance degenerates to that of a 
>> simple mutex.
>> 
>> The issue with a mutex is that you need to acquire it whether you are 
>> reading or writing - and that is slow… (at least compared to an atomic CAS).
>> 
>> 
>> The fast path of a mutex is also an atomic CAS.
>> 
>> Ian
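
[One way to see this is to benchmark the uncontended fast paths - each is a 
single atomic read-modify-write on a shared word. A sketch; run with 
"go test -bench=."; exact numbers vary by machine and Go version.]

    package lockbench

    import (
        "sync"
        "testing"
    )

    // Uncontended lock/unlock of a plain Mutex.
    func BenchmarkMutexUncontended(b *testing.B) {
        var mu sync.Mutex
        for i := 0; i < b.N; i++ {
            mu.Lock()
            mu.Unlock()
        }
    }

    // Uncontended read-lock/unlock of an RWMutex; same ballpark cost.
    func BenchmarkRWMutexRLockUncontended(b *testing.B) {
        var mu sync.RWMutex
        for i := 0; i < b.N; i++ {
            mu.RLock()
            mu.RUnlock()
        }
    }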
>> 
>> On Jan 30, 2023, at 2:24 PM, Ian Lance Taylor <i...@golang.org> wrote:
>> 
>> 
>> On Mon, Jan 30, 2023 at 11:26 AM Robert Engels <reng...@ix.netcom.com> 
>> wrote:
>> 
>> 
>> I don’t think that is true. An RW lock is always better when reader 
>> activity far exceeds writer activity - simply because in a good 
>> implementation the read lock can be acquired without blocking/scheduling 
>> activity.
>> 
>> 
>> The best read lock implementation is not going to be better than the
>> best plain mutex implementation.  And with current technology any
>> implementation is going to require atomic memory operations which
>> require coordinating cache lines between CPUs.  If your reader
>> activity is so large that you get significant contention on a plain
>> mutex (recalling that we are assuming the case where the operations
>> under the read lock are quick) then you are also going to get
>> significant contention on a read lock.  The effect is that the read
>> lock isn't going to be faster anyhow in practice, and your program
>> should probably be using a different approach.
>> 
>> Ian
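
[One example of such a "different approach" for a read-mostly cache - my 
illustration, not something prescribed in this thread - is copy-on-write 
behind an atomic pointer, so reads are a single atomic load and never write 
a shared cache line. Needs Go 1.19+ for atomic.Pointer and Go 1.21+ for 
maps.Clone.]

    package cowcache

    import (
        "maps"
        "sync"
        "sync/atomic"
    )

    type cache struct {
        mu  sync.Mutex                     // serializes writers only
        cur atomic.Pointer[map[string]int] // current immutable snapshot
    }

    func newCache() *cache {
        c := &cache{}
        m := map[string]int{}
        c.cur.Store(&m)
        return c
    }

    // get is one atomic load; concurrent readers never contend.
    func (c *cache) get(k string) (int, bool) {
        v, ok := (*c.cur.Load())[k]
        return v, ok
    }

    // set copies the map and publishes the new snapshot atomically.
    func (c *cache) set(k string, v int) {
        c.mu.Lock()
        defer c.mu.Unlock()
        next := maps.Clone(*c.cur.Load()) // readers keep the old snapshot
        next[k] = v
        c.cur.Store(&next)
    }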
>> 
>> On Jan 30, 2023, at 12:49 PM, Ian Lance Taylor <i...@golang.org> wrote:
>> 
>> 
>> On Sun, Jan 29, 2023 at 6:34 PM Diego Augusto Molina
>> <diegoaugustomol...@gmail.com> wrote:
>> 
>> 
>> From time to time I write a scraper or some other tool that authenticates 
>> to a service and then uses the auth result to do stuff concurrently. But 
>> when the auth expires, I need to synchronize all my goroutines, have a 
>> single one do the re-auth process, check the status, etc., and then 
>> arrange for all goroutines to go back to work using the new auth result.
>> 
>> To generalize the problem: multiple goroutines read a cached value that 
>> expires at some point. When it does, they should all block while a single 
>> goroutine performs the I/O needed to renew the cached value; the others 
>> then unblock and continue with the new value.
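
[For what it's worth, the "exactly one goroutine renews while the rest wait 
for its result" part of this is also what golang.org/x/sync/singleflight 
provides. A sketch - renewAuth is a hypothetical stand-in for the real 
re-auth I/O:]

    package authdemo

    import "golang.org/x/sync/singleflight"

    var g singleflight.Group

    // renewAuth stands in for the real re-auth I/O (hypothetical).
    func renewAuth() (string, error) { return "new-token", nil }

    // getToken collapses concurrent renewals into one call; every caller
    // blocks until the single in-flight renewal finishes, then all of
    // them receive its result.
    func getToken() (string, error) {
        v, err, _ := g.Do("auth", func() (interface{}, error) {
            return renewAuth()
        })
        if err != nil {
            return "", err
        }
        return v.(string), nil
    }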
>> 
>> I solved this in the past in a number of ways: having a single goroutine 
>> own the cache and hand out the value through a channel, or using sync.Cond 
>> (which, by the way, every time I decide to use it I have to carefully 
>> re-read its docs and do lots of tests, because I never get it right at 
>> first). But what I came to do lately is to implement an upgradable lock 
>> and have every goroutine do:
>> 
>> 
>> 
>> We have historically rejected this kind of adjustable lock.  There is
>> some previous discussion at https://go.dev/issue/4026,
>> https://go.dev/issue/23513, https://go.dev/issue/38891,
>> https://go.dev/issue/44049.
>> 
>> For a cache where checking that the cached value is valid (not stale)
>> and fetching the cached value are both quick, you will in general be
>> better off using a plain Mutex rather than an RWMutex.  RWMutex is more
>> complicated and therefore slower.  An RWMutex is only useful when the
>> read case is both contended and relatively slow.  If the read case is
>> fast then the simpler Mutex will tend to be faster.  And then you don't
>> have to worry about upgrading the lock.
>> 
>> Ian
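
[Concretely, the plain-Mutex version of the cache described above could look 
like the sketch below - my reading of this suggestion, with renew as a 
hypothetical stand-in for the re-auth I/O:]

    package authcache

    import (
        "sync"
        "time"
    )

    type authCache struct {
        mu    sync.Mutex
        token string
        exp   time.Time
    }

    // renew stands in for the real re-auth I/O (hypothetical).
    func renew() (string, time.Time, error) {
        return "new-token", time.Now().Add(time.Hour), nil
    }

    // Get returns the cached token, renewing it first if it has expired.
    // Because there is one plain Mutex, exactly one goroutine performs the
    // renewal while the others block, and no lock upgrade is needed.
    func (c *authCache) Get() (string, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if time.Now().After(c.exp) {
            tok, exp, err := renew()
            if err != nil {
                return "", err
            }
            c.token, c.exp = tok, exp
        }
        return c.token, nil
    }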
>> 