Based on the pprof graph, I am more inclined to believe that the massive 
performance drop happens because of the `semacquire1` implementation.
When the number of goroutines is small, most calls to `semacquire1` succeed 
on the `cansemacquire` fast path, or on a middle path where the lock must be 
taken but `cansemacquire` then succeeds on a retry.
The drop happens when goroutines fail on both the fast path and the middle 
path and therefore must be parked, which incurs runtime scheduling costs.
How would you refute this argument?
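
For reference, here is the control flow I mean, roughly paraphrased from 
runtime/sema.go (a sketch, not the verbatim source; sudog bookkeeping and 
profiling details are elided):

// Sketch of runtime.semacquire1, paraphrased from runtime/sema.go
// (Go 1.12-era). Not verbatim; details elided.
func semacquire1(addr *uint32, lifo bool) {
	// Fast path: lock-free CAS decrement of the semaphore word.
	if cansemacquire(addr) {
		return
	}

	s := acquireSudog()
	root := semroot(addr)
	for {
		lock(&root.lock)
		atomic.Xadd(&root.nwait, 1)
		// Middle path: retry under the semaphore root's lock so a
		// concurrent semrelease cannot be missed.
		if cansemacquire(addr) {
			atomic.Xadd(&root.nwait, -1)
			unlock(&root.lock)
			break
		}
		// Slow path: queue this goroutine and park it. Waking it up
		// again goes through the scheduler, which is the cost I
		// suspect dominates once contention is high enough.
		root.queue(addr, s, lifo)
		goparkunlock(&root.lock, waitReasonSemacquire, traceEvGoBlockSync, 4)
		if s.ticket != 0 || cansemacquire(addr) {
			break
		}
	}
	releaseSudog(s)
}

// cansemacquire is a lock-free CAS loop on the semaphore word.
func cansemacquire(addr *uint32) bool {
	for {
		v := atomic.Load(addr)
		if v == 0 {
			return false
		}
		if atomic.Cas(addr, v, v-1) {
			return true
		}
	}
}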

On Monday, August 26, 2019 at 10:56:21 PM UTC+2, changkun wrote:
>
> I also tested many times with `go tool pprof`, and it reproducibly reports 
> the following difference:
>
> Here is the graph for 2400 goroutines:
>
> [image: 2400.png]
>
> Here is the graph for 4800 goroutines:
>
> [image: 4800.png]
>
> The difference here is: the 4800-goroutine run heavily calls `gopark`, while 
> the 2400-goroutine run heavily calls `runtime.procyield`. Have you noticed 
> this difference? Is it expected?
> You can find the SVG graphs in the attachment.
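>
> For context on why both functions show up at all, here is my reading of the 
> sync.Mutex slow path (a rough sketch paraphrased from sync/mutex.go, not the 
> verbatim source):
>
> // Sketch of the sync.Mutex Lock slow path (paraphrased, not verbatim).
> for {
>     // With few waiters on a multicore machine, Lock spins first:
>     // runtime_doSpin executes procyield(30), i.e. PAUSE instructions
>     // on amd64, hoping the holder releases the lock quickly.
>     if runtime_canSpin(iter) {
>         runtime_doSpin()
>         iter++
>         continue
>     }
>     // ...CAS attempts on the mutex state elided...
>     // Out of spin budget: block on the semaphore, which parks the
>     // goroutine via gopark until Unlock hands the lock over.
>     runtime_SemacquireMutex(&m.sema, queueLifo, 1)
>     // ...starvation handling elided...
> }
>
> So procyield dominating at 2400 goroutines and gopark dominating at 4800 
> would be consistent with spinning succeeding under low contention and 
> parking taking over under high contention.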
>
> On Monday, August 26, 2019 at 10:41:42 PM UTC+2, Robert Engels wrote:
>>
>> You might want to try 'perf mem' to report the access delays - it may be 
>> contention on the memory controller as well.
>>
>> Thinking about it again, I wouldn't expect a large jump if things were 
>> fair - for example, if at 100 they all fit in the cache, then at 110 some 
>> are still in the cache but some operations are slower, and so on. So I 
>> would expect a jump, but not one as large as you see.
>>
>> Still, most Linux context switches are 3-4 us, and you are talking about 
>> 300 ns, so you're still doing pretty well. And at approx 40 ns, there are 
>> so many aspects that come into play that I'm not sure you or anyone has 
>> the time to figure it out - maybe the HFT guys are interested...
>>
>> Like I said, on my OSX machine the times are very similar with both 
>> approaches, so it is OS dependent, and probably OS and hardware 
>> configuration dependent - so I think I've probably reached the end of being 
>> able to help.
>>
>> And finally, it probably doesn't matter at all - if the goroutine is 
>> doing anything of value, 300 ns is probably an insignificant cost.
>>
>>
>> -----Original Message----- 
>> From: changkun 
>> Sent: Aug 26, 2019 3:15 PM 
>> To: golang-nuts 
>> Subject: Re: [go-nuts] sync.Mutex encounters large performance drop when 
>> goroutine contention exceeds 3400 
>>
>> Did I do anything wrong? The cache hit ratio decreases linearly; is that 
>> an expected result? I thought the cache hit ratio would show a significant 
>> drop:
>>
>> [image: chart.png]
>> Raw data:
>>
>> #goroutines  cache-references  cache-misses  hit/(hit+miss)
>> 2400         697103572         17641686      0.9753175194
>> 3200         798160789         54169784      0.9364451004
>> 3360         1387972473        148415678     0.9033996208
>> 3600         1824541062        272166355     0.8701934506
>> 4000         2053779401        393586501     0.8391795437
>> 4800         1885622275        461872899     0.8032486268
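>>
>> (The last column is cache-references / (cache-references + cache-misses); 
>> e.g. for 2400 goroutines: 697103572 / (697103572 + 17641686) ≈ 0.9753.)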
>> On Monday, August 26, 2019 at 9:26:05 PM UTC+2, Robert Engels wrote:
>>>
>>> You can run the process under 'perf' and monitor the CPU cache hit/miss 
>>> ratio.
>>>
>>> -----Original Message----- 
>>> From: changkun 
>>> Sent: Aug 26, 2019 2:23 PM 
>>> To: golang-nuts 
>>> Subject: Re: [go-nuts] sync.Mutex encounters large performance drop when 
>>> goroutine contention exceeds 3400 
>>>
>>> Your cache theory is plausible, but this was already stated in the 
>>> initial post: "before or after the massive increase, performance drops 
>>> linearly".
>>> The hypothesis is reasonable, but how can you prove it? By monitoring 
>>> cache usage on the host machine? 
>>> Merely matching a concept is still not persuasive.
>>>
>>> On Monday, August 26, 2019 at 8:08:27 PM UTC+2, Robert Engels wrote:
>>>>
>>>> Which is what I would expect - once the number of goroutines exhausts 
>>>> the cache, it takes the next cache level (or main memory, which has no 
>>>> next level) to see a massive increase in time. 4800 is 30% slower than 
>>>> 3600 - so it is increasing linearly with the number of goroutines.
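>>>>
>>>> (Checking the arithmetic against the timings quoted below: 317 ns / 
>>>> 240 ns ≈ 1.32, i.e. about 30% slower for 33% more goroutines.)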
>>>>
>>>>
>>>> -----Original Message----- 
>>>> From: changkun 
>>>> Sent: Aug 26, 2019 11:49 AM 
>>>> To: golang-nuts 
>>>> Subject: Re: [go-nuts] sync.Mutex encounters large performance drop when 
>>>> goroutine contention exceeds 3400 
>>>>
>>>> According to your formula, let's sample three points:
>>>>
>>>> 2400 goroutines: 2.508 s / (50000000 × 2400) = 2.09 × 10^-11 s
>>>> 3600 goroutines: 12.219 s / (50000000 × 3600) = 6.79 × 10^-11 s
>>>> 4800 goroutines: 16.020 s / (50000000 × 4800) = 6.68 × 10^-11 s
>>>>
>>>> Observe that the 3600 and 4800 values are roughly equal to each other, 
>>>> but both are about three times slower than the 2400 value.
>>>>
>>>> goos: linux
>>>> goarch: amd64
>>>> BenchmarkMutexWrite/goroutines-2400-8           50000000          46.5 ns/op
>>>> PASS
>>>> ok      _/home/changkun/dev/tests       2.508s
>>>>
>>>> goos: linux
>>>> goarch: amd64
>>>> BenchmarkMutexWrite/goroutines-3600-8           50000000           240 ns/op
>>>> PASS
>>>> ok      _/home/changkun/dev/tests       12.219s
>>>>
>>>> goos: linux
>>>> goarch: amd64
>>>> BenchmarkMutexWrite/goroutines-4800-8           50000000           317 ns/op
>>>> PASS
>>>> ok      _/home/changkun/dev/tests       16.020s
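>>>>
>>>> For reproducibility, the benchmark shape is roughly the following sketch 
>>>> (a hypothetical reconstruction, not the exact harness; the per-case 
>>>> goroutine count is set via b.SetParallelism, assuming GOMAXPROCS=8):
>>>>
>>>> package tests
>>>>
>>>> import (
>>>>     "fmt"
>>>>     "runtime"
>>>>     "sync"
>>>>     "testing"
>>>> )
>>>>
>>>> func BenchmarkMutexWrite(b *testing.B) {
>>>>     for _, n := range []int{2400, 3600, 4800} {
>>>>         b.Run(fmt.Sprintf("goroutines-%d", n), func(b *testing.B) {
>>>>             var mu sync.Mutex
>>>>             var x int64
>>>>             // RunParallel starts SetParallelism * GOMAXPROCS goroutines.
>>>>             b.SetParallelism(n / runtime.GOMAXPROCS(0))
>>>>             b.RunParallel(func(pb *testing.PB) {
>>>>                 for pb.Next() {
>>>>                     mu.Lock()
>>>>                     x++
>>>>                     mu.Unlock()
>>>>                 }
>>>>             })
>>>>         })
>>>>     }
>>>> }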
>>>>
>>>>
