Yes, I do str += str0 in many places in my program; I have now changed them to use a bytes.Buffer, and that solved it.
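For reference, the shape of the change — just a minimal sketch; buildMessage and parts are made-up names, my real code builds much larger strings:

package main

import (
	"bytes"
	"fmt"
)

// Before: each += allocates a brand-new string and copies both halves,
// so building a result of n parts produces O(n^2) bytes of garbage.
func buildMessageSlow(parts []string) string {
	str := ""
	for _, str0 := range parts {
		str += str0
	}
	return str
}

// After: a bytes.Buffer grows one backing array geometrically, so the
// same build does O(n) copying and far fewer allocations.
func buildMessageFast(parts []string) string {
	var buf bytes.Buffer
	for _, str0 := range parts {
		buf.WriteString(str0)
	}
	return buf.String()
}

func main() {
	parts := []string{"hello", ", ", "world"}
	fmt.Println(buildMessageSlow(parts) == buildMessageFast(parts)) // true
}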
And I really need a GC monitor that notifies me when my code has gotten worse; GC frequency and pause time alone don't tell me that right now. Does the runtime's GCCPUFraction (the same value as the CPU percentage in the GODEBUG=gctrace=1 output) say something about this? I am confused by the value. (Sketches of the monitor I have in mind, of the pkg/profile change Dave suggested, and of capturing an execution trace are at the bottom of this message, below the quoted thread.)

On Tuesday, March 28, 2017 at 3:07:53 PM UTC+8, Dave Cheney wrote:
>
> It looks like your program is generating in excess of 4 GB of garbage a second. I suggest generating a memory profile with pkg/profile and looking at your allocation hotspots.
>
> gc 708 @641.202s 11%: 9.2+1025+0.50+0.31+32 ms clock, 55+1025+0+9983/0.013/9124+194 ms cpu, 8189->8363->4389 MB, 8399 MB goal, 48 P
> gc 709 @642.498s 11%: 9.4+1074+0.31+0.53+29 ms clock, 56+1074+0+7404/0.004/9114+178 ms cpu, 8227->8380->4357 MB, 8438 MB goal, 48 P
> gc 710 @643.854s 11%: 5.7+1033+0.40+0.38+30 ms clock, 34+1033+0+9576/0.005/9297+183 ms cpu, 8200->8351->4324 MB, 8410 MB goal, 48 P
>
> At the rate your program is allocating I'm sure the garbage collector will be in "assist" mode most of the time. Assist mode causes goroutines to "pay" for their allocation by doing some mark and sweep work each time they allocate.
>
> The execution trace can show this more clearly.
>
> On Tuesday, 28 March 2017 17:59:19 UTC+11, 刘桂祥 wrote:
>>
>> My production program data:
>> ...
>> gc 149 @91.330s 6%: 6.8+557+0.50+1.0+10 ms clock, 20+557+0+1841/0/1304+31 ms cpu, 3694->3793->2016 MB, 3789 MB goal, 48 P
>> gc 150 @92.012s 6%: 2.9+505+0.32+0.58+9.5 ms clock, 11+505+0+1756/0.014/1574+38 ms cpu, 3739->3805->1987 MB, 3835 MB goal, 48 P
>> gc 151 @92.632s 6%: 5.4+420+0.38+0.25+9.9 ms clock, 21+420+0+1690/0.17/1241+39 ms cpu, 3748->3810->1993 MB, 3844 MB goal, 48 P
>> gc 152 @93.187s 6%: 7.3+471+0.14+0.21+10 ms clock, 29+471+0+1792/0.001/978+42 ms cpu, 3762->3861->1989 MB, 3858 MB goal, 48 P
>> gc 153 @93.806s 6%: 2.7+481+0.20+0.51+10 ms clock, 10+481+0+1807/0.094/1261+40 ms cpu, 3685->3727->1965 MB, 3780 MB goal, 48 P
>> gc 154 @94.408s 6%: 3.3+477+0.46+0.54+10 ms clock, 13+477+0+1697/0.007/1497+41 ms cpu, 3751->3848->2043 MB, 3847 MB goal, 48 P
>> gc 155 @95.011s 6%: 3.2+534+0.83+0.46+9.7 ms clock, 12+534+0+1780/0.008/1457+38 ms cpu, 3794->3912->2068 MB, 3891 MB goal, 48 P
>> gc 156 @95.660s 6%: 5.4+504+0.96+0.57+9.5 ms clock, 21+504+0+1653/0/1604+38 ms cpu, 3802->3900->2075 MB, 3899 MB goal, 48 P
>> gc 157 @96.290s 6%: 2.7+482+0.68+0.38+10 ms clock, 10+482+0+1732/0.011/1160+41 ms cpu, 3852->3909->2043 MB, 3950 MB goal, 48 P
>> ...
>> gc 375 @272.575s 8%: 4.1+628+0.75+0.75+15 ms clock, 24+628+0+5656/0.006/2881+93 ms cpu, 5847->5933->3115 MB, 5997 MB goal, 48 P
>> gc 376 @273.400s 8%: 3.7+739+0.36+0.96+15 ms clock, 22+739+0+4737/0.50/3957+94 ms cpu, 5907->6032->3161 MB, 6058 MB goal, 48 P
>> gc 377 @274.333s 8%: 3.2+700+0.22+0.41+14 ms clock, 19+700+0+6550/0.011/3101+88 ms cpu, 5920->6056->3170 MB, 6072 MB goal, 48 P
>> gc 378 @275.219s 8%: 3.1+681+0.33+0.61+16 ms clock, 18+681+0+5050/0.002/3335+99 ms cpu, 5922->6036->3155 MB, 6073 MB goal, 48 P
>> gc 379 @276.085s 8%: 3.2+764+0.38+0.83+15 ms clock, 19+764+0+4807/0.001/3859+91 ms cpu, 5932->6062->3170 MB, 6084 MB goal, 48 P
>> gc 380 @277.032s 8%: 3.0+731+0.44+0.61+14 ms clock, 18+731+0+6406/0.002/3522+89 ms cpu, 5931->6067->3175 MB, 6083 MB goal, 48 P
>> gc 381 @277.947s 8%: 3.1+739+0.30+0.45+15 ms clock, 19+739+0+4654/0.001/3375+90 ms cpu, 5930->6063->3160 MB, 6082 MB goal, 48 P
>> gc 382 @278.871s 8%: 3.3+645+0.42+0.57+15 ms clock, 19+645+0+4512/0.020/3602+91 ms cpu, 5905->5990->3121 MB, 6056 MB goal, 48 P
>> ...
>> gc 475 @368.078s 9%: 3.3+790+0.31+0.56+19 ms clock, 20+790+0+4356/0.001/5125+117 ms cpu, 6137->6255->3274 MB, 6294 MB goal, 48 P
>> gc 476 @369.047s 9%: 3.3+695+2.3+0.37+16 ms clock, 20+695+0+7353/0.37/4180+97 ms cpu, 6157->6307->3335 MB, 6314 MB goal, 48 P
>> gc 477 @369.922s 9%: 3.4+754+0.17+0.38+17 ms clock, 20+754+0+4769/0.64/5090+103 ms cpu, 6215->6385->3378 MB, 6374 MB goal, 48 P
>> gc 478 @370.857s 9%: 3.6+727+0.59+0.43+16 ms clock, 22+727+0+4726/0.009/5578+99 ms cpu, 6258->6405->3405 MB, 6417 MB goal, 48 P
>> gc 479 @371.766s 9%: 3.3+778+0.14+0.71+17 ms clock, 20+778+0+4581/0.031/5694+107 ms cpu, 6354->6492->3414 MB, 6517 MB goal, 48 P
>> gc 480 @372.729s 9%: 3.4+796+0.27+0.63+16 ms clock, 20+796+0+4576/0.070/5559+101 ms cpu, 6394->6529->3417 MB, 6557 MB goal, 48 P
>> gc 481 @373.712s 9%: 3.3+771+0.12+0.41+16 ms clock, 20+771+0+4918/0.001/5195+101 ms cpu, 6404->6521->3399 MB, 6568 MB goal, 48 P
>> gc 482 @374.667s 9%: 3.3+763+0.37+0.29+16 ms clock, 20+763+0+4805/0.056/5169+100 ms cpu, 6404->6547->3422 MB, 6568 MB goal, 48 P
>> ...
>> gc 508 @401.881s 10%: 3.5+715+0.15+0.47+19 ms clock, 21+715+0+4336/0.18/5210+114 ms cpu, 6240->6385->3394 MB, 6400 MB goal, 48 P
>> gc 509 @402.781s 10%: 3.8+842+0.17+0.31+18 ms clock, 22+842+0+5586/0.001/5674+110 ms cpu, 6337->6478->3418 MB, 6500 MB goal, 48 P
>> gc 510 @403.798s 10%: 3.8+830+0.36+0.30+17 ms clock, 23+830+0+4356/0/5296+105 ms cpu, 6394->6502->3391 MB, 6558 MB goal, 48 P
>> gc 511 @404.808s 10%: 3.4+830+0.70+0.42+18 ms clock, 20+830+0+4347/0.065/5628+110 ms cpu, 6407->6509->3382 MB, 6571 MB goal, 48 P
>> gc 512 @405.819s 10%: 3.6+832+0.65+0.46+19 ms clock, 22+832+0+4450/0.17/5178+119 ms cpu, 6400->6545->3428 MB, 6563 MB goal, 48 P
>> gc 513 @406.827s 10%: 3.4+784+0.17+0.27+17 ms clock, 20+784+0+4530/0.045/5297+104 ms cpu, 6404->6542->3431 MB, 6568 MB goal, 48 P
>> gc 514 @407.788s 10%: 4.3+735+0.63+1.2+19 ms clock, 26+735+0+4576/0.009/5267+115 ms cpu, 6424->6536->3383 MB, 6589 MB goal, 48 P
>> gc 515 @408.717s 10%: 8.0+783+0.30+0.25+18 ms clock, 48+783+0+5859/0.012/6100+111 ms cpu, 6382->6536->3428 MB, 6544 MB goal, 48 P
>> gc 516 @409.685s 10%: 4.1+712+0.76+0.70+18 ms clock, 25+712+0+5675/0.018/5834+111 ms cpu, 6385->6507->3421 MB, 6549 MB goal, 48 P
>> gc 517 @410.567s 10%: 3.9+806+0.40+0.74+19 ms clock, 23+806+0+5823/0.003/5744+115 ms cpu, 6437->6503->3399 MB, 6602 MB goal, 48 P
>> ...
>> gc 638 @547.759s 10%: 9.0+877+0.16+0.36+24 ms clock, 45+877+0+7723/0.25/7339+121 ms cpu, 7527->7693->4028 MB, 7719 MB goal, 48 P
>> gc 639 @548.866s 10%: 7.9+935+0.75+0.61+25 ms clock, 39+935+0+6085/0.002/6937+129 ms cpu, 7532->7629->3958 MB, 7724 MB goal, 48 P
>> gc 640 @550.026s 10%: 5.3+972+0.27+0.34+24 ms clock, 26+972+0+6123/0.49/7629+123 ms cpu, 7531->7692->4021 MB, 7722 MB goal, 48 P
>> gc 641 @551.214s 10%: 7.7+889+0.51+0.35+27 ms clock, 38+889+0+7709/0.035/6185+137 ms cpu, 7525->7639->3984 MB, 7718 MB goal, 48 P
>> gc 642 @552.320s 10%: 10+887+0.35+0.29+24 ms clock, 53+887+0+5841/0.006/7909+123 ms cpu, 7547->7670->3996 MB, 7740 MB goal, 48 P
>> gc 643 @553.437s 10%: 8.0+963+0.55+0.49+24 ms clock, 40+963+0+6521/0.16/6691+120 ms cpu, 7554->7700->4034 MB, 7748 MB goal, 48 P
>> ...
>> gc 700 @630.556s 11%: 6.5+900+0.43+0.75+30 ms clock, 32+900+0+6964/0.069/9088+154 ms cpu, 8151->8311->4346 MB, 8360 MB goal, 48 P
>> gc 701 @631.750s 11%: 9.2+1075+0.40+0.21+32 ms clock, 46+1075+0+6838/0.004/8123+161 ms cpu, 8162->8348->4357 MB, 8371 MB goal, 48 P
>> gc 702 @633.135s 11%: 6.5+1093+0.24+0.37+31 ms clock, 32+1093+0+6754/0.018/8033+157 ms cpu, 8134->8263->4300 MB, 8342 MB goal, 48 P
>> gc 703 @634.514s 11%: 5.8+969+0.86+2.0+31 ms clock, 29+969+0+11542/0.068/6350+157 ms cpu, 8137->8256->4275 MB, 8345 MB goal, 48 P
>> gc 704 @635.759s 11%: 5.6+1113+0.42+0.41+31 ms clock, 34+1113+0+8972/0.015/8845+188 ms cpu, 8105->8247->4299 MB, 8312 MB goal, 48 P
>> gc 705 @637.142s 11%: 10+1197+0.58+0.32+31 ms clock, 61+1197+0+7186/0.019/9007+191 ms cpu, 8106->8242->4303 MB, 8313 MB goal, 48 P
>> gc 706 @638.615s 11%: 6.3+939+0.60+0.65+29 ms clock, 37+939+0+6967/0.024/8398+177 ms cpu, 8127->8233->4298 MB, 8335 MB goal, 48 P
>> gc 707 @639.814s 11%: 9.9+1101+0.43+0.43+30 ms clock, 59+1101+0+7027/0.001/8864+181 ms cpu, 8177->8359->4382 MB, 8386 MB goal, 48 P
>> gc 708 @641.202s 11%: 9.2+1025+0.50+0.31+32 ms clock, 55+1025+0+9983/0.013/9124+194 ms cpu, 8189->8363->4389 MB, 8399 MB goal, 48 P
>> gc 709 @642.498s 11%: 9.4+1074+0.31+0.53+29 ms clock, 56+1074+0+7404/0.004/9114+178 ms cpu, 8227->8380->4357 MB, 8438 MB goal, 48 P
>> gc 710 @643.854s 11%: 5.7+1033+0.40+0.38+30 ms clock, 34+1033+0+9576/0.005/9297+183 ms cpu, 8200->8351->4324 MB, 8410 MB goal, 48 P
>> gc 711 @645.156s 11%: 8.1+746+0.14+0.43+31 ms clock, 49+746+0+9968/0.009/8210+191 ms cpu, 8142->8241->4296 MB, 8349 MB goal, 48 P
>>
>> CPU profile:
>> flat flat% sum% cum cum%
>> 1.71mins 20.91% 20.91% 1.71mins 20.91% runtime.futex
>> 0.77mins 9.38% 30.29% 0.77mins 9.38% runtime.memmove
>> 0.68mins 8.30% 38.60% 1.78mins 21.70% runtime.scanobject
>> 0.68mins 8.29% 46.89% 0.68mins 8.29% runtime.cas64
>> 0.41mins 4.98% 51.87% 1.26mins 15.40% runtime.gcDrainUntilPreempt
>> 0.36mins 4.42% 56.29% 0.36mins 4.43% runtime.atomicload64
>> 0.30mins 3.62% 59.91% 0.99mins 12.10% runtime.greyobject
>> 0.30mins 3.62% 63.53% 0.30mins 3.62% runtime.heapBitsForObject
>> 0.22mins 2.68% 66.21% 0.22mins 2.68% runtime.atomicor8
>> 0.17mins 2.06% 68.27% 0.17mins 2.06% runtime.memclr
>> 0.16mins 1.89% 70.16% 0.16mins 1.89% runtime.xchg
>> 0.15mins 1.87% 72.03% 0.15mins 1.87% runtime.xadd
>> 0.15mins 1.82% 73.85% 0.15mins 1.82% runtime.procyield
>> 0.14mins 1.70% 75.55% 1.27mins 15.47% runtime.lock
>> 0.14mins 1.65% 77.20% 0.15mins 1.83% runtime.mSpan_Sweep.func1
>> 0.12mins 1.41% 78.61% 0.12mins 1.41% runtime.cas
>> 0.11mins 1.38% 80.00% 0.40mins 4.93% runtime.scang
>> 0.10mins 1.28% 81.27% 0.12mins 1.50% runtime.atomicload
>> 0.10mins 1.27% 82.54% 1.16mins 14.18% runtime.findrunnable
>> 0.09mins 1.06% 83.61% 0.09mins 1.06% runtime.xadd64
>> 0.08mins 1.00% 84.61% 0.08mins 1.00% runtime.osyield
>> 0.07mins 0.83% 85.44% 0.13mins 1.56% runtime.readgstatus
>> 0.06mins 0.76% 86.20% 0.21mins 2.55% runtime.gcmarkwb_m
>> 0.06mins 0.71% 86.91% 0.21mins 2.55% runtime.heapBitsSweepSpan
>> 0.06mins 0.71% 87.62% 3.71mins 45.30% runtime.mallocgc
>> 0.04mins 0.53% 88.15% 0.34mins 4.17% runtime.mSpan_Sweep
>> 0.04mins 0.44% 88.58% 0.14mins 1.77% runtime.runqgrab
>> 0.03mins 0.39% 88.97% 0.09mins 1.11% runtime.siftdownTimer
>> 0.03mins 0.35% 89.32% 3.75mins 45.77% runtime.systemstack
>> 0.03mins 0.32% 89.65% 0.43mins 5.25% runtime.sweepone
>> 0.02mins 0.29% 89.94% 0.21mins 2.61% runtime.writebarrierptr_nostore1
>> 0.02mins 0.27% 90.20% 0.15mins 1.79% runtime.shade
>> 0.02mins 0.26% 90.47% 0.09mins 1.05% runtime.scanblock
>> 0.02mins 0.26% 90.72% 0.17mins 2.07% runtime.gentraceback
>> 0.02mins 0.25% 90.98% 0.24mins 2.92% runtime.heapBits.setMarked
>> 0.02mins 0.24% 91.21% 0.05mins 0.62% runtime.step
>> 0.02mins 0.22% 91.43% 0.50mins 6.11% runtime.(*gcWork).tryGet
>> 0.01mins 0.17% 91.60% 0.22mins 2.72% runtime.writebarrierptr_nostore1.func1
>> 0.01mins 0.14% 91.74% 2.17mins 26.42% runtime.gcAssistAlloc
>> 0.01mins 0.13% 91.87% 0.22mins 2.73% runtime.writebarrierptr
>>
>> In the end the server hangs, completely unresponsive; pstack shows:
>>
>> Thread 9 (Thread 0x7f8df3fff700 (LWP 38683)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x0000000000435128 in runtime.gcstopm ()
>> #5 0x0000000000435e7c in runtime.schedule ()
>> #6 0x000000000043641a in runtime.goschedImpl ()
>> #7 0x00000000004364b2 in runtime.gopreempt_m ()
>> #8 0x0000000000447522 in runtime.newstack ()
>> #9 0x000000000045fc8f in runtime.morestack ()
>> #10 0x000000c820031500 in ?? ()
>> #11 0x0000000000000000 in ?? ()
>> Thread 8 (Thread 0x7f8dedbfe700 (LWP 38684)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x0000000000435128 in runtime.gcstopm ()
>> #5 0x0000000000435e7c in runtime.schedule ()
>> #6 0x000000000043641a in runtime.goschedImpl ()
>> #7 0x00000000004364b2 in runtime.gopreempt_m ()
>> #8 0x0000000000447522 in runtime.newstack ()
>> #9 0x000000000045fc8f in runtime.morestack ()
>> #10 0x000000c82002aa00 in ?? ()
>> #11 0x0000000000000000 in ?? ()
>> Thread 7 (Thread 0x7f8de77fd700 (LWP 43538)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x00000000004375bc in runtime.exitsyscall0 ()
>> #5 0x000000000045fb3b in runtime.mcall ()
>> #6 0x000000c820058000 in ?? ()
>> #7 0x0000000000000000 in ?? ()
>> Thread 6 (Thread 0x7f8de13fc700 (LWP 43539)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x00000000004375bc in runtime.exitsyscall0 ()
>> #5 0x000000000045fb3b in runtime.mcall ()
>> #6 0x000000c82004c000 in ?? ()
>> #7 0x0000000000000000 in ?? ()
>> Thread 5 (Thread 0x7f8ddaffb700 (LWP 43540)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x00000000004375bc in runtime.exitsyscall0 ()
>> #5 0x000000000045fb3b in runtime.mcall ()
>> #6 0x000000c82004c000 in ?? ()
>> #7 0x0000000000000000 in ?? ()
>> Thread 4 (Thread 0x7f8dd4bfa700 (LWP 43541)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x00000000004375bc in runtime.exitsyscall0 ()
>> #5 0x000000000045fb3b in runtime.mcall ()
>> #6 0x000000c820058000 in ?? ()
>> #7 0x0000000000000000 in ?? ()
>> Thread 3 (Thread 0x7f8dce7f9700 (LWP 43542)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x0000000000435128 in runtime.gcstopm ()
>> #5 0x0000000000435e7c in runtime.schedule ()
>> #6 0x000000000043641a in runtime.goschedImpl ()
>> #7 0x00000000004364b2 in runtime.gopreempt_m ()
>> #8 0x0000000000447522 in runtime.newstack ()
>> #9 0x000000000045fc8f in runtime.morestack ()
>> #10 0x000000c820036a00 in ?? ()
>> #11 0x0000000000000000 in ?? ()
>> Thread 2 (Thread 0x7f8dc83f8700 (LWP 43543)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x00000000004375bc in runtime.exitsyscall0 ()
>> #5 0x000000000045fb3b in runtime.mcall ()
>> #6 0x000000c82004c000 in ?? ()
>> #7 0x0000000000000000 in ?? ()
>> Thread 1 (Thread 0x7f8ffa7c9700 (LWP 37229)):
>> #0 0x0000000000463563 in runtime.futex ()
>> #1 0x000000000042b913 in runtime.futexsleep ()
>> #2 0x0000000000411314 in runtime.notesleep ()
>> #3 0x00000000004346d2 in runtime.stopm ()
>> #4 0x0000000000435128 in runtime.gcstopm ()
>> #5 0x0000000000435e7c in runtime.schedule ()
>> #6 0x000000000043641a in runtime.goschedImpl ()
>> #7 0x00000000004364b2 in runtime.gopreempt_m ()
>> #8 0x0000000000447522 in runtime.newstack ()
>> #9 0x000000000045fc8f in runtime.morestack ()
>> #10 0x0000000000f5df20 in runtime.g0 ()
>> #11 0x00007ffcb932e830 in ?? ()
>> #12 0x0000000000f5dee0 in ?? ()
>> #13 0x00000000004335f2 in runtime.mstart ()
>> #14 0x000000000045fa28 in runtime.rt0_go ()
>> #15 0x0000000000000001 in ?? ()
>> #16 0x00007ffcb932e878 in ?? ()
>> #17 0x0000000000000001 in ?? ()
>> #18 0x00007ffcb932e878 in ?? ()
>> #19 0x0000000000000000 in ?? ()
>>
>> On Tuesday, March 28, 2017 at 2:17:14 PM UTC+8, Dave Cheney wrote:
>>>
>>> Or to answer your question another way: the more allocations your program performs, the higher the GC overhead to clean them up.
>>>
>>> Are you asking: is there a way to limit the % CPU the GC is allowed to use? If so, the answer is no, because:
>>>
>>> 1. If the GC doesn't keep up with the garbage your program produces, memory usage grows without bound.
>>> 2. The garbage collector is designed to minimise stop-the-world pause time, which costs more in CPU time than a traditional high-latency, stop-the-world mark-and-sweep collector.
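Here is roughly the monitor I have in mind — just a sketch; the 10-second interval and the 0.25 threshold are numbers I made up. One thing I noticed while writing it: GCCPUFraction is cumulative over the whole life of the process, not per-cycle, which may be exactly why the value confused me.

package main

import (
	"log"
	"runtime"
	"time"
)

// monitorGC periodically reads runtime.MemStats and logs GC health.
// GCCPUFraction is the fraction of this program's available CPU time
// used by the GC since process start -- the same quantity as the
// percentage column in GODEBUG=gctrace=1 output (0.11 prints as 11%).
func monitorGC(interval time.Duration, maxFraction float64) {
	var lastNumGC uint32
	for range time.Tick(interval) {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)

		// The most recent pause sits in a circular buffer indexed by NumGC.
		pause := time.Duration(m.PauseNs[(m.NumGC+255)%256])

		log.Printf("gc: %d cycles (+%d this interval), last pause %v, gc cpu %.1f%%",
			m.NumGC, m.NumGC-lastNumGC, pause, m.GCCPUFraction*100)
		if m.GCCPUFraction > maxFraction {
			log.Printf("WARNING: gc is using over %.0f%% of available CPU", maxFraction*100)
		}
		lastNumGC = m.NumGC
	}
}

func main() {
	go monitorGC(10*time.Second, 0.25) // made-up threshold
	select {}                          // stand-in for the real server
}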
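And this is how I understand Dave's pkg/profile suggestion (github.com/pkg/profile; run() is a stand-in for my real serving loop):

package main

import "github.com/pkg/profile"

func main() {
	// Writes mem.pprof to a temp dir on clean shutdown; afterwards
	// look at the allocation hotspots with:
	//   go tool pprof --alloc_space ./myserver /path/to/mem.pprof
	defer profile.Start(profile.MemProfile).Stop()

	run() // stand-in for the real server loop
}

func run() { /* ... */ }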
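If I read "the execution trace can show this more clearly" right, something like this minimal capture should do it (the 5-second window is arbitrary):

package main

import (
	"log"
	"os"
	"runtime/trace"
	"time"
)

func main() {
	f, err := os.Create("trace.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Record a short window while the server is under load; goroutines
	// doing GC assist work show up spending their time in the GC rather
	// than running application code.
	if err := trace.Start(f); err != nil {
		log.Fatal(err)
	}
	time.Sleep(5 * time.Second)
	trace.Stop()

	// Inspect with: go tool trace trace.out
	// (older Go releases also want the binary: go tool trace ./myserver trace.out)
}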