Hi,

I was trying to implement the following problem statement from the book 
*Programming Elixir* in Go.

> Let's write some code that creates n processes. The first will send a
> number to the second. It will increment that number and pass it to the
> third. This will continue until we get to the last process, which will pass
> the number back to the top level.


My implementation: https://play.golang.org/p/iI8GqQ08q6
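
The core idea is roughly the sketch below (a simplified reconstruction rather 
than the exact code at the link; the chain helper and the unbuffered channels 
are just my shorthand): each goroutine blocks on its input channel, adds one, 
and forwards the value to the next goroutine.

package main

import (
	"fmt"
	"os"
	"strconv"
)

// chain wires up n goroutines. Each one waits for a number on its input
// channel, increments it, and sends it on its output channel. The channel
// returned at the end carries the final result back to main.
func chain(n int, first <-chan int) <-chan int {
	in := first
	for i := 0; i < n; i++ {
		out := make(chan int)
		go func(in <-chan int, out chan<- int) {
			v := <-in    // wait for the previous goroutine
			out <- v + 1 // increment and pass it along
		}(in, out)
		in = out
	}
	return in
}

func main() {
	n, _ := strconv.Atoi(os.Args[1])
	first := make(chan int)
	last := chain(n, first)
	first <- 0 // kick off the chain with 0
	fmt.Println("result:", <-last)
}

With n = 1000000 this means a million parked goroutines and a million channels 
are alive at the same time before the first value is even sent.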

While attempting this I realised that, even once the processing is complete, 
the application keeps holding a large chunk of memory. I was trying to 
understand why this happens: why is Go still holding on to memory after the 
computation is done?

memleak git:(master) ✗ GODEBUG=gctrace=1 ./memleak 1000000


Alloc = 60
TotalAlloc = 60
Sys = 1700
NumGC = 0


before: go routines 2
result: 1000000
after: go routines 2
Ctrl-C to exit
gc 1 @1.300s 4%: 40+96+0.10 ms clock, 322+77/89/123+0.81 ms cpu, 260->260->170 MB, 261 MB goal, 8 P


Alloc = 184148
TotalAlloc = 260398
Sys = 1137084
NumGC = 1


GC forced
gc 2 @126.456s 0%: 0.012+101+0.049 ms clock, 0.10+0/96/118+0.39 ms cpu, 170->170->170 MB, 340 MB goal, 8 P
scvg0: inuse: 180, idle: 882, sys: 1063, released: 0, consumed: 1063 (MB)
GC forced
gc 3 @246.577s 0%: 0.011+107+0.051 ms clock, 0.094+0/102/122+0.41 ms cpu, 170->170->170 MB, 340 MB goal, 8 P
GC forced
scvg1: 882 MB released
scvg1: inuse: 180, idle: 882, sys: 1063, released: 882, consumed: 180 (MB)
gc 4 @366.708s 0%: 0.016+116+24 ms clock, 0.13+0/111/128+197 ms cpu, 170->170->170 MB, 340 MB goal, 8 P
GC forced
gc 5 @486.868s 0%: 0.012+114+0.063 ms clock, 0.10+0/114/138+0.50 ms cpu, 170->170->170 MB, 340 MB goal, 8 P
scvg2: 0 MB released
scvg2: inuse: 180, idle: 882, sys: 1063, released: 882, consumed: 180 (MB)
GC forced
gc 6 @607.003s 0%: 0.014+112+0.058 ms clock, 0.11+0/106/125+0.46 ms cpu, 170->170->170 MB, 340 MB goal, 8 P
GC forced
scvg3: inuse: 180, idle: 882, sys: 1063, released: 882, consumed: 180 (MB)
gc 7 @727.144s 0%: 0.013+39+0.052 ms clock, 0.10+0/52/228+0.41 ms cpu, 170->170->170 MB, 340 MB goal, 8 P
GC forced
gc 8 @847.203s 0%: 0.012+65+0.060 ms clock, 0.097+0/69/189+0.48 ms cpu, 170->170->170 MB, 340 MB goal, 8 P
scvg4: inuse: 180, idle: 882, sys: 1063, released: 882, consumed: 180 (MB)


memleak git:(master) ✗ top -o mem

PID    COMMAND      %CPU TIME     #TH   #WQ  #PORTS MEM    PURG   CMPRS  PGRP  PPID  STATE    BOOSTS             %CPU_ME %CPU_OTHRS UID  FAULTS     COW      MSGSENT     MSGRECV     SYSBSD      SYSMACH
0      kernel_task  5.1  13:52:38 136/8 0    2      1501M- 0B     0B     0     0     running   0[0]              0.00000 0.00000    0    1699585    75       1476697193+ 1255953068+ 0           0
63272  memleak      0.0  00:05.91 11    0    35     1163M  0B     0B     63272 33103 sleeping *0[1]              0.00000 0.00000    501  298605     80       42          20          277052      320101



Also, when I change the implementation to do something different, while still 
spawning 1 million goroutines and using buffered channels, the memory usage 
drops considerably.
Changed code: https://play.golang.org/p/pJ2Y7ENXna
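
The changed version has roughly this shape (again a simplified sketch, not the 
exact playground code; I'm assuming here that every goroutine just sends one 
value into a single shared buffered channel and exits, instead of staying 
parked in a chain):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	const n = 1000000

	// A single shared buffered channel instead of a chain of n channels.
	results := make(chan int, n)

	// Each goroutine does its small piece of work, sends one value and
	// exits, so its stack can be reused almost immediately.
	for i := 0; i < n; i++ {
		go func() {
			results <- 1
		}()
	}
	fmt.Println("buffered: started all goroutines, live now:", runtime.NumGoroutine())

	count := 0
	for i := 0; i < n; i++ {
		count += <-results
	}
	fmt.Println("buffered: final count", count)
}

My guess is that because the goroutines finish almost immediately and there is 
only one channel instead of a million, far less memory is ever live at once.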

memleak git:(master) ✗ GODEBUG=gctrace=1 ./memleak 1000000


Alloc = 61
TotalAlloc = 61
Sys = 1700
NumGC = 0


before: go routines 2
gc 1 @0.000s 0%: 0.056+810+0.030 ms clock, 0.17+0.005/0.070/810+0.092 ms cpu, 7->8->8 MB, 8 MB goal, 8 P
buffered: started all go goutines 2
buffered: final count 1000000
after: go routines 2


Alloc = 8198
TotalAlloc = 8205
Sys = 14434
NumGC = 1


Ctrl-C to exit

From what I understand from reading around and from a few comments on the 
Gophers Slack, this is because Go does not immediately release memory back to 
the OS, but holds on to it for a while in case it needs it again. Does that 
make sense? It would be really helpful if someone could help me understand 
this.
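
For anyone trying to reproduce this, I believe something like the snippet 
below can be used to watch what the runtime is doing (untested sketch on my 
part): runtime.ReadMemStats exposes HeapSys and HeapReleased, and 
runtime/debug.FreeOSMemory forces a GC and asks the runtime to return freed 
memory to the OS right away instead of waiting for the scavenger.

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// printMem dumps the heap counters that matter here: how much is live,
// how much the runtime has obtained from the OS, and how much it has
// already handed back.
func printMem(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%s: HeapAlloc=%d MB HeapSys=%d MB HeapReleased=%d MB\n",
		label, m.HeapAlloc>>20, m.HeapSys>>20, m.HeapReleased>>20)
}

func main() {
	printMem("before")

	// ... run the goroutine chain here ...

	printMem("after run")

	// Force a GC and ask the runtime to return as much freed memory
	// to the OS as possible.
	debug.FreeOSMemory()
	printMem("after FreeOSMemory")
}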
