Consider the following program:

```go
package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.OpenFile("memory.pb.gz", os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666)
	if err != nil {
		panic(err)
	}

	m := make(map[int]int)
	for i := 0; i < 10e6; i++ {
		m[i] = 2 * i
	}

	if err := pprof.WriteHeapProfile(f); err != nil {
		panic(err)
	}
}
```

After inserting 10 million key/value pairs, the memory profile shows an inuse_objects count of roughly 100k:

[image: inuse_objects.png]

In my view, these inuse_objects are found by the GC by following the buckets and oldbuckets pointers of hmap:

```go
// A header for a Go map.
type hmap struct {
	// Note: the format of the hmap is also encoded in cmd/compile/internal/reflectdata/reflect.go.
	// Make sure this stays in sync with the compiler's definition.
	count     int // # live cells == size of map.  Must be first (used by len() builtin)
	flags     uint8
	B         uint8  // log_2 of # of buckets (can hold up to loadFactor * 2^B items)
	noverflow uint16 // approximate number of overflow buckets; see incrnoverflow for details
	hash0     uint32 // hash seed

	buckets    unsafe.Pointer // array of 2^B Buckets. may be nil if count==0.
	oldbuckets unsafe.Pointer // previous bucket array of half the size, non-nil only when growing
	nevacuate  uintptr        // progress counter for evacuation (buckets less than this have been evacuated)

	extra *mapextra // optional fields
}
```

But according to this doc: https://go101.org/optimizations/6-map.html

> If the key type and element type of a map both don't contain pointers, then in the scan phase of a GC cycle, the garbage collector will not scan the entries of the map. This could save much time. This tip is also valid for other kinds of container in Go, such as slices, arrays and channels.

My question is: should the GC perhaps not follow the buckets and oldbuckets pointers of hmap at all? Or does the doc above only mean that the GC won't scan the entries, not that it skips the buckets and oldbuckets pointers of hmap, so that a larger map[int]int does still increase GC scan overhead by having more buckets and oldbuckets?