While using `binary.Write` to encode a slice of structs, I ran into some 
behaviour I found odd: the memory allocated on a particular path was more 
than I expected.

I wrote some benchmarks in the standard library's encoding/binary package 
to demonstrate this.

// These benchmarks live in encoding/binary's own test files, so Write, Size
// and BigEndian are the package's identifiers, and Struct is the fixed-size
// test type already defined in its tests.
func BenchmarkWriteSlice1000Structs(b *testing.B) {
    slice := make([]Struct, 1000)
    buf := new(bytes.Buffer)
    var w io.Writer = buf
    b.SetBytes(int64(Size(slice)))
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        buf.Reset() // keep buffer growth out of the measured allocations
        Write(w, BigEndian, slice)
    }
    b.StopTimer()
}

func BenchmarkWriteSlice10Structs(b *testing.B) {
    slice := make([]Struct, 10)
    buf := new(bytes.Buffer)
    var w io.Writer = buf
    b.SetBytes(int64(Size(slice)))
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        buf.Reset()
        Write(w, BigEndian, slice)
    }
    b.StopTimer()
}
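
For anyone who wants to try this without building inside the standard 
library tree, here is a roughly equivalent pair of benchmarks against the 
public encoding/binary API. The Payload type below is just a stand-in of my 
own (the in-tree benchmarks use the package's internal Struct test type), so 
the absolute numbers will differ, but the shape of the comparison is the same.

package binarybench

import (
    "bytes"
    "encoding/binary"
    "io"
    "testing"
)

// Payload is a made-up fixed-size struct standing in for the in-tree Struct type.
type Payload struct {
    A uint64
    B int32
    C [8]byte
}

func benchmarkWriteSlice(b *testing.B, n int) {
    slice := make([]Payload, n)
    buf := new(bytes.Buffer)
    var w io.Writer = buf
    b.SetBytes(int64(binary.Size(slice)))
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        buf.Reset()
        binary.Write(w, binary.BigEndian, slice)
    }
    b.StopTimer()
}

func BenchmarkWriteSlice1000Payloads(b *testing.B) { benchmarkWriteSlice(b, 1000) }
func BenchmarkWriteSlice10Payloads(b *testing.B)   { benchmarkWriteSlice(b, 10) }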

   - Encoding a slice with 1000 struct elements

root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go test -run='^$' -memprofile memprofile.out -benchmem -bench BenchmarkWriteSlice1000Structs -count=10
root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go tool pprof memprofile.out
File: binary.test
Type: alloc_space
Time: Mar 9, 2024 at 3:27pm (UTC)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1305.40MB, 99.84% of 1307.48MB total
Dropped 8 nodes (cum <= 6.54MB)
      flat  flat%   sum%        cum   cum%
 1302.13MB 99.59% 99.59%  1304.21MB 99.75%  encoding/binary.Write
    3.27MB  0.25% 99.84%  1307.48MB   100%  encoding/binary.BenchmarkWriteSlice1000Structs
         0     0% 99.84%  1305.31MB 99.83%  testing.(*B).launch
         0     0% 99.84%  1307.48MB   100%  testing.(*B).runN


   - Encoding a slice with 10 struct elements

root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go test -run='^$' -memprofile memprofile.out -benchmem -bench BenchmarkWriteSlice10Structs -count=10
root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go tool pprof memprofile.out
warning: GOPATH set to GOROOT (/root/go) has no effect
File: binary.test
Type: alloc_space
Time: Mar 9, 2024 at 4:24pm (UTC)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 905.58MB, 100% of 905.58MB total
      flat  flat%   sum%        cum   cum%
  792.58MB 87.52% 87.52%   905.58MB   100%  encoding/binary.Write
     113MB 12.48%   100%      113MB 12.48%  reflect.(*structType).Field
         0     0%   100%   905.58MB   100%  encoding/binary.BenchmarkWriteSlice10Structs
         0     0%   100%      113MB 12.48%  encoding/binary.dataSize
         0     0%   100%      113MB 12.48%  encoding/binary.sizeof
         0     0%   100%      113MB 12.48%  reflect.(*rtype).Field
         0     0%   100%   905.58MB   100%  testing.(*B).launch
         0     0%   100%   905.58MB   100%  testing.(*B).runN
(pprof)

Per the profiles, the total memory attributed to `reflect.(*structType).Field` 
is higher when encoding a slice of 10 struct elements than when encoding a 
slice of 1000 struct elements, where it doesn't show up among the top nodes 
at all. I expected that memory to be at worst the same, if not less, when 
encoding the slice with fewer elements. My reasoning comes from the line 
below: sizeof is called on the same struct type regardless of the length of 
the slice.

https://github.com/golang/go/blob/74726defe99bb1e19cee35e27db697085f06fda1/src/encoding/binary/binary.go#L483
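
For context on why field reflection shows up at all, that size computation 
boils down to walking the struct's fields via reflection. The following is my 
own paraphrase of the linked code, not a verbatim copy, so details may differ 
from the exact revision:

package sketch

import "reflect"

// sizeofSketch mirrors the rough shape of encoding/binary's size calculation:
// for a struct it visits every field, which is where the
// reflect.(*structType).Field frames in the profile come from.
func sizeofSketch(t reflect.Type) int {
    switch t.Kind() {
    case reflect.Array:
        if s := sizeofSketch(t.Elem()); s >= 0 {
            return s * t.Len()
        }
    case reflect.Struct:
        sum := 0
        for i, n := 0, t.NumField(); i < n; i++ {
            s := sizeofSketch(t.Field(i).Type) // the reflect.Type.Field call seen in the profile
            if s < 0 {
                return -1
            }
            sum += s
        }
        return sum
    case reflect.Bool,
        reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64,
        reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
        reflect.Float32, reflect.Float64,
        reflect.Complex64, reflect.Complex128:
        return int(t.Size())
    }
    return -1
}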

Also, looking at what appears to be the primary source of those allocations, 
the line below, both benchmarks work with the same struct type and hence the 
same fields, so I'd expect the memory used there to be the same in either 
case.

https://github.com/golang/go/blob/74726defe99bb1e19cee35e27db697085f06fda1/src/reflect/type.go#L1061
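
As a quick standalone sanity check of my own (separate from the benchmarks 
above), a call to t.Field(i) does allocate a little, at least per these 
profiles, and the cost of one walk over the fields doesn't depend on how long 
any slice being encoded is. The payload type here is made up:

package main

import (
    "fmt"
    "reflect"
    "testing"
)

// payload is a made-up stand-in; it only needs to have a few fields.
type payload struct {
    A int64
    B float32
    C [4]uint16
}

func main() {
    t := reflect.TypeOf(payload{})
    // Average allocations for one walk over all fields of the type.
    allocs := testing.AllocsPerRun(1000, func() {
        for i := 0; i < t.NumField(); i++ {
            _ = t.Field(i)
        }
    })
    fmt.Println("allocs per field walk:", allocs)
}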

Is this difference because of memory constraints when I scale up to 1000 
structs, or am I just missing something obvious?
