Here's an entirely unscientific method to determine the overhead of profiling. The Go distribution contains a set of basic benchmarks, one of which is a loopback-based HTTP client/server benchmark. Running the benchmark with and without profiling gives a rough ballpark figure for that overhead.
lucky(~/go/test/bench/go1) % go test -run=XXX -bench=HTTPClientServer
goos: linux
goarch: amd64
BenchmarkHTTPClientServer-4        20000         84296 ns/op
PASS
ok      _/home/dfc/go/test/bench/go1    4.274s

lucky(~/go/test/bench/go1) % go test -run=XXX -bench=HTTPClientServer -cpuprofile=/tmp/c.p
goos: linux
goarch: amd64
BenchmarkHTTPClientServer-4        20000         85316 ns/op
PASS
ok      _/home/dfc/go/test/bench/go1    4.402s

In this run the difference was roughly 1% (84296 ns/op vs 85316 ns/op).

You could use this to experiment with the other kinds of profiles: memory, block, trace, and so on. If you wanted to go a step further you could add profiling to your own project with my github.com/pkg/profile package, then compare the results of an HTTP load test with and without profiling enabled (a rough sketch of that wiring is included at the end of this message).

Thanks

Dave

On Tuesday, 25 July 2017 10:44:10 UTC+10, nat...@honeycomb.io wrote:
>
> Hello,
>
> I am curious what the performance impact of running pprof to collect
> information about CPU or memory usage is. Is it like strace where there
> could be a massive slowdown (up to 100x) or is it lower overhead, i.e.,
> safe to use in production? The article here -
> http://artem.krylysov.com/blog/2017/03/13/profiling-and-optimizing-go-web-applications/
> - suggests that "one of the biggest pprof advantages is that it has low
> overhead and can be used in a production environment on a live traffic
> without any noticeable performance penalties". Is that accurate?
>
> Thanks!
>
> Nathan
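For anyone who wants to try the second approach, here is a minimal sketch of wiring github.com/pkg/profile into a small HTTP server. The handler, port and profile path are placeholders I've chosen for illustration, not anything from the thread; the package API used (profile.Start, its option functions, and Stop) is the real one.

    package main

    import (
            "net/http"

            "github.com/pkg/profile"
    )

    func main() {
            // Write a CPU profile into the current directory. Swap profile.CPUProfile
            // for profile.MemProfile, profile.BlockProfile or profile.TraceProfile to
            // try the other profile kinds. profile.Start installs a shutdown hook by
            // default, so the profile is flushed when the process is interrupted (^C).
            defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()

            // Placeholder workload: a trivial handler to point a load generator at.
            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                    w.Write([]byte("hello"))
            })
            http.ListenAndServe(":8080", nil)
    }

Run whatever load generator you normally use (hey, ab, wrk, etc.) against it twice, once as-is and once with the profile.Start line commented out, and compare the throughput and latency numbers to estimate the overhead for your own workload.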