There are a few options for post-processing, all within the set of
supported Go APIs. One is to use the `-focus` flag of `go tool pprof` so it
only displays samples that match a particular call stack. If the code to be
profiled is in a single function, that might be all you need. Another is to
set
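For example, something like this (the binary name, profile file, and function name here are just placeholders):

  go tool pprof -focus='YourFunction' ./yourprogram cpu.pprof

The -focus value is a regular expression, so it can match a single function, a whole package, or anything else that shows up in the sample call stacks.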
The behavior of that nil check may have changed last week
in https://github.com/golang/go/commit/548158c4a57580e8c8bd0e9b2f91d03b31efa879.
Maybe because dloggerFake is a struct{} rather than a *struct{}?
I wouldn't bother with the nil check. I'd chain the methods like
dlog().s("foo").i(42).end()
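Here's a standalone sketch of that shape (illustrative only, not the runtime's actual debuglog code): each method in the chain absorbs the "logging disabled" case itself, so the call site never needs its own nil check.

package main

import "fmt"

// enabled stands in for whatever build tag or flag gates debug logging.
const enabled = true

// dlogger is a sketch of a chained debug logger. A nil *dlogger means
// logging is off, and every method tolerates a nil receiver.
type dlogger struct {
    buf []byte
}

func dlog() *dlogger {
    if !enabled {
        return nil
    }
    return new(dlogger)
}

func (l *dlogger) s(v string) *dlogger {
    if l != nil {
        l.buf = append(l.buf, v...)
    }
    return l
}

func (l *dlogger) i(v int) *dlogger {
    if l != nil {
        l.buf = fmt.Appendf(l.buf, " %d", v)
    }
    return l
}

func (l *dlogger) end() {
    if l != nil {
        fmt.Println(string(l.buf))
    }
}

func main() {
    dlog().s("foo").i(42).end()
}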
In the upper right corner, there's a set of three vertically stacked dots.
It's right below the "Sign in" text, or your profile picture. Clicking that
will open a menu of options that includes "Download patch". The keyboard
shortcut "d" will get there too.
I find that the second option, "Checko
Hi Robert,
First, note that the contention profile for runtime-internal locks doesn't
correctly blame the part of the code that _caused_ delay: with Go 1.22 and
1.23, the call stacks are of (runtime-internal) lock users that
_experienced_ delay. That's https://go.dev/issue/66999, and those odd
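If it would help to gather that profile from a live process, here's a minimal sketch using net/http/pprof; the listen address and sampling rate are arbitrary choices for the example, not something your program has to match:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/ handlers, including /debug/pprof/mutex
    "runtime"
)

func main() {
    // Record roughly 1 out of every 5 mutex contention events.
    runtime.SetMutexProfileFraction(5)

    // The contention profile is then available at
    // http://localhost:6060/debug/pprof/mutex
    log.Fatal(http.ListenAndServe("localhost:6060", nil))
}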
Try specifying the path to the binary as the second to last argument (see "go
tool pprof -h").
On Linux as of Go 1.7, CPU profiles include information on what executables are
mapped into memory. This allows the pprof tool to locate the binary. Raw CPU
profiles taken on macOS don't include this information.
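That usually looks something like this, with the binary just before the profile file (both names here are placeholders):

  go tool pprof ./yourprogram cpu.pprof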
Yes, this sounds like https://golang.org/issue/16528. During the concurrent
mark phase (the "27 [ms]" of "0.008+27+0.072 ms clock"), both your code and
the garbage collector are running. The program is allowed to use four OS
threads ("4 P"), which might be executing your code in your goroutines,
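For reference, that line is what the runtime prints when you run with the gctrace setting, e.g. (the program name is a placeholder):

  GODEBUG=gctrace=1 ./yourprogram

The three clock numbers are wall-clock times for the stop-the-world sweep termination phase, the concurrent mark and scan phase, and the stop-the-world mark termination phase.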
Yes, this sounds a lot like https://golang.org/issue/16293, where
goroutines that allocate memory while the garbage collector is running can
end up stalled for nearly the entire GC cycle, in programs where a large
amount of the memory is in a single allocation. For the program you've
shared, th
I'm not sure. You mention that there's only a single mDNS query per minute,
so the five-second timeouts would never overlap.
Looking at the profile again, a lot of the time seems to be spent on the
locks involved in sleeping for some amount of time. This makes it seem like
there are a lot of ac
Does your program set a very large Timeout on its mdns requests (maybe tens
of hours long)?
It looks like your program is consuming a lot of CPU cycles on managing
timers. On the left half of the flame graph, lots of CPU cycles are spent
in runtime.timerproc. Time here indicates a large number of active timers.
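A common way to accumulate that many timers is calling time.After inside a loop or a per-request code path: each call allocates a new timer that the runtime has to track until the duration expires. Here's a sketch of that pattern and one way to avoid it by reusing a single timer; the function, channel, and durations are made up for the example:

package main

import "time"

func process(items <-chan int) {
    // Problematic pattern: time.After allocates a fresh timer on every
    // iteration, and each one lives in the runtime's timer structures
    // until its duration expires:
    //
    //   for {
    //       select {
    //       case <-items:
    //       case <-time.After(10 * time.Hour):
    //           return
    //       }
    //   }

    // Reusing one timer keeps the number of live timers small.
    idle := time.NewTimer(10 * time.Hour)
    defer idle.Stop()
    for {
        select {
        case _, ok := <-items:
            if !ok {
                return
            }
            // Handle the item here, then rearm the idle timeout using the
            // Stop-and-drain pattern from the time.Timer documentation.
            if !idle.Stop() {
                <-idle.C
            }
            idle.Reset(10 * time.Hour)
        case <-idle.C:
            return
        }
    }
}

func main() {
    ch := make(chan int)
    go func() {
        for i := 0; i < 3; i++ {
            ch <- i
        }
        close(ch)
    }()
    process(ch)
}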