On Thursday, January 26, 2017 at 10:20:07 AM UTC+1, l...@pinkfroot.com wrote:
>
> Ok, why do you think I'm looking at too low a level? Interested to
> understand!
>
> When I look at http://127.0.0.1/debug/pprof/goroutine?debug=2, most of
> the threads are in the semacquire state, waiting for an RWMutex lock to free
> up. I'm trying to track down what/where it is stuck inside a current lock.
>
> On Thursday, January 26, 2017 at 8:16:28 AM UTC, Konstantin Khomoutov
> wrote:
>>
>> On Thu, 26 Jan 2017 07:51:11 +0000
>> Lee Armstrong <l...@planefinder.net> wrote:
>>
>> > > Send QUIT (kill -QUIT pid, or press Ctrl+\ on the terminal) when
>> > > the program is locked up, save the resulting stack trace, and
>> > > inspect it. It'll list all running/blocked goroutines, and the point
>> > > of the wait, too.
>> > Thanks, I just had a look and it is very similar to the output of
>> > http://127.0.0.1/debug/pprof/goroutine?debug=2, which I can already
>> > get to, and I can't see where the lock is left open!
>>
>> IIUC, you're looking at too low a level: say, receiving from a
>> channel may also eventually wait on some OS-level synchronization
>> object. So I think you should look a bit above this OS level.
>

You should get a full stack trace, gather ALL blocked positions, and examine the execution trace of each: there has to be some A-B/B-A lock-ordering issue you haven't thought of before. (For example, somewhere you call Close, which locks and calls close, which does not lock but calls Flush, which does lock...)