Martijn van Oosterhout <kleptog@svana.org> writes:
> With one big exception: sometimes I/O is non-interruptable (the good
> old D state). In this case the interrupts will happen but will simply
> be queued and in fact will all be dropped except the last one
> (non-realtime signals are never stacked). The handler will probably be
> called the moment it returns to user-space.
If backends store their current status in shared memory, then a separate
process entirely can receive the interrupts, scan through the shared-memory
process states, and do the accounting. It would never be interrupting I/O,
since that process would never be doing I/O itself. It could use real-time
signals, reads on /dev/rtc, or whatever other methods exist for scheduling
periodic events, since it would be doing nothing but handling these
interrupts.

The neat thing about this is that it would be possible to look at a process
from the outside instead of having to hijack the application to get
feedback. You could find information like total time spent in I/O versus
CPU, aggregated across all queries from a single backend or across all
backends.

The downside is that to handle things like EXPLAIN ANALYZE you would have
to provide enough information about plans to this accounting process for it
to store the information. Plans are currently purely local to the backend
running them. Perhaps it would be enough to provide a unique id for the
plan (a sequential id per backend would suffice) and a unique id for each
plan node. The accounting process wouldn't know anything more about what
that node represented, but the backend could later query the accumulated
counts and associate them with the plan.

-- 
greg

---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?

               http://www.postgresql.org/docs/faq