On Nov 28, 2011 8:55 AM, "Greg Smith" <g...@2ndquadrant.com> wrote:
>
> On 11/27/2011 04:39 PM, Ants Aasma wrote:
>>
>> On the AMD I saw about 3% performance drop with timing enabled. On the
>> Intel machine I couldn't measure any statistically significant change.
>
> Oh no, it's party pooper time again. Sorry I have to be the one to do it
> this round. The real problem with this whole area is that we know there
> are systems floating around where the amount of time taken to grab
> timestamps like this is just terrible.
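The overhead being worried about is easy to estimate empirically. A minimal sketch, using Python's `time.time()` purely as a stand-in for `gettimeofday()` (the iteration count and the proxy call are assumptions for illustration, not part of the patch under discussion); the per-call figure varies widely with the system's clocksource, which is exactly the concern raised above:

```python
# Rough estimate of per-call timestamp overhead. time.time() stands in
# for gettimeofday(); on a slow clocksource (e.g. HPET or acpi_pm) the
# per-call cost can be orders of magnitude worse than with TSC.
import time

N = 100_000
start = time.perf_counter()
for _ in range(N):
    time.time()  # the timestamp call whose cost we are measuring
elapsed = time.perf_counter() - start
per_call_ns = elapsed / N * 1e9
print(f"~{per_call_ns:.0f} ns per timestamp call")
```

Running a loop like this on the target hardware is a quick way to tell whether timing instrumentation will show up in benchmarks at all.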
I believe that on most systems with modern Linux kernels, gettimeofday and
its ilk will be a vsyscall and nearly as fast as a regular function call.

> I recall a patch similar to this one was submitted by Greg Stark some
> time ago. It used the info for different reasons--to try and figure out
> whether reads were cached or not--but I believe it withered rather than
> being implemented, mainly because it ran into the same fundamental
> roadblocks here. My memory could be wrong here; there were also concerns
> about what the data would be used for.

I speculated about doing that but never did. I had an experimental patch
using mincore to do what you describe, but it wasn't intended for
production code, I think. The only real patch was to use getrusage, which
I still intend to commit, but it doesn't tell you the time spent in I/O --
though it does tell you the sys time, which should be similar.
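The getrusage approach mentioned above can be sketched briefly. This is not the patch itself, just an illustration of the interface via Python's stdlib `resource` wrapper (a thin binding over the C `getrusage(2)` call): `ru_stime` is the accumulated system CPU time, the "sys time" that approximates kernel-side I/O work without issuing a timestamp call around every read.

```python
# getrusage() returns accumulated resource usage for the calling process.
# ru_stime (system CPU time) is the "sys time" referred to above; it is
# gathered by the kernel anyway, so reading it costs one syscall rather
# than two gettimeofday() calls per I/O.
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"user time: {usage.ru_utime:.6f} s")
print(f"sys time:  {usage.ru_stime:.6f} s")
```

The trade-off is coarseness: sys time lumps all kernel work together and excludes time spent blocked waiting on the disk, so it is only a proxy for true I/O wait.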