Hi! I would like to measure the total time of serving a request. From what I have seen, the usual way to do that is with a middleware handler which records the start time, calls ServeHTTP of the original handler, and, once ServeHTTP returns, measures how long it took.
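Concretely, here is a minimal sketch of both versions I have in mind (the names timingMiddleware and timingWithFlushMiddleware are just placeholders of mine); the second one adds the Flush call I ask about below:

package main

import (
	"io"
	"log"
	"net/http"
	"time"
)

// Basic version: measure only until ServeHTTP of the wrapped handler returns.
func timingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		// Any response data still buffered by net/http may not have
		// reached the kernel yet at this point.
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
	})
}

// Variant I am considering: flush buffered data before taking the end time.
func timingWithFlushMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		if f, ok := w.(http.Flusher); ok {
			// My assumption (part of my question): this blocks until
			// buffered response data has been handed to the kernel,
			// so the measurement would include that time.
			f.Flush()
		}
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello\n")
	})
	log.Fatal(http.ListenAndServe(":8080", timingWithFlushMiddleware(handler)))
}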
But I am not completely convinced that this really accounts for the full time of serving a request (beyond the time spent inside the standard library before the request reaches my code): ServeHTTP can call Write, which buffers data to be sent and returns, but I would also like to account for the time it takes to send that data out (at least out of the program and into the kernel). Am I right that this time might not be captured by the approach above? Is there a way to also measure the time it takes for the buffers to be written out?

I am considering calling Flush in the middleware before measuring the end time. My understanding is that it would block until the data is written out, but I am not sure whether that would have other unintended side effects.

Maybe measuring that extra time is not important in practice? I am thinking of measuring it because the client's connection might be slow (possibly maliciously so), and I would like to have some data on how often that happens.

Mitar

--
https://mitar.tnode.com/
https://twitter.com/mitar_m
https://noc.social/@mitar