Hello,

appLatencySeconds and latencySeconds map directly to nginx variables: appLatencySeconds == $upstream_response_time [1] and latencySeconds == $request_time [2]. For clarity, here are the definitions:

- appLatencySeconds ($upstream_response_time): the time spent receiving the response from the upstream server.
- latencySeconds ($request_time): the request processing time, elapsed between the first bytes being read from the client and the log write after the last bytes were sent to the client.
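In other words, a large gap between latencySeconds and appLatencySeconds is time spent outside your application code (queueing, load balancing, or a slow client transfer). As a rough sketch of how you might surface such requests from exported log entries (the two field names follow this thread; the sample entries, user-agent values, and threshold are made up for illustration):

```python
# Sketch: flag requests where most of the latency was spent outside the
# application. Field names latencySeconds / appLatencySeconds follow the
# App Engine nginx.request logs discussed in this thread; the sample
# entries below are invented.

def outside_app_gap(entry):
    """Seconds of the request not accounted for by the upstream (app)."""
    return entry["latencySeconds"] - entry["appLatencySeconds"]

def suspicious(entries, threshold=5.0):
    """Return entries whose non-app latency exceeds `threshold` seconds."""
    return [e for e in entries if outside_app_gap(e) > threshold]

entries = [
    {"latencySeconds": 9.8, "appLatencySeconds": 0.4, "userAgent": "curl/7.68"},
    {"latencySeconds": 0.5, "appLatencySeconds": 0.4, "userAgent": "Go-http-client"},
]

for e in suspicious(entries):
    print(e["userAgent"], round(outside_app_gap(e), 2))
```

Grouping the flagged entries by user-agent or payload size, as suggested below, is then a one-liner over the returned list.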
I'll focus on how to debug this issue, as asked in the title. There is a guide, `Optimizing Application Latency with Load Balancing` [3], which explains how to measure latency in different scenarios. However, it seems to me that the issue could be due to poor network throughput on some of your application's clients. I would check the user-agent and payload size of the requests that take a long time to reach the App Engine app, to determine whether that is the cause.

[1] http://nginx.org/en/docs/http/ngx_http_upstream_module.html
[2] http://nginx.org/en/docs/http/ngx_http_log_module.html
[3] https://cloud.google.com/load-balancing/docs/tutorials/optimize-app-latency

On Wednesday, September 16, 2020 at 8:56:32 PM UTC+2 [email protected] wrote:

> Sorry -- correction, our target cpu utilization is at the default of 0.5,
> but the utilization at the time was very much below that.
>
> On Wednesday, September 16, 2020 at 11:55:37 AM UTC-7 Eddie Zhou wrote:
>
>> We were actually able to find the Load Balancer logs (hoping this helps
>> someone else in the future) via resource.type="http_load_balancer" and
>> connecting trace to jsonPayload.trace from App Engine nginx.request.
>>
>> In our case, this shows that the load balancer received the client
>> request at 10:05:53 and App Engine saw it at 10:06:04, which seems to
>> mirror the skew and contradict the first linked thread above.
>>
>> What would be any suggested App Engine configuration changes to reduce
>> this time? Does this indicate that we have too many requests for our
>> instances to handle? For more information, the number of instances didn't
>> scale up during this time -- possibly because we have target cpu
>> utilization set to 0.8 and the utilization was very much below that.
>> I'm not sure how that setting interacts with request backlog -- some
>> signals are alluded to here
>> <https://medium.com/google-cloud/app-engine-scheduler-settings-and-instance-count-4d1e669f33d5>
>> but that post is from 2017, and it is also unclear if it refers to
>> Standard or Flex.
>>
>> On Wednesday, September 16, 2020 at 11:42:54 AM UTC-7 Eddie Zhou wrote:
>>
>>> Hi all,
>>>
>>> We have a Go App Engine Flex app that occasionally sees a large
>>> disparity between appLatencySeconds and latencySeconds in the
>>> nginx.request logs, on the order of sub-1s for the former and close to
>>> 10s for the latter. Our request traces for the application code confirm
>>> the former.
>>>
>>> We thought this might have to do with the App Engine request queueing,
>>> but this post
>>> <https://groups.google.com/g/google-appengine/c/JcW8qcDeSnA> seems to
>>> indicate the latencySeconds value is independent of that. This thread
>>> <https://groups.google.com/g/google-appengine/c/BawYguWHq7Q/m/z3zk4ylpAQAJ>
>>> points to looking at Load Balancer logs, but it doesn't appear we can
>>> look at those associated with our App Engine apps.
>>>
>>> Does anyone have pointers on how to debug this, or relatedly, request
>>> queuing in general? These slow request times for our clients are tough
>>> when they're not due to our application code.
>>>
>>> Thanks!

--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/google-appengine/5dc283ab-a925-4b39-845a-bec918969c77n%40googlegroups.com.
