Never mind, I forgot that you had mentioned Django :-) The app ID would
still be helpful, as would example code if you feel comfortable
sharing.

Thank you,

Jeff

On Aug 31, 3:16 pm, "Jeff S (Google)" <[email protected]> wrote:
> Hi Jeff,
>
> Ah I see, thanks for the details. I'm looking into this, would you
> mind sharing which runtime you are using, and the app ID?
>
> Cheers,
>
> Jeff
>
> On Aug 31, 1:52 pm, Jeff Enderwick <[email protected]> wrote:
>
> > Thanks - I expected that the API calls would use parallel processing,
> > but the app/servlet itself is a single thread of execution.
> > If I have api_cpu_ms of 74 and cpu_ms of 1500, that leaves 1426 ms
> > for the non-API (app/servlet) usage, yes?
> > I'm trying to grok how that could happen in a single thread in 965 ms
> > of wall-clock time.
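
A quick sketch of the arithmetic in question, using only the figures quoted
from the request log earlier in this thread:

```python
# Figures quoted from the second log line in this thread.
wall_ms = 965      # wall-clock time for the request
cpu_ms = 1500      # total CPU time charged
api_cpu_ms = 74    # CPU time attributed to API calls

# CPU time left over for the app/servlet code itself.
app_cpu_ms = cpu_ms - api_cpu_ms
print(app_cpu_ms)                    # 1426

# Charged CPU per wall-clock millisecond; a ratio above 1.0 means the
# charged CPU cannot all have come from one thread running in real time.
print(round(cpu_ms / wall_ms, 2))    # 1.55
```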
>
> > Jeff
>
> > On Mon, Aug 31, 2009 at 11:30 AM, Jeff S (Google)<[email protected]> wrote:
> > > Hi Jeff
>
> > > On Fri, Aug 28, 2009 at 1:08 AM, Jeff Enderwick <[email protected]>
> > > wrote:
>
> > >> Trolling my logs, I'm coming across cases where there is extreme
> > >> (~10x) variance in cpu_ms for the exact same code flow, same GET URL
> > >> and same data (not even any intervening writes to the datastore). I am
> > >> logging my db.* function accesses, and I have factored out memcache
> > >> too. For example:
>
> > >> 92ms, 142cpu_ms, 74api_cpu_ms, followed by:
> > >> 965ms, 1500cpu_ms, 74api_cpu_ms
>
> > >> Q1: what could cause such a whopping delta? I am using Django, so
> > >> perhaps template compilation? I used cProfile on the SDK with a
> > >> similar data/result set, and the first page served was maybe ~2x
> > >> subsequent pages in total time. Thoughts?
>
> > > Your idea of template compilation is along the same lines as my thinking. I
> > > can't say definitively for this case, but I would guess that you might be
> > > seeing a more expensive first request when a new instance of your app is
> > > being spun up.
>
> > >> Q2: I am assuming the 1st number after the '200' is the wall-clock
> > >> time-to-server. As the app is single threaded ... how is it able to
> > >> burn 1500ms less 74ms in only 965ms?
>
> > > Most API calls make calls to distributed services which parallelize work
> > > across multiple machines, so it is often easy to use more CPU time than
> > > wall-clock time. If you want to see where the CPU usage is coming from, you
> > > can get information about CPU quota levels at any point within your code, as
> > > documented here:
>
> > > http://code.google.com/appengine/docs/quotas.html#Monitoring_CPU_Usag...
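
A minimal sketch of that kind of in-request measurement, assuming the quota
API described on that page (the call names and the 1.2 GHz megacycle
conversion are my assumptions; outside App Engine the import fails, so a
stub stands in so the sketch runs):

```python
try:
    # Available inside the App Engine Python runtime.
    from google.appengine.api import quota
except ImportError:
    # Hypothetical stand-in so the sketch runs outside App Engine.
    class quota(object):
        @staticmethod
        def get_request_cpu_usage():
            return 1800  # megacycles; made-up figure

        @staticmethod
        def megacycles_to_cpu_seconds(mcycles):
            return mcycles / 1200.0  # assumes a 1.2 GHz reference CPU

start = quota.get_request_cpu_usage()
# ... the suspect section of the request handler would run here ...
used_mcycles = quota.get_request_cpu_usage() - start
used_ms = quota.megacycles_to_cpu_seconds(used_mcycles) * 1000.0
```

Logging `used_ms` around the template-rendering step, for instance, would
show whether the extra ~1400 ms lands there on a freshly started instance.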
>
> > > Thank you,
>
> > > Jeff
>
> > >> Thanks!
>
>
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~----------~----~----~----~------~----~------~--~---