Hi Tom
A bare except: clause will catch too many exceptions - what if it's not
a timeout but a bug in your own code?
I propose catching only the datastore errors:
try:
    return func(*args, **kw)
except (db.InternalError, db.Timeout,
        db.CapabilityDisabledError), e:
    logging.error(...)
    ...
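To make that concrete, here is a runnable sketch of a retry helper that only retries the exception types you hand it; `FakeTimeout` is a stand-in for db.Timeout so the sketch runs without the App Engine SDK, and it uses Python 3 `except ... as e` syntax (on App Engine's Python 2.5 that line would read `except retryable, e:`):

```python
import time
import logging

def retry(func, retryable, tries=3, delay=1):
    """Call func(), retrying only the exception types in `retryable`.

    Anything else (e.g. a bug in your own code) propagates at once
    instead of being swallowed by the retry loop.
    """
    for attempt in range(1, tries + 1):
        try:
            return func()
        except retryable as e:
            logging.error("attempt %d failed: %r", attempt, e)
            if attempt == tries:
                raise          # out of tries: let the caller see it
            time.sleep(delay * attempt)

# Stand-in for db.Timeout so this runs outside App Engine.
class FakeTimeout(Exception):
    pass
```

The point of passing the tuple explicitly is that a ZeroDivisionError or AttributeError in your own handler code fails fast instead of being retried for 25 seconds.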
Regards
On Feb 10, 8:59 am, tom <[email protected]> wrote:
> I use this code that I wrote:
>
> import time
> import traceback
>
> def retry(func, *args, **kw):
>     start = time.time()
>     e = 0
>     while time.time() - start < 25:
>         try:
>             return func(*args, **kw)
>         except:
>             traceback.print_exc()
>             e += 1
>             time.sleep(e)
>     raise
>
> def do():
>     ''' do your stuff like writing to datastore '''
>     pass
>
> retry(do)
>
> On Feb 10, 7:02 am, Eli Jones <[email protected]> wrote:
>
> > Well.. you can always wrap puts with a try/except like so (if you want
> > it to just keep retrying):
>
> > wait = .1
> > while True:
> >     try:
> >         db.put(myEntity)
> >         break
> >     except db.Timeout:
> >         from time import sleep
> >         sleep(wait)
> >         wait *= 2
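The doubling-wait loop above never gives up and the wait grows without bound; a common variant caps the wait and adds an overall deadline plus jitter. A sketch, where `do_put` and `TimeoutError` are stand-ins for db.put and db.Timeout:

```python
import time
import random

def put_with_backoff(do_put, first_wait=0.1, max_wait=5.0, deadline=30.0):
    """Retry do_put() on TimeoutError with doubling waits, capped at
    max_wait seconds, and re-raise once `deadline` seconds have
    passed overall.  do_put / TimeoutError are stand-ins for
    db.put / db.Timeout -- this is a sketch, not App Engine code."""
    wait = first_wait
    start = time.time()
    while True:
        try:
            return do_put()
        except TimeoutError:
            if time.time() - start > deadline:
                raise          # deadline exceeded: give up
            # Random jitter spreads out retries from many requests.
            time.sleep(min(wait, max_wait) * random.random())
            wait *= 2
```

The cap and deadline matter on App Engine because the request itself has a hard time limit: backing off past that just trades a Timeout for a DeadlineExceededError.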
>
> > On Tue, Feb 9, 2010 at 5:54 PM, phtq <[email protected]> wrote:
> > > The recipe does cut down the Timeouts dramatically, but there are
> > > still a large number which seem to bypass this fix completely. A
> > > sample error log entry is attached:
>
> > > Exception in request:
> > > Traceback (most recent call last):
> > >   File "/base/python_lib/versions/third_party/django-0.96/django/core/handlers/base.py", line 77, in get_response
> > >     response = callback(request, *callback_args, **callback_kwargs)
> > >   File "/base/data/home/apps/kbdlessons/1-01.339729324125102596/views.py", line 725, in newlesson
> > >     productentity = Products.gql("where Name = :1", ProductID).get()
> > >   File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1564, in get
> > >     results = self.fetch(1, rpc=rpc)
> > >   File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1616, in fetch
> > >     raw = raw_query.Get(limit, offset, rpc=rpc)
> > >   File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1183, in Get
> > >     limit=limit, offset=offset, prefetch_count=limit,
> > >     **kwargs)._Get(limit)
> > >   File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1113, in _Run
> > >     raise _ToDatastoreError(err)
> > > Timeout
>
> > > Any ideas on how to deal with this class of Timeouts?
>
> > > On Jan 28, 9:48 am, phtq <[email protected]> wrote:
> > > > Thanks for mentioning this recipe, it worked well in testing and we
> > > > will try it on the user population tomorrow.
>
> > > > On Jan 27, 9:48 am, djidjadji <[email protected]> wrote:
>
> > > > > There is an article series about the datastore. It explains that
> > > > > the Timeouts are inevitable, gives the reason for them, and notes
> > > > > that they will always be part of Bigtable and the GAE Datastore.
>
> > > > > The only solution is a retry on EVERY read: the get by id/key and
> > > > > the queries.
> > > > > If you do that then very few reads will result in a Timeout.
> > > > > I wait first 3 and then 6 secs between each request. I log each
> > > > > Timeout.
> > > > > If still Timeout after 3 read tries I raise the exception.
>
> > > > > The result is very few final read Timeouts. The log shows frequent
> > > > > requests that need a retry, but most of them succeed on the first
> > > > > retry.
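The policy described above (wait 3 then 6 seconds, log each Timeout, re-raise after the third failed try) can be sketched like this; `read` and `TimeoutError` stand in for a datastore get/query and db.Timeout so the sketch runs outside App Engine:

```python
import time
import logging

def read_with_retry(read, waits=(3, 6)):
    """Run read(), sleeping waits[i] seconds after the i-th Timeout.

    With the default (3, 6) this makes at most three attempts; the
    third failure is re-raised.  `read` and TimeoutError are stand-ins
    for a real datastore call and db.Timeout.
    """
    for wait in waits + (None,):
        try:
            return read()
        except TimeoutError:
            logging.warning("datastore read timed out")
            if wait is None:   # final attempt failed: give up
                raise
            time.sleep(wait)
```

Wrapping every get and query in one helper like this keeps the retry policy in a single place instead of scattered across handlers.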
>
> > > > > For speed, fetch the Static content object by key_name, using the
> > > > > file path as the key_name.
>
> > > > > 2010/1/26 phtq <[email protected]>:
>
> > > > > > Our application error log for the 26th showed around 160 failed
> > > > > > http requests due to timeouts. That's 160 users being forced to
> > > > > > hit the refresh button on their browser to get a normal response.
> > > > > > A more typical day has 20 to 60 timeouts. We have been waiting
> > > > > > over a year for this bug to get fixed with no progress at all.
> > > > > > It's beginning to look like it's unfixable, so perhaps Google
> > > > > > could provide some workaround. In our case, the issue arises
> > > > > > because of the 1,000 file limit. We are forced to hold all our
> > > > > > .js, .css, .png, .mp3, etc. files in the database and serve them
> > > > > > from there. The application is quite large and there are well
> > > > > > over 10,000 files. The Python code serving up the files does just
> > > > > > one DB fetch and has about 9 lines of code, so there is no way it
> > > > > > can be magically restructured to make the Timeout go away.
> > > > > > However, putting all the files on the app engine as real files
> > > > > > would avoid the DB access and make the problem go away. Could
> > > > > > Google work towards removing that file limit?
>
> > > > > > --
> > > > > > You received this message because you are subscribed to the
> > > > > > Google Groups "Google App Engine" group.
> > > > > > To post to this group, send email to [email protected].
> > > > > > To unsubscribe from this group, send email to
> > > > > > [email protected].
> > > > > > For more options, visit this group at
> > > > > > http://groups.google.com/group/google-appengine?hl=en.
>