Thanks, gang. I'm going to paste what I've put together; it doesn't seem right. Am I way off?
Here's my code. It:

  - walks the Item table, skipping items that already have a cover
  - matches each item's reference ID to an API call
  - grabs the data, saves it, and creates the thumbnail
  - dies due to timeouts and other transient network trouble; all silly,
    nothing code-based

    import os
    import urllib

    import simplejson
    from django.core.files import File

    items = Item.objects.filter(cover='').order_by('-reference_id')
    for item in items:
        url = "http://someaddress.org/books/?issue=%s" % item.reference_id
        try:
            # The fetch has to sit inside the try, or the IOErrors it
            # raises on timeouts are never caught.
            url_read = urllib.urlopen(url).read().decode('utf-8')
            detail = simplejson.loads(url_read)
            if detail['artworkUrl']:
                cover_url = detail['artworkUrl'].replace(' ', '%20')
                cover_file, _ = urllib.urlretrieve(cover_url)
                cover_name = os.path.split(cover_url)[1]
                item.cover.save(cover_name, File(open(cover_file)), save=True)
                ## Create and save Thumbnail
                print "Cover - %s: %s" % (item.number, url)
            else:
                print "Missing - %s: %s" % (item.number, url)
        except ValueError:
            print "Error processing record %s: %s" % (item.reference_id, url)
        except IOError:
            # Note: this doesn't actually retry; the loop just moves on
            # to the next item.
            print "IOError; skipping %s" % url
    print "Done"

On Jul 12, 12:33 pm, MRAB <pyt...@mrabarnett.plus.com> wrote:
> The Danny Bos wrote:
> > Heya,
> >
> > I'm running a py script that simply grabs an image, creates a
> > thumbnail and uploads it to s3. I'm simply logging into ssh and
> > running the script through Terminal. It works fine, but gives me an
> > IOError every now and then.
> >
> > I was wondering if I can catch this error and just get the script to
> > start again? I mean, in Terminal it dies anyway, so I have to start
> > it again by hand, which is a pain as it dies so sporadically. Can I
> > automate this error, catch it and just get it to restart the loop?
> >
> > Thanks for your time and energy,
>
> Exceptions can be caught. You could do something like this:
>
>     while True:
>         try:
>             do_something()
>             break
>         except IOError:
>             pass
--
http://mail.python.org/mailman/listinfo/python-list
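For what it's worth, MRAB's while/try/except pattern can be wrapped into a small reusable helper, so each flaky network call retries a bounded number of times instead of killing (or silently skipping) the loop. This is only a sketch: `fetch_with_retry`, `max_attempts`, and `delay` are illustrative names, not part of the original script.

```python
import time

def fetch_with_retry(fetch, max_attempts=3, delay=0):
    # Retry a flaky I/O callable, following MRAB's while/try/except
    # pattern: loop until the call succeeds or we run out of attempts.
    attempt = 0
    while True:
        try:
            return fetch()
        except IOError:
            attempt += 1
            if attempt >= max_attempts:
                raise            # give up; re-raise the last IOError
            time.sleep(delay)    # brief pause before retrying

# Demo with a stand-in for the urlopen call: fails twice, then succeeds.
calls = {'n': 0}

def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise IOError("transient failure")
    return "payload"

result = fetch_with_retry(flaky, max_attempts=5)
# result == "payload", reached on the third call
```

In the cover-fetching loop, each `urllib.urlopen(url).read()` call would be passed in as the `fetch` callable, with the existing except clauses kept for errors that survive all the retries.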