Thanks! I tried rebooting the OS. Now my resources seem OK (but I didn't check before the reboot):

  Files used: 1376 out of 75556
  Mem used:   580mb out of 796mb
  Swap used:  0
  CPU:        88-99% idle

I no longer see the "Exception occurred" or "IOError" messages, but I DO still see "Premature end of script headers". These errors come in batches: every 10-20 seconds or so I get a continuous block of 10-20 "Premature end of script headers" errors from different clients, followed by errors notifying me that clients' ajax requests failed.

I also found three of these in my web2py tickets:

  Traceback (most recent call last):
    File "gluon/main.py", line 337, in wsgibase
      parse_get_post_vars(request, environ)
    File "gluon/main.py", line 222, in parse_get_post_vars
      request.body = copystream_progress(request) ### stores request body
    File "gluon/main.py", line 95, in copystream_progress
      copystream(source, dest, size, chunk_size)
    File "gluon/fileutils.py", line 301, in copystream
      data = src.read(size)
  IOError: request data read error

However, I've gotten around 3000 "Premature end of script headers" errors and only 3 of these IOErrors. Is there a way to identify what is causing the "Premature end of script headers" errors?
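In the meantime, here's a rough sketch of the logging wrapper I'm thinking of adding to wsgihandler.py, so the next batch of "Premature end of script headers" errors can be matched against whatever request each daemon process was handling when it died, and so the file/memory numbers get recorded as I go instead of only after a reboot. (This assumes wsgihandler.py already exposes the web2py app as 'application'; the log file path is just a placeholder.)

# Rough sketch of a request/resource tracing wrapper for wsgihandler.py.
# Assumes wsgihandler.py already defines the web2py WSGI app as 'application';
# the log file path below is just a placeholder.
import logging

logging.basicConfig(
    filename='/tmp/wsgi_trace.log',
    level=logging.DEBUG,
    format='%(asctime)s pid=%(process)d %(threadName)s %(message)s')

def resource_snapshot():
    """Read open-file and free-memory counters from /proc (Linux only)."""
    with open('/proc/sys/fs/file-nr') as f:
        allocated, _unused, maximum = f.read().split()
    memfree = '?'
    with open('/proc/meminfo') as f:
        for line in f:
            if line.startswith('MemFree:'):
                memfree = line.split(':', 1)[1].strip()
                break
    return 'files=%s/%s memfree=%s' % (allocated, maximum, memfree)

class TraceMiddleware(object):
    """Log each request entering and leaving the web2py app, so a crash of a
    daemon process can be matched to the last request it was handling."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        logging.debug('start %s %s from %s (%s)',
                      environ.get('REQUEST_METHOD'),
                      environ.get('PATH_INFO'),
                      environ.get('REMOTE_ADDR'),
                      resource_snapshot())
        result = self.app(environ, start_response)
        logging.debug('done %s', environ.get('PATH_INFO'))
        return result

application = TraceMiddleware(application)  # wrap the existing web2py handler

The idea is that if a daemon process dies, the last "start" line for its pid without a matching "done" line should point at the request it was handling.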
Files used: 1376 out of 75556 Mem used: 580mb out of 796mb Swap used: 0 CPU: 88-99% idle And I know longer see the "Exception occurred" or "IOError" messages, however I DO still see "Premature end of script headers". These errors come in batches, every 10-20 seconds or so I get a continuous block of 10-20 "Premature end of script headers" errors from different clients. These are followed by errors notifying me that clients' ajax requests failed. I also found three of these in my web2py tickets: Traceback (most recent call last): File "gluon/main.py", line 337, in wsgibase parse_get_post_vars(request, environ) File "gluon/main.py", line 222, in parse_get_post_vars request.body = copystream_progress(request) ### stores request body File "gluon/main.py", line 95, in copystream_progress copystream(source, dest, size, chunk_size) File "gluon/fileutils.py", line 301, in copystream data = src.read(size) IOError: request data read error However, I've gotten around 3000 "premature end of script" errors, and only 3 of these IOErrors. Is there a way to identify what is causing the "Premature end of script" errors? On Jul 19, 7:50 pm, Graham Dumpleton <graham.dumple...@gmail.com> wrote: > On Jul 20, 12:01 pm, Michael Toomim <too...@gmail.com> wrote: > > > I'm getting errors like these in my apache error logs: > > > [Mon Jul 19 18:55:20 2010] [error] [client 65.35.93.74] Premature end > > of script headers: wsgihandler.py, > > referer:http://yuno.us/init/hits/hit?assignmentId=1A7KADKCHTB1IJS3Z5CR16OZM4V... > > [Mon Jul 19 18:55:20 2010] [error] [client 143.166.226.43] Premature > > end of script headers: wsgihandler.py, > > referer:http://yuno.us/init/hits/hit?assignmentId=1A9FV5YBGVV54NALMIRILFKHPT1... > > The above is because the daemon process you are running web2py in > crashed. > > > [Mon Jul 19 18:55:50 2010] [error] [client 117.204.99.178] mod_wsgi > > (pid=7730): Exception occurred processing WSGI script '/home/toomim/ > > projects/utility/web2py/wsgihandler.py'. > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] mod_wsgi > > (pid=7730): Exception occurred processing WSGI script '/home/toomim/ > > projects/utility/web2py/wsgihandler.py'. > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] mod_wsgi > > (pid=7730): Exception occurred processing WSGI script '/home/toomim/ > > projects/utility/web2py/wsgihandler.py'. > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] IOError: > > failed to write data > > In the case of daemon mode being used, this is because the Apache > server child process crashed. > > > > > > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] mod_wsgi > > (pid=7730): Exception occurred processing WSGI script '/home/toomim/ > > projects/utility/web2py/wsgihandler.py'. > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] IOError: > > failed to write data > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] mod_wsgi > > (pid=7730): Exception occurred processing WSGI script '/home/toomim/ > > projects/utility/web2py/wsgihandler.py'. > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] IOError: > > failed to write data > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] mod_wsgi > > (pid=7730): Exception occurred processing WSGI script '/home/toomim/ > > projects/utility/web2py/wsgihandler.py'. > > [Mon Jul 19 18:55:50 2010] [error] [client 117.201.42.84] IOError: > > failed to write data > > > My web app gets about 7 requests per second. At first, things work > > fine. 
Then after a while it seems like every request gets handled by > > MULTIPLE threads, because my logging.debug() statements print multiple > > copies of each message and it seems my database gets multiple entries. > > And I get these errors in the apache logs (with LogLevel debug). > > > Any idea what to do? Where to look? I'm on ubuntu. > > Look at your systems resource usage, ie., memory, open files etc. The > above are symptomatic of your operating system running out of > resources and processes not coping too well with that. > > Graham
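Also, about the duplicate logging.debug() output I mentioned in my original message above: before concluding that each request really is being handled by multiple threads, I want to rule out duplicated logging handlers. As far as I understand, web2py re-runs model files on every request, so an unguarded addHandler() call there would attach one more handler per request and make every message print multiple times in a long-lived daemon process. Something like this (logger name and file path are just placeholders) also tags each line with the pid/thread that wrote it, so I can tell duplicate handling apart from duplicate handlers:

# Guard against attaching a new handler on every request, and tag each
# log line with the process/thread that emitted it. The logger name and
# log file path are placeholders.
import logging

logger = logging.getLogger('myapp')
if not logger.handlers:  # only attach the handler once per process
    handler = logging.FileHandler('/tmp/myapp.log')
    handler.setFormatter(logging.Formatter(
        '%(asctime)s pid=%(process)d %(threadName)s %(message)s'))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

logger.debug('this should now appear exactly once per request per process')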