So I'm working to scale up my web2py-based app a bit.  Part of this was 
moving the Scheduler to a separate machine.  Since doing this I'm getting 
some errors and weird behavior and could use some insight.  There are 
multiple issues, but I need to explain the whole setup, so I opted for one 
post with multiple questions instead of several concise posts...sorry.

My configuration:

"production server" - nginx serving my website. db connects to a mysql 
instance bound to production server network address. scheduler connects to 
mysql instance running on the "dev/workq server"
"dev/workq server" - nginx serving a copy of the same web2py 
directory...plan to use as development server if needed as well.  db 
connects to mysql instance running on production server. scheduler connects 
to mysql instance bound to dev server network address.

from 0_db.py:
db = DAL('mysql://dev:xxx...@production.server.edu/myapp',
         pool_size=8, check_reserved=['mysql'],
         migrate=ENABLE_MIGRATE, fake_migrate_all=ENABLE_FAKE_MIGRATE)



from scheduler.py:
scheduler = Scheduler(DAL('mysql://workq:x...@dev.workq.server.edu/myapp',
                          pool_size=8, check_reserved=['mysql'],
                          migrate=ENABLE_MIGRATE, fake_migrate_all=ENABLE_FAKE_MIGRATE),
                      heartbeat=2)




My steps, starting from empty mysql databases in both mysql instances and 
empty databases/ directories on both servers:
- go to site on the production server - migrate = True, fake_migrate = 
False -> OK
- go to site on the dev server - migrate = True, fake_migrate = False -> 
Error - <class 'gluon.contrib.pymysql.err.InternalError'> (1050, u"Table 
'auth_user' already exists")
- go to site on the dev server - migrate = False, fake_migrate = True -> OK
- start scheduler task -> OK

*Am I starting this all up improperly?  I'm a little confused since I've 
got two web2py instances, each talking to two different db instances (one 
for the web app, one for the scheduler)...but I think having to do a fake 
migrate on the second server makes sense.*
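
In case it helps, here's roughly how I'm thinking of making those flags 
explicit per server, so only production ever runs a real migration (just a 
sketch -- the hostname check is only illustrative, and I know I could also 
pass migrate_enabled=False to the DAL on the dev box instead):

# sketch for 0_db.py: per-server migration flags (hostname test is illustrative only)
import socket

IS_PRODUCTION = socket.gethostname() == 'production.server.edu'
ENABLE_MIGRATE = IS_PRODUCTION            # only production creates/alters tables
ENABLE_FAKE_MIGRATE = not IS_PRODUCTION   # dev just rebuilds its local .table files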
So now I *think* my website is up and running properly.  I then run a 
function that schedules a job.  This seems to run (judging by the 
ComfortScheduler monitor), but it's supposed to schedule additional jobs 
itself, and that never happens.


inside the single task that is scheduled and runs:

for items in my_thing:

    # ...do stuff

    # Submit tasks to process more stuff
    print "submitting job \n"
    scheduler.queue_task(my_task2, timeout=60000,
                         pvars=dict(arg1=arg1, arg2=arg2))

    db(db.collections.name == collection).update(last_id=last_id)
    db.commit()



*If I try to view the details of the task in the ComfortScheduler monitor 
(from my production server) by clicking on the task's UUID link, I get an 
error:*

<type 'exceptions.AttributeError'> 'DAL' object has no attribute 
'scheduler_task'


Traceback (most recent call last):
  File "/home/www-data/web2py/gluon/restricted.py", line 217, in restricted
    exec ccode in environment
  File "/home/www-data/web2py/applications/parity/views/plugin_cs_monitor/task_details.html", line 111, in <module>
  File "/home/www-data/web2py/gluon/dal.py", line 8041, in __getattr__
    return ogetattr(self, key)
AttributeError: 'DAL' object has no attribute 'scheduler_task'




I think the problem here is that the ComfortScheduler code looks at the 
global db object, maybe?  Since I've passed a different database into 
Scheduler(), is that breaking things?  This might be a question only 
niphlod can answer since it's his app....
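
That would at least be consistent with what I see: as far as I understand 
it, the scheduler_task/scheduler_run tables only get defined on the DAL 
that is passed to Scheduler(), so my global db never defines them.  A quick 
check from the web2py shell would show it (just a sketch of the kind of 
check I mean):

# sketch: from the web2py root, run:  python web2py.py -S parity -M
# on my setup only the DAL handed to Scheduler() should have the scheduler tables
print 'scheduler_task' in db.tables   # expect False, which would match the AttributeError above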


So, ignoring that for now: if I go and look in the table I can see the 
run_output "submitting job submitting job submitting job submitting job", 
indicating it got to the point in the code where it should have submitted 
more tasks.  *Any idea why new tasks would not be getting scheduled?*  I 
think it might be because I'm calling db.commit() in my task...but that's 
on my main web2py db, not the db the scheduler is using?  Can I have two 
global db objects?  So should the scheduler be set up more like this:

sched_db = DAL('mysql://workq:x...@dev.workq.server.edu/myapp',
               pool_size=8, check_reserved=['mysql'],
               migrate=ENABLE_MIGRATE, fake_migrate_all=ENABLE_FAKE_MIGRATE)
scheduler = Scheduler(sched_db, heartbeat=2)


and then I'd have to call both 

db.commit()
sched_db.commit()
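
So the loop inside my task would end up looking something like this (just a 
sketch of the idea above, assuming sched_db is the DAL the Scheduler was 
built on):

# same task loop as before, but committing both DALs explicitly
for items in my_thing:

    # ...do stuff

    print "submitting job \n"
    scheduler.queue_task(my_task2, timeout=60000,
                         pvars=dict(arg1=arg1, arg2=arg2))

    db(db.collections.name == collection).update(last_id=last_id)

    db.commit()        # main web app db
    sched_db.commit()  # scheduler db, so the queued scheduler_task row is actually committed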

Has anyone run a server config like this before?

Long message...lots of questions...sorry and thanks.

Dean



