You just had to delete the corresponding *.table files. Scheduler tables are created as soon as the scheduler is instantiated, unless migrations are turned off.
On Friday, December 18, 2015 at 4:17:48 PM UTC+1, Gael Princivalle wrote:
>
> No I don't.
>
> But now I have:
> Dropped all tables in the databases folder
> Deleted all scheduler tables in the Postgres database.
> Deleted the scheduler
> Created the scheduler.
> I still have the same db error, for example for workers:
>
> SELECT count(*) FROM "scheduler_worker" WHERE ("scheduler_wo...
>
> *.table files are created, but not for the scheduler tables;
> the scheduler tables in the Postgres db are not created.
>
> Do you know why?
>
> On Friday, December 18, 2015 at 15:48:06 UTC+1, Niphlod wrote:
>>
>> when you dropped tables, did you remember to delete the corresponding
>> *.table files from the databases/ folder ?
>>
>> On Friday, December 18, 2015 at 12:47:38 PM UTC+1, Gael Princivalle wrote:
>>>
>>> > scheduler tables can be dropped manually
>>> Done
>>>
>>> > deleting scheduler.py also.
>>> Done
>>>
>>> > But I still don't think that it's the way to fix the issue you're facing.
>>> Well, I'm not able to understand what this problem is, since when I create
>>> a scheduler in a new app, tasks run.
>>> For the moment creating a new scheduler is my best idea; if you have a
>>> better one, thanks.
>>>
>>> Anyway, now I get this error when I try to open a scheduler table,
>>> for example scheduler_task.
>>> My db string now has migrate=True, fake_migrate_all=True.
>>> My only other idea is to rebuild the whole app starting from a new one, but
>>> if I can resolve it in a shorter way I'd prefer that.
>>>
>>> Traceback (most recent call last):
>>>   File "/home/tasko/webapps/w2p_2_12_3/web2py/applications/hydrover_oleodinamica/controllers/appadmin.py", line 238, in select
>>>     nrows = db(query).count()
>>>   File "/home/tasko/webapps/w2p_2_12_3/web2py/gluon/packages/dal/pydal/objects.py", line 1992, in count
>>>     return db._adapter.count(self.query,distinct)
>>>   File "/home/tasko/webapps/w2p_2_12_3/web2py/gluon/packages/dal/pydal/adapters/base.py", line 1311, in count
>>>     self.execute(self._count(query, distinct))
>>>   File "/home/tasko/webapps/w2p_2_12_3/web2py/gluon/packages/dal/pydal/adapters/postgres.py", line 360, in execute
>>>     return BaseAdapter.execute(self, *a, **b)
>>>   File "/home/tasko/webapps/w2p_2_12_3/web2py/gluon/packages/dal/pydal/adapters/base.py", line 1378, in execute
>>>     return self.log_execute(*a, **b)
>>>   File "/home/tasko/webapps/w2p_2_12_3/web2py/gluon/packages/dal/pydal/adapters/base.py", line 1372, in log_execute
>>>     ret = self.cursor.execute(command, *a[1:], **b)
>>> ProgrammingError: relation "scheduler_task" does not exist
>>> LINE 1: SELECT count(*) FROM "scheduler_task" WHERE ("scheduler_task...
>>>
>>> On Friday, December 18, 2015 at 11:58:15 UTC+1, Niphlod wrote:
>>>>
>>>> scheduler tables can be dropped manually
>>>> deleting scheduler.py also.
>>>> But I still don't think that it's the way to fix the issue you're facing.
>>>>
>>>> On Friday, December 18, 2015 at 11:22:57 AM UTC+1, Gael Princivalle wrote:
>>>>>
>>>>> > delete what ? the instantiation ?
>>>>> Delete the scheduler.py file from models and all scheduler tables.
>>>>> How can I do that?
>>>>>
>>>>> On Friday, December 18, 2015 at 10:48:23 UTC+1, Niphlod wrote:
>>>>>>
>>>>>> On Friday, December 18, 2015 at 10:45:22 AM UTC+1, Gael Princivalle wrote:
>>>>>>>
>>>>>>> I've set up a scheduler in a new application and tasks are running.
>>>>>>>
>>>>>>> So probably there's a problem when an app comes from a previous web2py version.
>>>>>>
>>>>>> as long as you let migration happen, no issues whatsoever.
>>>>>>
>>>>>>> Anyway I think it can be resolved by deleting the scheduler and creating it again, but I've tried to:
>>>>>>> Kill the worker - OK
>>>>>>> Delete the scheduler - KO
>>>>>>>
>>>>>>> When I delete the scheduler.py file from admin, web2py creates it again.
>>>>>>>
>>>>>>> How can I delete the scheduler?
>>>>>>
>>>>>> delete what ? the instantiation ?
>>>>>>
>>>>>>> Thanks, regards.
>>>>>>>
>>>>>>> On Thursday, December 17, 2015 at 21:40:36 UTC+1, Gael Princivalle wrote:
>>>>>>>>
>>>>>>>> Thanks Niphlod.
>>>>>>>>
>>>>>>>> > So, here is the breakdown of the possible issues:
>>>>>>>> > - are tasks QUEUED or ASSIGNED ?
>>>>>>>> QUEUED. But if I set the next run time to now + 2 minutes I can check that the task doesn't run.
>>>>>>>> The status options don't include ASSIGNED.
>>>>>>>>
>>>>>>>> > If they are QUEUED, either they can't be executed yet (i.e. start_time in the future) or tasks are queued with a group_name that can't be processed by a worker.
>>>>>>>> Main group for all.
>>>>>>>>
>>>>>>>> > *Check #1: there should be a scheduler_task row with group_name "in" scheduler_worker group_names*
>>>>>>>> > - if tasks SHOULD be ASSIGNED but remain in QUEUED status, no worker is running or workers can't agree on who is "the ticker".
>>>>>>>> Well, the worker is running and main is the group_name.
>>>>>>>>
>>>>>>>> > *Check #2 (there should be a scheduler_worker with is_ticker = True)*
>>>>>>>> Yes, it does.
>>>>>>>>
>>>>>>>> The checks are complete. I think I will delete the scheduler and create it again. Thanks for your help.
>>>>>>>>
>>>>>>>> On Thursday, December 17, 2015 at 21:21:56 UTC+1, Niphlod wrote:
>>>>>>>>>
>>>>>>>>> I thought more people liked to read the code: I find myself explaining scheduler internals more often than I'd like to :P
>>>>>>>>>
>>>>>>>>> soooooooooo, worker names.... worker names are used to identify a worker process (it's enforced as unique in the model)...
>>>>>>>>> I'll reply to some ideal "FAQ" questions....
>>>>>>>>> - Why are worker names important? Because tasks ASSIGNED to a worker_name (assigned_worker_name in scheduler_task) get processed by that worker, and that worker only.
>>>>>>>>> - Who chooses worker names? The worker itself. It does so by concatenating the hostname and the PID, which results in a good (and unique) way to identify a process.
>>>>>>>>> - Who decides that task "foo" gets processed by worker "bar"? A worker. That's when tasks go from QUEUED to ASSIGNED status... The worker that does this is "the ticker". The ticker is "elected" with a dumb (and slow) - but reliable - algorithm among workers: it's the only one that can "assign" tasks (either to itself or to other workers). The only thing that blocks a ticker from assigning tasks to a worker is the group_name.
>>>>>>>>>
>>>>>>>>> So, here is the breakdown of the possible issues:
>>>>>>>>> - are tasks QUEUED or ASSIGNED? If they are QUEUED, either they can't be executed yet (i.e. start_time in the future) or tasks are queued with a group_name that can't be processed by a worker.
>>>>>>>>> *Check #1: there should be a scheduler_task row with group_name "in" scheduler_worker group_names*
>>>>>>>>> - if tasks SHOULD be ASSIGNED but remain in QUEUED status, no worker is running or workers can't agree on who is "the ticker".
>>>>>>>>> *Check #2 (there should be a scheduler_worker with is_ticker = True)*
>>>>>>>>> That should be it.
>>>>>>>>>
>>>>>>>>> Note: if tasks are ASSIGNED to a worker that isn't there anymore (i.e. it is dead or, in your case, you changed hosting facility) it's not an issue. Every worker periodically checks whether ALL the other workers are alive (and kicking); if a worker isn't kicking, it gets removed from the scheduler_worker table AND all tasks ASSIGNED to it get redistributed among the live workers. This is in addition to the ticker redistributing tasks every once in a while when they're ready to be executed but not executed yet (it can, and does, happen that a worker is busy processing a long-running task while there are other tasks ready to be processed by other workers; every worker gets a fair chance of doing something useful instead of sleeping).
>>>>>>>>>
>>>>>>>>> On Thursday, December 17, 2015 at 9:02:11 PM UTC+1, Gael Princivalle wrote:
>>>>>>>>>>
>>>>>>>>>> Hello to all.
>>>>>>>>>>
>>>>>>>>>> I've migrated my webfaction hosting to another webfaction hosting that runs CentOS 7 (before, it was a previous version of CentOS).
>>>>>>>>>> I was running web2py 2.9.12; now I have web2py 2.12.3.
>>>>>>>>>>
>>>>>>>>>> I've got a strange problem with the scheduler.
>>>>>>>>>>
>>>>>>>>>> Workers are in the db, they run, they are assigned to tasks, but the tasks don't run.
>>>>>>>>>> When I created new workers, the tasks didn't automatically take the worker name. I set all the worker names myself.
>>>>>>>>>>
>>>>>>>>>> Perhaps the problem is due to the worker names.
>>>>>>>>>> In the previous hosting they looked like this:
>>>>>>>>>> s18060957#14459
>>>>>>>>>>
>>>>>>>>>> And now they carry the webserver name:
>>>>>>>>>> web490.webfaction.com#12949
>>>>>>>>>>
>>>>>>>>>> web2py seems to encode the webserver name, but in this new configuration it doesn't.
>>>>>>>>>> Is that why the scheduler doesn't run tasks?
>>>>>>>>>> How can I resolve that?
>>>>>>>>>>
>>>>>>>>>> Thanks, regards.

--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
---
You received this message because you are subscribed to the Google Groups "web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
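The two checks Niphlod describes in the thread, plus the hostname#PID naming scheme, can be condensed into a small self-contained sketch. The dict rows and helper names below are illustrative stand-ins for the real scheduler_task / scheduler_worker records, not web2py's actual API:

```python
import os
import socket

def default_worker_name():
    """Mimic the naming scheme described in the thread: hostname plus PID.
    (Illustrative only; the real scheduler builds its own name.)"""
    return '%s#%s' % (socket.gethostname(), os.getpid())

def group_is_served(task, workers):
    """Check #1: the task's group_name must appear in some worker's
    group_names, or no ticker will ever assign the task."""
    return any(task['group_name'] in (w['group_names'] or []) for w in workers)

def has_single_ticker(workers):
    """Check #2: exactly one live worker should have is_ticker=True;
    without a ticker, QUEUED tasks never become ASSIGNED."""
    return sum(1 for w in workers if w['is_ticker']) == 1

# Rows shaped loosely like scheduler_worker / scheduler_task records:
workers = [{'worker_name': 'web490.webfaction.com#12949',
            'group_names': ['main'], 'is_ticker': True}]
task = {'function_name': 'my_task', 'group_name': 'main'}
```

Note that both s18060957#14459 and web490.webfaction.com#12949 fit the hostname#PID pattern; the difference is only what the new host reports as its hostname, which by itself doesn't stop tasks from running as long as the scheduler_worker rows belong to workers that are actually alive.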