Re: [openstack-dev] [keystone][middleware]: Use encrypted password in the service conf file

2017-10-12 Thread pnkk
…umable to
> keystonemiddleware as a library [0].
>
> [0] https://etherpad.openstack.org/p/oslo-ptg-queens
>
> On 10/11/2017 07:43 AM, pnkk wrote:
>> Hi,
>>
>> We have our API server (based on pyramid) integrated with keystone for
>> AuthN/AuthZ.

[openstack-dev] [keystone][middleware]: Use encrypted password in the service conf file

2017-10-11 Thread pnkk
Hi, We have our API server (based on pyramid) integrated with keystone for AuthN/AuthZ. So our service has a *.conf file with a [keystone_authtoken] section that defines everything needed for registering with keystone. The WSGI pipeline will first be filtered by the keystone auth token middleware and then get…
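For context, a minimal sketch of the kind of setup being described: wrapping a pyramid WSGI app with keystonemiddleware, with the usual [keystone_authtoken] options supplied as a dict. This is not the poster's actual code; every URL, credential, and option value below is a placeholder, and the plaintext password is exactly the value the thread is asking how to protect.

# Minimal sketch (not the poster's actual code): putting keystonemiddleware in
# front of a WSGI app so tokens are validated before requests reach the API.
# All values are placeholders for whatever the service's [keystone_authtoken]
# section contains -- including the plaintext service password that the thread
# is asking how to avoid storing in the clear.
from keystonemiddleware import auth_token

def make_app(api_app):
    conf = {
        'auth_uri': 'http://keystone:5000',      # public identity endpoint
        'auth_url': 'http://keystone:5000',      # endpoint used to validate tokens
        'auth_type': 'password',
        'username': 'my-service-user',
        'password': 'PLAINTEXT-SECRET',          # currently stored in the .conf
        'project_name': 'service',
        'user_domain_name': 'Default',
        'project_domain_name': 'Default',
    }
    # AuthProtocol is the "keystone auth token" filter in the WSGI pipeline.
    return auth_token.AuthProtocol(api_app, conf)

In a paste-deploy pipeline the same filter is usually wired in via a [filter:authtoken] section pointing at keystonemiddleware.auth_token:filter_factory, with the options read from the service's conf file instead of a dict.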

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-05 Thread pnkk
…lar capabilities via
> http://docs.openstack.org/developer/taskflow/workers.html#design but
> anyway what u've done is pretty neat as well.
>
> I am assuming this isn't an openstack project (due to usage of celery),
> any details on what's being worked on (am curio…
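For readers following the thread, the workers.html page referenced above describes taskflow's own worker-based engine. A rough sketch of it, adapted loosely from those docs, is below; the broker URL, exchange, and topic names are placeholders, and the exact keyword arguments should be double-checked against the taskflow documentation.

# Rough sketch of taskflow's worker-based engine (all names are placeholders).
# Workers advertise the tasks they can run over a message bus, and the engine
# dispatches task execution to them instead of running tasks locally.
from taskflow import engines
from taskflow.engines.worker_based import worker

def run_worker():
    # Worker process: executes tasks requested over AMQP.
    w = worker.Worker(
        url='amqp://guest:guest@localhost:5672//',   # placeholder broker URL
        exchange='my-exchange',
        topic='my-topic',
        tasks=['myproject.tasks:MyTask'],            # importable task classes
    )
    w.run()

def run_flow(flow):
    # Client process: the worker-based engine farms task execution out.
    eng = engines.load(
        flow,
        engine='worker-based',
        url='amqp://guest:guest@localhost:5672//',
        exchange='my-exchange',
        topics=['my-topic'],
    )
    eng.run()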

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-01 Thread pnkk
…u looked at those? It almost appears that u are using celery as a job
> distribution system (similar to the jobs.html link mentioned above)? Is
> that somewhat correct (I haven't seen anyone try this, wondering how u are
> using it and the choices that directed u to that, aka, am curi…

Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread pnkk
…the node is rebooted? Who will retry this transaction?

Thanks,
Kanthi

On Fri, May 27, 2016 at 5:39 PM, pnkk wrote:
> Hi,
>
> When taskflow engine is executing a job, the execution failed due to IO
> error (traceback pasted below).
>
> 2016-05-25 19:45…
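The usual taskflow answer to "who retries after a node reboot" is a jobboard plus conductors: a posted job stays on the board until some conductor claims and finishes it, and a claim held by a node that dies is released so another conductor can resume the work from persisted state. A hedged sketch follows; every connection string and name is a placeholder, and the exact fetch() signatures should be verified against the taskflow docs.

# Hedged sketch of taskflow jobboard + conductor resumption (all connection
# strings and names are placeholders). A job posted to the board survives the
# crash/reboot of the node that claimed it: the claim is released and another
# conductor re-claims the job and resumes the flow from persisted state.
from taskflow.conductors import backends as conductor_backends
from taskflow.jobs import backends as job_backends
from taskflow.persistence import backends as persistence_backends

persistence = persistence_backends.fetch({
    'connection': 'mysql+pymysql://user:pass@db-host/taskflow',  # placeholder
})

board = job_backends.fetch(
    'my-board',
    {'board': 'zookeeper', 'hosts': 'zk-host:2181'},             # placeholder
    persistence=persistence,
)
board.connect()

# Any node can run a conductor; whichever one claims a job executes it, and
# unfinished work is picked up elsewhere if that node disappears.
conductor = conductor_backends.fetch('blocking', 'my-conductor', board,
                                     persistence=persistence)
conductor.run()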

[openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread pnkk
Hi, While the taskflow engine was executing a job, the execution failed due to an IO error (traceback pasted below).
2016-05-25 19:45:21.717 7119 ERROR taskflow.engines.action_engine.engine 127.0.1.1 [-] Engine execution has failed, something bad must of happened (last 10 machine transitions were [('SCHED…

Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-03-23 Thread pnkk
Joshua, We are performing a few scaling tests for our solution and see errors like the one below:
Failed saving logbook 'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b'
InternalError: (pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded; try restarting transaction') [SQL: u'UPDATE logbooks…
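Not a taskflow-level fix, but one generic way to cope with transient 1205 lock-wait timeouts is to retry the failing save with a small backoff on the client side. The sketch below is an assumed workaround, not something proposed in this thread; the connection string is a placeholder.

# Generic client-side retry sketch (a workaround assumption, not taskflow's own
# remedy): re-attempt a logbook save a few times when MySQL reports a transient
# "Lock wait timeout exceeded" (error 1205), backing off between attempts.
import time

def save_logbook_with_retry(backend, book, attempts=3, delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            conn = backend.get_connection()
            conn.save_logbook(book)
            return
        except Exception as exc:  # broad on purpose; 1205 surfaces wrapped
            if 'Lock wait timeout exceeded' in str(exc) and attempt < attempts:
                time.sleep(delay * attempt)   # simple linear backoff
                continue
            raise

# Example backend (placeholder connection string):
#   from taskflow.persistence import backends as persistence_backends
#   backend = persistence_backends.fetch(
#       {'connection': 'mysql+pymysql://user:pass@db-host/taskflow'})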

Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-03-19 Thread pnkk
…a bug @
> bugs.launchpad.net/taskflow for that and we can try to add said lock
> (that should hopefully resolve what u are seeing, although if it doesn't
> then the bug lies somewhere else).
>
> Thanks much!
>
> -Josh
>
> On 03/19/2016 08:45 AM, pnkk wrote:
>> Hi Joshua,

Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-03-19 Thread pnkk
Hi Joshua, Thanks for all your inputs. We are using this feature successfully, but on rare occasions I see an issue related to concurrency. To give you a brief overview, we use eventlet and every job runs in a separate eventlet green thread. In the job execution part, we use taskflow functionality and persist all the de…
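A rough illustration of the setup described here, together with the kind of application-level lock that the earlier reply in this thread suggests trying around the shared persistence backend. Everything below is a hedged sketch; every name and connection string is made up.

# Hedged sketch: each job runs a taskflow engine in its own eventlet green
# thread, sharing one persistence backend, and our own logbook writes are
# serialized with a semaphore (the sort of lock discussed in the thread).
# All names and connection strings are illustrative.
import eventlet
eventlet.monkey_patch()

from eventlet import semaphore
from taskflow import engines
from taskflow.persistence import backends as persistence_backends

backend = persistence_backends.fetch({
    'connection': 'mysql+pymysql://user:pass@db-host/taskflow',  # placeholder
})
save_lock = semaphore.Semaphore()

def save_book(book):
    # Serialize our own writes to the shared backend.
    with save_lock:
        conn = backend.get_connection()
        conn.save_logbook(book)

def run_job(flow, book, flow_detail):
    # One green thread per job; the engine persists state transitions through
    # the shared backend as it runs.
    engine = engines.load(flow, backend=backend, book=book,
                          flow_detail=flow_detail, engine='serial')
    engine.run()
    save_book(book)   # application-level bookkeeping write, guarded by the lock

def submit(jobs):
    threads = [eventlet.spawn(run_job, *job) for job in jobs]
    for t in threads:
        t.wait()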