> able to use
> keystonemiddleware as a library [0].
>
> [0] https://etherpad.openstack.org/p/oslo-ptg-queens
>
>
> On 10/11/2017 07:43 AM, pnkk wrote:
Hi,
We have our API server (based on pyramid) integrated with keystone for
AuthN/AuthZ.
So our service has a *.conf file with a [keystone_authtoken] section that
defines everything needed for registering with keystone. The WSGI pipeline
will first run the keystone auth token filter and then route the request to
our application.
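The filter-then-app pipeline described above can be sketched with a minimal pure-WSGI stand-in. Note this is an illustrative sketch only: `DummyAuthFilter`, `api_app`, and the hard-coded token are hypothetical names, not the real keystonemiddleware auth_token filter (which validates X-Auth-Token against Keystone itself).

```python
# Sketch of a WSGI pipeline where an auth filter runs before the app.
# DummyAuthFilter is a hypothetical stand-in for keystonemiddleware's
# auth_token filter; the token check here is deliberately trivial.

class DummyAuthFilter:
    """Reject requests that lack a valid (hypothetical) token."""

    def __init__(self, app, valid_tokens):
        self.app = app
        self.valid_tokens = valid_tokens

    def __call__(self, environ, start_response):
        token = environ.get('HTTP_X_AUTH_TOKEN')
        if token not in self.valid_tokens:
            start_response('401 Unauthorized',
                           [('Content-Type', 'text/plain')])
            return [b'Authentication required']
        # Token accepted: hand the request off to the wrapped application.
        return self.app(environ, start_response)


def api_app(environ, start_response):
    """The actual API application, reached only after the filter passes."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']


pipeline = DummyAuthFilter(api_app, valid_tokens={'secret-token'})
```

A request with the token reaches `api_app`; one without it is stopped at the filter, which is exactly the ordering the pipeline above relies on.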
> Taskflow has similar capabilities via
> http://docs.openstack.org/developer/taskflow/workers.html#design but
> anyway what u've done is pretty neat as well.
>
> I am assuming this isn't an openstack project (due to usage of celery),
> any details on what's being worked on (am curious)?
> Have u looked at those? It almost appears that u are using celery as a job
> distribution system (similar to the jobs.html link mentioned above)? Is
> that somewhat correct (I haven't seen anyone try this, wondering how u are
> using it and the choices that directed u to that, aka, am curious)?
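The "job distribution system" pattern being asked about can be sketched with the stdlib alone: producers enqueue job payloads and a pool of workers pulls and executes them. This is a simplified stand-in, not celery and not taskflow's worker-based engine; the doubling "task" is a placeholder for real work.

```python
import queue
import threading

# Minimal sketch of job distribution: jobs go onto a shared queue,
# worker threads pull and execute them. A None payload is a shutdown
# sentinel. Not celery, not taskflow workers -- just the core pattern.

jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        job = jobs.get()
        if job is None:               # sentinel: shut this worker down
            jobs.task_done()
            break
        with results_lock:
            results.append(job * 2)   # stand-in for real task logic
        jobs.task_done()

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

for n in range(5):
    jobs.put(n)
for _ in workers:                     # one sentinel per worker
    jobs.put(None)

jobs.join()
for w in workers:
    w.join()
```

Celery adds brokers, retries, and result backends on top of this basic shape, which is why it gets reached for even outside OpenStack projects.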
What happens to the running job if the node is rebooted? Who will retry this
transaction?
Thanks,
Kanthi
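The reboot question above is the one taskflow's jobboard answers: job state lives outside the worker process, so a restarted worker can claim and finish abandoned work. Below is a hypothetical file-backed sketch of just that claim/resume idea (real jobboards use ZooKeeper or Redis, and `post_job`/`claim_and_run` are invented names, not taskflow's API).

```python
import json

# Hypothetical sketch of jobboard-style resumption: the job record is
# persisted outside the process, so after a reboot a fresh worker can
# read it, claim it, and complete it. Real taskflow jobboards use
# ZooKeeper/Redis; a JSON file stands in here.

def post_job(path, job_id, payload):
    """Persist a job record in state 'unclaimed'."""
    with open(path, 'w') as f:
        json.dump({'id': job_id, 'payload': payload,
                   'state': 'unclaimed'}, f)

def claim_and_run(path, fn):
    """A (possibly freshly restarted) worker claims and completes the job.

    Returns the job result, or None if the job was already handled.
    """
    with open(path) as f:
        job = json.load(f)
    if job['state'] != 'unclaimed':
        return None
    result = fn(job['payload'])
    job['state'] = 'complete'       # mark done so no one retries it
    with open(path, 'w') as f:
        json.dump(job, f)
    return result
```

Because nothing about the job lives only in memory, "who retries" becomes "whichever worker next sees an unclaimed record", regardless of reboots.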
On Fri, May 27, 2016 at 5:39 PM, pnkk wrote:
Hi,
While the taskflow engine was executing a job, the execution failed due to an
IO error (traceback pasted below).
2016-05-25 19:45:21.717 7119 ERROR taskflow.engines.action_engine.engine
127.0.1.1 [-] Engine execution has failed, something bad must of happened
(last 10 machine transitions were [('SCHED
Joshua,
We are performing a few scaling tests for our solution and see errors such as:
Failed saving logbook 'cc6f5cbd-c2f7-4432-9ca6-fff185cf853b'\n
InternalError: (pymysql.err.InternalError) (1205, u'Lock wait timeout
exceeded; try restarting transaction') [SQL: u'UPDATE logbooks
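MySQL's 1205 error itself suggests the usual remedy: restart the whole transaction after a short backoff. A minimal sketch of that retry loop follows, using sqlite3 (whose "database is locked" `OperationalError` is the closest stdlib analogue of pymysql's lock-wait timeout); the attempt count and backoff values are illustrative, not tuned.

```python
import sqlite3
import time

# Sketch of "try restarting transaction": re-run the whole transaction
# a few times with increasing backoff when the DB reports a lock
# conflict. sqlite3's OperationalError stands in for pymysql 1205.

def run_with_retry(conn, statements, attempts=3, backoff=0.05):
    """Execute `statements` as one transaction, retrying on lock errors.

    `statements` is a list of (sql, params) pairs.
    """
    for attempt in range(attempts):
        try:
            with conn:                       # one transaction per attempt
                for sql, params in statements:
                    conn.execute(sql, params)
            return True
        except sqlite3.OperationalError:
            if attempt == attempts - 1:
                raise                        # give up after last attempt
            time.sleep(backoff * (attempt + 1))
    return False
```

The important detail is retrying the entire transaction, not just the failed statement, since the server rolled the whole thing back when the timeout fired.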
> Please file a bug @
> bugs.launchpad.net/taskflow for that and we can try to add said lock
> (that should hopefully resolve what u are seeing, although if it doesn't
> then the bug lies somewhere else).
>
> Thanks much!
>
> -Josh
>
>
> On 03/19/2016 08:45 AM, pnkk wrote:
Hi Joshua,
Thanks for all your inputs.
We are using this feature successfully, but occasionally I run into an issue
related to concurrency.
To give you a brief, we use eventlets and every job runs in a separate
eventlet thread.
In the job execution part, we use taskflow functionality and persist all
the de
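When every job runs in its own green thread and all of them persist into shared state, saves can interleave unless they are serialized. A minimal sketch of guarding the persistence step with a lock follows; ordinary threads model the eventlet green threads here, and `storage`/`save_detail` are hypothetical names, not taskflow's persistence API.

```python
import threading

# Sketch of serializing concurrent persistence calls with a lock.
# Ordinary threads stand in for eventlet green threads; `storage` is a
# hypothetical shared structure like a logbook's backing store.

storage = {}
storage_lock = threading.Lock()

def save_detail(job_id, detail):
    """Persist one job's detail; the lock keeps concurrent saves
    from interleaving reads and writes on the shared structure."""
    with storage_lock:
        entries = storage.setdefault(job_id, [])
        entries.append(detail)

threads = [
    threading.Thread(target=save_detail, args=('job-1', i))
    for i in range(50)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With eventlet the same shape applies (eventlet provides its own semaphore type); the point is that the read-modify-write on shared persistence state must happen under one lock.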