Ahem, until "it makes sense," any modification (and any discussion about it)
doesn't really help anyone ^_^ .
The fact that you have 4 workers and a congestion problem gives me the hint
that your db is on the lower end of the specs needed for a normal server.
These kinds of issues start to show
Hi Niphlod,
you are right: I have an extra database select in order to get the list of
dead workers.
Usually I have four workers, for example. They are static and shouldn't
terminate often. In that case, I call the database only once to get the
list of dead workers, and I assume this li
sorry, but it doesn't really make sense.
You're executing the same query twice (once inside the len() call and again
for the actual .delete() call), which is the counter-argument to relieving a
pressured database environment.
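To make the point concrete, here is a minimal sketch of the count-then-delete pattern using sqlite3 and a hypothetical `workers` table as a stand-in for the scheduler's tables (this is not web2py's DAL): the anti-pattern evaluates the same WHERE clause twice, while a single DELETE already reports how many rows it removed.

```python
import sqlite3

# Hypothetical stand-in for the scheduler's worker table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workers (name TEXT, status TEXT)")
conn.executemany("INSERT INTO workers VALUES (?, ?)",
                 [("w1", "DEAD"), ("w2", "DEAD"), ("w3", "ACTIVE")])

# Anti-pattern: the same predicate is evaluated twice, doubling db pressure.
n_dead = conn.execute(
    "SELECT COUNT(*) FROM workers WHERE status = 'DEAD'").fetchone()[0]
conn.execute("DELETE FROM workers WHERE status = 'DEAD'")

# Single round-trip: the cursor's rowcount says how many rows were deleted.
conn.executemany("INSERT INTO workers VALUES (?, ?)",
                 [("w1", "DEAD"), ("w2", "DEAD")])
cur = conn.execute("DELETE FROM workers WHERE status = 'DEAD'")
removed = cur.rowcount
```

The second form trades the separate count query for `rowcount`, so the database sees one statement instead of two.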
On Monday, October 31, 2016 at 2:04:24 PM UTC+1, Erwn Ltmann wrote:
Hi,
thank you for your reply.
@Pierre: MariaDB (in my case) handles deadlocks automatically too. Good to
know, I don't have to worry about that.
@Niphlod: I tried to beef up my database host. No effect. Another
suggestion is to prevent the cases where such a situation arises. I did it by
another e
the only thing you can do is either beef up the database instance (fewer
deadlocks because queries execute faster) or lower the db pressure
(fewer workers, higher heartbeat).
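As a back-of-envelope sketch of why those two knobs matter (the once-per-heartbeat model and the numbers are simplifications, not web2py internals):

```python
# Rough model: each worker touches the scheduler tables about once per
# heartbeat, so pressure scales with workers / heartbeat. Real traffic also
# includes task pickup and status updates, which this ignores.
def heartbeat_queries_per_minute(workers: int, heartbeat_s: float) -> float:
    return workers * 60.0 / heartbeat_s

# 4 workers at a 3-second heartbeat (web2py's default, if I recall correctly)
before = heartbeat_queries_per_minute(4, 3)
# Raising the heartbeat to 10 seconds cuts the baseline pressure sharply.
after = heartbeat_queries_per_minute(4, 10)
```

Halving the workers or tripling the heartbeat each shrinks the window in which two transactions can collide on the same rows.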
--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2p
I have got deadlocks too, but PostgreSQL knows how to resolve this, so I
don't need to worry about it.
Take a look here:
https://www.postgresql.org/docs/9.1/static/explicit-locking.html
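Worth noting: PostgreSQL resolves a deadlock by aborting one of the transactions involved (SQLSTATE 40P01), and the aborted side is expected to simply retry. A generic sketch of that retry loop follows; `DeadlockDetected` here is a hypothetical stand-in for your driver's error class (psycopg2, for instance, exposes `errors.DeadlockDetected`).

```python
import time

class DeadlockDetected(Exception):
    """Hypothetical stand-in for the db driver's deadlock error."""

def run_with_retry(txn, attempts=3, backoff_s=0.05):
    # PostgreSQL aborts one transaction to break the cycle; the victim just
    # runs again. A growing backoff spreads the retries out a little.
    for attempt in range(attempts):
        try:
            return txn()
        except DeadlockDetected:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff_s * (attempt + 1))

# Usage: a transaction that is the deadlock victim on its first try.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockDetected()
    return "committed"

result = run_with_retry(flaky_txn)
```

So "not worrying" is mostly right, as long as the application treats a deadlock abort as retryable rather than fatal.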