Excerpts from Guilherme Russi's message of 2013-09-03 11:52:39 -0700:
> Query OK, 502150 rows affected (32 min 2.77 sec) and nothing has changed,
> lol.
There are also indexes in Havana that help a lot; you might consider
adding them manually:

    ALTER TABLE token ADD INDEX ix_token_valid (valid);
    ALTER TABLE token ADD INDEX ix_token_expires (expires);

Note that a 500,000-row delete is _brutal_ on your server. We use this
in TripleO:

https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/keystone/cleanup-keystone-tokens.sh

It leaves space between the deletes for other things to happen, and it
deletes in a more efficient way so it doesn't thrash around the table
deleting rows in index order. (A rough sketch of the batching idea is
at the end of this message.)

Also, if you don't need the contents of your token table for audit
purposes and you can afford the RAM, you should definitely consider
switching to the memcached backend for tokens.

If you do want to stay with the SQL backend, make sure your MySQL is
tuned for tons of tiny transactions and has enough memory to keep the
working set in RAM.
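For illustration only, the batching idea boils down to something like
this (a simplified sketch, not the TripleO script itself; the table and
column names are the same keystone token table as above, and keystone
stores 'expires' in UTC):

    #!/bin/bash
    # Delete expired tokens in small batches, sleeping between batches
    # so other transactions can make progress. Add -u/-p options to the
    # mysql call as needed for your setup.
    while true; do
        rows=$(mysql -N keystone -e \
            "DELETE FROM token WHERE expires < UTC_TIMESTAMP() LIMIT 500;
             SELECT ROW_COUNT();")
        [ "$rows" -eq 0 ] && break
        sleep 2
    done

Each DELETE is its own short transaction, so locks are held only
briefly and nothing stalls behind one giant statement.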
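If you go the memcached route, the switch is a small keystone.conf
change; on a Havana-era keystone it should look roughly like this
(double-check the driver path against your release's docs):

    [token]
    driver = keystone.token.backends.memcache.Token

    [memcache]
    servers = localhost:11211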
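And on the MySQL tuning side, the knobs that matter most for tons of
tiny transactions are the InnoDB buffer pool and log flushing; the
values below are illustrative, so size them for your hardware:

    [mysqld]
    # Large enough to keep the token working set and its indexes in RAM.
    innodb_buffer_pool_size = 2G
    # Bigger redo logs absorb bursts of small writes.
    innodb_log_file_size = 256M
    # Flush the log once per second instead of per commit; trades up to
    # ~1s of durability for much better small-transaction throughput.
    innodb_flush_log_at_trx_commit = 2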