On Fri, 29 Sep 2017 13:49:55 -0500, Matt Riedemann wrote:
For a while now, actually.

Someone was asking about when archive_deleted_rows would actually work, and the answer is that it should work as of at least Mitaka:

https://review.openstack.org/#/q/I77255c77780f0c2b99d59a9c20adecc85335bb18

And starting in Ocata there is the --until-complete option, which lets you run it continuously until it's done, rather than the weird manual batching from before:

https://review.openstack.org/#/c/378718/
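
For anyone on a release without --until-complete, the old manual batching basically amounts to a loop like the sketch below. This is not nova's code, just an illustration, and it assumes (verify against your release's nova-manage docs) that archive_deleted_rows exits 1 when rows were archived and 0 when there was nothing left to archive:

    # Sketch of the pre-Ocata "manual batching" that --until-complete now
    # does for you. Assumption (check your release's nova-manage docs):
    # archive_deleted_rows exits 1 when it archived rows and 0 when there
    # was nothing left to archive.
    import subprocess

    def archive_until_done(max_rows=1000):
        while True:
            rc = subprocess.call(
                ['nova-manage', 'db', 'archive_deleted_rows',
                 '--max_rows', str(max_rows)])
            if rc == 0:
                # Nothing archived on this pass, so we're done.
                break

    if __name__ == '__main__':
        archive_until_done()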

So this shouldn't be news, but it might be. FYI.

True that. However, I want to give people a heads-up about something I learned recently (today, actually). I think problems with archiving can arise if you restart your database after archiving and then attempt another archive later. The InnoDB engine in MySQL keeps the AUTO_INCREMENT counter only in memory, so after a restart it selects the maximum value currently in the column and adds 1 to use as the next value [1].
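
You can see the effect yourself by comparing a table's next AUTO_INCREMENT value against the highest primary key actually in it; after a restart the counter is re-seeded from the main table alone, which knows nothing about the keys sitting in the shadow table. A minimal sketch with pymysql, where the connection details, the 'nova' schema, and the instances table are just placeholders for your own setup:

    # Show the next AUTO_INCREMENT value InnoDB will hand out for a table
    # vs. the highest id currently stored in it. Connection details,
    # database name and table name are placeholders.
    import pymysql

    conn = pymysql.connect(host='localhost', user='root',
                           password='secret', database='nova')
    with conn.cursor() as cur:
        cur.execute(
            "SELECT AUTO_INCREMENT FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s",
            ('nova', 'instances'))
        next_id = cur.fetchone()[0]
        cur.execute("SELECT MAX(id) FROM instances")
        max_id = cur.fetchone()[0]
    conn.close()

    # After a restart, next_id will be max_id + 1 even if higher ids were
    # archived into shadow_instances before the restart.
    print("next AUTO_INCREMENT: %s, current MAX(id): %s" % (next_id, max_id))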

So if you had soft-deleted rows with primary keys 1 through 10 in the main table and ran archive_deleted_rows, those rows would be inserted into the shadow table and hard-deleted from the main table. If you then restarted the database, the primary key AUTO_INCREMENT counter would be re-seeded from whatever is left in the main table (back to 1 in this example), so the primary keys you had archived would be reused. If new rows with primary keys 1 through 10 were eventually soft-deleted and you ran archive_deleted_rows again, the archive would fail with something like: DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry '1' for key 'PRIMARY'"). The workaround is to delete or otherwise move the archived rows with the duplicate keys out of the shadow table.
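
If your database has been restarted since the last archive, it may be worth a pre-flight check for exactly those collisions before running archive_deleted_rows again: find soft-deleted rows in the main table whose primary keys already exist in the shadow table. A rough sketch, again with pymysql and nova's shadow_ table naming; the connection details are placeholders and you'd want to repeat this for each archived table, not just instances:

    # Find soft-deleted rows in the main table whose primary keys already
    # exist in the shadow table; these are the rows the next archive run
    # would trip over with DBDuplicateEntry. Connection details and the
    # single table checked here are placeholders.
    import pymysql

    conn = pymysql.connect(host='localhost', user='root',
                           password='secret', database='nova')
    with conn.cursor() as cur:
        cur.execute(
            "SELECT i.id FROM instances i "
            "JOIN shadow_instances s ON s.id = i.id "
            "WHERE i.deleted != 0")
        clashes = [row[0] for row in cur.fetchall()]
    conn.close()

    if clashes:
        # The workaround mentioned above: delete or move these rows out of
        # shadow_instances before archiving again.
        print("ids that would collide on the next archive: %s" % clashes)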

-melanie

[1] https://dev.mysql.com/doc/refman/5.7/en/innodb-auto-increment-handling.html#innodb-auto-increment-initialization

