Hi,
Thank you very much for your detailed answer.
> Arguably, we could add a check of the on-disk state before we delete
> the file in metadata. But the implications would have to be investigated
> first (whether to fail completely in the recursive delete case,
> whether this would hurt performance a lot, etc.).
Don't worry. This is most probably an edge case and does not affect many
users.
As you already stated, we just have to make sure that the entries in the
working copy have read/write access.
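For example, with something along these lines before retrying the delete
(a hypothetical helper, simplified):

    import os
    import stat

    def ensure_writable(wc_root):
        # Give the owner read/write access to everything in the working
        # copy so that the queued on-disk deletes can succeed.
        for dirpath, dirnames, filenames in os.walk(wc_root):
            for name in dirnames + filenames:
                p = os.path.join(dirpath, name)
                os.chmod(p, os.stat(p).st_mode | stat.S_IRUSR | stat.S_IWUSR)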
After all, we will have to live with it, and we can...
Thanks a lot for your time.
Best regards,
Alexander Lüders
On 06.02.2014 14:47, Stefan Sperling wrote:
> On Thu, Feb 06, 2014 at 02:11:45PM +0100, Alexander Lüders wrote:
>> So after the failed svn delete, a subsequent cleanup would try to finish
>> the unfinished delete?
> Well, here's how the mechanics work in some more detail:
> The deletion in metadata doesn't fail. It adds a base-deleted row
> for the file to the NODES table in wc.db and also adds a work queue
> item which will be run later to delete the on-disk file, once the entire
> deletion (which is possibly recursive in the general case) has been
> committed to wc.db. If this fails part of the way through for some
> reason, all metadata changes are rolled back, and no change happens.
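If I understand correctly, the pattern is roughly the following (a minimal
Python sketch with made-up stand-ins for wc.db's NODES and WORK_QUEUE
tables, not the real schema):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE nodes (local_relpath TEXT, presence TEXT);
        CREATE TABLE work_queue (id INTEGER PRIMARY KEY,
                                 op TEXT, local_relpath TEXT);
    """)

    def schedule_delete(db, relpaths):
        # One transaction for the whole (possibly recursive) delete:
        # either every row below is committed, or none of them are.
        with db:
            for relpath in relpaths:
                # Record the deletion in metadata (the "base-deleted" row).
                db.execute("INSERT INTO nodes (local_relpath, presence) "
                           "VALUES (?, 'base-deleted')", (relpath,))
                # Defer the actual on-disk removal to the work queue.
                db.execute("INSERT INTO work_queue (op, local_relpath) "
                           "VALUES ('file-remove', ?)", (relpath,))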
> It's the deletion of the on-disk file that fails when the work
> queue is run, after all related metadata changes have been completed.
> 'svn cleanup' simply tries to run the work queue to get the working
> copy into a consistent state. It doesn't know how to undo completed
> metadata modifications.
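So cleanup is essentially a retry loop over the queue, something like this
(same toy tables as above; again just an illustration, not the actual
implementation):

    import os

    def run_work_queue(db):
        # What 'svn cleanup' effectively retries: run the pending items
        # in order, removing each one only after its on-disk step worked.
        items = db.execute("SELECT id, op, local_relpath FROM work_queue "
                           "ORDER BY id").fetchall()
        for item_id, op, relpath in items:
            if op == 'file-remove':
                try:
                    os.remove(relpath)
                except FileNotFoundError:
                    pass  # already gone; re-running the item is harmless
                # A PermissionError propagates: the item stays queued and
                # the working copy stays "locked" until a later cleanup
                # gets past this point.
            with db:
                db.execute("DELETE FROM work_queue WHERE id = ?", (item_id,))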
>> I just wonder why it's different for an svn commit of a versioned,
>> modified file. Applying the same access restrictions to a modified file
>> does not result in a "locked" working copy after a failed commit. I.e.
>> the commit fails and no cleanup is necessary to retry the unfinished
>> commit, which eventually would fail again.
> File modifications are not managed as part of the metadata.
> They are detected on the fly during the commit process by comparing
> file timestamps and sizes to values recorded in metadata during
> checkout/update, and even comparing file content if necessary.
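So, roughly (a sketch with hypothetical parameter names; recorded_size and
recorded_mtime stand for whatever was recorded in wc.db at checkout/update
time):

    import os

    def is_modified(path, pristine_path, recorded_size, recorded_mtime):
        st = os.stat(path)
        # A different size means the file was modified; no need to read it.
        if st.st_size != recorded_size:
            return True
        # Same size and the recorded timestamp: assume it is unmodified.
        if st.st_mtime == recorded_mtime:
            return False
        # Timestamp changed but the size matches: compare the content
        # against the pristine copy to be sure.
        with open(path, "rb") as f, open(pristine_path, "rb") as g:
            return f.read() != g.read()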
>> I don't understand why it is necessary to retry the failed delete
>> (keeping it in the journal) at all.
> Because the operation is done in metadata first and then on disk.
> Arguably, we could add a check of the on-disk state before we delete
> the file in metadata. But the implications would have to be investigated
> first (whether to fail completely in the recursive delete case,
> whether this would hurt performance a lot, etc.).
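For what it's worth, such a pre-check might look something like this (a
hypothetical helper; on POSIX, deleting a file needs write access to the
parent directory rather than to the file itself):

    import os

    def can_remove(path):
        # On POSIX, unlink() needs write (and search) access to the parent
        # directory, not to the file itself. For a recursive delete this
        # check would have to walk every entry up front, which is exactly
        # the performance question raised above.
        parent = os.path.dirname(os.path.abspath(path))
        return os.access(parent, os.W_OK | os.X_OK)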
--
Entimo AG
Stralauer Platz 33 - 34 | 10243 Berlin | Germany
Tel: +49.30.52 00 24 131 | Fax: +49.30.52 00 24 101
a...@entimo.com | http://www.entimo.com/
Executive Board: Jürgen Spieler (Chairman), Marianne Neumann
Supervisory Board Chairman: Erika Tannenbaum
Registered office: Berlin, Germany | Commercial Register: HRB
Berlin-Charlottenburg 85073