I have been reading the git-repack.sh script and I have
found a piece that concerns me. It looks like, after
repacking, there is a window during which packfiles could be
temporarily inaccessible, making the objects within them
temporarily inaccessible as well. If my evaluation is
correct, it would seem that git repacking is not "server"
safe?

In particular, I am talking about this loop:
# Ok we have prepared all new packfiles.
# First see if there are packs of the same name and if so
# if we can move them out of the way (this can happen if we
# repacked immediately after packing fully).
rollback=
failed=
for name in $names
do
	for sfx in pack idx
	do
		file=pack-$name.$sfx
		test -f "$PACKDIR/$file" || continue
		rm -f "$PACKDIR/old-$file" &&
		mv "$PACKDIR/$file" "$PACKDIR/old-$file" ||
		{
			failed=t
			break
		}
		rollback="$rollback $file"
	done
	test -z "$failed" || break
done
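To make the window concrete, here is a minimal standalone illustration (the directory and file names are made up for the demo, not taken from the script): once `mv` has moved the existing pack out of the way, a reader looking the pack up under its original name fails until the new pack has been moved into place.

```shell
# Hypothetical demo of the race window; pack-deadbeef.pack stands
# in for a real pack-$name.$sfx file.
PACKDIR=$(mktemp -d)
file=pack-deadbeef.pack
printf 'pack data' > "$PACKDIR/$file"

# The step from the loop: move the existing pack out of the way.
mv "$PACKDIR/$file" "$PACKDIR/old-$file"

# Window: until the new pack is renamed into place, the pack is
# simply gone under its original name.
test -f "$PACKDIR/$file" && echo present || echo missing   # prints: missing
```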
It would seem that one way to avoid this (at least on
systems supporting hardlinks) would be to instead hardlink
the original packfile to old-$file first, then move the new
packfile into place without ever removing the original from
its original name, and only then delete the old-$file link.
Does that make sense at all?
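For what it's worth, a sketch of what I mean (again with made-up names; this is not a patch to git-repack.sh, just the link/rename/unlink sequence under the assumption that rename(2) atomically replaces the target on the same filesystem):

```shell
# Hypothetical sketch of the hardlink-based swap.
PACKDIR=$(mktemp -d)
file=pack-deadbeef.pack
printf 'old contents' > "$PACKDIR/$file"
printf 'new contents' > "$PACKDIR/tmp-$file"

# 1. Hardlink the existing packfile to old-$file; the original
#    name stays valid the whole time.
rm -f "$PACKDIR/old-$file"
ln "$PACKDIR/$file" "$PACKDIR/old-$file"

# 2. Rename the new packfile over the original name; rename is
#    atomic on the same filesystem, so readers see either the old
#    or the new pack, never neither.
mv "$PACKDIR/tmp-$file" "$PACKDIR/$file"

# 3. Only now drop the old-$file link.
rm "$PACKDIR/old-$file"

cat "$PACKDIR/$file"   # prints: new contents
```

Rollback would also stay cheap: as long as old-$file still exists, linking it back over the original name restores the previous state.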
Thanks,
-Martin
--
Employee of Qualcomm Innovation Center, Inc. which is a
member of Code Aurora Forum
--