On Thu, Aug 29, 2013 at 09:17:57PM -0700, Junio C Hamano wrote:

> Jeff King <p...@peff.net> writes:
> 
> > When we read a sha1 file, we first look for a packed
> > version, then a loose version, and then re-check the pack
> > directory again before concluding that we cannot find it.
> > This lets us handle a process that is writing to the
> > repository simultaneously (e.g., receive-pack writing a new
> > pack followed by a ref update, or git-repack packing
> > existing loose objects into a new pack).
> >
> > However, we do not do the same trick with has_sha1_file; we
> > only check the packed objects once, followed by loose
> > objects. This means that we might incorrectly report that we
> > do not have an object, even though we could find it if we
> > simply re-checked the pack directory.
> 
> Hmm, would the same reasoning apply to sha1_object_info(), or does
> existing critical code happen not to have a problematic calling
> sequence like the one you noticed for repack?

I think the same reasoning would apply; however, we seem to already do
the pack-loose-pack lookup there:

int sha1_object_info_extended(const unsigned char *sha1, struct object_info *oi)
{
[...]
        if (!find_pack_entry(sha1, &e)) {
                /* Most likely it's a loose object. */
                if (!sha1_loose_object_info(sha1, oi)) {
                        oi->whence = OI_LOOSE;
                        return 0;
                }

                /* Not a loose object; someone else may have just packed it. */
                reprepare_packed_git();
                if (!find_pack_entry(sha1, &e))
                        return -1;
        }

-Peff