> But if you download 1000 files of the 1010 you need, and then your network
> goes down, you will need to download those 1000 again when it comes back,
> because you can't save them unless you have the full history.
So you could make the temporary object repository persistent between
pulls, to avoid re-fetching those objects across the wire. Something like:

get_commit(sha1) {
    if (sha1 in real_repo) -> done
    if (!(sha1 in tmp_repo))
        load sha1 to tmp_repo
    get_tree(sha1->tree)
    for each parent
        get_commit(sha1->parent)
    move sha1 from tmp_repo to real_repo
}

get_tree(sha1) {
    if (sha1 in real_repo) -> done
    if (!(sha1 in tmp_repo))
        load sha1 to tmp_repo
    for_each (sha1->entry) {
        case blob:
            if (!(sha1 in real_repo))
                load to real_repo
        case tree:
            get_tree(sha1->entry)
    }
    move sha1 from tmp_repo to real_repo
}

The "load sha1 to xxx_repo" step needs to be smarter than my dumb
wget-based script: it must verify the sha1 of the object being loaded
before installing it (even into the tmp_repo).

-Tony
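For concreteness, here is a rough Python sketch of the same loops. It is
only a sketch under some simplifying assumptions: fetch_object(),
parse_commit() and parse_tree_entries() are hypothetical stand-ins for
the wget step and for loose-object parsing, and objects are assumed to
be stored flat under one directory per repo rather than in git's real
fan-out layout.

import hashlib
import os
import shutil
import zlib

REAL_REPO = ".git/objects"      # assumption: flat layout, one file per sha1
TMP_REPO = ".git/tmp-objects"   # persists between pulls

def fetch_object(sha1):
    # Hypothetical: fetch the compressed loose-object file over the
    # wire (the job the dumb wget script does today).
    raise NotImplementedError

def parse_commit(sha1):
    # Hypothetical: return (tree_sha1, [parent_sha1, ...]).
    raise NotImplementedError

def parse_tree_entries(sha1):
    # Hypothetical: yield ("blob" or "tree", entry_sha1) pairs.
    raise NotImplementedError

def obj_path(repo, sha1):
    return os.path.join(repo, sha1)

def in_repo(repo, sha1):
    return os.path.exists(obj_path(repo, sha1))

def verify_sha1(sha1, compressed):
    # A loose object's sha1 covers the *uncompressed* contents
    # (type/size header plus body), so decompress before hashing.
    return hashlib.sha1(zlib.decompress(compressed)).hexdigest() == sha1

def load_to(repo, sha1):
    # Refuse to install anything whose hash doesn't match, even into
    # the tmp_repo.
    data = fetch_object(sha1)
    if not verify_sha1(sha1, data):
        raise IOError("bad object %s" % sha1)
    os.makedirs(repo, exist_ok=True)
    with open(obj_path(repo, sha1), "wb") as f:
        f.write(data)

def promote(sha1):
    # Move from tmp_repo to real_repo only once everything the object
    # references is already safely stored.
    shutil.move(obj_path(TMP_REPO, sha1), obj_path(REAL_REPO, sha1))

def get_tree(sha1):
    if in_repo(REAL_REPO, sha1):
        return
    if not in_repo(TMP_REPO, sha1):
        load_to(TMP_REPO, sha1)
    for kind, entry_sha1 in parse_tree_entries(sha1):
        if kind == "blob":
            if not in_repo(REAL_REPO, entry_sha1):
                load_to(REAL_REPO, entry_sha1)
        else:
            get_tree(entry_sha1)
    promote(sha1)

def get_commit(sha1):
    if in_repo(REAL_REPO, sha1):
        return
    if not in_repo(TMP_REPO, sha1):
        load_to(TMP_REPO, sha1)
    tree, parents = parse_commit(sha1)
    get_tree(tree)
    for parent in parents:
        get_commit(parent)
    promote(sha1)

The point of promote() is the crash-safety property above: if the
network dies mid-pull, everything already verified sits in TMP_REPO and
the next pull picks up where this one left off instead of re-downloading
those 1000 files.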