On Thu Jan 29 07:13:32 EST 2009, urie...@gmail.com wrote:
> It includes, among other things, the performance of a drunken slug,

i don't know how slow a drunken slug is.  but before rushing
off to replace replica, it would be useful to see where the time
is going.  you may find that your proposed replacements suffer
from the same problems for the same reasons.  in fact, i suspect
that if things are slow, it is likely due to 9p latency from sources.
the same problem would apply to any tool using the high-latency
link.
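
as a first measurement, a rough sketch (assuming the standard
sources configuration in /dist/replica/network; see replica(1)):

	; time replica/pull -n /dist/replica/network

-n makes pull report the changes it would apply without applying
them, so most of what you're timing is reading the server's log
over the 9p import from sources.  compare that against a full
pull to see how much of the total is the link rather than
replica itself.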

i've had no problems with the performance of the replica tools.
when using replica to replicate fileservers, i've found its
performance to be very good, scaling fairly linearly with the
speed of the source and target.

> and as you well point out, the skills of a schizophrenic monkey for
> managing local changes.

well then, please show me how hg/git or whatever would save
me from the situation i outlined.  how would hg/git know that
i was still using some code that i had never modified locally
and that was then removed on sources?

if you have any specific problems, i'm sure they could be quickly
fixed.

> All this has been solved by git and hg; and git and hg would *never*
> wipe out your local files simply because the backing store for the
> repository you are pulling from happens to break.  The pull would
> simply fail and leave your local repo intact, and when the remote
> repo is brought back, all would work just fine.

i'm not convinced by this argument.  could you demonstrate how hg/git
would magically apply correct deletions and not apply erroneous
ones, and show that it's faster on the plan 9 codebase?

if you believe this is the way, then put your money where your
mouth is and do the work and show us how it's better.

otherwise, there's just no point in talking about it.

- erik
