From: "Jon Smirl" <[EMAIL PROTECTED]>
Date: Fri, 7 Dec 2007 02:10:49 -0500

> On 12/7/07, Jeff King <[EMAIL PROTECTED]> wrote:
> > On Thu, Dec 06, 2007 at 07:31:21PM -0800, David Miller wrote:
> >
> > # and test multithreaded large depth/window repacking
> > cd test
> > git config pack.threads 4
> 
> 64 threads with 64 CPUs; if they are multicore you want even more.
> You need to adjust chunk_size as mentioned in the other mail.

It's an 8-core system with 64 CPU threads.
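
For the record, matching the box just means bumping that config
before the repack, i.e. something like:

git config pack.threads 64

plus the chunk_size one-liner mentioned below.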

> > time git repack -a -d -f --window=250 --depth=250

Didn't work very well; even with the one-liner patch for
chunk_size it died.  I think I need to build 64-bit
binaries.

[EMAIL PROTECTED]:~/src/GCC/git/test$ time git repack -a -d -f --window=250 --depth=250
Counting objects: 1190671, done.
fatal: Out of memory? mmap failed: Cannot allocate memory

real    58m36.447s
user    289m8.270s
sys     4m40.680s
[EMAIL PROTECTED]:~/src/GCC/git/test$ 

While it did run, the load was anywhere between 5 and 9, although it
did create 64 threads, and the size of the process was about 3.2GB.
This may be in part why it wasn't able to use all 64 threads
effectively.  Like I said, it seemed to have 9 active at best at any
one time; most of the time only 4 or 5 were busy doing anything.
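
One thing I may try before sorting out 64-bit binaries, assuming
pack.windowMemory really does cap the delta window memory each
thread hangs onto, is something like:

git config pack.windowMemory 256m
time git repack -a -d -f --window=250 --depth=250

With 64 threads even a modest per-thread cap adds up, and capping
the window memory probably costs some pack quality, so this is just
a guess at a workaround.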

Also, I could end up being performance limited by SHA; it's not very
well tuned on Sparc.  It's been on my TODO list to code up the crypto
unit support for Niagara-2 in the kernel, then work with Herbert Xu on
the userland interfaces to take advantage of that in things like
libssl.  Even a better C/asm version would probably improve GIT
performance a bit.
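
A quick way to gauge how the current SHA-1 code does on this chip,
assuming git here is linked against OpenSSL's SHA-1 rather than the
bundled Mozilla implementation, would be something like:

openssl speed sha1

just to get a raw MB/s number to compare against other boxes.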

Is SHA a significant portion of the compute during these repacks?
I should run oprofile...
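
Probably something along these lines, assuming the usual
opcontrol/opreport workflow behaves itself on this box and that
git-pack-objects is the dashed binary doing the real work:

opcontrol --no-vmlinux
opcontrol --start
git repack -a -d -f --window=250 --depth=250
opcontrol --shutdown
opreport -l `which git-pack-objects` | head -20

If the SHA-1 is coming out of libcrypto instead of git's own code,
a plain opreport (per-image breakdown) should show that instead.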
