-> sqlite3 .svn/wc.db "select count (*) from nodes"
5242
-> sqlite3 .svn/wc.db "select count (*) from nodes where op_depth > 0"
0
-> sqlite3 .svn/wc.db "select count (*) from actual_node"
0
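
Regarding the "large" directories question: my SQL isn't much better, but assuming the NODES table in wc.db carries a parent_relpath column, something along these lines should report the biggest directories straight from the database (untested sketch):

-> sqlite3 .svn/wc.db "select parent_relpath, count(*) from nodes where op_depth = 0 group by parent_relpath order by count(*) desc limit 5"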
I also ran the following command over the working copy:

for i in $(find . -type d | grep -v .svn); do ls -1 $i | wc -l; done | sort -n

The directory with the largest number of entries had 211.

-----Original Message-----
From: Philip Martin [mailto:philip.mar...@wandisco.com]
Sent: Thursday, September 01, 2011 11:13 AM
To: RYTTING,MICHAEL (A-ColSprings,ex1)
Cc: d...@daniel.shahaf.name; dev@subversion.apache.org
Subject: Re: Really lousy performance with svn info --depth infinity

<michael_rytt...@agilent.com> writes:

> And here is the final comparison using an nfs mounted working copy.
> This is where the difference gets really bad.
>
> 1.6.17
>
> -> time /file_access/subversion/1.6.17/bin/svn info --depth infinity > /dev/null
>
> real    0m2.548s
> user    0m0.350s
> sys     0m0.142s
>
> 1.7.0-rc2
>
> -> time svn info --depth infinity > /dev/null
>
> real    6m51.036s
> user    0m13.947s
> sys     0m10.880s

I see the opposite on an NFS disk: the single recursive call is 20s and the multiple non-recursive calls are 33s, so the single call is faster as expected.  It is still 20x slower than a local disk, but that will be because info is still using per-node sqlite transactions.

What do these commands show:

$ sqlite3 .svn/wc.db "select count (*) from nodes"
$ sqlite3 .svn/wc.db "select count (*) from nodes where op_depth > 0"
$ sqlite3 .svn/wc.db "select count (*) from actual_node"

Does your working copy have "large" directories, i.e. a directory with a large number of immediate subdirs/files?  (It should be possible to formulate an SQL statement that tells me, but my SQL isn't good enough.)

--
uberSVN: Apache Subversion Made Easy
http://www.uberSVN.com