Can you tell us a bit more about this 'worst case' working copy?
Does it use svn:keywords in many places?
What about svn:needs-lock?
More svn:eol-style properties than in the other working copies?
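If it's easier to check from the command line: assuming a command-line
client is available for that working copy, a recursive property query
should give a quick overview of how widely those properties are used, e.g.

    svn propget -R svn:keywords .
    svn propget -R svn:needs-lock .
    svn propget -R svn:eol-style .

Each call only prints the paths that actually have that property set, so
the length of each list gives a rough count.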
Bert
From: Ketting, Michael [mailto:[email protected]]
Sent: donderdag 11 augustus 2011 10:54
To: [email protected]
Subject: RE: Significant checkout performance degradation between 1.6.1 and
1.7b2
Just a bit more information:
I've now also tried the checkout tests with two other big trunks in our
company:
One took 7 min (svn 1.6) vs 9 min (svn 1.7), the other 4 min (svn 1.6) vs
6 min (svn 1.7). So both are slower with 1.7, but within the range also
measured with the benchmarks.
Looks like my own project really is the worst-case scenario :)
Regards, Michael
_____
From: Mark Phippard [[email protected]]
Sent: Tuesday, August 09, 2011 17:05
To: Ketting, Michael
Cc: [email protected]
Subject: Re: Significant checkout performance degradation between 1.6.1 and
1.7b2
On Tue, Aug 9, 2011 at 8:07 AM, Mark Phippard <[email protected]> wrote:
Is this via http? Given that export is slower, I'd be willing to bet the
performance difference comes from the new HTTP client library, Serf. It is
typically slower than Neon. Try switching to Neon and running it again.
I updated to the latest beta of TortoiseSVN and it looks to me like they
have already changed the default HTTP client to Neon. So unless you have
specifically made Serf the default client in your servers file, it is not
likely that this is your problem.
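For reference, that setting lives in the Subversion runtime configuration,
in the 'servers' file (usually %APPDATA%\Subversion\servers on Windows,
~/.subversion/servers elsewhere). A minimal sketch, assuming no per-server
group overrides it:

    [global]
    http-library = neon

Setting http-library = serf instead selects Serf in clients that ship both
libraries; if the line is absent, the client's built-in default is used.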
I developed a set of open-source benchmarks to measure Subversion
performance that you can get here:
https://ctf.open.collab.net/sf/sfmain/do/viewProject/projects.csvn
Perhaps you could set up the repository on your server and run the
benchmarks using 1.6 and 1.7 to see what kind of results you get? When I
run the tests I see a considerable performance gain with 1.7. The
"FolderTests" are probably the closest to your scenario. It will be easier
to focus on any remaining performance issues if we can identify and measure
them in an open and consistent manner, so we can see progress and the
impact of different changes.
If these benchmarks do not show the same problems you see on your real code,
then we need to add more benchmarks so that we can capture whatever the
problem is.
--
Thanks
Mark Phippard
http://markphip.blogspot.com/