g message
> about the memory leak, and we are getting it for each class.
>
>
> Could you please advise on this?
>
> Below is the warning we are receiving.
>
> WARNING: The web application [/vertex-tcc] appears to have started a thread
> named [Timer-1] but has fa
Hi Apache team,
We need help with an issue in one of our applications.
We are upgrading our application from Tomcat 6 to Tomcat 8.
When the Tomcat server is restarted, we are getting a warning message
about the memory leak, and we are getting it for each class.
Please could you
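The warning quoted above is Tomcat's leak-prevention check: the webapp started a `java.util.Timer` thread (`Timer-1`) and did not stop it before undeploy, so on restart the old classloader cannot be collected. A minimal sketch of the usual fix, assuming the timer is owned by the application itself (class and method names here are illustrative, not from the original thread): cancel the timer when the context shuts down, e.g. from `ServletContextListener.contextDestroyed`.

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch: a webapp-owned Timer must be cancelled on shutdown, otherwise
// Tomcat reports the "appears to have started a thread named [Timer-1]"
// warning and the webapp classloader leaks on every redeploy.
public class TimerLifecycle {
    private Timer timer;

    // call this from ServletContextListener.contextInitialized(...)
    public void start() {
        timer = new Timer("app-timer");
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() { /* periodic work goes here */ }
        }, 60_000, 60_000);
    }

    // call this from ServletContextListener.contextDestroyed(...)
    // returns true if a running timer was actually cancelled
    public boolean stop() {
        if (timer != null) {
            timer.cancel();   // stops the Timer thread
            timer.purge();    // drops any queued tasks
            timer = null;
            return true;
        }
        return false;
    }
}
```

The same pattern applies to any thread or `ExecutorService` the application starts: whatever is created in `contextInitialized` must be stopped in `contextDestroyed`.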
* Daniel Shahaf :
> Presumably, tcpdump/wireshark of the response headers will be useful.
The problem was very subtle:
Apache delivered a P3P header (our own configuration). This broke the checkout.
Why only a certain file was affected (it worked when the file was not in the
repo) or why it worke
Presumably, tcpdump/wireshark of the response headers will be useful.
Daniel
(no time to dive deeply into this right now, sorry)
Fridtjof Busse wrote on Tue, Mar 06, 2012 at 12:44:06 +0100:
> Hi,
>
> answering my own post:
>
> * Fridtjof Busse :
> >
> > The httpd process starts consuming up to
ut a checkout will contain multiple
independent copies of the files.
Anyhow, the question is what size the checkout really requires. The data
for this checkout is accumulated in memory and then sent to the client.
If this is more than the amount of memory you have, that's bad luck and
a nat
Hi,
answering my own post:
* Fridtjof Busse :
>
> The httpd process starts consuming up to 4GB and then dies, killing the
> svn client process with:
> svn: REPORT of '/svn/prod/!svn/vcc/default': Could not read chunk size:
> connection was closed by server
Setting the configuration to
SVNAllowBu
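The directive name is cut off above; presumably it is mod_dav_svn's `SVNAllowBulkUpdates`. A hedged configuration sketch (the location and repository paths are illustrative): with bulk updates disabled, the server answers update REPORTs in "skelta" style rather than building one large buffered response, which changes where the memory cost of a big checkout lands.

```apache
<Location /svn>
  DAV svn
  SVNParentPath /srv/svn   # illustrative path
  # Presumably the directive truncated in the message above.
  # "off" forces skelta-style updates, so the server does not
  # accumulate the whole checkout response in a single report.
  SVNAllowBulkUpdates off
</Location>
```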
/default': Could not read chunk size:
connection was closed by server
Even more interesting, Apache does not log anything in its error log.
To me, this looks like a memory leak.
Is there any way to debug this further?
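As a first debugging step (a generic sketch, not specific to this setup), one can confirm which httpd child is growing by sampling per-process RSS while the checkout runs; the process names are assumptions, since the binary is `httpd` or `apache2` depending on the distribution:

```shell
# print the RSS (in KB) of the largest httpd/apache2 process, if any is running
ps -eo rss,comm --sort=-rss | awk '$2 ~ /httpd|apache2/ { print $1; exit }'
```

Running this repeatedly (e.g. under `watch -n 5`) during the checkout shows whether a single child's RSS climbs steadily toward the 4GB point at which it dies.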
Please CC me, thanks.
> >> It might already be fixed: I see a leak with 1.7.x@r1293812 but
> >> r1293813 appears to fix it.
> >
> > Definitely not fixed, at least for https:// externals. This is what I
> > did:
>
> I see small memory growth during checkout. I see much larger memory
> growth for a second checkout o
kmra...@rockwellcollins.com writes:
>> It might already be fixed: I see a leak with 1.7.x@r1293812 but r1293813
>> appears to fix it.
>
> Definitely not fixed, at least for https:// externals. This is what I
> did:
I see small memory growth during checkout. I see much larger memory
growth for
> > Has anyone else noticed a client memory leak when using
> > svn:externals in the 1.7.3 client?
> >
> > We have a large project with tens of thousands of externals.
> > When checked out with a 1.6 client, it uses 150MB of memory.
> > When checked out with a
kmra...@rockwellcollins.com writes:
> Has anyone else noticed a client memory leak when using
> svn:externals in the 1.7.3 client?
>
> We have a large project with tens of thousands of externals.
> When checked out with a 1.6 client, it uses 150MB of memory.
> When checked out
On Mon, Feb 27, 2012 at 10:04:59AM -0600, kmra...@rockwellcollins.com wrote:
> All,
>
> Has anyone else noticed a client memory leak when using
> svn:externals in the 1.7.3 client?
>
> We have a large project with tens of thousands of externals.
> When checked out with
All,
Has anyone else noticed a client memory leak when using
svn:externals in the 1.7.3 client?
We have a large project with tens of thousands of externals.
When checked out with a 1.6 client, it uses 150MB of memory.
When checked out with a 1.7 client it uses >1.5GB of memory.
Same memory is
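Peak figures like the 150MB vs. >1.5GB comparison above can be measured directly with GNU time's verbose mode (the repository URL below is a placeholder; `/usr/bin/time` is the external binary, not the shell builtin):

```shell
# report the peak resident set size of a checkout
/usr/bin/time -v svn checkout https://example.org/svn/big-project wc \
    2>&1 | grep 'Maximum resident set size'
```

Running the same command with a 1.6 and a 1.7 client gives directly comparable peak-RSS numbers for the bug report.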
On 14.10.2011 13:17, Stefan Sperling wrote:
> The perl bindings don't abstract away memory pool handling.
> If you don't pass a pool argument to fs->revision_root(),
> it will use the global pool, which can never be cleared.
>
> You need to use an iteration pool in your script and clear it after
>
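Stefan's advice above, sketched in Perl (assuming the `SVN::Pool` API of the standard Subversion bindings; the repository path and loop body are illustrative): create a subpool, pass it to `revision_root()`, and clear it on every iteration so allocations are freed instead of piling up in the global pool. This is not runnable without the SVN Perl bindings and a repository on disk.

```perl
use strict;
use warnings;
use SVN::Core;
use SVN::Repos;

my $repos = SVN::Repos::open('/srv/svn/myrepo');   # illustrative path
my $fs    = $repos->fs;

# An iteration pool: everything allocated with it is freed on clear(),
# rather than living forever in the global pool.
my $pool = SVN::Pool->new_default;
for my $rev (1 .. $fs->youngest_rev) {
    my $root = $fs->revision_root($rev, $pool);
    # ... e.g. check paths via $root->check_path('/trunk/some/file') ...
    $pool->clear;   # release this iteration's allocations
}
```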
On Fri, Oct 14, 2011 at 12:33:08AM +0200, Max Voit wrote:
> Hi,
>
> developing an application dealing with many repositories, the existence
> of paths within those repositories had to be checked.
> Using something like:
>
> my $repos = SVN::Repos::open($localpath) or die "no such repo";
> m
Does it also reproduce if you remove the revision_root() call?
What does Perl invoke when an object becomes unreferenced or falls out
of scope?
Max Voit wrote on Fri, Oct 14, 2011 at 00:33:08 +0200:
> Hi,
>
> developing an application dealing with many repositories, the existence
> of paths withi
Hi,
developing an application dealing with many repositories, the existence
of paths within those repositories had to be checked.
Using something like:
my $repos = SVN::Repos::open($localpath) or die "no such repo";
my $fs = $repos->fs;
$ispath =
$repos->fs->revision_root( $f
filled up.
>
> The only other known issue is to make sure you are not using mod_deflate
> in conjunction with Subversion. This results in a known memory leak
> when a Subversion client that does not support deflate is used.
>
Hi all,
I've just come across this issue myself after using mod_deflate in
conjunction with Subversion. This results in a known memory leak when a
Subversion client that does not support deflate is used.
--
Thanks
Mark Phippard
http://markphip.blogspot.com/
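A minimal way to follow the advice above (the location and parent path are illustrative): keep mod_deflate away from the Subversion location by setting the `no-gzip` environment variable, which mod_deflate honors.

```apache
<Location /svn>
  DAV svn
  SVNParentPath /srv/svn   # illustrative path
  # mod_deflate skips any request where the no-gzip env var is set,
  # avoiding the known leak with clients that don't support deflate.
  SetEnv no-gzip 1
</Location>
```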
Hi.
I have a problem: Apache is eating all my memory. It seems to allocate
memory while I check out or commit things, but it doesn't deallocate the
memory after it finishes. Instead it just allocates more and more until
memory is filled up.
Is there any way I can fix this problem?
Thanks
Hi
I just want to know if any solution has been found to the problem
mentioned in this post:
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=462&dsMessageId=2383415
As far as I can see, the patches have not been applied. Our svn server
crashed today due to insufficient memory and I