Michael,
The original document can be parsed with a 300M Java heap, but when I do
importNode(node, true) (yes, a deep copy), even 1.5G is not enough. The
reason is probably that importNode forces the whole subtree to be
materialized and then fully copied.

As I see it, the problem is not the amount of text data but the number of
elements in the source subtree; I have around 3M elements.

I've tried adoptNode, but that does not work for me; it returns null. I'll
keep trying...
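For reference, here is a minimal sketch of the move-vs-copy distinction, assuming both documents are created by the same JAXP DocumentBuilder (the class name AdoptNodeSketch and the element names are illustrative). adoptNode() transfers the existing nodes without duplicating the subtree, while importNode(node, true) allocates a full second copy; the DOM spec allows adoptNode to return null when the node cannot be adopted, so a null check is worth keeping:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class AdoptNodeSketch {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();

        // Source document with a small subtree (stand-in for the large one).
        Document source = builder.newDocument();
        Element subtree = source.createElement("data");
        subtree.appendChild(source.createElement("br"));
        source.appendChild(subtree);

        // Target document that should receive the subtree.
        Document target = builder.newDocument();

        // adoptNode moves the node tree without copying it; per the DOM
        // Level 3 spec it may return null if adoption is not possible.
        Node adopted = target.adoptNode(subtree);
        if (adopted == null) {
            // Fallback: deep copy, which doubles the memory footprint.
            adopted = target.importNode(subtree, true);
        }
        target.appendChild(adopted);

        System.out.println(target.getDocumentElement().getNodeName());
    }
}
```

One known pitfall: if the two Documents come from different DOM implementations, adoptNode is more likely to fail, which could explain the null result.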

Thanks.


-----Original Message-----
From: Michael Glavassevich [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 08, 2007 7:35 AM
To: j-users@xerces.apache.org
Cc: Likharev, Maksim (TS USA)
Subject: Re: importNode, out of memory, big document

Hi,

importNode(node, true) creates a deep copy of the entire subtree. The copy
will consume as much memory as the original. If your goal is to move the
Node from one document to another, you should use adoptNode() [1].

Thanks.

[1] http://www.w3.org/TR/2004/REC-DOM-Level-3-Core-20040407/core.html#Document3-adoptNode

Michael Glavassevich
XML Parser Development
IBM Toronto Lab
E-mail: [EMAIL PROTECTED]

<[EMAIL PROTECTED]> wrote on 10/04/2007 04:12:54 PM:

> Hi,
> I'm trying to do a deep importNode; the text is ~70MB, with around 3M
> entities and 3M <br> tags. I'm getting an out-of-memory error even with
> a 1024M Java heap size. Is there anything that can be done?
>
> Thank you.

