I forgot to mention that, sorry; it came as no surprise to me. The document is a Xalan DTMNodeProxy, a read-only instance.
The whole thing is a Xalan extension that provides an on-the-fly "annotated" view of the original document during the XSLT transformation. What I think I have to do is serialize the document fragment into a hard copy, such as a String, instead of a full-blown Node; it will end up in a file anyway, and that might save me some memory.

Thank you.

-----Original Message-----
From: Michael Glavassevich [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 08, 2007 7:26 PM
To: j-users@xerces.apache.org
Cc: Likharev, Maksim (TS USA)
Subject: RE: importNode, out of memory, big document

<[EMAIL PROTECTED]> wrote on 10/08/2007 07:53:47 PM:

> Michael,
> the original document can be parsed under a 300M Java heap setting, but
> when I do importNode(node, true), yes, a deep copy, 1.5G is not enough.
> The reason is probably that importNode forces the whole subtree to be
> materialized, plus a full copy.

Right, this might be more than 2x if the original subtree hasn't been
fully materialized.

> As I see it, the problem is not the amount of text data but the number
> of elements in the source subtree; I have around 3M elements.
>
> I've tried adoptNode, but that does not work for me; it returns null.
> I'll try more...

adoptNode() will return null if the source node comes from an
implementation which is not compatible with the target document. Can you
check what the names of the DOMImplementation classes are (i.e. call
docNode.getImplementation().getClass().getName()) for both the source and
target? Any combination of org.apache.xerces.dom.DOMImplementationImpl
and org.apache.xerces.dom.DeferredDOMImplementationImpl should be
compatible with each other.

> Thanks.

Michael Glavassevich
XML Parser Development
IBM Toronto Lab
E-mail: [EMAIL PROTECTED]

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
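The serialize-to-a-String idea mentioned at the top of the thread can be sketched with a plain JAXP identity transform, which works on any DOM Node (including a read-only proxy) without requiring an importNode copy. The `serialize` helper and the sample XML below are illustrative, not from the thread:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class SerializeNode {

    /** Serialize a node and its subtree to a String via an identity transform. */
    public static String serialize(Node node) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        // The fragment is not a full document, so drop the XML declaration.
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(node), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader("<root><child>text</child></root>")));
        Node fragment = doc.getDocumentElement().getFirstChild();
        System.out.println(serialize(fragment)); // the <child> subtree as markup
    }
}
```

For the out-of-memory case in the thread, streaming straight to a file with `new StreamResult(new FileWriter(...))` instead of a StringWriter would avoid holding even the String copy in memory.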
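Michael's suggested diagnosis for the null return from adoptNode() can be wrapped up as below. The `AdoptCheck` class is an illustrative sketch; with a stock JDK, both documents come from the bundled Xerces code and the adoption should succeed, whereas a DTMNodeProxy from Xalan would not:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.StringReader;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class AdoptCheck {

    public static void main(String[] args) throws Exception {
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document source = db.parse(new InputSource(new StringReader("<root><big/></root>")));
        Document target = db.newDocument();

        // The check from the thread: compare the implementation class names.
        System.out.println("source impl: " + source.getImplementation().getClass().getName());
        System.out.println("target impl: " + target.getImplementation().getClass().getName());

        // adoptNode moves the subtree without copying it; null means the
        // source node's implementation is incompatible with the target.
        Node adopted = target.adoptNode(source.getDocumentElement());
        System.out.println("adoptNode returned: "
                + (adopted == null ? "null" : adopted.getNodeName()));
    }
}
```

Note the class names need not be identical: a parsed document typically reports a deferred implementation while a freshly created one does not, and per the thread those Xerces variants are still mutually compatible.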