This may not be your problem, but I've wasted tons of time in the past
because of these symptoms, so here is why it happened to me...
I have seen this happen when a file is read that contains byte order
marks at the beginning. Most editors strip these out and get the
encoding right, so you don't
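As an aside (not part of the original post), one way to deal with this is to skip
the BOM bytes yourself before the parser ever sees them. A minimal sketch, where the
class and helper names are made up for illustration:

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

public class BomSkipper {
    // Skips a leading UTF-8 BOM (0xEF 0xBB 0xBF) if one is present, so the
    // XML parser never sees those bytes. This is only a sketch; a robust
    // version would also loop until three bytes are read or EOF is hit.
    public static InputStream skipUtf8Bom(InputStream in) throws IOException {
        PushbackInputStream pushback = new PushbackInputStream(in, 3);
        byte[] head = new byte[3];
        int read = pushback.read(head, 0, 3);
        boolean hasBom = read == 3
                && (head[0] & 0xFF) == 0xEF
                && (head[1] & 0xFF) == 0xBB
                && (head[2] & 0xFF) == 0xBF;
        if (!hasBom && read > 0) {
            pushback.unread(head, 0, read);   // not a BOM: put the bytes back
        }
        return pushback;
    }
}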
In the snippet you show below, “(#seconds)” is not a comment; it is literal
text that sits outside of the element you have shown. If the element you have
shown is the only element in the document, then what you show is not well formed.
Comments must start with “<!--”; anything else is NOT a comment.
We have a set of middleware connectors for 30-year-old, non-relational
databases, including a number of connectors that will produce XML from
the data. We encountered the same problem, and used the mixed content
model, almost identical to what David showed (with an attribute). That
way we could t
I ran extensive tests to see if clone would be faster (assumed it would,
at first). I found that reparsing the original file (assuming it hadn't
changed) was significantly faster than clone. If you have to serialize
first, you might lose that advantage but I seem to recall it was
significant.
--
whole document cloning in Xerces2?
/Robert Houben/:
> I ran extensive tests to see if clone would be faster (assumed it would,
> at first). I found that reparsing the original file (assuming it hadn't
> changed) was significantly faster than clone. If you have to serialize
> firs
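Not from the thread itself, but roughly the kind of comparison being described:
a naive timing of cloneNode(true) against reparsing the same file with JAXP.
The file name, iteration count and timing code are illustrative assumptions,
not the original test:

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class CloneVsReparse {
    public static void main(String[] args) throws Exception {
        File xmlFile = new File("data.xml");          // assumed sample file
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document original = builder.parse(xmlFile);

        // Time deep-cloning the in-memory DOM tree.
        long t0 = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            Document copy = (Document) original.cloneNode(true);
        }
        long cloneNanos = System.nanoTime() - t0;

        // Time reparsing the unchanged file from disk instead.
        long t1 = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            Document reparsed = builder.parse(xmlFile);
        }
        long reparseNanos = System.nanoTime() - t1;

        System.out.printf("clone: %d ms, reparse: %d ms%n",
                cloneNanos / 1_000_000, reparseNanos / 1_000_000);
    }
}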
Not sure if this is your problem, but note
that SAX will often call your character callback method multiple times,
breaking up the text into blocks, and something like an entity is likely to
cause a new “block” of text to be processed. You will have to
gather up all blocks of text passed t
that default behavior for the parser would be to handle these basic
XML entities. Do I need an EntityResolver?
Thanks,
August
On 10/6/06, Robert Houben <[EMAIL PROTECTED]> wrote:
Not sure if this is your problem, but note that SAX will
often call your character callback method
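To illustrate the advice above, here is a minimal sketch (not the poster's code)
of a handler that buffers every characters() call and only uses the text once
endElement() fires, since an entity such as &amp; can split one run of text
into several callbacks:

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class TextGatheringHandler extends DefaultHandler {
    private final StringBuilder text = new StringBuilder();

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attributes) {
        text.setLength(0);                 // start collecting for this element
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length);    // may be called several times
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        String fullText = text.toString(); // complete text, entities resolved
        System.out.println(qName + " = " + fullText);
    }
}

This deliberately assumes a flat document; if elements with text can nest, you
would keep a stack of buffers instead of a single StringBuilder.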
There is one other case where you will get this error. That’s where you
include a prolog that declares the data as UTF-16, but the actual data
written was NOT truly UTF-16. Here is a snippet I use to write data out:
new File(filePath).delete();
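The rest of that snippet is not shown above; purely as an illustration of the
idea, here is a sketch of writing a file whose bytes really are UTF-16 so they
match the declaration in the prolog (the file path and content are assumptions):

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class WriteUtf16Xml {
    public static void main(String[] args) throws Exception {
        String filePath = "out.xml";                      // assumed path
        new File(filePath).delete();

        // Let the Writer do the UTF-16 encoding (it also emits a BOM),
        // so the bytes on disk really are UTF-16 and match the prolog.
        try (Writer out = new OutputStreamWriter(
                new FileOutputStream(filePath), "UTF-16")) {
            out.write("<?xml version=\"1.0\" encoding=\"UTF-16\"?>\n");
            out.write("<root>example</root>\n");
        }
    }
}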
I've tried to solve this problem myself and can't find an easy solution.
If all you want is /x/y/z type stuff and you aren't worried about
getting a specific instance of 'z' in a specific instance of 'y', it is
really easy to build this yourself. You just keep getting the nodeName
of the parent unt
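As an illustration of that approach (not code from the thread), a sketch that
walks getParentNode() and concatenates node names, ignoring which sibling
instance each node actually is:

import org.w3c.dom.Node;

public class SimplePathBuilder {
    // Builds a /x/y/z style path by walking up the parent chain.
    public static String pathTo(Node node) {
        StringBuilder path = new StringBuilder();
        for (Node n = node;
             n != null && n.getNodeType() == Node.ELEMENT_NODE;
             n = n.getParentNode()) {
            path.insert(0, "/" + n.getNodeName());
        }
        return path.toString();
    }
}

The loop stops at the document node, so an element z inside y inside the root
element x comes back as "/x/y/z".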
Hi Inma,
In the last line of your first block you have:
return baos.toString();
Note that when you do “toString()” on the byte array output stream it will
return a string decoded with the platform’s default encoding, not UTF-8. I’m
guessing that in your next block of code, xmlutf8 is the result of the first
block. This means that whe
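As an illustration of the point (assuming baos is a ByteArrayOutputStream
holding UTF-8 bytes; the sample content is made up), compare toString() with
toString("UTF-8"):

import java.io.ByteArrayOutputStream;

public class ToStringEncodingExample {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        baos.write("<dato>año</dato>".getBytes("UTF-8"));   // UTF-8 bytes

        String platformDefault = baos.toString();        // default charset
        String explicitUtf8 = baos.toString("UTF-8");     // decoded as UTF-8

        // On a platform whose default charset is not UTF-8, the non-ASCII
        // character is already corrupted in platformDefault at this point.
        System.out.println(platformDefault.equals(explicitUtf8));
    }
}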