DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://issues.apache.org/bugzilla/show_bug.cgi?id=38271>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=38271

           Summary: Token filtering during copy ignores encoding
           Product: Ant
           Version: 1.6.2
          Platform: Other
        OS/Version: other
            Status: NEW
          Severity: normal
          Priority: P2
         Component: Core tasks
        AssignedTo: dev@ant.apache.org
        ReportedBy: [EMAIL PROTECTED]


Files are conceptually sequences of bytes, and tokens are strings. So when you
talk about "replacing tokens inside a file", those tokens first have to be
turned into sequences of bytes, and those byte sequences then have to be
matched against the file image.
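
As a small Java illustration of why that conversion matters (the token text and
class name here are just examples, not anything in Ant): the same token string
yields different byte sequences under different encodings, so the bytes that
have to be matched in the file image depend on which encoding is chosen.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

/*
 * Example only: the same token string encodes to different byte
 * sequences under different charsets, so byte-level matching against
 * the file image depends on the encoding used for the token.
 */
public class TokenEncodingDemo {
    public static void main(String[] args) {
        String token = "@prénom@";  // hypothetical token containing a non-ASCII char
        byte[] utf8   = token.getBytes(StandardCharsets.UTF_8);       // é -> 0xC3 0xA9, 9 bytes total
        byte[] latin1 = token.getBytes(StandardCharsets.ISO_8859_1);  // é -> 0xE9, 8 bytes total
        System.out.println(Arrays.toString(utf8));
        System.out.println(Arrays.toString(latin1));
    }
}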

But the current implementation is wrong in two ways:

1. It turns the file image into a sequence of chars, instead of turning the
tokens into sequences of bytes. This corrupts binary files (or the binary
portions, such as control characters, of otherwise textual files).

2. One needs to be able to specify the encoding used for the token->byte
sequence conversion, so that (for example) UTF-8 encoded XML files can be
filtered correctly (a sketch covering both points follows below).
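
For concreteness, here is a minimal sketch, in plain Java rather than Ant's
actual task code, of a filtering copy along these lines: it operates on the raw
byte image and encodes the token with a charset chosen by the caller. The class
and method names are made up for illustration.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;

/*
 * Hypothetical helper, not Ant API: replaces one token in a file at the
 * byte level. The token and its replacement are encoded with a
 * caller-supplied charset (for example UTF-8 for XML files); all other
 * bytes, binary or not, are copied through untouched.
 */
public class ByteTokenFilter {

    static void filterCopy(Path src, Path dest, String token, String value,
                           Charset encoding) throws IOException {
        byte[] image = Files.readAllBytes(src);
        byte[] needle = token.getBytes(encoding);
        byte[] replacement = value.getBytes(encoding);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int i = 0;
        while (i < image.length) {
            if (matchesAt(image, i, needle)) {
                out.write(replacement, 0, replacement.length);
                i += needle.length;
            } else {
                out.write(image[i]);
                i++;
            }
        }
        Files.write(dest, out.toByteArray());
    }

    // True if the encoded token starts at position pos of the file image.
    private static boolean matchesAt(byte[] image, int pos, byte[] needle) {
        if (needle.length == 0 || pos + needle.length > image.length) {
            return false;
        }
        for (int j = 0; j < needle.length; j++) {
            if (image[pos + j] != needle[j]) {
                return false;
            }
        }
        return true;
    }
}

With such a scheme, filtering a UTF-8 encoded XML file just means passing
StandardCharsets.UTF_8 for the encoding, and bytes that do not match the
encoded token are never decoded or re-encoded, so binary content passes
through unchanged.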

