When converting Unicode strings to legacy character encodings, it is possible to register a custom error handler that will catch and process all code points that do not have a direct equivalent in the target encoding (as described in PEP 293).
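In its simplest form, such a handler is just a function that receives the UnicodeEncodeError and returns a replacement string plus the position at which encoding should resume - something like this (the handler name 'hexify' and its replacement format are made up purely for illustration):

--- 8< ---
import codecs

def hexify(error):
    # The codec passes in a UnicodeEncodeError; we return a *Unicode*
    # replacement string plus the position at which encoding should
    # resume.
    char = error.object[error.start]
    return (u"<U+%04X>" % ord(char), error.start + 1)

codecs.register_error('hexify', hexify)

print u"caf\u00e9".encode('ascii', 'hexify')    # prints: caf<U+00E9>
--- 8< ---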
The thing to note here is that the error handler itself is required to return its substitutions as Unicode strings - not as bytestrings in the target encoding. Some lower-level machinery then silently converts those strings to the target encoding - provided, of course, that the substitution _itself_ doesn't contain code points that are illegal in the target encoding.

Which brings us to the point: if my error handler for some reason returns a substitution that is illegal from the viewpoint of the target encoding, how can I catch _that_ error and make things good again? I thought it would work automatically - the codec calling the error handler as many times as necessary and letting it sort out the situation - but apparently it doesn't. Sample code follows:

--- 8< ---
#!/usr/bin/python

import codecs

# ==================================================================
# Here's our error handler
# ==================================================================

def charset_conversion(error):

    # error.object = The original unicode string we're trying to
    #                process and which has characters for which
    #                there is no mapping in the built-in tables.
    #
    # error.start  = The index position in the string at which the
    #                error occurred.
    #
    # (See PEP 293 for more information)

    # Here's our simple conversion table:
    table = {
        u"\u2022": u"\u00b7",   # "BULLET" to "MIDDLE DOT"
        u"\u00b7": u"*"         # "MIDDLE DOT" to "ASTERISK"
    }

    try:
        # If we can find the character in our conversion table,
        # let's make a substitution:
        substitution = table[error.object[error.start]]
    except KeyError:
        # Okay, the character wasn't in our substitution table.
        # There's nothing we can do.  Better print out its
        # Unicode codepoint as a hex string instead:
        substitution = u"[U+%04x]" % ord(error.object[error.start])

    # Return the substituted string and let the built-in codec
    # continue from the next position:
    return (substitution, error.start + 1)

# ==================================================================
# Register the above-defined error handler with the name 'practical'
# ==================================================================

codecs.register_error('practical', charset_conversion)

# ==================================================================
# TEST
# ==================================================================

if __name__ == "__main__":

    print

    # Here's our test string: three BULLET symbols, a space,
    # the word "TEST", a space again, and three BULLET symbols
    # again.
    test = u"\u2022\u2022\u2022 TEST \u2022\u2022\u2022"

    # Let's see how we can print it out with our new error
    # handler - in various encodings.

    # The following works - it just converts the internal Unicode
    # representation of the above-defined string to UTF-8 without
    # ever hitting the custom error handler:
    print "  UTF-8: " + test.encode('utf-8', 'practical')

    # The next one works, too - it converts the Unicode "BULLET"
    # symbols to Latin 1 "MIDDLE DOTs":
    print "Latin 1: " + test.encode('iso-8859-1', 'practical')

    # This works as well - it converts the Unicode "BULLET"
    # symbols to IBM Codepage 437 "MIDDLE DOTs":
    print " CP 437: " + test.encode('cp437', 'practical')

    # The following doesn't work.  It should convert the Unicode
    # "BULLET" symbols to ASTERISKs by calling the error handler
    # twice - first substituting the BULLET with the MIDDLE DOT,
    # then finding out that that doesn't work for ASCII either and
    # falling back to a yet simpler form (by calling the error
    # handler again, which would this time substitute the MIDDLE
    # DOT with the ASTERISK) - but apparently it doesn't work that
    # way.  We get a UnicodeEncodeError instead.
    print "  ASCII: " + test.encode('ascii', 'practical')

    # So the question becomes: how can I make this work
    # in a graceful manner?
--- 8< ---

--
znark
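P.S. To show the kind of workaround I'd like to avoid (i.e., what I mean by "not graceful"): one can sidestep the problem by registering a separate handler per target encoding, so that the handler itself can keep walking the substitution table until the result actually encodes. The register_practical helper and the 'practical-<encoding>' handler names below are made up for this sketch:

--- 8< ---
#!/usr/bin/python

import codecs

table = {
    u"\u2022": u"\u00b7",   # "BULLET" to "MIDDLE DOT"
    u"\u00b7": u"*"         # "MIDDLE DOT" to "ASTERISK"
}

def register_practical(encoding):
    # Build and register a handler that knows its target encoding,
    # so it can test each candidate substitution itself.
    def handler(error):
        substitution = error.object[error.start]
        while True:
            substitution = table.get(substitution)
            if substitution is None:
                # Ran out of table entries - fall back to the
                # (ASCII-safe) hex form of the original character:
                fallback = u"[U+%04x]" % ord(error.object[error.start])
                return (fallback, error.start + 1)
            try:
                substitution.encode(encoding)
                return (substitution, error.start + 1)
            except UnicodeEncodeError:
                # Still not representable - take another hop through
                # the table (assumes the table contains no cycles).
                continue
    name = 'practical-' + encoding
    codecs.register_error(name, handler)
    return name

if __name__ == "__main__":
    test = u"\u2022\u2022\u2022 TEST \u2022\u2022\u2022"
    for enc in ('iso-8859-1', 'cp437', 'ascii'):
        print "%10s: %s" % (enc, test.encode(enc, register_practical(enc)))
--- 8< ---

That produces MIDDLE DOTs for Latin 1 and CP 437 and ASTERISKs for ASCII, but having to bake the encoding name into the handler (instead of letting the codec machinery call the handler again on its own failed output) is exactly the clunkiness I'd like to get rid of.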