Follow-up Comment #11, bug #55154 (group groff):

[comment #10 comment #10:]
> Bizarrely, while it accepts the second translation, it doesn't
> actually honor it.
It gets worse: even .char fails at this.

$ cat char-test
.char b \~
abc
cba\p
$ nroff char-test | cat -s
a c c a
$

I presume this is due to this explanation in the Texinfo manual:

 -- Request: .char c ['"'][contents]
     Every time C is to be output, CONTENTS is processed in a
     temporary environment and the result encapsulated in a node.

The temporary environment, being unaware of the rest of the line, can
only turn \~ into a node that is the width of an ordinary unbreakable
space.

This is frustrating, because it means there is no way within groff to
work around bug #62300 (fixed in 1.23.0) for UTF-8 documents that need
to work under earlier preconvs.  Another tool has to preprocess the
file before preconv gets to it.

While everything here appears to be working as designed, I'm tempted
to open a new bug report anyway, because the design thwarts such a
seemingly straightforward way to handle pre-1.23 preconv output for
the input character U+00A0.

Hopefully I'm wrong about something above, and someone wiser will set
me straight.

_______________________________________________________
Reply to this item at:

  <https://savannah.gnu.org/bugs/?55154>

_______________________________________________
Message sent via Savannah
https://savannah.gnu.org/
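P.S. For what it's worth, the external preprocessing step mentioned above can be a one-line substitution run before preconv ever sees the file. This is only a sketch of one possible approach, not an endorsed fix; it assumes GNU sed (for the \xHH escapes in the pattern) and UTF-8 input, and the file names are made up for illustration:

```shell
# Illustrative input file containing a literal U+00A0 (no-break space),
# written here via its UTF-8 octal bytes \302\240 (0xC2 0xA0).
printf 'abc\302\240def\n' > doc.in

# Replace each raw U+00A0 with the groff \~ escape (adjustable,
# unbreakable space) so older preconv versions never see the
# problematic character.  Requires GNU sed for \xHH escapes.
sed 's/\xc2\xa0/\\~/g' doc.in > doc.roff

cat doc.roff
```

The resulting doc.roff would then be handed to preconv/troff as usual; whether \~ is the right replacement for a given document depends on how U+00A0 is being used in it.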