On Sat, Aug 20, 2022 at 07:17:32AM +0000, Werner LEMBERG wrote:
>
> [texinfo.tex 2022-08-21.14]
>
> Processing
>
> ```
> \input texinfo
>
> @documentencoding UTF-8
>
> @include ae/lily-651af4bb.texidoc
>
> @bye
> ```
>
> (it's not necessary to actually have an include file) with
>
> ```
> PDFTEX=xetex texi2pdf
> ```
>
> aborts with
>
> ```
> Bad character code (185937).
> <to be read again>
> a
> a->@ifpassthroughchars a@else a
> @fi
> l.5 @include ae/lily-651a
> f4bb.texidoc
> ```
>
I believe I've fixed this. There were at least two problems, both caused by a recent change that allowed ASCII characters to be output with @U by giving the Unicode code point value. (This is unnecessary, obviously.) I've disabled this code and it should work okay now.

The two problems were:

* The definition of some characters used \char, but I didn't terminate the argument. For example, "-" would become \char"2d, and then in the example above the following characters would be read as part of the number, giving \char"2d651a, which XeTeX reports as too big a number.

* That was easy to fix, but files would still not process properly, because all characters were being typeset literally, so "@bye" would typeset a literal "@" followed by "bye".

While it's consistent to allow @U to be used for ASCII characters, it doesn't appear to offer any benefit. If the code is re-enabled, the second point here would still need to be fixed.
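For anyone following along, the first problem can be sketched in a few lines of plain TeX (the macro names below are hypothetical, not the actual texinfo.tex definitions): a hexadecimal \char constant with no terminator keeps absorbing any hex digits that follow it, and appending \relax (or a space) after the number is the standard fix.

```tex
% Hypothetical sketch of the bug; not the real texinfo.tex code.
\def\buggyhyphen{\char"2D}        % unterminated hex constant
\def\fixedhyphen{\char"2D\relax}  % \relax stops number scanning

% \buggyhyphen 651af4bb : TeX continues scanning hex digits after
% "2D, stopping only at the lowercase "a" (not a TeX hex digit),
% so it typesets \char"2D651 -- decimal 185937 -- instead of
% \char"2D (decimal 45).  That matches the reported error,
% "Bad character code (185937)".
% \fixedhyphen 651af4bb : typesets "-" followed by the text 651af4bb.
```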
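The second point can also be illustrated in plain TeX terms (an analogy only, assuming the passthrough code works by demoting special characters to ordinary ones; this is not the actual texinfo.tex implementation): once the escape character prints literally, command words stop being recognized at all.

```tex
% Hypothetical illustration, not the real texinfo.tex code.
% Texinfo uses @ as its escape character, much as plain TeX uses \.
\catcode`\@=12   % demote @ to an ordinary "other" character
% From here on, "@bye" is just the four characters @ b y e:
% it is typeset literally instead of ending the run, which is
% why files would still not process properly.
```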
