At 31 May 2001 14:04:39 +1000, Brian May wrote:
> >>>>> "Cesar" == Cesar Eduardo Barros <[EMAIL PROTECTED]> writes:
>
> Cesar> - Making sure everything works with UTF-8 charset
>
> Biggest problem for me, here (unless that has changed in the past
> month or so) is xemacs. Probably the same for emacs too, not
> sure. Once I opened a message, and Gnus had heart failure when it said
> it couldn't find the UTF-8 charset inside xemacs (actually, the
> message was ISO-8859-1, so it doesn't entirely make sense),
AFAIK, emacsen can handle UTF-8 with the mule-ucs package. If policy
requires that everything work with the UTF-8 charset, should mule-ucs
be merged into each emacsen?

> Cesar> - Adding UTF-8 charset for every locale
> Cesar> - Converting (in debian/rules) documentation files to UTF-8
> Cesar> - Selecting en_US.UTF-8 (or something like that) as the default
> for LANG=
> Cesar> - Echoing some magic sequence on every getty to convert the kernel
> mode to UTF8

Does the kernel's UTF-8 mode support anything other than Latin
characters? (A sketch of the escape sequence I have in mind is below
my signature.)

> These sound like time consuming tasks, so the sooner we start, the
> better. Just don't expect to finish for a while (eg. aim for
> woody+1).
>
> First priority should be to ensure that all programs work with
> UTF-8. Ideally, this should be done for woody (but may not be
> possible).

I don't think it's possible for woody.

> How do tools (eg. debconf) know what coding set to use when reading a
> file (eg. templates file)? Or, is ISO-8859-1 assumed?

debconf doesn't assume any encoding, does it? We usually use the
EUC-JP charset for debconf.

Regards,
Fumitoshi UKAI
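
P.S. For what it's worth, here is a minimal sketch of the "magic
sequence" I assume Cesar means: as far as I know, ESC % G switches the
Linux console into UTF-8 mode and ESC % @ switches it back. The Python
below is only an illustration (names like console_utf8 and writing to
/dev/tty are just choices for the example), not a proposal for how a
getty should actually do it.

    # Sketch only, assuming ESC % G selects UTF-8 mode on the Linux
    # console and ESC % @ returns it to the default 8-bit mode.
    UTF8_ON = "\033%G"
    UTF8_OFF = "\033%@"

    def console_utf8(enable=True, tty_path="/dev/tty"):
        """Write the console mode escape sequence to the given tty."""
        with open(tty_path, "w") as tty:
            tty.write(UTF8_ON if enable else UTF8_OFF)
            tty.flush()

    if __name__ == "__main__":
        console_utf8(enable=True)

Even with this, the console font only holds a few hundred glyphs, which
is why I wonder whether it really buys us anything beyond Latin
characters.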