On Sat, Aug 09, 2008 at 01:30:12AM -0400, Dave Mielke wrote:
 
> >There are other things, like certain symbols being kind of duplicated in
> >Unicode. But I can't think of any right now; I'd need to check.

As I remember, accented letters are among these: there are so-called
"combining" characters that allow the letter and the accent to be specified
separately, as well as "precomposed" characters that represent each accented
letter as a single code point.
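
For instance, "é" can be written either way, and the two spellings compare
unequal even though they render identically. A quick Python illustration
(purely to show the point; nothing BRLTTY-specific):

    precomposed = "\u00E9"     # LATIN SMALL LETTER E WITH ACUTE
    combining   = "e\u0301"    # "e" followed by COMBINING ACUTE ACCENT
    print(precomposed == combining)   # False: same letter, two encodings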

One solution to this is to "normalize" the Unicode string before it reaches
the braille translation functions, so that only one representation is ever
used for those characters. Techniques for doing this, and very likely working
code as well, are available, but this is well outside my area of knowledge.
The W3C has published on this subject as well.
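
To make that concrete: Unicode defines standard normalization forms for
exactly this purpose (NFC composes combining sequences into precomposed
characters where possible; NFD decomposes them). Here is a minimal sketch,
in Python only because its standard unicodedata module makes it easy to
demonstrate, of a normalization pass placed in front of translation. Treat
it as an illustration of the idea, not as how BRLTTY itself should do it:

    import unicodedata

    def normalize_for_braille(text):
        # NFC composes combining sequences into precomposed characters,
        # so the translation tables only ever see one representation.
        return unicodedata.normalize("NFC", text)

    print(normalize_for_braille("e\u0301") == "\u00E9")   # True
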
> 
> Maybe another feature we need to add to text tables is character
> equivalence. Then, if a character is written for which the user's table
> doesn't have a definition, we could try to see if the table has a
> definition for an equivalent character.

Would this still be useful even if a normalization algorithm is implemented as
suggested above? This is an open question; I don't pretend to have an answer.
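
In case it is, one possible shape for such a fallback, again sketched in
Python with made-up names (BRLTTY's text tables don't actually work this
way, so this is only to illustrate the idea): when a character has no
definition, decompose it, drop the combining marks, and retry with the
base character.

    import unicodedata

    def lookup_dots(table, character):
        # "table" is a hypothetical mapping from character to dot pattern.
        if character in table:
            return table[character]
        # Equivalence fallback: decompose (NFD), drop the combining marks,
        # and retry with the remaining base character.
        base = "".join(c for c in unicodedata.normalize("NFD", character)
                       if not unicodedata.combining(c))
        return table.get(base)

    # A table that only defines the plain letter "e" still handles "é".
    print(lookup_dots({"e": "dots 15"}, "\u00E9"))   # -> dots 15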

_______________________________________________
This message was sent via the BRLTTY mailing list.
To post a message, send an e-mail to: BRLTTY@mielke.cc
For general information, go to: http://mielke.cc/mailman/listinfo/brltty
