* Dominik Wujastyk (wujas...@gmail.com) wrote:
  
  |>  bibliographical information from sites like copac.ac.uk and worldcat.org, into
  |>  JabRef for use in my documents.  What I'm finding, though, is that several of
  |>  these big online bibliographical databases have their records in un-normalized
  |>  Unicode.  And it doesn't print nicely with XeTeX.
  |>  
  [.....]
  |>  What I'd like, ideally, is a little filter to run on my bib files periodically
  |>  to clean up any char+non-spacing-accent glyphs.
  [.....]
  |>  throwing up errors along the lines of
  |>  \begin{verbatim}
  |>  Checking duplicates, takes some time.
  |>  Finished processing character data file(s).
  |>  
  |>  Line 1: Non-Existing codepoints.
  |>  Giving up!
  |>  \end{verbatim}
  |>  
  |>  Has anyone any better suggestions than charlint, or experience getting charlint

No suggestion, but I have a similar problem using the Library of Congress and other
databases > BibDesk > TextMate completion. I'm also interested in a solution.
--Gildas
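
For reference, instead of charlint, the normalization step itself is available in
Python's standard unicodedata module: NFC composes base-character + combining-accent
sequences into precomposed code points. A minimal sketch of such a filter follows;
the in-place rewrite, the script name, and the command-line handling are illustrative
assumptions, not a tested tool:

#!/usr/bin/env python3
# nfc_normalize.py -- normalize text files to Unicode NFC (precomposed form).
# Sketch only: assumes UTF-8 input and rewrites each file in place.
import sys
import unicodedata

def normalize_file(path):
    # Read the whole file as UTF-8 text.
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # NFC turns e.g. 'e' + U+0301 (combining acute) into U+00E9.
    normalized = unicodedata.normalize("NFC", text)
    if normalized != text:
        with open(path, "w", encoding="utf-8") as f:
            f.write(normalized)
        print(path + ": rewritten in NFC")
    else:
        print(path + ": already in NFC")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        normalize_file(path)

Run it periodically over the bib files, e.g.

  python3 nfc_normalize.py mylibrary.bib

before typesetting with XeTeX.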



--------------------------------------------------
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex
