Roscoe wrote:
> I just tried OCR-A but with limited success. Will add in par2 and see
> how things go with that.
That should be interesting. I'm now leaning even more towards hex (base16) rather than base64. There would be less opportunity for confusion for the OCR. I was thinking it would be too inefficient, but then I realized that hex gets four bits per char while base64 gets six, so hex is only 50% bigger.

If the error rate were sufficiently low, you might be able to get away with a much smaller font as well. A font half the size would store four times as much per page.

I'm thinking about writing a small, simple script to print the hex with a parity, checksum, or simple error-correction value at the side of each row and at the bottom of each column. The script that checks the parity could mark the rows and columns with errors, allowing you to do human OCR to correct them, provided the characters hadn't been obliterated and there weren't too many errors.

As long as one is writing such a script, one might as well pick 16 chars that are more distinct than the usual 0-9A-F. A short sentence could be put at the bottom of the page explaining how to decode it, or the script itself might be included on another page.
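A rough sketch of the kind of thing I mean (Python, using simple XOR parity per row and per column; the 32-character row width, the padding, and the function names are arbitrary placeholders, not a worked-out design):

HEXCHARS = "0123456789ABCDEF"   # swap in a more distinct 16-symbol alphabet here
COLS = 32                       # hex characters per data row (arbitrary)

def nibbles(data: bytes):
    for b in data:
        yield b >> 4
        yield b & 0x0F

def format_page(data: bytes) -> str:
    """Print hex rows, each followed by a row-parity char, then a column-parity line."""
    vals = list(nibbles(data))
    vals += [0] * ((-len(vals)) % COLS)   # pad last row (real use: also record length)
    col_parity = [0] * COLS
    rows = []
    for i in range(0, len(vals), COLS):
        row = vals[i:i + COLS]
        row_parity = 0
        for j, v in enumerate(row):
            row_parity ^= v
            col_parity[j] ^= v
        rows.append("".join(HEXCHARS[v] for v in row) + " " + HEXCHARS[row_parity])
    rows.append("".join(HEXCHARS[v] for v in col_parity))
    return "\n".join(rows)

def check_page(text: str):
    """Return (bad_rows, bad_cols) whose parity doesn't match.
    Assumes the OCR output contains only characters from HEXCHARS."""
    *data_lines, col_line = text.splitlines()
    bad_rows, col_parity = [], [0] * COLS
    for r, line in enumerate(data_lines):
        body, row_check = line[:COLS], line[COLS + 1]
        p = 0
        for j, c in enumerate(body):
            v = HEXCHARS.index(c)
            p ^= v
            col_parity[j] ^= v
        if HEXCHARS[p] != row_check:
            bad_rows.append(r)
    bad_cols = [j for j in range(COLS) if HEXCHARS[col_parity[j]] != col_line[j]]
    return bad_rows, bad_cols

A single misread character then shows up as exactly one flagged row and one flagged column, and their intersection tells you which character to re-read by eye.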