On 3/8/07, Ron Davies <[EMAIL PROTECTED]> wrote:
Hi Mike,
Yeah, sorry, it sounds really strange to me too. I have to say I am not
responsible for the installation of Perl or any modules on our Sun server
so it's possible that something has got messed up there, but I can't
really make any progress in that regard unless I can say that so
On 3/7/07, Ron Davies <[EMAIL PROTECTED]> wrote:
[snip]
There is nothing wrong with the UTF-8 encoding in the input data. The data
displays fine in the ILS, and when I hand check the coding in the MARC
record, it's correct.
What's more, if I take a _smaller_ subset of records (say about 50 recor
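(Editor's note: the hand check mentioned above can also be done programmatically. Below is a minimal sketch using only the core Encode module; the Greek byte string is illustrative, not taken from Ron's actual records.)

```perl
use strict;
use warnings;
use Encode qw(decode);

# Illustrative field data: the UTF-8 bytes for the Greek word
# "Αθήνα" (Athens). Substitute the bytes pulled from the record.
my $field_bytes = "\xCE\x91\xCE\xB8\xCE\xAE\xCE\xBD\xCE\xB1";

# decode() with FB_CROAK dies on the first malformed byte sequence;
# decode a copy, since some CHECK modes modify the source in place.
my $copy = $field_bytes;
my $ok = eval { decode('UTF-8', $copy, Encode::FB_CROAK); 1 };
print $ok ? "valid UTF-8\n" : "malformed UTF-8: $@";
```

If the bytes really are well-formed UTF-8, this prints "valid UTF-8", which would confirm the problem lies elsewhere in the pipeline.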
On 3/7/07, Bryan Baldus <[EMAIL PROTECTED]> wrote:
On Wednesday, March 07, 2007 2:34 PM, Ron Davies wrote:
>When I do this I get a number of error messages such as:
>"\x{00ce}" does not map to utf8 at myprogram.pl line xxx.
>and in the output file instead of the correct character there is a hex
>encoding. This happens with Greek but also perfectl
I am working with an application running on Solaris where I am extracting a
couple of hundred UTF-8 MARC records from an ILS, extracting some basic
citation data and writing it out to a UTF-8 HTML file. I am using
MARC::Record 2.0 and Encode 2.12 which I use to open my output file thus:
ope
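(Editor's note: the open statement is cut off above. For readers following along, here is a minimal self-contained sketch of the usual pattern: decode the record's raw bytes into Perl characters first, then write through an :encoding(UTF-8) layer. The byte string and file name are illustrative, not from the original message.)

```perl
use strict;
use warnings;
use Encode qw(decode);

# Illustrative raw bytes as they might come out of a MARC field:
# "Î" is U+00CE, which UTF-8 encodes as the two bytes 0xC3 0x8E.
my $bytes = "\xC3\x8E";

# Step 1: decode the bytes into a Perl character string. Skipping
# this step and printing raw bytes through an encoding layer is a
# classic source of double-encoding and "does not map" warnings.
my $chars = decode('UTF-8', $bytes);

# Step 2: the output handle's layer encodes the characters back to
# UTF-8 bytes on write.
open(my $out, '>:encoding(UTF-8)', 'out.html')
    or die "Cannot open out.html: $!";
print $out $chars;
close($out);
```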