Assuming that memory won't be an issue, you could use MARC::Batch to
read in the record set and print out separate files, splitting on
every X records. You would have an iterative loop loading each
record from the large batch, and a counter variable that would get
reset after X records.
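A minimal sketch of that loop, assuming the MARC::Batch module is installed; the input filename (records.mrc), the chunk size, and the chunk_N.mrc output naming scheme are all hypothetical placeholders:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use MARC::Batch;

my $chunk_size = 1000;    # X: records per output file (pick to taste)
my $batch      = MARC::Batch->new( 'USMARC', 'records.mrc' );

my $count = 0;            # records written to the current chunk
my $part  = 0;            # numeric suffix for the output files
my $out;

while ( my $record = $batch->next() ) {
    if ( $count == 0 ) {  # start a new output file
        $part++;
        open $out, '>', "chunk_$part.mrc" or die "open: $!";
    }
    print {$out} $record->as_usmarc();
    $count++;
    if ( $count == $chunk_size ) {   # counter reset after X records
        close $out;
        $count = 0;
    }
}
close $out if $count;     # flush a final, partial chunk
```

Since MARC::Batch only holds one record in memory at a time, this should stay flat in memory use no matter how large the source file is.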
Please excuse the cross-posting.
JOB POSTING
The University Libraries of Notre Dame invite applications for the position
of Senior Technical Consultant Analyst. Reporting to the Electronic
Resources Librarian, Head of the Electronic Resources & Serials Access
Department, the Senior Analyst will be part
e that this same thing could be implemented with
existing module functionality by simply adding a couple of lines of code,
but if it would be easy to add to the method, I think it would be worthwhile.
Thanks,
Rob
Robert Fox
Sr. Programmer/Analyst
University Libraries of Notre Dame
(574)631-3353
[EMAIL PROTECTED]
modules.
I hope my experience helps some of you out there working on XML projects
involving large data sets.
Rob
e already. Machines usually work cheaper :-)
Best of luck
Peter Corrigan
Head of Library Systems
James Hardiman Library
NUI Galway
IRELAND
Tel: +353-91-524411 Ext 2497
Mobile: +353-87-2798505
-----Original Message-----
From: Robert Fox [mailto:[EMAIL PROTECTED]
Sent: 25 February 2004 20:31
To: [EMAIL PRO
ML
document. There must be a better way.
Any suggestions or help would be much appreciated,
Rob Fox
is ability in MARC::Record to read
alpha tags, not create new records using them.
Rob