The general solution for processing an arbitrarily large file is to use
read from file xx for yy chars
in a loop until you don't get that many chars back, indicating that you've
reached the end. Of course, you then need some way to process the data in
chunks (not necessarily the same size as you're reading, e.g. you can build up
your own buffer). If it's a naturally chunked data file, where the chunks just
don't happen to include return chars, that's fine; if it's arbitrary XML,
that's trickier. If it's XML whose format you understand (e.g. KML),
then you're probably OK, albeit you have to work a little harder.
(In modern LC maybe you can use bytes or something else for the chunk type
instead of chars, but the principle is the same.)
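For the KML-ish case, something like this is the shape of it -- just a
sketch, mind: the handler names, the </Placemark> end tag and the 64K chunk
size are mine for illustration, and handleNode stands in for whatever you
actually do with each element:

on processKMLStream pPath
   local tBuffer, tEndTag, tPos, tNode, tGot
   put "</Placemark>" into tEndTag
   open file pPath for read
   repeat forever
      read from file pPath for 65536 chars
      put the number of chars in it into tGot
      put it after tBuffer
      -- carve every complete element out of the buffer; anything after
      -- the last end tag waits there for the next read to top it up
      repeat forever
         put offset(tEndTag, tBuffer) into tPos
         if tPos = 0 then exit repeat
         put char 1 to (tPos + the length of tEndTag - 1) of tBuffer into tNode
         delete char 1 to (tPos + the length of tEndTag - 1) of tBuffer
         handleNode tNode -- hypothetical: process one complete element
      end repeat
      if tGot < 65536 then exit repeat -- short read = end of file
   end repeat
   close file pPath
end processKMLStream

The point is that a node split across two reads just sits in tBuffer until
the next chunk completes it, so nothing gets lost at chunk boundaries.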
Ben
On 19/03/2015 13:25, Michael Doub wrote:
The OS manages virtual memory and will move pages in and out of physical
memory for you. There is no need to worry about the system losing data.
However, you should be aware of this and try to control the way your code
accesses memory. If your code randomly accesses the data, you can cause the
system to spend a lot of time moving pages in and out of memory. So processing
in sections would help manage the performance.
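For instance (a rough sketch; the names and the 1 MB section size are
arbitrary), walking the file at explicit offsets, strictly in order, keeps
the access pattern sequential:

on scanFileInSections pPath
   local tOffset, tSection
   put 1 into tOffset -- LiveCode file offsets are 1-based
   open file pPath for read
   repeat forever
      read from file pPath at tOffset for 1048576 chars
      put it into tSection
      if tSection is empty then exit repeat
      -- process tSection here before moving on to the next one
      add the number of chars in tSection to tOffset
   end repeat
   close file pPath
end scanFileInSections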
-= Mike
On 3/19/15 3:07 AM, j...@souslelogo.com wrote:
Hi list,
Similar question: how would you proceed if you
had to process a 16 GB XML file (with no return chars)
with LC on a Mac with only 4 GB of RAM?
I managed to process the file in sections, but I always
fear the script might miss some nodes here and there...
Thanks in advance
jbv