> 
> I can happily load a 300MB image with the IMAQ Readfile VI, but when I
> try to load larger images (eg 500MB), labview gracefully shuts down
> with a "Not enough memory for requested operation" error
> (-1074396159). The first thing I did was increase my RAM from 1GB to
> 2GB, but this made absolutely no difference.
> 
> Would love to know how I can use a little more of the memory that I
> have installed!

And when the OSes and applications move to 64 bit addressing, we should 
all be able to.  Until then, you are working awfully close to the amount 
of virtual address space that NT can give the process.  NT is a 32 bit 
OS, which lets it address 4GB.  By default, each process gets the lower 
2GB of that for its own code, data, and stacks, while the upper 2GB is 
reserved for the kernel.  I believe NT can also be booted with a switch 
(/3GB on some editions) that shifts the split to 3GB for the process and 
1GB for the kernel, though I've never looked at how to configure it.
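To put rough numbers on it, here is a hypothetical back-of-the-envelope sketch (the 2GB figure is the default NT user-mode portion; the fragmentation point is one reason adding physical RAM did not help):

```python
# Back-of-the-envelope arithmetic, not a measurement.
GB = 1024 ** 3
MB = 1024 ** 2

address_space = 4 * GB   # what 32-bit pointers can address
user_space = 2 * GB      # default NT user-mode portion of that

image = 500 * MB         # the image that fails to load

# The 2GB is shared with LabVIEW itself, DLLs, stacks, and any working
# copies of the data.  Worse, the image buffer must be one *contiguous*
# block, and a fragmented address space may not contain a 500MB hole
# even when the total free space is much larger.
copies_that_fit = user_space // image
print(copies_that_fit)   # only 4 such buffers fit even in an empty 2GB
```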

Anyway, my point is that you are reaching a limit of the OS virtual 
memory system.  When this happens, you can't rely on it, and the 
application has to handle it.  Since LV doesn't know much about your 
application, it is actually up to the person writing the diagram.

As I mentioned before, the options are to work on a portion of the data 
at a time.  Process the top 200MB of the image, then the next 200MB, ... 
Then combine the results.  This of course may involve overlapping the 
data being processed, all depending on what processing is going on. 
This is done all the time with acquired waveforms, where the infinite 
or very long waveform is chopped, windowed, processed, and the results 
are then combined.  It makes the algorithm more complex, but reality 
often encourages people to do clever things.
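LabVIEW diagrams don't paste well into email, so here is a hypothetical Python sketch of the banding idea.  The band size and the per-band "analysis" (an intensity histogram, chosen because partial results combine by simple addition) are placeholders; a real analysis may also need the overlap mentioned above:

```python
def band_histograms(image_rows, band_rows=2):
    """Process a large image a band of rows at a time, then combine.

    Only one band needs to be resident at once; each band's partial
    histogram is merged into the running total by addition.
    """
    combined = [0] * 256          # one bin per 8-bit intensity value
    for start in range(0, len(image_rows), band_rows):
        band = image_rows[start:start + band_rows]   # only this band in memory
        for row in band:
            for pixel in row:
                combined[pixel] += 1                 # merge partial result
    return combined

# Tiny stand-in for a huge 8-bit image: 4 rows of 5 pixels.
image = [
    [0, 0, 1, 2, 3],
    [3, 3, 3, 4, 4],
    [0, 1, 1, 2, 2],
    [5, 5, 5, 5, 5],
]
hist = band_histograms(image)
print(hist[3], hist[5])   # 4 pixels with value 3, 5 pixels with value 5
```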

If you will be more specific about the contents of the image and the 
analysis, there is almost always a way.  There are tons of algorithms 
that trade space for time and allow for processing to be done 
incrementally on huge datasets, but they get very specific and can't 
always be used in the general case.
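As one hypothetical example of such an incremental algorithm: a sliding-window filter only needs (window - 1) samples of overlap between chunks, so an arbitrarily long waveform can be processed in fixed memory.  The chunk size and the simple windowed sum below are placeholders for whatever filtering the application really does:

```python
def moving_sum(samples, window):
    """Plain sliding-window sum over the whole waveform (reference)."""
    return [sum(samples[i:i + window]) for i in range(len(samples) - window + 1)]

def moving_sum_chunked(samples, window, chunk=8):
    """Same result, but touching only one chunk (plus overlap) at a time."""
    out = []
    overlap = window - 1
    start = 0
    while start + window <= len(samples):
        # Each chunk carries `overlap` extra samples so no window is split
        # across a chunk boundary.
        piece = samples[start:start + chunk + overlap]
        out.extend(moving_sum(piece, window))
        start += chunk
    return out

print(moving_sum_chunked([1, 2, 3, 4, 5, 6], 2, chunk=3))  # [3, 5, 7, 9, 11]
```

The chunked version produces exactly the same output as the whole-waveform version; only the memory footprint changes.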

Greg McKaskle

