Terry Reedy wrote:
Robert LaMarca wrote:
Hi,
I am using numpy and wish to create very large arrays. My system is an
AMD64 X2 running Ubuntu 8.04, which should be 64-bit. I have 3 GB of
RAM and a 15 GB swap partition.
The command I have been trying to use is:
g = numpy.ones([1000, 1000, 1000], numpy.int32)
This raises a MemoryError. A smaller array ([500, 500, 500]) worked
fine, but creating two of the smaller arrays at once crashed the system
again.
So... I did the math. A 1000x1000x1000 array at 32 bits per element
should be around 4 GB: obviously larger than RAM, but much smaller than
the swap space.
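(A quick check of that arithmetic in an interactive session:)

>>> 1000 * 1000 * 1000 * 4      # elements * 4 bytes per int32
4000000000
>>> _ / 2.0**30                 # bytes -> GiB
3.725290298461914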
1. Does numpy have a lot of overhead, or is my system somehow just not
making use of the 15 GB swap area?
2. Is there a way I can access the swap area, or direct numpy to do so?
Or do I have to write my own caching scheme for numpy?
3. How difficult would it be to use data compression internally on
numpy arrays?
I do not know what numpy does internally, but a constant array only
needs to store its dimensions and the constant value, plus a
__getitem__ method that returns that constant for any valid index. That
is at most a few hundred bytes regardless of the dimensions.
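A minimal sketch of that idea (pure Python, not a real numpy type; the
class name and the bounds check are just illustrative):

class ConstantArray(object):
    """Acts like an N-d array whose every element is the same value."""
    def __init__(self, shape, value):
        self.shape = tuple(shape)
        self.value = value

    def __getitem__(self, index):
        # Accept either a single index or a tuple of indices.
        if not isinstance(index, tuple):
            index = (index,)
        if len(index) > len(self.shape):
            raise IndexError("too many indices")
        for i, n in zip(index, self.shape):
            if not 0 <= i < n:
                raise IndexError("index out of range")
        return self.value

ones = ConstantArray((1000, 1000, 1000), 1)
x = ones[3, 500, 999]   # -> 1, using only a handful of bytes of storage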
Presumably, he's using numpy.ones() as an example of creating a large array, not
because he actually needs an array full of 1s.
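If what's actually needed is a large array that lives on disk rather
than in RAM, one option worth looking at is numpy.memmap, which backs
an array with a file and lets the OS page pieces in and out as they are
touched. A rough sketch (the filename is just a placeholder):

import numpy

# Create a disk-backed int32 array; 'big.dat' is a scratch file, and
# mode='w+' creates (or overwrites) it at the full ~4 GB size.
g = numpy.memmap('big.dat', dtype=numpy.int32, mode='w+',
                 shape=(1000, 1000, 1000))
g[1, 2, 3] = 1     # touched pages are written through to the file
g.flush()          # push any cached changes out to disk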
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco